Trading a System Developed in TSSB

At this time we have not yet completed a simple and elegant method for real-time trading of a system developed in TSSB. However, there are several possible (though admittedly awkward) ways to trade these systems in the current incarnation of the program:

  1. If you are doing end-of-day trading for 'next day' moves and the training time of your system is not excessive (fast training is the most common situation), then you would update the market history as of the end of the day, but with two additional 'fake' market records appended. Execute the TRAIN command, then the WRITE DATABASE command. This produces a standard text file containing, among other things, the predicted market movement for the next day. The log file produced by training lists the thresholds for taking long and short positions. Compare the predicted market movement to these thresholds and take a position accordingly. This method is a nuisance because the user must append fake 'tomorrow' records to the market history file(s), but its advantage is that the full power of all TSSB models and committees can be invoked in the trading decisions.

    About the need for two fake records...

    Assume that we are doing one-day-ahead predictions. (Adjust as needed for other targets.)
    TSSB predicts the change from tomorrow morning to the next morning. For example, suppose we have closed trading day 10. We predict the change from the open of day 11 to the open of day 12.
    Again, suppose we have just closed day 10. Then the most recent complete case in the database is day 8, whose target is the change from the open of day 9 to the open of day 10, and day 10 is the most recent day we have.
    Day 9 cannot yet be a case in the database, because its target would require the open of day 11, which we do not have yet.
    So at the close of day 10, we would need to append two fake records (just duplicate day 10) for day 11 and day 12.
    This way, the most recent record in the database will be for day 10, which will include the predicted day 11 to day 12 change based on history ending at day 10. This, of course, is what we need for realtime trading.
  2. If your trading system involves only indicators that can be computed in a program such as TradeStation (you imported them into TSSB, which is easy), and if your TSSB system involves only simple constructs such as linear regression, principal components, and average or constrained committees, it is very simple to program the system into TradeStation as a short EasyLanguage script. The log file produced by TSSB provides all necessary weights and thresholds. In this way, simple trading systems developed in TSSB can be actively traded on more conventional platforms, though some busywork is required to read the TSSB log file and type the appropriate figures into EasyLanguage or whatever other trading tool is desired.
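The fake-record busywork in option 1 is easy to automate. Below is a minimal sketch that assumes a headerless CSV market file with a YYYYMMDD date in the first column; the filename, column layout, and date format are assumptions to adjust for your own files, and the sketch naively increments calendar days without skipping weekends or holidays:

```python
import csv
from datetime import datetime, timedelta

def append_fake_records(path, n_fake=2):
    """Append n_fake copies of the last market record, advancing the date,
    so the most recent real day becomes a complete case for prediction.
    Assumes a headerless CSV: YYYYMMDD,open,high,low,close,volume."""
    with open(path, newline='') as f:
        rows = [row for row in csv.reader(f) if row]
    last = rows[-1]
    date = datetime.strptime(last[0], '%Y%m%d')
    for i in range(1, n_fake + 1):
        fake = list(last)  # duplicate the last real day's prices
        fake[0] = (date + timedelta(days=i)).strftime('%Y%m%d')
        rows.append(fake)
    with open(path, 'w', newline='') as f:
        csv.writer(f).writerows(rows)
```

After running this at the close of day 10, the file ends with fake days 11 and 12, so TSSB's most recent record is day 10 with its predicted day 11 to day 12 change.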

These are the only two possibilities with the current version of TSSB. However, we are currently designing an easy-to-use TradeStation interface. The user will develop a trading system in TSSB and then export the entire set of rules (models, committees, thresholds, et cetera) in a single file. This file will be read automatically when TradeStation starts, and the user will then have access to a single indicator in TradeStation that takes the value +1 when a long position is to be opened, -1 when a short position is to be opened, and 0 when the trader is supposed to be neutral. The delivery date for this TradeStation interface is dependent on funding from TSSB users. We are able to furnish a quote for this enhancement. Interested parties should contact David Aronson via the contact page.

Include trading costs in model development and performance results

Trading costs can have a profound impact on the nature of optimized models, and their effect really should be included in reported performance figures. For example, significant trading costs will favor models that make fewer but more reliable trades compared to models developed without accounting for trading costs. Also, if a developed trading system makes numerous trades, slippage and commissions can easily convert a highly profitable system into a losing system.
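The mechanics of including costs are simple to illustrate: charge a per-side cost (commission plus slippage, expressed as a fraction) whenever the position changes. The cost figure in the sketch below is a hypothetical round number, not a TSSB default:

```python
def net_returns(positions, market_returns, cost_per_side=0.0005):
    """Cost-adjusted per-period returns: each unit change in position
    (flat->long, long->flat, long->short counts as two units, etc.)
    incurs one per-side cost. Positions are +1, 0, or -1."""
    net, prev = [], 0
    for pos, r in zip(positions, market_returns):
        cost = cost_per_side * abs(pos - prev)  # charged on position changes
        net.append(pos * r - cost)
        prev = pos
    return net
```

Optimizing on these net figures rather than raw returns is what pushes model development toward fewer, more reliable trades.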

Hidden Markov Models for regime classification

Expecting a single model to effectively handle many different market regimes (high versus low volatility, strong trends versus flat markets, et cetera) is unrealistic. The best prediction systems specialize in a single regime. Our current method of defining regimes (via Oracles, event triggering, and split linear models) employs a fixed threshold on a variable. This method, while respectable and useful, is not optimal. It would be much better to base regime definitions on multiple variables, with their correlation taken into account. Also, HMMs allow for transition probabilities, which discourage whipsaws on the boundary between regimes. By employing optimally estimated probabilities that a regime will remain in effect or change to another, we can discourage rapid, repetitive shifting in and out of regimes, a capability which TSSB does not currently possess.
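The whipsaw-suppression effect of transition probabilities can be seen in a toy two-state Gaussian HMM decoded with the Viterbi algorithm. All parameters below (regime means, standard deviations, stay probability) are illustrative guesses, not fitted values:

```python
import math

def gauss_logpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma * sigma) - (x - mu) ** 2 / (2 * sigma * sigma)

def viterbi_two_state(obs, params, p_stay=0.95):
    """Most likely regime path for a two-state Gaussian HMM. 'Sticky'
    transitions (p_stay near 1) penalize regime switches, so an isolated
    outlier is not enough to flip the regime."""
    log_stay, log_switch = math.log(p_stay), math.log(1.0 - p_stay)
    score = [gauss_logpdf(obs[0], *params[s]) for s in (0, 1)]
    back = []
    for x in obs[1:]:
        new_score, ptr = [], []
        for s in (0, 1):
            stay = score[s] + log_stay
            switch = score[1 - s] + log_switch
            prev = s if stay >= switch else 1 - s
            new_score.append(max(stay, switch) + gauss_logpdf(x, *params[s]))
            ptr.append(prev)
        score, back = new_score, back + [ptr]
    state = 0 if score[0] >= score[1] else 1
    path = [state]
    for ptr in reversed(back):       # backtrack to recover the full path
        state = ptr[state]
        path.append(state)
    path.reverse()
    return path

# Absolute daily returns: one isolated spike (index 2) and a sustained
# high-volatility run (indices 5-8).
series = [0.2, 0.3, 2.4, 0.2, 0.3, 2.2, 2.5, 2.3, 2.4, 0.3, 0.2]
params = [(0.3, 1.0), (2.0, 1.0)]   # (mean, stdev) of |return| per regime
print(viterbi_two_state(series, params))  # → [0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0]
```

Note that the isolated spike stays classified as the calm regime while the sustained run flips it; a fixed threshold on the variable alone would have whipsawed on the spike.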

Relative Strength Indicators Described by Gary Anderson in “The Janus Factor” (Bloomberg Press 2012)

Our initial explorations into this fascinating family of relative performance indicators show considerable promise. We propose adding at least the most fundamental members of this family to the TSSB library. They would be a powerful enhancement for the development of trading strategies that are based on ranking sectors or individual issues within a stock universe.

Display confidence bands on plotted equity curves

It would be nice to overlay confidence bands on the equity curves that we plot. This would let the user visually assess the relevance of out-of-sample equity curves. For example, if the curve is impressive and the confidence bands are tight, the user would be encouraged. However, if the lower confidence band is close to flat, or even shows a loss, the user would not be nearly as impressed by a quickly rising equity curve.
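One standard way to produce such bands is to bootstrap the per-period returns behind the equity curve. The sketch below treats those returns as roughly independent, which is only an approximation for real market data, and uses arbitrary illustrative parameters:

```python
import random

def equity_confidence_bands(returns, n_boot=2000, level=0.90, seed=42):
    """Bootstrap confidence bands for a cumulative-return equity curve.
    Resamples per-period returns with replacement and takes per-period
    percentiles across the resampled cumulative curves."""
    rng = random.Random(seed)
    n = len(returns)
    curves = []
    for _ in range(n_boot):
        total, equity = 0.0, []
        for _ in range(n):
            total += returns[rng.randrange(n)]  # resample with replacement
            equity.append(total)
        curves.append(equity)
    lo_q = (1.0 - level) / 2.0
    lower, upper = [], []
    for t in range(n):
        column = sorted(c[t] for c in curves)
        lower.append(column[int(lo_q * n_boot)])
        upper.append(column[min(int((1.0 - lo_q) * n_boot), n_boot - 1)])
    return lower, upper
```

Plotting the lower band alongside the curve is exactly the visual check described above: a flat or negative lower band tempers enthusiasm for a steeply rising curve.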

Develop models based on benchmarked performance

Many developers believe that one should take advantage of long-term market trends when developing a trading system. For example, one might favor long positions when trading equity markets that have a long-term upward bias. However, many others believe that removing the position-biasing effects of secular trend reveals the true predictive power of models. Under this philosophy, models should be developed that maximize performance without taking advantage of trends. There are methods for separating the performance of a trading system into two components: that due to favoring positions that take advantage of the secular trend, and that due to true predictive power. Currently, TSSB bases its indicator selection as well as its optimized trading thresholds on the total of these two quantities. We propose adding the option of TSSB choosing indicators and trading thresholds based on true predictive power alone, uncontaminated by position bias due to trend. This will be done by optimizing performance relative to a benchmark that is based on the interaction between trend and position bias.
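The separation of performance into a trend component and a skill component can be illustrated with the usual mean/covariance identity. This is a generic sketch of the idea, not TSSB's actual benchmark computation:

```python
def decompose_performance(positions, returns):
    """Split total return into a position-bias (trend) component and a
    residual timing-skill component, using the identity
        sum(p*r) = n*mean(p)*mean(r) + n*cov(p, r).
    The first term is what the system's average position earns from the
    secular trend alone; the remainder reflects timing skill."""
    n = len(returns)
    mp = sum(positions) / n
    mr = sum(returns) / n
    total = sum(p * r for p, r in zip(positions, returns))
    trend = n * mp * mr
    return total, trend, total - trend
```

A permanently long system in a rising market scores zero on the skill term, which is the sense in which optimizing on that term alone is "uncontaminated by position bias due to trend."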

P-values for OOS performance based on equity curves

In order to properly assess the performance of a trading system, we need to compute two quantities: an unbiased estimate of future performance, and the probability (p-value) that a truly worthless system would have performed as well as our system did in back-testing. TSSB currently has several excellent algorithms for providing unbiased estimates of future performance. It also has several methods for computing p-values:

  1. A Monte-Carlo Permutation Test estimates p-values when the target looks ahead one day. This test is invalid for look-aheads greater than one day.
  2. The tapered-block bootstrap and stationary bootstrap in TSSB can theoretically handle any look-ahead, but in practice they are notoriously unreliable.
  3. Permutation training provides p-values for the entire historical dataset. But it is extremely slow, sometimes prohibitively slow. Also, because it includes historical data prior to the walkforward OOS period on which unbiased future performance estimates are based, it can be misleading. For example, suppose we want to develop our system using data from 1995-2012, and we want the walkforward test to start at 2005. We may find that the p-value is significant, and the OOS-based expected future performance is excellent. That sounds promising. But what if the significant p-value comes strictly from pre-2005 data? The data that provided the good p-value and the data that provided the good unbiased performance estimate do not overlap!

Thus, we see that none of TSSB's current methods for estimating p-values are ideal. We suggest adding another alternative: base p-values on the equity curve obtained in the OOS period. This will handle targets with any look-ahead distance, and it ensures that the p-values are based on the same time period that was used for unbiased estimates of future performance. As a final bonus, this will also handle OOS-type portfolios, although not as well as the walkforward permutation described in the next section.
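One simple way such an equity-curve p-value could be computed is sketched below: shuffle the pairing between the system's positions and the OOS returns, and count how often a random pairing performs at least as well as the real one. This is illustrative only; it is not necessarily the scheme TSSB would adopt, and note that a plain shuffle destroys serial dependence, so longer look-aheads would need a blockwise variant:

```python
import random

def equity_curve_pvalue(positions, returns, n_perm=5000, seed=0):
    """Permutation p-value for OOS performance. The '+1' correction keeps
    the estimated p-value away from an impossible zero."""
    rng = random.Random(seed)
    actual = sum(p * r for p, r in zip(positions, returns))
    perm = list(returns)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(perm)  # random re-pairing of positions and returns
        if sum(p * r for p, r in zip(positions, perm)) >= actual:
            count += 1
    return (count + 1) / (n_perm + 1)
```

Because both the p-value and the unbiased performance estimate come from the same OOS period, the non-overlap problem in item 3 above disappears.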

Walkforward testing with permutation

Our existing permutation training is a powerful way of estimating p-values for training-set performance. However, it decouples the p-values from the expected future performance produced by walkforward testing, the problem described in “P-values for OOS performance based on equity curves” above. In other words, permutation training computes p-values from the entire available market history (training plus OOS periods), while walkforward testing estimates expected future performance from the OOS period alone. Ideally both should cover the same time period, to avoid a significant p-value being obtained strictly from activity that preceded the OOS period. A solution would be to extend permutation to walkforward testing, directly linking the unbiased estimate of future performance to p-values for it. In addition, permutation training cannot compute p-values for portfolios that are selected based on out-of-sample performance of the component trading systems; walkforward permutation would overcome this limitation by correctly and efficiently compensating for the selection bias inherent in portfolio construction. What is the advantage of walkforward permutation over computing p-values from equity curves, as described above? Simply put, the p-values computed by walkforward permutation will in most cases be more accurate than those computed from equity curves, and the difference can be substantial in some situations.

Note on “P-values for OOS performance based on equity curves” versus “Walkforward testing with permutation”

The two options described above do essentially the same things:

  1. They compute p-values for the OOS period, which the current version of TSSB cannot do well in a general sense.
  2. They take into account selection bias from OOS-type portfolios, which the current version of TSSB cannot do at all.

However, they perform these tasks in completely unrelated ways, and each has its own advantages and disadvantages:

  1. The equity-curve method will execute very much faster than the walkforward permutation method.
  2. The equity-curve method facilitates plotting confidence bands on equity curves.
    (These two tasks share much code, so programming them simultaneously would be efficient.)
  3. In most situations, the permutation method will provide p-values that are considerably more accurate (less random error in their computation) than the equity-curve method, making them more valuable.

The bottom line difference between the two methods is a tradeoff between execution speed and quality of results.

Logistic and Ridge regression

These are 'almost-linear' models that share the benefits of ordinary linear regression (they are much less likely to overfit than most nonlinear models, and they are easy to interpret) but are more sophisticated in their ability to handle less-than-ideal data (noisy targets and correlated predictors).
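Ridge regression, for instance, is ordinary linear regression with a penalty term that shrinks the weights and stabilizes them when predictors are correlated. A minimal pure-Python sketch via the normal equations (fine for a handful of predictors; not TSSB's implementation):

```python
def ridge_fit(X, y, lam=1.0):
    """Solve (X'X + lam*I) w = X'y by Gaussian elimination with
    partial pivoting. lam=0 reduces to ordinary least squares;
    larger lam shrinks the weights toward zero."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) + (lam if r == c else 0.0)
          for c in range(k)] for r in range(k)]
    b = [sum(X[i][r] * y[i] for i in range(n)) for r in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * k
    for r in range(k - 1, -1, -1):  # back-substitution
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, k))) / A[r][r]
    return w
```

The shrinkage is what tames noisy targets: increasing `lam` pulls correlated, unstable weight estimates toward zero at a small cost in in-sample fit.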

Improved OPSTRING model

Our current OPSTRING model can be greatly improved by eliminating mathematically pointless candidates before they enter the genetic population pool for evaluation and potential reproduction. This will improve the efficiency of the genetic optimization algorithm. For example, the current version of OPSTRINGs in TSSB may, by random bad luck, include a term such as "X > X + 1" in a population. This is a nonsense term because X can never exceed X plus one, so the term is always false. It will eventually be weeded out of the gene pool, but until then, computational resources are wasted dealing with it.
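The kind of pre-filter proposed above could look like the following. This is a hypothetical helper for one comparison form, not TSSB's actual parser or candidate representation:

```python
def constant_truth_value(lhs_var, op, rhs_var, rhs_const):
    """Detect comparisons whose truth never depends on the data, such as
    'X > X + 1' (always False) or 'X >= X' (always True). Returns the
    constant truth value, or None when the term genuinely depends on data
    and should be kept in the gene pool."""
    if lhs_var != rhs_var:
        return None              # compares two different variables: keep it
    if op == '>':
        return rhs_const < 0     # X > X + c is True iff c < 0
    if op == '>=':
        return rhs_const <= 0
    if op == '<':
        return rhs_const > 0
    if op == '<=':
        return rhs_const >= 0
    return None
```

Candidates for which this returns a non-None value would be discarded before entering the population, so the genetic algorithm never wastes evaluations on them.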

Open positions with limit orders

The targets available in the current TSSB library all assume that when a trade is signaled, it is immediately opened with a market order. We could add targets that respond to a trade signal by issuing a limit order which may or may not be executed.
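A sketch of how such a target might be computed from bar data. The fill rule, the discount parameter, and the mark-to-next-close convention are illustrative assumptions, not a TSSB specification:

```python
def limit_entry_profit(bars, i, discount=0.005):
    """Hypothetical 'open with a limit order' target: on a long signal at
    bar i's close, place a buy limit 'discount' below that close for bar
    i+1. The order fills only if bar i+1 trades down to the limit; a
    gap-down open fills at the (better) opening price. If filled, the
    trade is marked to bar i+1's close. Returns None when the order is
    never executed. bars: list of (open, high, low, close) tuples."""
    limit = bars[i][3] * (1.0 - discount)
    o, h, lo, c = bars[i + 1]
    if lo > limit:
        return None           # price never reached the limit
    fill = min(o, limit)      # gap-down open fills at the open
    return c - fill
```

Unlike a market-order target, this target is undefined (None) on unfilled signals, so the model-development machinery would need to treat "no trade" as a distinct outcome rather than a zero return.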

Supercomputer performance on a PC via CUDA processing

Modern NVIDIA video cards make their massive parallel processing power available to users via the CUDA interface. The very best nonlinear models, such as general regression neural networks, can be extremely slow to train, making them impractical for very large problems. Programming CUDA implementations of the best models can speed training by a factor of hundreds or even thousands, reducing training time from hours to seconds.
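The reason GRNNs parallelize so well is visible in their structure: every prediction is a kernel-weighted average over all training cases, so the per-case weight computations are independent. A plain-Python sketch of the prediction step (a CUDA version would compute the per-case weights in parallel; the bandwidth value below is illustrative):

```python
import math

def grnn_predict(train_x, train_y, x, sigma=1.0):
    """General regression neural network (Nadaraya-Watson kernel
    regression): a Gaussian-weighted average of every training target.
    The O(n) loop over all training cases per prediction is why training
    and evaluation are slow on big datasets, and why a GPU helps."""
    num = den = 0.0
    for xi, yi in zip(train_x, train_y):
        d2 = sum((a - b) ** 2 for a, b in zip(xi, x))
        w = math.exp(-d2 / (2.0 * sigma * sigma))
        num += w * yi
        den += w
    return num / den if den > 0.0 else 0.0
```

Training a GRNN amounts to optimizing `sigma` (or per-predictor sigmas) over many such evaluations, which multiplies the cost and is where the hundredfold CUDA speedups pay off.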

More performance statistics

TSSB currently computes and prints a limited set of performance statistics for developed trading systems. Other commercial products display a vast array of statistics. We could add more statistics to the program’s result file.
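Several of the commonly reported statistics are simple to compute from per-trade returns. A sketch, with names and definitions following common industry usage rather than any TSSB specification:

```python
def performance_stats(trade_returns):
    """Common summary statistics over a list of per-trade returns."""
    wins = [r for r in trade_returns if r > 0]
    losses = [r for r in trade_returns if r < 0]
    gross_win = sum(wins)
    gross_loss = -sum(losses)
    n = len(trade_returns)
    return {
        'n_trades': n,
        'win_rate': len(wins) / n if n else 0.0,
        'profit_factor': gross_win / gross_loss if gross_loss > 0 else float('inf'),
        'expectancy': sum(trade_returns) / n if n else 0.0,  # mean per trade
    }
```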

More optimization criteria for portfolios

TSSB currently selects portfolio members by maximizing the Sharpe Ratio. This is excellent, but many users would like to employ other optimization criteria, such as maximizing return-to-drawdown ratios.
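A sketch of selection by return-to-drawdown, one of the alternative criteria mentioned above. The definitions (additive equity, total return over maximum drawdown, top-k ranking) are illustrative assumptions:

```python
def max_drawdown(equity):
    """Largest peak-to-trough decline of an additive equity curve."""
    peak, mdd = float('-inf'), 0.0
    for v in equity:
        peak = max(peak, v)
        mdd = max(mdd, peak - v)
    return mdd

def return_to_drawdown(returns):
    """Total return divided by maximum drawdown of the equity curve."""
    equity, total = [], 0.0
    for r in returns:
        total += r
        equity.append(total)
    mdd = max_drawdown(equity)
    return total / mdd if mdd > 0 else float('inf')

def select_portfolio(systems, k=2):
    """Rank candidate systems by return-to-drawdown and keep the top k,
    a hypothetical alternative to Sharpe-based selection.
    systems: dict mapping system name -> list of per-period returns."""
    ranked = sorted(systems.items(),
                    key=lambda kv: return_to_drawdown(kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]
```

Swapping the ranking key is all that distinguishes one criterion from another, which is why supporting several criteria is a natural extension.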