Scientific approach to trading – algorithmic trading and testing on historical data
Countless books have been published (and sold) about the psychological aspects of trading that must be mastered by every trader who wants to generate stable, long-term profits. Of course, you have to adopt a proper trading psychology, but it will be useless if you don't have a good trading strategy.
As a scientist: testing a hypothesis
How will your hard-won psychological resilience help you when you don't have a profitable trading strategy? It is like taking a penalty kick with nerves of steel but no idea how to kick the ball.
Investors must not let themselves be confused by such nonsense. They could end up in a vicious circle of constantly honing skills that don't matter.
At our hedge fund QuantOn, we believe that trading has to be approached with scientific methods. What do I mean? A scientist needs evidence to verify a hypothesis. Similarly, an algorithmic trader needs to analyze and verify his trading strategy's rules on a long series of historical price data. He needs to test the strategy's profitability on a statistically relevant sample of past trades. This process is called backtesting.
Of course, just as in science, it is absolutely necessary to backtest trading strategies properly, carefully, and thoroughly. Based on QuantOn's long-term experience, we believe that every successful trading strategy has to have clearly defined entry and exit orders. You cannot achieve this without algorithmization, i.e. transforming the trading strategy into clearly defined program code. This approach is called systematic or algorithmic trading.
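To make "clearly defined entry and exit orders" concrete, here is a minimal sketch of a strategy reduced to unambiguous code. The moving-average crossover below is a generic textbook example chosen for illustration, not a QuantOn strategy, and the parameter values are arbitrary.

```python
# Hypothetical illustration: a trading rule written as unambiguous code.
# A textbook moving-average crossover, not a QuantOn strategy.

def sma(prices, window):
    """Simple moving average of the last `window` prices; None if too few."""
    if len(prices) < window:
        return None
    return sum(prices[-window:]) / window

def signal(prices, fast=5, slow=20):
    """Return 'buy', 'sell', or 'hold' based on two moving averages."""
    fast_ma, slow_ma = sma(prices, fast), sma(prices, slow)
    if fast_ma is None or slow_ma is None:
        return "hold"   # not enough price history yet
    if fast_ma > slow_ma:
        return "buy"    # short-term trend above long-term trend
    if fast_ma < slow_ma:
        return "sell"
    return "hold"

# In a steady uptrend the fast average sits above the slow one.
uptrend = [float(p) for p in range(80, 110)]
print(signal(uptrend))  # "buy"
```

Because every decision is deterministic code, the rule can be replayed on any historical series, which is exactly what makes automated backtesting possible.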
Keep a control sample…
Compared to the most common approach, where the trader combines a precisely defined trading strategy with discretionary trading, algorithmic traders have a great advantage: they can apply exhaustive and accurate statistical analyses to their programmed automated trading systems. Thanks to the speed of today's computers, many computational operations can literally be done in seconds.
The good news is that there are several commercial trading platforms available today on which you can run automated backtests of your programmed code. The bad news is that if you aren't skilled in probability theory and advanced statistics, you can misinterpret the results and consider a strategy profitable based on incorrect statistical reasoning.

Many advanced systematic traders believe that verification on "out-of-sample" data is a sufficient tool for confirming the viability of a trading strategy. The procedure is simple. You have ten years of data and split it into two parts, for example 1:1. You build and program your trading strategy using the first five years of data (this part of the sample is called the "in-sample" data). You form an idea of how the strategy could work, rewrite it into applicable code, and obtain an automated trading system. You test the system on the first five years of historical data. You find that the strategy has some potential but is not profitable and stable enough. You come up with another idea for modifying the code or additionally filtering the data, and you backtest the strategy again on the first five years. You repeat this procedure until you obtain satisfactory results. This process is called optimization, or "curve-fitting".
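The workflow above can be sketched in a few lines. Everything here is a placeholder: the synthetic returns, the toy "strategy", and the parameter grid are invented for illustration. The point is purely structural: the optimization loop touches only the in-sample half, and the out-of-sample half is evaluated exactly once at the end.

```python
# Minimal sketch of the in-sample/out-of-sample workflow described above.
# All data and the "strategy" are synthetic placeholders.
import random

random.seed(42)
# Roughly ten years of daily returns (252 trading days per year).
returns = [random.gauss(0.0002, 0.01) for _ in range(2520)]

split = len(returns) // 2           # 1:1 split, as in the text
in_sample = returns[:split]         # used for building and optimizing
out_of_sample = returns[split:]     # locked away until the final test

def strategy_pnl(data, threshold):
    """Toy rule: go long the day after a drop larger than `threshold`."""
    pnl = 0.0
    for prev, cur in zip(data, data[1:]):
        if prev < -threshold:
            pnl += cur
    return pnl

# The optimization loop ("curve-fitting") runs ONLY on the in-sample half.
best_pnl, best_threshold = max(
    (strategy_pnl(in_sample, t), t) for t in [0.005, 0.01, 0.02]
)

# A single, final verification on the untouched out-of-sample half.
oos_pnl = strategy_pnl(out_of_sample, best_threshold)
print(best_threshold, round(oos_pnl, 4))
```

In practice the optimization step would search many more parameters, which is precisely what makes the untouched second half of the data so valuable.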
…and apply it at the right moment
However, there is a huge risk that you will "over-optimize" your strategy: you fine-tune it, but only for the sample you are working with, i.e. the in-sample data. That is why we left the other half of the data aside. Now we can verify our trading system on this out-of-sample data.
The point is that when building a strategy on in-sample data, we may miss true predictive signals and implement only the "noise". Unfortunately, most algorithmic traders aren't familiar with in-sample/out-of-sample testing, or they apply it incorrectly. The most common mistakes are:
- They build their strategies on the complete historical data sample and don't divide the data into in-sample and out-of-sample sub-samples.
- They split the data but immediately verify each modification of their strategy on the out-of-sample data. The basic rule is to optimize the strategy solely on the in-sample data. Only once we are satisfied with the system's performance do we verify it on the out-of-sample data.
However, even if the trader does everything right, i.e. builds the strategy on in-sample data and then verifies its final form on the out-of-sample data, they haven't won yet. You can never rule out the unpleasant possibility that the confirmation of the system's stable profitability was just a coincidence. A single out-of-sample test simply does not suffice for a statistically relevant conclusion that the trading strategy has predictive ability and detects genuine signals.
Fortunately, there are advanced statistical methods and tests that help maximize the likelihood that a trading strategy will remain profitable on unknown data, i.e. in live trading. The strategies we test and develop this way are called "robust strategies".
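One widely used method of this kind is the bootstrap: resample the strategy's historical trade results many times under a "no edge" assumption and see how often pure chance would look as profitable as the backtest did. The sketch below uses invented trade numbers and is a generic statistical illustration, not QuantOn's proprietary test suite.

```python
# Bootstrap check: how often could random chance match the observed profit?
# Trade P&L figures are hypothetical, for illustration only.
import random

random.seed(0)
trade_pnls = [120, -80, 45, 200, -60, 90, -30, 150, -100, 75]
observed_total = sum(trade_pnls)            # 410 in this toy example

# Center the trades at zero mean: the "no edge" null hypothesis.
mean_pnl = observed_total / len(trade_pnls)
centered = [p - mean_pnl for p in trade_pnls]

# Resample with replacement and count how often the null matches reality.
n_resamples = 10_000
exceed = 0
for _ in range(n_resamples):
    sample = [random.choice(centered) for _ in trade_pnls]
    if sum(sample) >= observed_total:
        exceed += 1

p_value = exceed / n_resamples   # estimated chance the profit is pure luck
print(round(p_value, 3))
```

A small p-value means a zero-edge strategy would rarely produce a backtest this good; with only ten trades, as here, the evidence stays weak no matter how the resampling comes out, which is one more argument for long historical samples.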
Of course, the basic assumption is that we test our strategy on relevant historical data from sufficiently liquid markets and that we use realistic estimates of trading costs.
Would you like to know more?
You are welcome to visit my online webinar: https://aostrading.cz/en/courses/online-courses/online-building-winning-automated-trading-strategies-ats-robustness-testing-and-portfolio-composition/