Oversaturation of Trading Algorithms to Blame for Huge Market Rebound? Also, Could BERT Replace Word2Vec in 2019?

It is currently January 5th, 2019, which means winter break is coming to a close, the new year is starting to unfold, and we can finally reflect on the market madness that managed to surprise seemingly every investor alive last week. December was a notably bad month for the stock market, with sources reporting it to be the worst December since the Great Depression. This, coupled with anticipation of further Federal Funds Rate hikes and rising unemployment, led many to believe that the market would plunge through the final week of the year, setting a dim tone for the start of 2019.
[Chart: SPDR S&P 500 ETF (SPY)]
Yet, to the dismay of put owners, the market rebounded right at the time when it was expected to crash, bringing prices back up to where they were a few weeks prior. Still, these last-minute antics did nothing to erase December’s collective losses, as this graph demonstrates:
[Chart: SPDR S&P 500 ETF (SPY), December 2018]
Different explanations were offered for why the stock market rose so suddenly. One was a pension-fund rebalancing, in which pension fund managers supposedly had to move ~$60 billion of assets into stock holdings by the end of 2018; others claimed that the rebound was merely a large short squeeze.
Whatever the true cause may be, this rebound also seems to have effectively calmed stocks down, as the market has behaved fairly normally over the first trading days of 2019. What’s more, essentially the entire market was up yesterday, with the Dow Jones up 3.3%.
It’s still very early to jump to conclusions (or, if you held AAPL, drop 9% to conclusions), but it may feel as if we just dodged a huge bullet, as a decline in the final week of 2018 would certainly have hurt projections for 2019. But, considering what this massive market rebound has taught us, my overarching hypothesis is that the stock decline was not avoided, nor even postponed. It still exists, and should be arriving on schedule somewhere around April or May. Consider the relationship between raised Federal Funds Rates and stock market behavior, which I brought up in one of my previous posts:
[Chart: Federal Funds Rate vs. S&P 500, 1966–2014]
Over the past 50+ years, there has been a clear correlation between raises in the Federal Funds Rate and ensuing declines in the stock market (it’s the S&P in this graph, but the same holds for nearly all stocks/ETFs). The decline isn’t immediate; it arrives after a buffer of around six months. For reference, take a look at what happened to S&P prices when the Fed gradually raised interest rates beginning in 2004.
Why is this relevant? Two weeks ago, the Federal Reserve raised rates for the ninth time since 2015, bringing the target interest rate to around 2.5%. This specific raise of 0.25% was nothing huge on its own, but in the context of the gradual raising of interest rates that began in 2015 and will continue in 2019, warning bells start to ring. I would again compare the current gradual raising of interest rates to the raises beginning in 2004, or in 1999 and 2001. In each of these periods, the magnitude of the rise in the FFR corresponds to a subsequent market decline of proportional magnitude. On an unrelated yet interesting note, the S&P has become progressively more responsive to shifts in the FFR over the past 25 years, as you can also see in the graph.
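To make that six-month buffer testable rather than eyeballed, here is a minimal pandas sketch of the lag analysis I have in mind. The file names are hypothetical stand-ins for monthly Fed Funds Rate and S&P data (both downloadable from FRED):

```python
# Minimal sketch: correlate past 12-month changes in the Fed Funds Rate with
# forward 6-month S&P returns at several lags. File names are hypothetical;
# monthly FEDFUNDS and S&P series from FRED would slot in directly.
import pandas as pd

ffr = pd.read_csv("ffr.csv", index_col="date", parse_dates=True)["rate"]
spx = pd.read_csv("sp500.csv", index_col="date", parse_dates=True)["close"]

ffr_change = ffr.diff(12)  # how much the FFR rose over the preceding year

for lag in (0, 6, 12, 18, 24):  # months ahead
    # 6-month S&P return beginning `lag` months after each observation
    fwd_return = spx.shift(-(lag + 6)) / spx.shift(-lag) - 1
    print(f"lag={lag:2d} months: corr={ffr_change.corr(fwd_return):+.2f}")
```

If the six-month buffer is real, the correlation should turn most negative around the lag=6 row.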
So, the December 19 rate raise might not be the singular straw that breaks the camel’s back, but the pile of straw is still there, and more is piling on in 2019, as the Fed has reported its goal of reaching 3% interest this year. When people start tightening their spending because of inflated prices and growing interest rates, the market will slow to a halt and we will start moving into the recessive phase of the cycle.
It is not only Federal rates that point to an impending decline: GDP forecasts from the St. Louis Fed estimate 2019 growth to be slower than 2018’s, and tensions continue to grow between the U.S. and China (though, in my humble opinion, these tensions have been blown out of proportion by the media, as both countries will eventually need to settle their trade disputes).
I should also mention that I don’t think this market decline will be catastrophic. GDP is still growing and unemployment is still low; that isn’t enough to counter the negatives brought on by the raised FFR and slowing GDP growth, but it does mean the economy is still producing.
It’s not all dark either, as there’s always room to turn a profit. Stock market behavior is most volatile at times of peak growth (just preceding the major decline), as you can see in the erratic up/down spikes at both peaks in the red graph. I don’t think we are quite at that point of the cycle yet, but it feels like we’re starting to get there, as the turbulent past two weeks of decline and rebound indicate. It is at these turbulent times that huge intraday price swings occur, allowing for big returns on options.
These have all been fascinating developments, and I plan on using the string of opening/closing prices of stocks from the past two weeks as the first predictive test for my Word2Vec trading algorithm. I hope to capture the market rebound through a series of price-shift embeddings, which I will then compare against all of my historical price-shift embeddings to see which event in my data maps most closely to the market behavior we’re seeing right now. If the theory behind my code holds, the aftermath of that past event should model what will follow the current market action.
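For the curious, here is a rough sketch of what that test looks like in code. The bucketing scheme, hyperparameters, and synthetic price series are illustrative assumptions, not the exact implementation:

```python
# Sketch of the price-shift embedding test: bucket each day's move into a
# "word", train gensim's Word2Vec on windows of these words, then average a
# window's vectors into one event embedding and search for its nearest
# historical neighbor. All parameters here are illustrative.
import numpy as np
from gensim.models import Word2Vec  # gensim >= 4 (older versions use size=)

def shift_token(open_px, close_px):
    """Bucket a daily open->close move into a coarse 'word' like UP_2 (+2%)."""
    pct = (close_px - open_px) / open_px * 100
    bucket = int(np.clip(round(pct), -5, 5))
    return f"{'UP' if bucket >= 0 else 'DOWN'}_{abs(bucket)}"

# Stand-in price series; real daily open/close data would go here.
rng = np.random.default_rng(0)
prices = 100 * np.cumprod(1 + rng.normal(0, 0.01, 500))
pairs = list(zip(prices[:-1], prices[1:]))

# Treat each two-week (10 trading day) window as one "sentence" of shifts.
sentences = [[shift_token(o, c) for o, c in pairs[i:i + 10]]
             for i in range(len(pairs) - 10)]

model = Word2Vec(sentences, vector_size=32, window=5, min_count=1, sg=1)

def embed(tokens):
    """Average token vectors into one embedding for the whole window."""
    return np.mean([model.wv[t] for t in tokens if t in model.wv], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Find the historical window closest to the latest two weeks, skipping
# windows that overlap the query itself.
query = embed(sentences[-1])
best = max(range(len(sentences) - 30),
           key=lambda i: cosine(query, embed(sentences[i])))
print("closest historical window starts at day", best)
```

The interesting part is then just reading off what happened in the days after day `best` and comparing it to what the market does next.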
While reading about the past two weeks of trading, I came across another very interesting question: what are the consequences of an oversaturation of trading algorithms? While some blamed the stock rebound on pension funds and shorting, Leon Cooperman, founder of the investment advisory firm Omega Advisors, called out trading algorithms for their trend-following tendencies. Cooperman insists that the SEC is doing an inadequate job of monitoring trading algorithms’ effects on market behavior by refusing to reinstate the Uptick Rule. With so many algorithms now trading across world markets, there’s a growing concern that these algorithms’ pattern-seeking dispositions will cause increased volatility and unpredictable price shifts. According to skeptics like Mr. Cooperman (in addition to things being much better back in his day, when trading was done through shoving and yelling), this oversaturation of trading algorithms can lead to extreme instances of herding and exacerbated price shifts. I think of this as a snowball effect: one algorithm investing in a stock prompts more algorithms to do the same, which causes the price of the stock to rise, which in turn incites even more algorithms to start investing too (so on and so forth).
This is actually a rather mind-boggling concept to ponder: if these algos use real-time data, their projections are driven by current market shifts. But if around 60% of all trading in U.S. equity markets is done by algorithms, then the daily market shifts analyzed by trading algos are mostly just the output of other trading algos. Essentially, today’s trading algorithms analyze moves made by other algorithms, which in turn make their moves based on what is going on in the market, creating a sort of paradoxical back-and-forth interplay. As quantitative trading takes over, algorithms that analyze real-time data are just learning from the decisions made by other algorithms, which Cooperman and company seem to believe will ruin traditional trading and erase the market efficiency kept in check by human investors.
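A toy simulation makes the snowball concrete. Every number below is invented; this is a cartoon of the feedback loop, not a model of any real market:

```python
# Cartoon of the snowball effect: each trend-follower buys when yesterday's
# return beats its threshold, the combined buying pushes the price further,
# and the bigger move recruits yet more buyers. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n_algos, impact = 100, 0.0005                  # price impact per buying algo
thresholds = rng.uniform(0.001, 0.02, n_algos)

prices = [100.0, 100.0]
for day in range(250):
    recent = prices[-1] / prices[-2] - 1       # yesterday's return
    buyers = int(np.sum(recent > thresholds))  # algos chasing that move
    noise = rng.normal(0, 0.005)               # "real" information flow
    prices.append(prices[-1] * (1 + noise + impact * buyers))

returns = np.diff(prices) / np.array(prices[:-1])
print(f"volatility with herding: {returns.std():.4f} (noise alone: 0.0050)")
```

Even in this cartoon, the realized volatility comes out above the underlying noise, because every sufficiently large random up-move gets amplified by the herd.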
I happen to strongly disagree with Cooperman’s claims for several reasons. First of all, trading algorithms use the same signals and statistical probabilities to make investments as humans do. I mean, humans are the ones writing these programs (for now), and humans tell the program which indicators to look out for when investing. So, there’s no difference if algorithms do the trading instead of humans, except maybe increased speed and consistency. Not only that, but the main goal of trading algorithms is to locate and profit from market inefficiencies. This is no different from the main goal of human traders. At the end of the day, human traders are just as prone to herding as their digital counterparts. Financial bubbles existed long before anyone knew what financial engineering meant, and the problem of crowd-think will probably never subside.
Finally, I can’t imagine a future where 100% of trading is done by algorithms with no human involvement. Even if all trading is eventually done through tools and software which use trading algorithms, there will still be humans programming these algorithms, changing their parameters, testing their performance, and inventing improved strategies. Quantitative trading is merely the next logical step in the evolution of finance, allowing for more efficiency and liquidity.
Right now, the focus should not be on questioning the integrity or consequences of our trading algorithms; it should be on finding new AI-based algorithms that can better synthesize financial data. For instance, the next giant leap for trading is algorithms that learn the behaviors of other trading algorithms, so that they can make investments based on what they think other computers will do (and I’m sure this is already being investigated by secretive hedge funds and HFT funds).
I am extremely curious to see how the dynamic between trading algorithms plays out, and I would love to hear what others think trading will look like in the near future.

The last thing I’ve been thinking about over winter break is BERT, a new technology fresh out of Google’s AI lab. BERT, an acronym for Bidirectional Encoder Representations from Transformers, is an improved NLP model that learns bidirectionally and performs better on benchmark data sets than any other NLP system. Much like Word2Vec, BERT creates embeddings for words, but BERT’s embeddings are far more diverse and powerful: because each embedding depends on the surrounding context, BERT handles the issue of homographs (the same spelling meaning different things in different sentences).
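To see the homograph point concretely, here is a short sketch comparing the vectors BERT assigns to “bank” in two different sentences. I’m using the Hugging Face transformers API (in early 2019 the equivalent package was pytorch-pretrained-bert, with slightly different calls):

```python
# Sketch: unlike Word2Vec's single vector per word, BERT gives "bank" a
# different embedding in each sentence. Uses Google's released weights via
# the Hugging Face transformers library.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

def bank_vector(sentence):
    """Return the contextual embedding of the token 'bank' in `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    idx = inputs["input_ids"][0].tolist().index(tokenizer.vocab["bank"])
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden[0, idx]

a = bank_vector("I deposited cash at the bank.")
b = bank_vector("We had a picnic on the river bank.")
# Well below 1.0; Word2Vec would give both "bank"s the identical vector.
print(torch.cosine_similarity(a, b, dim=0).item())
```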
I will continue reporting on BERT in the following weeks as I do more research and improve my understanding, as I think this is an exciting breakthrough with a lot of potential to be explored (it was only released publicly two months ago), especially in the context of my project. One amazing piece of info Google shares is that BERT “also learns to model relationships between sentences by pre-training on a very simple task that can be generated from any text corpus.”

Apparently, BERT models can also be trained to recognize sentences that appear in similar contexts. Even more significant is the fact that the sentences in Google’s example are related by causation, so BERT can recognize cause/effect relationships across sentences. What I mean is: ‘The man went to the store’, and therefore ‘He bought a gallon of milk’, one event leading to another. BERT recognized that “He bought a gallon of milk” is a likely follow-up to “The man went to the store”, which is a remarkable intuition for a self-taught computer.
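You can poke at this next-sentence behavior yourself with the pretrained model; again a sketch via the modern transformers API, not Google’s original example code:

```python
# Sketch: ask pretrained BERT whether sentence B plausibly follows sentence A,
# using its next-sentence-prediction head.
import torch
from transformers import BertForNextSentencePrediction, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

def follow_probability(sent_a, sent_b):
    inputs = tokenizer(sent_a, sent_b, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=1)[0, 0].item()  # label 0 = "B follows A"

print(follow_probability("The man went to the store.",
                         "He bought a gallon of milk."))   # high
print(follow_probability("The man went to the store.",
                         "Penguins are flightless birds.")) # low
```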
This has the potential to work incredibly well for the task of recognizing what might happen to the price of a stock following a certain sequence of up/down price shifts. Rather than learning which sentence leads to which other sentence, we could learn which stock market shifts led to which other stock market shifts. This is already what I have been trying to do with Word2Vec, but BERT looks even better equipped for the job because it can embed and map entire sentences, not just one word at a time. Creating an embedding for an entire sentence is, in effect, creating one big embedding for a sequence of words. Translating this over to my project, where words are single up/down price shifts for a stock on a single day, BERT could potentially create embeddings for whole stretches of stock activity, like a week, a month, or a quarter of trading. Then we create an embedding for a week of real-time stock data and see which historical week_embeddings match our query most closely. This would solve the issue of sequencing and Shazam Fingerprints that I discuss in my previous post, while also allowing us to extrapolate many more embeddings from the same amount of daily open/close data, as the sketch below shows.
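As a back-of-the-envelope sketch of that last point: the window sizes below are assumptions, and the random-projection encoder is just a stand-in for a trained model (Word2Vec pooling, or an actual BERT-style sequence encoder):

```python
# Sketch of "many embeddings from the same data": sliding windows at several
# granularities turn one year of daily shift tokens into hundreds of sequence
# embeddings. The random-vector encoder is a stand-in for a real model.
import numpy as np

rng = np.random.default_rng(0)
token_vecs = {}

def embed(tokens):
    """Stand-in encoder: mean of (cached) random vectors, one per token."""
    return np.mean([token_vecs.setdefault(t, rng.normal(size=32))
                    for t in tokens], axis=0)

WINDOWS = {"week": 5, "month": 21, "quarter": 63}   # trading days (assumed)

def window_embeddings(tokens):
    """One embedding per sliding window, for each window size."""
    return {name: [embed(tokens[i:i + size])
                   for i in range(len(tokens) - size + 1)]
            for name, size in WINDOWS.items()}

year = [f"UP_{i % 3}" for i in range(252)]          # dummy year of tokens
print({k: len(v) for k, v in window_embeddings(year).items()})
# -> {'week': 248, 'month': 232, 'quarter': 190} embeddings from 252 days
```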
It’s really great to see all of these technological advancements and how they relate to finance, and also to see the growing interest in researching new trading strategies. I even got the chance to speak with a few undergrads at Columbia and Princeton over break, who confirmed that quantitative finance is the fastest-growing major at their schools.
With all that said, if anyone has some interesting new ideas, opportunities, or people to talk to, please let me know, and I look forward to another great year of research and learning!
