Loud Silence: How Doing Nothing Can Return the Most

A study published earlier this year by Professor Hendrik Bessembinder highlights the extreme disparity in net gains in the stock market: “When stated in terms of lifetime dollar wealth creation, the best-performing four percent of listed companies explain the net gain for the entire U.S. stock market since 1926, as other stocks collectively matched Treasury bills.” In other words, over the past ~92 years, the top-returning four percent of publicly traded companies account for ALL (literally 100%) of the net gains for the ENTIRE stock market. To add insult to injury, the other 96% of companies collectively performed about as well as U.S. Treasury bills. Treasury bills, for those who aren’t familiar, are widely regarded as among the safest investments possible (least risk also means least reward), and return only around 4% a year in the best cases.
Talk about living in Extremistan…
What surprises me most about this study is that it is not simply a snapshot of the past 10-15 years, the period which I (and many others, I’m sure) consider the most volatile and conducive to income inequality; this hyper-concentration of wealth has shaped the market for nearly a century. Not to jump to unfounded conclusions, but if this has held for the past ~92 years, it is reasonable to assume it will keep holding for the next 5, 10, or 20 years as well.
I mention this because the investing world (and the world in general) is becoming more automated, more laden with competition, and more glutted with information. It is easy to be intimidated by the surplus of data and potential opportunities, and the portfolio managers and investment bankers who look ‘the busiest’ give the impression that they are the only ones who can truly grasp all that is going on. But, as with the stock data I’m training my neural networks on, most of it is distracting, meaningless noise. Holding the most diverse portfolio possible, or hundreds of stocks across different sectors, or reshuffling your positions daily does not imply a superior investing strategy. You could invest in a thousand stocks that all sit in the bottom 96% and merely break even, while your coworker holds a single stock that happens to be part of the top 4% and makes a killing over the coming years.
One quote that really sums this up comes from Warren Buffett, who said: “I could improve your ultimate financial welfare by giving you a ticket with only twenty slots in it so that you had twenty punches – representing all the investments that you got to make in a lifetime. And once you’d punched through the card, you couldn’t make any more investments at all.”
Sometimes, it is best to stay vigilant but uninvolved until you home in on a single big investment opportunity. I picture this as a day on a fishing boat: you can use up all of your bait at once, throwing lines in just to throw lines in, or you can sit with your single line until a big fish bites.
Now that I think about the algorithm I am building, this is definitely a lesson worth keeping in mind. My Word2Vec pattern mapping can’t possibly find similar stock events in all of the real-time data all of the time; if my code were matching events 24/7, that would mean my code is broken. Instead, it is best to run the event-matching code continuously, but to only consider investing once two events match past a certain threshold of similarity (for now, let’s call this the Accuracy Threshold). If two events match with 100% similarity (extremely unlikely, but useful as an example), there is real potential that some past pattern is about to repeat. Once a match crosses the Accuracy Threshold, we then need to look at what happened after the data_embedding we matched with. In other words, if our real-time data maps closely to a data_embedding, and that data_embedding was followed by a price increase of 12%, then it is definitely worth considering. The tricky part is factoring in percent similarity (which should translate to the percent likelihood that our real-time data will follow the same outcome as the data_embedding it maps to) together with the expected gain. Very high similarity with a high expected gain is the best-case scenario, while low similarity with a high expected gain is far more ambiguous.
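To make this concrete, here is a minimal sketch of the gating step, assuming past events are already stored as embedding vectors alongside the price move that followed each one. The names HistoricalEvent and match_event, and the 0.9 threshold value, are hypothetical placeholders for illustration, not the actual code from my pipeline:

```python
import numpy as np
from dataclasses import dataclass

# Hypothetical container: one historical event embedding plus the
# price move that followed it (e.g. 0.12 for a 12% increase).
@dataclass
class HistoricalEvent:
    embedding: np.ndarray
    subsequent_return: float

ACCURACY_THRESHOLD = 0.9  # placeholder value; would need tuning on backtests

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_event(live_embedding: np.ndarray, history: list[HistoricalEvent]):
    """Find the most similar past event; return (event, similarity)
    only if the match clears the Accuracy Threshold, else None."""
    best = max(history, key=lambda ev: cosine_similarity(live_embedding, ev.embedding))
    similarity = cosine_similarity(live_embedding, best.embedding)
    if similarity < ACCURACY_THRESHOLD:
        return None  # stay uninvolved: no big fish biting yet
    return best, similarity
```

The key design choice here is that the matcher runs all the time but almost always returns None, which is exactly the “loud silence” the rest of this post argues for.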
So, my question for today is: how should I factor both of these percentages into my algorithm to decide whether or not to go for the investment? Because, after all, I need to be able to identify the big fish when they are biting.
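The most naive combination I can think of, just as a starting point, is to treat percent similarity as a stand-in probability and score each match by its expected value, only investing when that score clears a second bar. A rough sketch, where both thresholds are hypothetical and the similarity-as-probability reading is itself a big assumption:

```python
ACCURACY_THRESHOLD = 0.9   # hypothetical gate on the match itself
MIN_EXPECTED_VALUE = 0.05  # hypothetical: demand at least a 5% expected gain

def should_invest(similarity: float, expected_gain: float) -> bool:
    """Naive rule: treat similarity as a stand-in probability that the
    historical outcome repeats, and score the match by expected value."""
    if similarity < ACCURACY_THRESHOLD:
        return False  # the match itself isn't trustworthy enough
    return similarity * expected_gain >= MIN_EXPECTED_VALUE

# Example: a 95%-similar match whose historical follow-up was a 12% gain
# scores 0.95 * 0.12 = 0.114, which clears the 5% bar.
print(should_invest(0.95, 0.12))  # True
```

Whether a simple product is the right way to weigh the two percentages is exactly the open question.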
 
