
While algorithms can easily pick better portfolios because they analyse data at scale, there’s always a trade-off between fairness and efficiency

The angels vs. algorithms debate has taken a new turn as analyst firm Gartner has come out with a report predicting that, by 2025, more than three-fourths of venture capital and early-stage investors will use Artificial Intelligence (AI) to make investment decisions, replacing the fabled gut feel of investors. Perhaps this will mean the end of pitch decks and complex financial charts.

Gartner is merely confirming research published in Harvard Business Review last year, which showed that an investment algorithm outperformed both novice investors (those with fewer than 10 investments) and experienced ones (with over 10 investments). The researchers built an investment algorithm and compared its performance with the returns of 255 angel investors. Using state-of-the-art machine learning techniques, the algorithm was trained to select the most promising investment opportunities among 623 deals from one of the largest European angel networks.

The algorithm’s decisions were based on the same data that was available to the angel investors at the time, including pitch material, social media profiles, websites, and so on. This data was used to predict a start-up’s survival prospects, rather than measures such as valuation which investors often favour, because it allowed the team to train the algorithm on a much larger and more reliable dataset.
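As a rough illustration of that approach, the sketch below trains a survival classifier on a handful of hypothetical deal features and ranks hold-out deals by predicted survival probability. The features, model choice, and synthetic data are illustrative assumptions; the HBR team's actual pipeline is not described in this article.

```python
# Minimal sketch: predict start-up survival from deal data, then rank deals.
# Feature names and synthetic labels are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_deals = 623  # number of deals mentioned in the study

# Hypothetical features extracted from pitch decks, websites, social media
X = np.column_stack([
    rng.integers(1, 6, n_deals),      # founding-team size
    rng.uniform(0, 15, n_deals),      # founders' prior experience (years)
    rng.uniform(0, 1, n_deals),       # social-media presence score
    rng.integers(0, 2, n_deals),      # has a working prototype (0/1)
])
y = rng.integers(0, 2, n_deals)       # label: start-up survived (1) or not (0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Train a classifier to predict survival rather than valuation
model = GradientBoostingClassifier().fit(X_train, y_train)
survival_prob = model.predict_proba(X_test)[:, 1]

print("Hold-out AUC:", round(roc_auc_score(y_test, survival_prob), 3))

# Rank the held-out deals by predicted survival probability, as a screen
ranking = np.argsort(-survival_prob)
print("Top 5 deals by predicted survival:", ranking[:5])
```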

According to the research, novice investors were easily outperformed by the algorithm: with their limited investment experience, they showed much stronger signs of cognitive bias in their decision-making. Experienced investors, however, fared far better. As such, the research shows how biases shape the decisions of human investors, and how working with algorithms might help produce better and fairer investment returns.

The research identified five biases: 1) local bias, angel investors’ tendency to make investments in close geographic proximity to themselves; 2) loss aversion, their tendency to be more sensitive to potential losses than to potential gains; 3) overconfidence, when investors “overcommitted” and spent significantly more money on one start-up than they usually would; 4) gender bias; and 5) racial bias. The data shows that all five biases were present among the angel investors, with overconfidence (which 91% fell prey to at least once) being the most frequent and strongest bias affecting investment returns.
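To make the first and third of these biases concrete, here is a minimal sketch of how they might be flagged in an investment log. The 50 km "local" cut-off and the "three times the usual ticket size" rule for overconfidence are purely illustrative assumptions, not thresholds from the study.

```python
# Sketch: flag local bias and overconfidence in a toy investment log.
import pandas as pd

investments = pd.DataFrame({
    "investor_id": [1, 1, 1, 2, 2, 2],
    "distance_km": [12, 8, 450, 30, 5, 22],        # investor-to-start-up distance
    "amount_eur":  [10_000, 12_000, 9_000, 5_000, 6_000, 60_000],
})

# Local bias: share of an investor's deals made within 50 km (assumed cut-off)
local_share = (
    investments.assign(is_local=investments["distance_km"] < 50)
    .groupby("investor_id")["is_local"].mean()
)

# Overconfidence proxy: a single ticket far above the investor's usual size
typical = investments.groupby("investor_id")["amount_eur"].transform("median")
investments["overcommitted"] = investments["amount_eur"] > 3 * typical

print(local_share)
print(investments[["investor_id", "amount_eur", "overcommitted"]])
```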

Even the AI model is not always free from these biases, as it is often influenced by the subconscious preferences of the data scientists building it. HBR therefore recommended a hybrid model in which human intelligence is paired with AI recommendations to arrive at the best decisions: investors should take a “hybrid approach” to AI-informed decision-making, with humans in the loop. While it is true that algorithms can have an easier time picking better portfolios because they analyse data at scale, potentially avoiding bad investments, there is always a trade-off between fairness and efficiency.
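A human-in-the-loop workflow can be as simple as letting the model screen and the investor decide. The sketch below shows one illustrative way to structure that hand-off; the function names and the 0.6 threshold are assumptions for the example, not the HBR authors' design.

```python
# Sketch of a hybrid decision flow: model screens, human makes the final call.
from dataclasses import dataclass

@dataclass
class Deal:
    name: str
    predicted_survival: float  # score from the screening model

def algorithm_recommends(deal: Deal, threshold: float = 0.6) -> bool:
    """Pure model output: flag deals above a survival-probability threshold."""
    return deal.predicted_survival >= threshold

def human_review(deal: Deal) -> bool:
    """Placeholder for the investor's own judgement (team, market, terms)."""
    # In practice this is an interactive step; here we simply accept the flag.
    return True

def hybrid_decision(deal: Deal) -> str:
    if not algorithm_recommends(deal):
        return "screened out by model"
    return "invest" if human_review(deal) else "rejected by investor"

for deal in [Deal("Acme Robotics", 0.72), Deal("FooPay", 0.41)]:
    print(deal.name, "->", hybrid_decision(deal))
```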

Recent studies have shown that machine learning can help with baseline screening for early-stage investors looking at potential investments that have no relevant quantitative data or track record. Experiments in realistic settings demonstrated that a multi-class machine learning classifier can help increase an investor’s success rate. From the perspective of a venture capitalist, a multi-class approach is much more useful than a binary one, which only labels deals “good” or “bad”. The more nuanced information from the multi-class approach can be used to set up a detailed portfolio strategy.
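The sketch below illustrates what such a multi-class screen could look like, using a toy three-class outcome label (likely failure, modest exit, strong exit) on synthetic data. The classes, features, and probability cut-off are illustrative assumptions, not those of the cited experiments.

```python
# Sketch: multi-class screening of deals, mapped to portfolio buckets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(7)
n = 500
X = rng.normal(size=(n, 6))                      # stand-in deal features
# 0 = likely failure, 1 = modest exit, 2 = strong exit (assumed classes)
y = rng.choice([0, 1, 2], size=n, p=[0.6, 0.3, 0.1])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test), zero_division=0))

# Class probabilities give more nuance than a binary screen: e.g. shortlist
# deals with a meaningful chance of a strong exit for a high-upside bucket.
proba = clf.predict_proba(X_test)
candidates = np.where(proba[:, 2] > 0.2)[0]
print("Deals shortlisted for the high-upside bucket:", candidates[:10])
```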

“Managers and investors should consider that algorithms produce predictions about potential future outcomes rather than decisions. Depending on how predictions are intended to be used, they are based on human judgement that may (or may not) result in improved decision-making and action,” HBR wrote in its analysis. “In complex and uncertain decision environments, the central question is, thus, not whether human decision-making should be replaced, but rather how it should be augmented by combining the strengths of human and artificial intelligence.”
