A Dose of Optimism: Investing in Healthcare AI
Updated: Apr 26, 2022
The author (Dr Sarah Qian) is a specialist doctor in endocrinology and a current MBA candidate and Service and Society scholar at London Business School. She is interested in exploring AI solutions to improve healthcare systems and in helping translate start-up innovation into clinical impact.
In my last article, I explored the landscape of AI in healthcare, giving an overview of its broad applications across the value chain. In this article I want to focus on the investment landscape and considerations in risk management.
In researching this article, I spent time with Dr Ashish Patel (Managing Director at Numis) and Dr John Lee Allen (Managing Partner at RYSE Asset Management) who generously shared their time and experience as investors in this space.
Rapid Growth in Healthcare and AI
AI in healthcare is a fast-growing sub-sector in an expanding industry. According to various reports, the market was valued at $7–9B across 2020–2021 and is projected to grow at a CAGR of up to 48%, reaching $60–90B or more by 2030.
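As a back-of-the-envelope check, the growth rate implied by any two market-size estimates can be computed directly. A minimal sketch (the figures below are illustrative endpoints, not taken from a specific report):

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end_value / start_value) ** (1 / years) - 1

# Illustrative: a market growing from $8B to $64B over nine years
implied = cagr(8.0, 64.0, 9)
print(f"{implied:.0%}")  # roughly 26% per year
```

Different reports use different base years and endpoints, which is why headline CAGR figures for this market vary so widely.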
Investors have been quick to act. In the UK, investment in healthcare AI is overtaking AI investment in other industries. However, this likely reflects growth in larger deals, as it appears disproportionate to the increase in total deal count.
Globally, PitchBook data shows that ‘21 was a record year, with close to $17B raised in venture capital alone. We also see an increase in M&A activity, although much of this may reflect the recent SPAC-listing vogue, which could arguably be classed as IPO activity.
And this doesn't show signs of letting up. Although significant events, including the Russia-Ukraine war, persistent supply chain shocks and rising inflation, have tempered expectations this year, the first quarter of '22 raised $3.3B, matching investment over the same period last year.
AI ventures' share of the total healthcare funding pie rose from 8.2% of funds raised in 2018 to 11.4% in 2021. The increase is modest, likely dampened by the concurrent growth of the $10T healthcare market itself.
Which Way is the Exit?
At the other end, exits in healthcare AI are starting to ramp up. In ‘21 there were 83 exits globally, led mostly by deals in the US. Excluding distressed sales and liquidations, the median exit size was $52.9M.
Currently, of all AI exits over $1B since 2013, only one has been in healthcare (Flatiron Health, an oncology data company acquired by Roche in 2018). This makes sense: AI has had a head start in other tech industries, and healthcare ventures require a longer runway to market with more upfront investment and R&D.
Building on recent momentum, hopefully we will see this number rise.
That's Great, but What are the Risks?
In answering this question, we have to remember that healthcare is a broad sector comprising a range of industries. As a whole, the sector has defensive qualities that can add diversification to a general portfolio; however, it also carries inherent and unique risks. AI in healthcare, when used as a medical-grade device or service, is comparable to biotech in terms of risk.
Trying to quantify this, I wanted to know if early-stage ventures in this space were more likely to run into trouble. At a high level, we know that 90% of all start-ups and 75% of VC-backed start-ups fail. Diving deeper, most sources give similar failure rates across sectors, probably because many causes - poor product-market fit, lack of funds, founding team issues - are common to all start-ups.
More specifically, Nature Biotechnology recently published a review of university-based biotech start-ups in the US, which reported that 23% of incorporated companies achieved an IPO or acquisition. For many reasons, this group of companies is unlikely to generalise to wider life-sciences start-ups, but it is certainly food for thought; check it out here.
Nevertheless, the specific relevant risks to healthcare AI are as follows:
1. Highly regulated industry
2. Data privacy and responsibility
3. High burden of proof required for MVP
4. Front-loaded R&D costs and a high attrition rate
I discuss these considerations further under the following themes.
1. Understand the AI
Before delving in, it’s worthwhile taking a step back to consider the bigger picture and asking the following:
What is the solution that the problem needs?
Is AI this solution? Do you need a system to interpret data, adapting and learning over time, or will automation (or another solution) suffice?
How will the AI improve current best practice?
This a16z post is a great read with more examples.
Next comes understanding the AI itself and getting a grasp on the validity of the technology, thinking through:
Inputs: Data is key (more on this later)
Process: How is the platform trained?
Output and review: Is there visibility into the algorithm, and what oversight can be provided?
Expert opinion may be needed here to understand the technology and how it is differentiated. For early-stage companies, on the other hand, the focus may be more on the team's expertise, experience and network. Have the above factors been considered, and could the team overcome challenges in developing the technology?
2. Data, Data, Data
Sensitive data is critical to developing robust algorithms that are accurate with minimal bias, so it's imperative to understand the data's source, quality, protection and use.
Is the data anonymous? Or simply anonymised, and what is the risk of re-identification? Privacy is paramount given a data leak could significantly impact a person's insurance premiums, job prospects and relationships. Lack of clarity here is high risk and any breach could be devastating.
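To make the re-identification concern concrete, here is a minimal sketch (with hypothetical records and field names) of a k-anonymity check: the smallest group size when records are grouped by quasi-identifiers such as age band and postcode. A k of 1 means at least one individual is unique on those fields and is therefore at risk of re-identification even after names are stripped.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest group size when records are grouped by the quasi-identifier
    fields; a higher k means lower re-identification risk."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Hypothetical anonymised records: no names, but quasi-identifiers remain
records = [
    {"age_band": "40-49", "postcode": "NW1", "diagnosis": "T2DM"},
    {"age_band": "40-49", "postcode": "NW1", "diagnosis": "HTN"},
    {"age_band": "50-59", "postcode": "E1",  "diagnosis": "T2DM"},
]
print(k_anonymity(records, ["age_band", "postcode"]))  # 1: the E1 record is unique
```

Real de-identification audits are far more involved, but even this toy check shows how "anonymised" data can still single people out.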
In order to gather the data required, solutions are being sought beyond traditional de-identification. Options on the demand side include synthetic data platforms (e.g. MDClone), which can convert original data to a form that can be shared without privacy concerns, and federated learning (e.g. Owkin), which bypasses the need to move data, leaving it in the hands of users.
On the supply side, big questions remain around data ownership. Could blockchain be the answer, allowing individuals to control, and potentially monetise, their own healthcare data? The value of data to businesses and their IP is still being debated, but we are already seeing this model used in clinical trials and even commercially (e.g. Nebula Genomics). This is promising but, until digital identity is better defined, still far from widespread use.
3. Regulation and Market Strategy
Healthcare is a highly regulated space, with all medical-grade devices subject to rigorous clinical standards. Regulation is a minefield, not just because of the high burden of proof required for approval, but also because there is considerable heterogeneity between countries. Does the start-up demonstrate an understanding of the challenges that lie ahead, and does it have a strategy to tackle them?
In each country, regulatory hurdles can be considered across two layers.
Firstly, local regulatory approval. For example, the Medicines and Healthcare products Regulatory Agency (MHRA) in the UK and the Food and Drug Administration (FDA) in the USA. These government bodies are responsible for granting a licence to products with proven efficacy and safety following the appropriate clinical trials.
Secondly, payers need to be engaged. What this looks like depends very much on the healthcare system of the target market. In the UK, over 90% of healthcare is provided through the National Health Service (NHS). While this appears to be a consolidated system at first glance, it actually comprises four separate entities governed by each of the devolved nations (NHS England, NHS Scotland, etc.). Moreover, these bodies only determine treatment for specialised services, while all other services are commissioned through local Clinical Commissioning Groups (CCGs). Of course, all of these have individual sales cycles to navigate. Gaining individual approvals is laborious, and broad guidance from the National Institute for Health and Care Excellence (NICE) is a boost, but approval here can also be a lengthy process and is not guaranteed. An entirely different process exists in the US, where both private and public payers need to be engaged …
These factors can clearly have a significant impact on financial projections, and should inform the chosen market, commercialisation strategy and timeline.
4. The Long Game - Or Is It?
Given these processes and the high burden of proof required to make it in the healthcare industry, it seems likely that Healthcare AI will share the long timelines seen in traditional biotech.
The potentially disastrous consequences of getting it wrong mean forgoing the luxury of agile iteration, resulting in costly, front-loaded R&D. In short, it could be a while before revenue or profits are realised, and the life-stage of the company should be considered in the context of the wider investment strategy and portfolio.
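As a toy illustration of why front-loaded costs and delayed revenue matter to valuation, a simple discounted cash flow shows how heavily early outflows weigh against distant profits (all figures below are hypothetical):

```python
def npv(rate, cashflows):
    """Net present value: cashflows[0] occurs today, cashflows[t] at the
    end of year t, each discounted back at the given annual rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical healthcare AI venture, $M: heavy upfront R&D, revenue arriving late
front_loaded = [-30, -20, -10, 0, 0, 25, 40, 60]
print(round(npv(0.12, front_loaded), 1))  # ≈ 5.8, a modest positive NPV at 12%
```

Nominally the venture earns $65M over its outlays, but discounting shrinks that to a few million; pushing the revenue years further out, or raising the discount rate to reflect clinical and regulatory risk, quickly turns the figure negative.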
However, the exponential rate of evolution in AI is a wildcard that could disrupt this model. Developing technology can be used not only to directly improve health, but also to revolutionise supporting systems. For example, the ability to streamline data collection and trials could drastically reduce the time to clear regulatory hurdles.
Watch this Space
AI and healthcare together is a powerful combination for growth. And while there are risks and challenges to navigate as an investor in this space, the rapid pace of development is reason for optimism. Definitely an industry to keep a close eye on - exciting times ahead!