The odds of outrunning cancer are improving every day. Thanks to more effective screening methods, education campaigns, personalized medicine and new technologies like genetic testing and immunotherapy, medical science has made incredible progress in cancer care over the past few decades. According to the Canadian Cancer Society, mortality rates for all cancers combined peaked in 1988 and have been declining ever since. But that glimmer of good news has been overshadowed by the fact that cancer is still the leading cause of death in North America. Two out of every five Canadians will receive a cancer diagnosis in their lifetime; one in four will die as a result of the disease. And those statistics are likely only going to get worse. The biggest risk factor for cancer is age, and the Canadian population, as in many other developed countries, is getting undeniably greyer — we now have a greater proportion of seniors than at any other time in history. Accordingly, more of us will get cancer — 239,000 Canadians last year — a trend that’s expected to continue for at least the next decade.
Even more worrisome, it’s not just older Canadians. While still relatively uncommon, “early-onset” cancers — those diagnosed in patients who are between the ages of 18 and 49 — have also risen dramatically in recent years. Doctors and researchers still don’t know exactly what accounts for the shift, but speculation ranges from COVID-related treatment delays to the widespread use of “forever chemicals” to changes in our microbiomes. What is certain is that the fight against cancer is more urgent than ever.
Cancer is extraordinarily complex, however. For starters, it’s not just a single disease, but rather a group of more than 200 diseases. And, while all cancers are characterized by the uncontrolled proliferation of cells, each of these 200-odd diseases — from leukemia to prostate cancer — has its own unique traits and behaviours. Even cancers that originate in the same part of the body can have different molecular profiles. There are more than a dozen different kinds of breast cancer, for instance, and four main molecular subtypes; this differing genetic makeup can determine how fast a cancer will grow, whether it will recur, and how it will respond to treatment. And while the incidence of some cancers (such as lung cancer) is declining, others (cervical cancer, melanoma skin cancer) are heading in the opposite direction.
This complexity means that cancer research and treatment require enormous amounts of time and money. It typically takes, for example, more than 10 years and between U.S.$1 billion and U.S.$2 billion to bring a single new cancer drug to market. With our healthcare system under increasing strain (see the aforementioned aging population), medical professionals and governments are looking for new ways to cut costs and wait times without sacrificing quality of care. Technology has emerged to meet that need in various ways, with developments in virtual reality, blockchain and wearable devices all holding out the promise of potential solutions.
Artificial intelligence is arguably the most widely applicable, and profoundly transformative, of these technologies. According to a McKinsey report from February 2024, at full-scale deployment, AI could lower healthcare spending by about 4.5 to 8 percent a year. Machine learning (ML) is particularly well-suited to tackling cancer’s complexity; it’s the subset of AI that allows algorithms to process enormously complex, dynamic masses of data to find patterns, produce insights and improve the outcomes of specific tasks. When deployed thoughtfully and with precise goals, machine learning can offer exceptional benefits, most evidently in helping researchers analyze data better than either human experts or conventional computing systems can. “As you start to deploy this in the real world,” says Amol Deshpande, the senior director of health sciences at MaRS, “the big improvement will be in allowing cancer researchers to do what they have been doing much more efficiently and quicker than ever before.”
Indeed, AI is currently having an impact on cancer care at every stage: accelerating drug development and discovery, reducing regulatory hurdles, improving clinical trials and screenings, and decreasing wait times for treatments. Adoption of the technology in the healthcare sector has been slower than in other industries, and some of the tools and developments in these areas are still in the early stages or remain theoretical, but AI will undoubtedly soon supercharge the ways we’ve traditionally practised oncology.
Here, several researchers and startup founders share their thoughts on the field’s most exciting new innovations.
Photo credit: Linda Lifetech
Early cancer detection is critical: the earlier the disease is identified and intervention can begin, the better. At the moment, however, according to Deshpande, we’re “terrible” at it. Participation in timely routine screenings in Canada is generally considered too low, but even if patients are diligent, existing screening tools — colonoscopies, mammograms, ultrasound transducers — all have their shortcomings. The difference between cancerous and non-cancerous tissue, for example, can be so subtle as to be almost imperceptible; imaging can be inconclusive or overly open to interpretation. False positives and unnecessary biopsies are a regular risk.
Recognizing patterns and classifying images, however, is something AI is extremely good at. A number of researchers and startups have developed ML and deep-learning tools that can dramatically improve imaging analysis. Linda Lifetech, founded in Brazil and headquartered in Toronto, has created a mobile breast-screening tool that pairs thermography with algorithms trained on several thousand patient records. While Linda’s technology is not yet a substitute for a conventional mammogram, the hope is that these scans can be used for triage and to reduce waiting times. Other AI tools, such as Western University’s AI4Path software, are being used to improve the classification of polyps biopsied in colonoscopies, and Cancer-Net, an open-source initiative led by the University of Waterloo’s Alexander Wong, is using AI to, among other things, investigate and improve lesion identification in skin cancer.
Lincoln Stein, the head of adaptive oncology at the Ontario Institute for Cancer Research, is particularly enthusiastic about the potential for AI in ultrasound. It can be challenging, he says, for ultrasonographers to get precise images, whether they’re using a probe to place a biopsy needle or to guide the placement of radioactive seeds for radiation therapy. Bodies move and patients breathe, which means images can be blurry. Prostate biopsies are particularly difficult. But AI image analysis can create accurate 3D images of the biopsy needle, within a few millimetres. “It does two things,” Stein says. “It improves accuracy tremendously. There are far fewer misses in the biopsy, and it lets you do fewer of them — you’re getting into the tumour each time. And, the patient is on the table for much less time, there’s less risk of complication, pain and inconvenience. It’s a fantastic use of AI.”
Then there’s the fact that it could radically cut down the time needed to analyze these screenings. Ultrasonographers and other radiologists, like most healthcare workers, are overworked. Medical imaging makes up a staggering 90 percent of all healthcare data. Radiologists and other clinicians must sift through billions of images every year, with some radiologists estimating that their workload has, in recent years, increased tenfold (again, partly because of an aging population). Burnout is a growing problem. This past April, however, Bayer and Google Cloud announced a collaboration to develop AI-powered applications designed specifically to automate repetitive tasks and provide insights into massive data sets. Using such tools as Vertex AI, BigQuery, Healthcare API and Chronicle, Bayer’s platform aims to make it easier for companies to bring medical software products to market. (An initial version is expected to be ready for testing in the U.S. and Europe later this year.) If a virtual assistant can give radiologists a break, it’ll benefit us all.
Making cancer drugs is extremely difficult and laborious. It can take more than a decade and cost billions, and even then, the efficacy of a newly approved cancer drug can be minimal. While such drugs might halt tumour progression, they typically only extend patient survival by two or three months. At the same time, conventional anti-cancer therapies like chemotherapy are non-targeted, and can damage or kill healthy cells when eliminating cancer cells.
Integrating AI into drug discovery and development can help on both these fronts — it can speed up the search for effective novel compounds and proteins, and also enable researchers to develop new therapies that target specific cellular properties. “If you look today at the real live use case for AI, it’s in the drug discovery domain,” says MaRS’s Deshpande. “Whether it’s taking a thousand molecules and telling me which of those to move forward (which is known as discovery to hit) or which ones are most likely to be the right therapeutic for this particular target (known as hit to lead).”
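To make the “discovery to hit” idea concrete, here is a minimal Python sketch of that kind of computational triage: score every molecule in a candidate library with a predictive model and keep only the top-ranked hits for synthesis and lab testing. The predict_activity function and the toy library are hypothetical stand-ins, not any particular company’s method.

```python
# Minimal sketch of "discovery to hit" triage: rank a library of candidate
# molecules by a predicted activity score and keep only the top few for
# synthesis and lab testing. The predictor is a hypothetical stand-in for a
# trained ML model, not any specific platform's method.

from typing import Callable, Dict, List, Tuple

def rank_candidates(
    library: Dict[str, str],                   # molecule ID -> SMILES string
    predict_activity: Callable[[str], float],  # hypothetical trained model
    top_k: int = 10,
) -> List[Tuple[str, float]]:
    """Score every molecule in the library and return the top_k hits."""
    scored = [(mol_id, predict_activity(smiles)) for mol_id, smiles in library.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)  # higher score = more promising
    return scored[:top_k]

if __name__ == "__main__":
    # Toy stand-in model: in practice this would be a network trained on
    # assay data for the target of interest.
    def fake_model(smiles: str) -> float:
        return len(set(smiles)) / max(len(smiles), 1)

    toy_library = {"mol-001": "CCO", "mol-002": "c1ccccc1O", "mol-003": "CC(=O)Nc1ccc(O)cc1"}
    for mol_id, score in rank_candidates(toy_library, fake_model, top_k=2):
        print(f"{mol_id}: predicted activity {score:.2f}")
```

In practice, the model behind the scoring step is what the “hit to lead” phase then refines, narrowing thousands of plausible molecules down to the handful worth making.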
One of the biggest breakthroughs in AI drug discovery and development occurred in 2020, when the Google DeepMind research lab unveiled the second version of AlphaFold, a neural network–based model that went on to predict protein structures across the entire human proteome. Proteins are, of course, the building blocks of life; understanding their structure allows us to understand how they function (or don’t). Two years later, international researchers, led in part by the University of Toronto’s Acceleration Consortium, published a paper detailing how they were the first to successfully use AlphaFold to accelerate the design of a new drug for liver cancer. Identifying a new drug target, and a molecule that can bind to that target, can take years of conventional trial-and-error chemistry. But by combining AlphaFold with proprietary AI-assisted drug discovery platforms, including Chemistry42, a generative chemistry engine, researchers were able to discover a novel target and a hit molecule just 30 days after target selection, and after synthesizing only seven compounds. In a second round of compound generation, again using AI, they discovered an even more potent molecule. At the time, Stanford’s Michael Levitt, a Nobel Prize laureate and one of the researchers, mused: “It’s possible we’re on the cusp of a new era of AI-powered drug discovery.”
The authors of the paper also described their research as an “important first step.” That cautious optimism is shared by people like Aled Edwards, the CEO of the Structural Genomics Consortium. He points out that AlphaFold didn’t happen overnight — its transformation of structural biology rested on more than 50 years of well-funded research and experimentation, dating back to 1971, when the scientific community created the Protein Data Bank. Edwards expects that similarly revolutionary work in AI will take several more years. “It doesn’t take a rocket scientist to know that AI is going to have a big impact. It’s going to happen. The problem is: How are we going to get from where we are today to the point where it can have an impact? And that is the cool part.”
As Edwards notes, that process will require radically improving the amount and quality of the data collected for use in drug discovery. To develop ChatGPT, for example, AI researchers had the entire internet at their disposal; in chemistry, similarly vast amounts of data have never been gathered systematically or formatted consistently (AlphaFold was trained on just 100,000 protein structures that scientists had confirmed through decades of experimentation). In an effort to expedite the journey, in 2021, the SGC launched the CACHE Challenge, a competition to predict small molecules that bind to disease-associated protein targets. Results from the first challenge — finding hits for a protein associated with Parkinson’s — were released in January. “Our next five years is about recapitulating exactly what happened in the structural field,” Edwards says. “Hopefully not in 50 years, but in 10 or five.”
It is possible, however, to leverage AI in drug discovery without combing through reams of data. The key is to use physics-based methods to computationally simulate protein structures. ProteinQure, a Toronto biotechnology startup that emerged from the Creative Destruction Lab in 2017, is currently doing exactly that. Its Protein Studio platform essentially combines deep-learning architecture with atomistic simulation to model the 3D structure and predict the properties of novel peptides. (Peptides, which are effectively small proteins, can be more precisely targeted; Ozempic is the most famous peptide medication du jour.) As to how one uncovers potentially promising peptides, ProteinQure CEO Lucas Siow offers a helpful analogy: if drug discovery is like trying to find a key for a lock — the lock being a disease — then his company’s Protein Studio essentially assesses a bunch of different keys and simulates how they might work with that lock. The small volume of experimental data that the platform does have is used to guide that process and make predictions — does this key make the shape they believe it does? Does that key shape interact with the lock the way they want it to? “The core challenge is to take these computational simulations and use them to make predictions about all the different properties of a drug you might care about,” says Siow.
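To illustrate the lock-and-key process Siow describes, here is a minimal Python sketch under invented assumptions: generate candidate peptide “keys,” estimate how well each fits a target “lock” with a placeholder scoring function, and keep the strongest candidates for lab validation. The scoring function stands in for the atomistic simulations and learned models the passage describes; none of the names below are ProteinQure’s actual API.

```python
# Illustrative sketch of the lock-and-key idea: generate candidate peptide
# "keys", estimate how well each fits the target "lock" with a simulated
# binding score, and keep the strongest binders for lab validation. The
# scoring function is a hypothetical placeholder, not ProteinQure's platform.

import random
from typing import List, Tuple

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def random_peptide(length: int = 10) -> str:
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

def simulated_binding_score(peptide: str, target_pocket: str) -> float:
    """Hypothetical stand-in for an atomistic simulation or learned model:
    here we simply reward residues shared with the target's binding pocket."""
    return sum(1.0 for aa in peptide if aa in target_pocket) / len(peptide)

def screen_peptides(target_pocket: str, n_candidates: int = 1000, keep: int = 5) -> List[Tuple[str, float]]:
    candidates = [random_peptide() for _ in range(n_candidates)]
    scored = [(p, simulated_binding_score(p, target_pocket)) for p in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:keep]

if __name__ == "__main__":
    for peptide, score in screen_peptides(target_pocket="KRDES"):
        print(peptide, round(score, 2))
```

The small amount of real experimental data mentioned in the passage would, in this picture, be used to check and recalibrate the scoring function rather than to exhaustively test every key.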
ProteinQure is currently working on a number of potential therapeutics; some key leads target brain and kidney diseases, but the most promising candidate is a medication for triple-negative breast cancer, the deadliest form of the disease. The company is currently completing pre-clinical validation with the Princess Margaret Cancer Centre and will begin Phase 1 clinical trials in roughly a year and a half. “We have the drug and we have the evidence it works,” Siow says. “The next 18 months are the regulatory process. But it’s super exciting. We’re pretty energized by the opportunity.”
One major factor in the protracted regulatory process is that the industry doesn’t learn from its mistakes. At least that’s the view of Amin Osmani, the founder and CEO of Toronto startup Cedience. In 2018, Osmani, then a business development manager, was working with a team of clinical researchers who’d spent several million dollars developing a medical device. When they brought the device to the FDA for approval, they were rejected because of mistakes others had made in the past — ones these particular researchers happened to be unaware of. “It was completely avoidable,” Osmani says.
The problem was that those past mistakes were hidden in thickets of regulatory documentation. While no researcher publicizes their failures, and the highly competitive drug development field is understandably shrouded in secrecy, the regulatory bodies, be they Health Canada or the FDA, always release a significant amount of information whenever they approve a medication — they are required to explain the basis of their decision to the public. That means interested parties can access thousands of pages of documentation detailing the evaluation of each product, any lingering concerns on the part of regulators, how those concerns were resolved, and so on. Embedded within this information, therefore, are all the mistakes and failures that had to be overcome on the road to approval.
To be sure, the sheer volume of this information is overwhelming. Researchers looking for a specific answer aren’t just looking for a needle in a haystack; they have to look for that needle in all the haystacks. Enter AI and Cedience. When Osmani first conceived of the company in 2018, he was barely familiar with the latest AI developments, but once his team began dissecting, labelling and enriching the available data, the enormous benefits of training a large language model on it soon became obvious. “The vision is: how can we make decisions in this space predictable,” Osmani says. “How can we predict how the FDA will approve or not approve a product given a set of data points that have been obtained from trials and different studies?”
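The underlying task of surfacing the relevant precedent from thousands of pages of past regulatory decisions can be sketched very roughly in a few lines of Python. The toy example below uses TF-IDF similarity rather than a large language model, and the review snippets and query are invented; it shows only the retrieval step, not Cedience’s system.

```python
# Minimal sketch of surfacing relevant passages from past regulatory reviews.
# A production system like the one described would use a large language model
# over enriched, labelled data; this toy version uses TF-IDF similarity just
# to show the retrieval step. All documents and the query are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_reviews = [
    "Application rejected: primary endpoint not clinically meaningful for the studied population.",
    "Approval granted after sponsor provided additional long-term safety follow-up data.",
    "Device submission deficient: bench testing did not reflect intended conditions of use.",
]

query = "Why might a device submission be rejected over its bench testing?"

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(past_reviews)        # one row per past review
query_vec = vectorizer.transform([query])

scores = cosine_similarity(query_vec, doc_matrix).ravel()  # similarity to each review
best = scores.argmax()
print(f"Most relevant precedent (score {scores[best]:.2f}): {past_reviews[best]}")
```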
Cedience currently works with some of the largest biotechs — Roche Pharmaceuticals, Vertex Pharmaceuticals — as well as much smaller players. Osmani isn’t interested in making it easier for Big Pharma to leverage what is essentially publicly available knowledge; he wants to make it available to all researchers, no matter their size or resources. “Let’s liberate it,” he says of the data. “Let’s make it available.” The platform is already helping companies of all sizes. For instance, a major pharma company used it to reassess its development program by comparing it to related clinical studies that supported past FDA approvals; a mid-sized biotech firm used it to develop a strategy to reapply for FDA approval after its first attempt failed; another company relied on the platform to establish its target product profile based on the current standard of care. The ultimate goal, Osmani says, is to provide every biotech company with a dashboard that will allow it to gauge its likelihood of product approval. “We’ve seen a lot of companies relying on AI technology in the discovery and design of their molecules,” says Osmani. “The last missing piece, I believe, is the regulatory piece. How do we make sure what is coming through this pipeline makes the cut in terms of effectiveness and safety? We should be able to learn from each other and do things the right way the first time.”
The most expensive and time-consuming aspect of creating a new cancer drug is the clinical trial. Even then, of the drugs that make it to a phase one clinical trial, only 10 percent will eventually be approved. But there are two ways that AI can improve these odds: it can better match eligible patients to clinical trials, and it can make those trials more efficient.
In the past, clinical trials tended to focus on a specific genetic mutation in a tumour. A single genetic test would reveal if a patient had that mutation and was eligible for that trial. In the last decade, because of rapid developments in genome technologies, testing has broadened significantly. Now it’s not just one gene, it’s all 18,000 genes. And it’s not just one mutation; a gene can be altered in many different ways, and sometimes the same gene can be altered in multiple types of cancer. The number of trials, and the number of drugs being tested in these trials, has accordingly exploded.
But still, only a small portion of eligible oncology patients participate in clinical trials. Even at Toronto’s Princess Margaret Hospital, the country’s leading cancer treatment centre, less than 25 percent of patients participate in clinical trials. This is in large part because, as clinical testing has become broader and more complex, matching cancer patients — who are themselves highly complex, with long histories of tests, treatments and tumour changes — to those trials has become too resource-intensive and time-consuming. “It’s grown to the point that an oncologist can’t keep it all in their heads,” says Trevor Pugh, a molecular geneticist and researcher at the Ontario Institute for Cancer Research.
Pugh and his team, however, have built a software system so that oncologists don’t have to. Called PMATCH, it uses ML to structure clinical and genomic eligibility criteria, along with a patient’s specific medical data, and match that patient to the best clinical trials for which they are eligible. In a pilot project, they manually abstracted more than 80 clinical trials and matched those to the genomic records and clinical data from 3,000 patients. While other tools like this exist, Pugh says, what distinguishes PMATCH is its detail and comprehensiveness; it includes as many exclusionary criteria and edge cases as possible. Next steps include expanding the pilot to the B.C. Cancer Agency, and if it works there, according to Pugh, they will move on to the next stage, in which the system is fully automated. If all goes well, that should occur next year. “We want to make sure that the AI is giving results that mirror the manual process,” Pugh says. “The dream would be that it finds matches we didn’t contemplate — which is very likely.”
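A highly simplified Python sketch shows what matching looks like once eligibility criteria have been structured: each trial lists required mutations and basic clinical limits, and a patient’s record is checked against every trial. This is not PMATCH itself; the fields and example data below are invented for illustration.

```python
# Simplified illustration of trial matching once eligibility criteria have been
# structured: each trial lists required mutations and basic clinical limits, and
# a patient's record is checked against every trial. This is not PMATCH itself;
# the criteria fields and example data are invented for illustration.

from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Trial:
    trial_id: str
    required_mutations: Set[str]          # at least one must be present
    excluded_prior_treatments: Set[str] = field(default_factory=set)
    max_age: int = 120

@dataclass
class Patient:
    mutations: Set[str]
    prior_treatments: Set[str]
    age: int

def matching_trials(patient: Patient, trials: List[Trial]) -> List[str]:
    matches = []
    for trial in trials:
        has_mutation = bool(patient.mutations & trial.required_mutations)
        excluded = bool(patient.prior_treatments & trial.excluded_prior_treatments)
        if has_mutation and not excluded and patient.age <= trial.max_age:
            matches.append(trial.trial_id)
    return matches

if __name__ == "__main__":
    trials = [
        Trial("NCT-0001", {"KRAS G12C"}, excluded_prior_treatments={"sotorasib"}),
        Trial("NCT-0002", {"BRCA1", "BRCA2"}),
    ]
    patient = Patient(mutations={"KRAS G12C"}, prior_treatments=set(), age=58)
    print(matching_trials(patient, trials))   # -> ['NCT-0001']
```

The hard part, as Pugh notes, is not this final comparison but getting free-text trial protocols and messy patient histories into structured form in the first place, which is where the ML does its work.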
Meanwhile, the Toronto startup Altis Labs is trying to transform clinical trials through the use of digital twins (simply put, digital models of the human body). Through partnerships with, among others, Trillium Health Partners and Sunnybrook Hospital, the firm has gathered historical imaging data of all kinds — CT, ultrasound, MRI — and trained a neural network on this longitudinal data. The firm refers to this network as the “world’s largest imaging database,” and it can now provide unique insight into the still little-understood relationships between the many features in those images. What role do fat and muscle mass, for example, play in drug response? If measuring the size of a lesion has traditionally given clinicians information about a drug’s efficacy, how much more information can be gleaned from the location of, shape of, and tissue around those lesions? “These types of models provide a much more comprehensive and rich understanding of the true disease burden,” says founder and CEO Felix Baldauf-Lenschen. “It’s not just a tissue sample taken in a biopsy.”
The firm’s Nota system allows clients (which currently include Bayer and AstraZeneca) to cross-reference their own clinical trial data against this imaging data and better predict the potential efficacy of a novel therapeutic. “We’re significantly outperforming tumour-size measurement in terms of predicting survival, which is a really big deal — it’s why Bayer and AstraZeneca are incorporating this into their clinical trials,” says Baldauf-Lenschen. Having a better sense of efficacy earlier in the process means, ideally, that pharmaceutical companies can prioritize their most promising candidates and avoid pursuing expensive trials that go nowhere. It also means companies don’t need as large a trial, which could speed up development, reduce the late-stage failure rate and lower the overall cost of bringing a new drug to market.
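As a rough illustration of why richer imaging-derived features can outperform tumour size alone, the toy Python example below fits two simple models on synthetic data: one that sees only tumour size, and one that also sees invented body-composition and lesion-spread features. The data, features and model are made up for illustration and bear no relation to Altis Labs’ actual system.

```python
# Toy illustration of the claim: a model with richer imaging-derived features
# (body composition, lesion spread) can predict outcomes better than tumour
# size alone. The data here is synthetic and the features invented; this is
# not Altis Labs' model.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500

tumour_size = rng.normal(3.0, 1.0, n)     # cm, crude burden proxy
muscle_mass = rng.normal(0.0, 1.0, n)     # standardized body-composition score
lesion_spread = rng.poisson(2.0, n)       # number of distinct lesions

# Synthetic outcome: survival past 12 months depends on all three signals.
risk = 0.8 * tumour_size - 0.6 * muscle_mass + 0.5 * lesion_spread + rng.normal(0, 1, n)
survived = (risk < np.median(risk)).astype(int)

size_only = tumour_size.reshape(-1, 1)
rich_features = np.column_stack([tumour_size, muscle_mass, lesion_spread])

for name, X in [("tumour size only", size_only), ("richer imaging features", rich_features)]:
    auc = cross_val_score(LogisticRegression(max_iter=1000), X, survived, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.2f}")
```

On this synthetic data the richer model scores a visibly higher AUC, which is the same logic, in miniature, behind feeding whole-image features rather than a single lesion measurement into trial predictions.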
“We don’t see that automating human tasks for human understanding is going to yield the greatest potential,” says Baldauf-Lenschen. “Especially in medicine, where we actually have a very limited understanding of biological mechanisms. For us, the promise of AI is really going beyond our current understanding and helping us generate new understanding.”
Completely curing cancer might be forever out of our reach. But as the data we gather and analyze grows, and as AI improves and we’re able to deploy it more routinely in detection, we’ll be able to catch more cancers earlier and cure those. At the same time, AI will eventually also allow us to turn the cancers we catch late into manageable chronic conditions, like diabetes or heart disease. The new understanding that Baldauf-Lenschen speaks of will, in the near future, give many people a new lease on life.