Canada is a bit of a prodigy when it comes to artificial intelligence. Geoffrey Hinton and Yoshua Bengio’s game-changing research on artificial neural networks in the early 2000s helped establish the field, and in 2017, Canada became the first nation to launch a national AI strategy. The country now has one of the world’s most robust AI ecosystems: Canada tops the global list in terms of expanding its talent pool over the past half-decade. It’s also a leader among G7 countries when it comes to funding and innovation.
Even so, the country has been slow to adopt AI technology on a broader scale. News stories about deepfakes and nefarious applications of ChatGPT have made many Canadians wary; in one recent study, 91 percent of respondents expressed fear that the technology could manipulate public opinion. This is one reason why public participation in the development and adoption of large language models and other forms of AI is so crucial. Not only is it important to ensure the public has accurate, adequate information, but, as Elena Yunusov points out, technology needs to account for “humans in the stack” — that is, factor in the people behind the data. Yunusov heads up the Human Feedback Foundation, a non-profit dedicated to bringing more transparency to AI. “If we are integrating this technology into fundamentally human systems like finance, like healthcare, like education, you’re shooting yourself in the foot if you’re not involving humans.”
The good news? There are plenty of developments that show the real benefits of taking a human-centred approach to AI.
With so many headlines about LLMs going rogue (and potentially veering into nuclear territory), it’s no wonder most people are calling for guardrails. According to a recent report, more than three-quarters of Canadians believe AI should be subject to government regulation. While broader societal policies are still in flux, a number of solutions have emerged within the private sector. Toronto-based Armilla AI, for instance, works with companies to assess their AI tools to head off crises (such as sensitive info leaks or easily manipulated chatbots) at the pass. CEO Karthik Ramakrishnan recognized the need for such insurance after seeing how easily data could be mined. “If we want to unlock wider adoption of artificial intelligence and get its benefits, then we need to minimize its risks and build trust in these models,” he says. “We do that by taking an engineering-based approach and stress-testing systems in every situation and format possible.”
Meanwhile, Radical Ventures assesses AI ethics from a different POV. The Toronto-based firm helps investors evaluate startups to make responsible decisions. As senior director Leah Morris explains, the company’s framework is rooted in principles from the non-profit sector, where organizations must determine whether a project’s net benefits outweigh possible detrimental effects. “We identify potential risks, the likelihood of them occurring and the impact if they do occur,” she says. While these guidelines are tailored toward VCs, whose influence can be crucial in helping founders develop products at an early stage, Morris says they’re relevant for anyone engaging with AI tools. “You can get as superficial or as deep as you’d like with those three questions: What’s the benefit to me? What are the risks to me? Do benefits outweigh the risks?”
For more on AI and venture capital, check out the Impact AI panel on Feb. 22 at 11:35 a.m.
Although the prospect of robots that behave and “think” like we do may seem alarming to anyone familiar with, say, the Terminator films, those qualities are precisely what Sanctuary.ai is working to harness. The Vancouver company’s robotic assistants are meant to work in tandem with people to perform essential tasks with humanlike precision — tackling labour shortages and bolstering workplace safety by ensuring no real human is placed in harm’s way. The key is ensuring that this technology is developed and maintained with careful oversight.
Combining AI and robotics can also be a tremendous boon for those with disabilities. The University Health Network is exploring many different AI applications, including various rehab tools and, through its KITE program, the possibility of integrating bionic limbs and smart glasses to help people with visual and physical impairments navigate space.
For more about robotics and AI, check out the Impact AI fireside chat on Feb. 22 at 4:55 p.m.
It may seem unlikely, but AI can play an integral role in bolstering biodiversity. Montreal’s Whale Seeker uses non-invasive technology to accurately monitor marine species, allowing commercial vessels to navigate waterways without harming aquatic life, as well as providing important data about protected species and the impacts environmental changes are having on them. Meanwhile, on land, Montreal-based startup Nectar promises to “give bees a voice,” tackling the looming problem of apian extinction by using sensors, a solar-powered wireless network and data-based prediction to monitor hives and gather data for optimal beekeeping results.
The next green challenge: AI gobbles up a tremendous amount of energy, which has significant ramifications for the climate and overall health of the planet. How can we work to build a more sustainable infrastructure? Organizations like Untether AI are working to streamline AI operations so that companies can operate their neural networks with a (literally) cooler and more efficient system.
For more on AI and sustainability, check out the Impact AI panel on Feb. 22 at 2:35 p.m.
Canadian healthcare needs a wellness check. Across the country, providers are struggling to do more with less — and the lack of resources has had a profound impact on those who need care. The desire to relieve burned-out clinicians of non-essential tasks prompted Mahshid Yassaei and her colleagues to establish Tali.AI in 2020. As she describes it, the company’s “ambient scribe” — which records and transcribes consultations — is like “having a magic notebook that automatically writes down everything important without anyone having to hold a pen.” The tool allows doctors to be fully present — which means patients feel more seen and heard — and frees up time for actual diagnostic care. In line with Canada’s robust regulations, the platform has been developed with a keen focus on safeguarding patient privacy: Audio and other identifying details from consultations are not stored, and the company is committed to minimal data collection. And Yassaei notes that it has many additional applications, including predictive analytics, diagnostic support and remote monitoring.
AI can also fill other gaps in the healthcare system. In 2023, Kids Help Phone announced it was incorporating tech — developed in partnership with the Vector Institute — to help address the overwhelming demand for its services. Similarly, after receiving an influx of queries, the founder of the trans-inclusive clothing line Rubies developed a chatbot to help support transgender youth and their parents. (Gender-affirming care is especially difficult to find, and for trans youth, it can be the difference between life and death.)
For more on AI in healthcare, check out the Impact AI panel on Feb. 22 at 4:30 p.m.
As Elena Yunusov notes, one of the biggest challenges in ensuring technology is developed for positive impact is that people often trust a computer rather than their own gut. “There’s machine bias in the sense that humans prefer machines over humans,” she says, “and in AI, that’s no longer a viable paradigm.” Working with “the human in the stack” will require resources and creative problem-solving, but “the outcome will be technology that is usable and safer and, hopefully, good for society.”
If you’re interested in learning more about this technology and its applications, check out the MaRS Impact AI conference on February 22.
Photo Illustration: Stephen Gregory; Photos: Shutterstock and Unsplash