In Canada, financial institutions and the banking sector more broadly are known for being risk averse — companies working in this sector are typically reluctant to rush into adopting novel or emerging technologies. A boon during times of global economic crises, this quality has earned the country the reputation of being the safest G7 nation when it comes to banking stability.
While this approach may have prevailed in the past, Canadian financial institutions are now moving to embrace new digital tools driven by artificial intelligence (AI) and Large Language Models (LLMs). This shift is part of maintaining a competitive edge, as these technologies can help banks and other financial companies create new opportunities and improve on their core business.
Artificial Intelligence (AI): Technology that gives computers the ability to calculate solutions, acquire knowledge and perform other functions that involve human-like reasoning.
Generative AI (Gen AI): An application of AI whereby machines draw on accumulated information to produce content (such as images and responses to questions) using algorithms.
Large Language Models (LLMs): A form of Gen AI that uses massive data sets to “learn,” interpret inputs, predict outcomes and generate text-based outputs.
Open- and closed-source software: Open-source refers to programs whose code is publicly available to view, use and modify. Closed-source refers to programs whose code remains proprietary to the creator(s) and is not available to the public.
Open banking: Secure systems that allow customers to share their financial data with fintech companies, which can use that data to provide personalized products and information.
Although AI and LLMs are trending across industries, this technology is not new — even within the financial sector. Banks have used chatbots since at least 2017 to solve problems, provide instant answers to common queries and direct customers to appropriate pages, products or solutions. Delegating basic administrative tasks to digital assistants allows employees to focus on more complicated or sophisticated interactions.
Earlier iterations of these “dumb bots” were constrained in their ability to engage with users, drawing on a limited pool of programmed responses to frequently asked questions. Results were mixed. Customers didn’t and still don’t quite trust chatbots, finding them frustrating at best and misleading at worst. However, when leveraged correctly, AI — specifically generative AI and LLMs — can improve internal processes, enhance customer service, develop and manage net new financial products, and optimize and expand current product offerings.
Rob Baldassare, a senior fintech advisor at MaRS, urges institutions to be strategic as they move to implement this technology. “It’s a bit like the dot-coms of 1999,” he says, likening the rush to incorporate AI into operations by any means possible to the premillennial tech boom when funders were frantic to support any e-commerce prospect they could find, even if the idea wasn’t backed by a solid business plan.
While generative AI is often seen as having the potential to improve user experiences, Baldassare says it can serve to separate companies from their customers. “Now we don’t need to have a call centre or customer service operation,” he notes. Instead, clients are directed to a corporate website, where a chatbot is standing by to receive their feedback — which, for anyone still nursing the sting of dumb-bot frustration, rarely results in client satisfaction. Moreover, this tech does not (yet) have a human-calibre ability to recognize and course-correct when it is providing inaccurate or misleading information — and the recent judgment against Air Canada over its chatbot’s incorrect advice to travellers suggests that corporations may be liable for the shortcomings of their tools.
While Baldassare has reservations, he concedes that chatbots will improve over time. Moreover, he says, AI is a powerful technology that has tangible benefits and can provide solutions to real problems that are otherwise very difficult to solve. An example of one such opportunity is open banking, where financial institutions give external service providers secure access to customer data through third-party applications. AI can quickly analyze large data sets and find the best services and products for clients.
To further understand how strategically implementing these new technologies can improve the bottom line or change the core business, it can be helpful to assess the landscape as a whole.
As with any new technology, the speed of development and adoption often outpaces government regulations and guidelines. There is currently no definitive framework in Canada, but the federal government is working to establish one. Part of Bill C-27, which passed its second reading in the House of Commons last year and is under review, the Artificial Intelligence and Data Act (AIDA) outlines some basic tenets to guide the “responsible design, development and deployment of AI systems that impact the lives of Canadians.” The proposed legislation is expected to move to a third reading in the House of Commons and begin the Senate review process in 2024. Under AIDA, any AI system deployed in Canada would have to be safe for users and must not discriminate against anyone, especially marginalized groups. The act would also compel businesses to be accountable for how they develop and use AI technology.
Further regulatory changes may also be imminent. The Office of the Superintendent of Financial Institutions (OSFI) recently completed its 2023 Questionnaire on Artificial Intelligence / Machine Learning and Quantum Computing and conducted a public consultation on Guideline E-23: Model Risk Management. Put simply, Guideline E-23 sets out expectations for managing the risks of models used in economic forecasting, pricing products and services, estimating potential financial losses and optimizing business strategies. It has been revised to cover risk-assessment models used in non-financial areas such as climate, digital innovation and technology.
Peter Carrescia, the co-founder of Confirm, a platform that allows users to create a verified digital ID to bolster the security of online transactions, is enthusiastic about the potential for AI and LLMs to remove friction from the banking process. An early investor in the accounting software company Wave, which now offers a range of online financial services for small businesses, Carrescia recognizes that proactively incorporating specialized tools — to remove in-house bottlenecks and to help customers, for instance — can accelerate the adoption of new technologies. Take Know Your Client (KYC) procedures: Financial institutions must ensure their clients are who they claim to be and aren’t misusing the institution’s products. This mandatory process identifies and verifies a client’s identity (using government-issued documents and credit information, and/or tools such as Confirm) when an account is opened; that data is then revisited periodically.
As Carrescia explains, while digital banks are now equipped to complete the full onboarding process online, many traditional banks still require new clients to present their physical ID at a bricks-and-mortar branch before opening an account. In some cases, clients complete the first step but never make it to the branch, and the account is never opened. This is where tools such as Confirm’s credentials come in handy. Once a user sets up their portable ID (which involves submitting a photo and government-certified documentation to a secure verification service), they can easily use it across multiple platforms and devices without having to create a new username and password each time. “It speeds up onboarding, reduces fraud and makes it easier for institutions to secure the customer they want,” Carrescia explains.
At Wealthsimple, which provides online investment, trading and tax platforms to consumers, before any AI or LLM-related product is deployed, employees have an opportunity to test it out. Sam Talasila, the company’s senior data science manager of large language models and data products, says this approach generates recommendations for best practices and helps staff understand how these tools can be incorporated into their daily workflow. As he puts it, the greatest ideas don’t come from the data team, but from the people who would be using these technologies daily.
In the summer of 2023, he adds, the Wealthsimple team built and open-sourced what he describes as a “large language model gateway.” The concept behind this project was, effectively, to enhance the security and reliability of LLM applications. “We’ve had thousands of individuals — internally and a few externally — download and contribute to it,” he says. The aim was to ensure that staff could efficiently and safely access large language models.
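Wealthsimple has open-sourced its gateway, but the sketch below is not that codebase. It is a minimal, hypothetical illustration of the general pattern Talasila describes: a single, audited entry point that screens prompts for personal information before they leave the company and logs every call for review. The redaction rules, the model name and the call_model stub are assumptions made purely for illustration.

import re
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm_gateway")

# Illustrative patterns only; a production gateway would use far more robust PII detection.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "sin": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything that looks like PII with a placeholder before the prompt leaves the company."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

def call_model(prompt: str, model: str) -> str:
    """Stand-in for the real call to an external LLM provider."""
    return f"(response from {model})"

def gateway(user: str, prompt: str, model: str = "general-purpose-llm") -> str:
    """Single, audited entry point through which staff access large language models."""
    safe_prompt = redact(prompt)
    log.info("user=%s model=%s time=%s", user, model, datetime.now(timezone.utc).isoformat())
    return call_model(safe_prompt, model)

if __name__ == "__main__":
    print(gateway("analyst42", "Summarize the issue raised by jane.doe@example.com about card 4111 1111 1111 1111"))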
As more members of the team embrace new technologies, Wealthsimple is able to gain a better understanding of how users engage with these tools. Internal feedback is leveraged to develop better processes for operations or customer support.
Externally, one way the company draws on AI is by leveraging the capacity of LLMs to glean information from massive data sets to streamline the onboarding process. Because these models can deftly make connections (such as, say, discerning a certain contact within a specific department of a multinational bank based on just an account number), a request that would previously have taken weeks to process can now be completed in as few as five days. Talasila says it benefits the company to make these transactions as seamless as possible: “We categorize the request and route it to the team with the expertise to solve it.” By increasing client satisfaction, Wealthsimple effectively transforms customers into brand ambassadors.
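Talasila doesn’t spell out the implementation, but the pattern he describes, in which a model assigns an incoming request to a known category and the request is then handed to the matching queue, can be sketched roughly as follows. The categories, routing table and keyword fallback are illustrative assumptions, not Wealthsimple’s actual system; in production the classification step would typically be an LLM call constrained to the approved categories.

from dataclasses import dataclass

# Hypothetical categories and owning teams, for illustration only.
ROUTING_TABLE = {
    "account_transfer": "transfers-team",
    "tax_document": "tax-team",
    "identity_verification": "onboarding-team",
    "other": "general-support",
}

@dataclass
class Ticket:
    text: str
    category: str = "other"
    queue: str = "general-support"

def classify_request(text: str) -> str:
    """In production this would be an LLM call restricted to the known categories;
    here a keyword fallback keeps the sketch self-contained and runnable."""
    lowered = text.lower()
    if "transfer" in lowered or "institution number" in lowered:
        return "account_transfer"
    if "t4" in lowered or "tax" in lowered:
        return "tax_document"
    if "verify" in lowered or "identity" in lowered:
        return "identity_verification"
    return "other"

def route(text: str) -> Ticket:
    """Categorize the request, then hand it to the team with the expertise to solve it."""
    category = classify_request(text)
    return Ticket(text=text, category=category, queue=ROUTING_TABLE[category])

if __name__ == "__main__":
    ticket = route("I want to transfer my RRSP from another bank.")
    print(ticket.category, "->", ticket.queue)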
Workshopping new tools internally is also an element of Mastercard’s approach. Darrell MacMullin, the company’s senior vice president of products and solutions in Canada, says team members strive to fully assess and bug-check new tech before it can be deployed externally — or, as he puts it: “Eat your own cooking first.” Most business leaders, he adds, would likely find “a ton” of processes within their own organizations that could be optimized with generative AI.
By MacMullin’s count, Mastercard has at least 687 different products — and each of those products is accompanied by hundreds, even thousands of pages of documentation. “Each one has scenarios A, B and C, based on implementations A, B or C, which requires a lot of hand-holding and a lot of architects to deliver the solutions,” he explains. By adding a generative AI layer, users with specific questions no longer have to flip through pages and pages of text to find answers. “It has all of the data; it has all of the documentation; and it can start piecing things together, so the conversation between implementation engineers, developers and product people becomes a generative AI experience versus ‘here’s an API and a thousand-page implementation guide.’”
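What MacMullin describes amounts to retrieval over a large documentation corpus, with a generative model composing an answer from the passages it is handed. A bare-bones sketch of that pattern follows; the sample documents, the crude word-overlap retriever (standing in for an embedding search) and the generate_answer stub are assumptions for illustration, not Mastercard’s implementation.

# Minimal retrieval-augmented Q&A sketch over product documentation.
DOCS = [
    {"product": "Product A", "text": "Implementation scenario A requires endpoint setup and sandbox keys."},
    {"product": "Product B", "text": "Scenario B covers tokenized payments and settlement timing."},
    {"product": "Product C", "text": "Scenario C describes dispute handling and chargeback codes."},
]

def retrieve(question: str, docs: list[dict], k: int = 2) -> list[dict]:
    """Rank documentation chunks by crude word overlap; a real system would use embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d["text"].lower().split())), reverse=True)
    return scored[:k]

def generate_answer(question: str, context: list[dict]) -> str:
    """Stand-in for the LLM call that composes an answer from the retrieved passages."""
    sources = ", ".join(d["product"] for d in context)
    return f"Answer to '{question}' drawing on: {sources}"

if __name__ == "__main__":
    question = "How is settlement timing handled for tokenized payments?"
    print(generate_answer(question, retrieve(question, DOCS)))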
AI and LLMs can be used to drive other products and processes.
Open banking gives third-party providers access to, and some control over, financial and personal data, which AI tools can then analyze on a customer’s behalf. When setting up an account at a financial institution, customers may provide consent to share their information. While open banking is still in the early stages of deployment across Canada, and much remains to be determined in terms of privacy, data rights, connectivity and formatting, it has been implemented with success in other parts of the world. For example, when Bank of Scotland clients log in to the bank’s mobile app, they’re given the option to add their accounts from different institutions, which they can then integrate and manage in a single feed.
By strategically analyzing customer data, AI holds great potential in a commerce context. Take, for instance, a scenario in which generative AI is used to plan a trip. There are obvious applications, such as route optimization and creating an itinerary. But as Mastercard’s MacMullin notes, the tool is especially powerful when one moves from basic information to commerce augmentation: Gen AI can capture and look at a user’s loyalty information, preferences and family situation, and find offers and deals that are tailored to those parameters.
Talasila says Wealthsimple is leveraging these tools to boost internal productivity and help provide customers with a more personalized experience. “We’re a lot more in touch with who the client is and what their past interactions have been,” he says. Using AI and LLMs, institutions can assess profile data to determine the most effective touchpoints; this informs which new products clients may be offered.
This technology can also be used to address clients’ desire for solid financial advice. Gen Z and other demographics are already using AI to help them achieve their financial goals, alongside guidance from friends, family and authoritative resources (such as those provided by employers). A recent Charles Schwab survey (of 1,000-plus U.S. residents aged 21 to 70) found that Gen Z respondents plan to retire by age 61, and 75 percent of the 21- to 26-year-olds polled expressed comfort with the notion of AI-assisted financial planning. For example, they would consider asking ChatGPT how to tackle debt, manage current expenses and catch up on retirement goals.
And it’s not exclusively younger customers who are open to new tools. Although uptake is still low, 49 percent of respondents across all age groups would consider incorporating AI into their financial planning processes, such as creating budgets based on inputs. It is important to note that human input is still an important piece of the puzzle: When it comes to investment advice, those who consult AI for guidance would want any recommendations to be vetted by a financial advisor.
As RBC’s chief science officer Foteini Agrafioti explains, finding the balance between innovative tech and invaluable human resources is crucial when integrating these tools. While AI and LLMs can often be seamlessly adopted to tackle basic issues (such as a forgotten password or a faulty fraud alert), more complicated questions (such as determining the tax implications of wiring money or weighing the risks of a time-sensitive investment opportunity) typically require more nuanced interpretation. “This may depend on the geography of where a client is calling in from,” says Agrafioti, “and an agent in the background sometimes has to navigate complex policies and procedures to resolve the concern.” This is where she sees an opportunity. Off-the-shelf LLMs may not have the capacity to effectively parse a dense, context-contingent procedural document, she notes, but by fine-tuning these models and pairing them with domain-specific semantic search, they can help generate accurate and efficient solutions. “Because it has that human oversight,” Agrafioti says, “it is a strong use case for this kind of technology.”
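Agrafioti’s description maps onto a familiar pattern: index the institution’s procedural documents, retrieve the passage most relevant to an agent’s question, and keep a human in the loop when the match is weak. The sketch below illustrates that flow under stated assumptions; the toy bag-of-words embedding, the sample policy passages and the review threshold are stand-ins, not RBC’s system.

import math

# Toy semantic search over internal policy passages, with a human-review step.
POLICIES = [
    "Wire transfers above the reporting threshold require a tax-residency check.",
    "Fraud alerts can be cleared by the client after two-factor verification.",
    "Time-sensitive investment transfers need sign-off from a licensed advisor.",
]

def embed(text: str) -> dict[str, float]:
    """Crude bag-of-words 'embedding'; a real deployment would use a trained encoder."""
    vec: dict[str, float] = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0.0) + 1.0
    return vec

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest_policy(question: str) -> tuple[str, bool]:
    """Return the closest policy passage and whether an agent must review before replying."""
    q = embed(question)
    best = max(POLICIES, key=lambda p: cosine(q, embed(p)))
    needs_review = cosine(q, embed(best)) < 0.5  # weak match -> escalate to a human agent
    return best, needs_review

if __name__ == "__main__":
    passage, review = suggest_policy("What are the tax implications of wiring money abroad?")
    print(passage, "| needs human review:", review)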
While evolution and innovation are necessary for businesses to survive and succeed in a rapidly changing landscape, proceeding with some caution is advisable. Generative AI is an effective tool to streamline operations and expand insights into client profiles — but those insights and efficiencies can also be leveraged for nefarious purposes, as one multinational firm recently discovered. In December of last year, the company lost U.S.$25.6 million after an employee at its Hong Kong office was hoodwinked by deepfaked recreations of the company’s CFO and other staff members, who “appeared” on a video call and requested the transfer of funds.
The potential to use machine-learning mechanisms to produce alarmingly accurate vehicles for fraud is part of why there has been a push for updated regulations in Canada. A recent report produced by the Financial Services and Technology Group of the law firm Gowling WLG provided an overview of developments expected to unfold in this area throughout 2024, such as some necessary changes to the country’s existing legal framework in response to tech-assisted financial crimes.
As stated in the report, Gowling WLG, which focuses on business law, intellectual property services and litigation in Canadian industries such as financial services, government infrastructure and technology, anticipates that the Financial Transactions and Reports Analysis Centre of Canada (FINTRAC) will take on an expanded role as the country’s financial intelligence unit, supervising programs that tackle money laundering and the financing of terrorist activities.
From Rob Baldassare’s perspective, the good news is that the good guys have an advantage. “Basically, it’s an arms race,” he says. “And the financial institutions and multinationals will be able to win because they have far more access to computing power than the criminals.”
Given the speed of new developments in this sector, businesses must be diligent about staying ahead of evolving threats, especially when implementing AI and LLMs. Wealthsimple’s Sam Talasila says his company’s LLM department works very closely with the security team and senior privacy and legal counsel to build and update its internal processes.
Vendor approvals, for instance, now involve an additional layer of oversight. As Talasila notes, the company works with many contractors who have incorporated AI elements into their products. “Traditional security folks are very good at analyzing threats, but when it comes to AI, the field is evolving so quickly that it becomes very difficult to keep track.” As a result, every time Wealthsimple is considering a vendor whose wares include an AI component, there is an AI subject-matter expert at the table walking through the risks and mitigations in the vendor’s proposal.
A 2023 Leger survey found that Canadians are deeply wary when it comes to the prospect of AI offering assistance with significant activities — say, driving cars without human oversight or teaching children. These concerns are rooted in the belief that the technology lacks the empathy to make good decisions and that it is susceptible to fraud and hacking. Many Canadians are also unfamiliar with generative AI products such as ChatGPT.
To earn public trust, it may help to create and maintain a code of ethics that can be applied in the creation of AI-powered applications. This code might include a commitment to safeguarding users’ rights, protecting against harm, verifying that the data used to train deep-learning models is free from conscious and unconscious racial and gender bias, and ensuring that the needs of underrepresented and marginalized groups are not ignored.
This is covered in Canada’s AIDA, says Talasila. “The government is quite prescriptive about where you can and cannot use automated decision making.”
Wealthsimple has an internal risk review process geared toward fraud models (which analyze the information provided to determine if a transaction is legitimate or fraudulent). As Talasila says, “the designer needs to answer a strict set of questions to prove to the decision makers,” who include members of the company’s regulatory and legal teams, “that their models adhere to the rules.”
At Mastercard’s Global Intelligence and Cyber Centre of Excellence, various stakeholders from throughout the chain of development (from data scientists to product specialists) are brought together to collaboratively establish standards and frameworks to protect data and privacy and to ensure information is being ethically used.
MacMullin sees great value in operating as a network. “We work with banks, we work with fintechs, we work with merchants, we work with consumers, we work with governments, we work with law enforcement — all across the board,” he says. Facilitating communication allows for more robust and comprehensively informed governance.
When it comes to leading the real-world AI/LLM charge, there seems to be a consensus that Canada starts well with transformative technology but inevitably slows down. As Rob Baldassare puts it, “It’s a glass half-full, glass half-empty situation.” Despite the country’s solid pedigree and a legacy of being at the forefront of AI development, when it comes to commercialization and deploying actual systems to benefit citizens, consumers and companies, Canada just isn’t at the head of the pack, he says. Keeping Canadian IP in the country is a core issue: Compared to the U.S., there just isn’t adequate funding.
This is reflected in Deloitte’s 2023 report Impact and opportunities: Canada’s AI ecosystem, which found that Canadian VC investment in AI totalled $8.64 billion — placing the country third per capita among the G7, behind the United States and the United Kingdom. Another recent assessment, by the consulting firm KPMG, found that Canadian companies are lagging woefully behind the U.S. in AI integration. Only 35 percent of Canadian businesses reported using AI in their current operations, compared to a staggering 72 percent in the U.S.
Talasila says promising developments in AI are happening at the University of Toronto, McGill and the Vector Institute for Artificial Intelligence. And, he adds, “We’re seeing a move by some U.S. companies to establish engineering offices in Canada.” Those companies include Robinhood, a digital platform that facilitates the trading of stocks, cryptocurrencies and exchange-traded funds (ETFs), which has been hiring engineers in Canada, and Nvidia, which recently signed a letter of intent with the aim of helping the country build its own AI infrastructure.
The move to adopt AI and LLMs may mean the financial space feels a bit like the Wild West in the imminent future, says Rob Baldassare, but he’s still sanguine about the opportunities. One area ripe with potential, in his opinion, is financial advising, where a user can share details about their financial history and receive AI-powered recommendations. “It’s going to tell me: ‘Rob, you don’t have enough insurance. Rob, you have some savings and you’re doing the wrong thing — here’s what to do. Rob, you have to cancel three of your five credit cards. Rob, I’m analyzing your paycheque and you’re not setting aside enough funds for retirement.’”
Mastercard’s Darrell MacMullin thinks AI will become more layered into everyday life as a tool that can provide algorithmically powered insights and save valuable time. Both merchants and financial institutions, he adds, should recognize how these technologies can change their relationships with customers. At the grocery store, for example, AI-powered recommendations could serve as “your real-time personal-shopping assistant,” he says, noting that if the front-end experience is augmented to become tailored to an individual customer, “the checkout almost becomes invisible.”
People are particularly inclined to shell out for that personalized cartful of goodies without a second thought when transactions are reduced to a simple tap — or better yet, a wave. Autonomous shopping, for instance, uses biometrics to link a shopper’s identity to their account (through a specific anatomical touchpoint), so payment can be sent via a quick palm or face scan. Early-stage applications of this approach can be seen at Whole Foods stores in the United States, where customers are able to pay by passing their hands over a device powered by Amazon One. (Amazon acquired Whole Foods in 2017.) For Amazon Prime subscribers who link their accounts, any savings are automatically applied.
In Canada, Loblaw Digital, the grocery chain’s e-commerce arm, is testing a range of AI applications. Among the initiatives in play are an internal Gen AI tool similar to Wealthsimple’s in-house LLM gateway, interactive recipe planners and personalized offers, and a targeted ad program known as Advance, which draws on PC Optimum data to deliver customized ad content directly to shoppers.
Although the financial landscape is still very much in flux when it comes to the potential uses of AI and LLMs — particularly as regulatory bodies work to create concrete guidelines that will ensure secure, ethical integration of new technology — it is clear these tools hold tremendous potential for new business opportunities. For any growth-minded institution, the way forward means finding your footing.
Forget reinventing the wheel; instead, develop a thorough understanding of how AI applications might be integrated into existing offerings, construct platforms from the inside out — literally, by engaging staff as users, bug-checkers and quality-assurance officers — and determine how these models can streamline and optimize internal processes. As Talasila notes, the commoditization of AI and LLMs means this sector will look very different a decade from now. “I’m excited to be in the space to build toward it,” he adds.
But as with any building, the first step is creating a solid foundation.