
AWS director wants Canada's AI legislation to mesh with other countries

An Amazon Fulfillment centre is pictured in Delta, B.C., Friday, May 31, 2024. Amazon Web Services’ head of public policy for AI is encouraging Canada not to go it alone when it comes to regulating the technology. THE CANADIAN PRESS/Ethan Cairns

TORONTO — Amazon Web Services’ director of global artificial intelligence is encouraging Canada not to go it alone when it comes to regulating the technology.

Canada should settle on AI legislation that is “interoperable” with the guardrails other countries adopt, or many burgeoning companies could run into trouble, Nicole Foster warned Tuesday.

“A lot of our startups are wonderfully ambitious and have ambitions to be able to sell and do business around the world, but having bespoke, unique rules for Canada is going to be an extremely limiting factor,” Foster said during a talk at the Elevate tech conference in Toronto.

Canada is working on an Artificial Intelligence and Data Act that is meant to govern how the country will design, develop and deploy the technology.

The legislation is still winding its way through the House of Commons and isn't expected to come into effect until at least next year but is being watched intensely in the technology sector and beyond.

Many worry the legislation could curtail innovation and push companies to flee for other countries that are more hospitable toward AI, but most agree that the technology industry cannot be left to decide on its own guardrails.

They reason the sector needs some parameters to protect people from systems perpetuating bias, spreading misinformation and causing violence or harm.

With the European Union, Canada, the U.S. and several other countries all charting their own paths toward guardrails, some in the tech community have called for collaboration.

Foster said there are some “really promising signs” it could come to fruition based on what she’s seen from the G7 countries.

“Everybody is saying the right things. Everybody thinks interoperability is important,” she said.

“But saying it's important and doing it are two different things.”

Canada’s industry minister, François-Philippe Champagne, is largely responsible for whatever approach the country takes to AI.

Last summer, he told attendees at another tech conference in Toronto, Collision, that he feels Canada is “ahead of the curve” with its approach to artificial intelligence, beating even the European Union.

“Canada is likely to be the first country in the world to have a digital charter where we’re going to have a chapter on responsible AI because we want AI to happen here,” he said.

His government has said it would ban “reckless and malicious” AI use, establish oversight by a commissioner and the industry minister and impose financial penalties.

Whatever Canada settles on, Foster said it has to be “conscious of the cost of regulation,” because asking companies to undergo evaluations to ensure their software is safe can be time-consuming, and much of that work is already being done.

She feels the best regulatory model will identify high-risk AI systems and ensure there are steps in place to mitigate any harms they could cause but won’t regulate things that shouldn’t be regulated.

Among the AI systems she thinks can go without regulation are “mundane” systems like those that get baggage to travellers at an airport faster.

“I think (it’s about) being focused on the risks that we need to address and then really kind of not getting in the way of really valuable technology that's going to make our lives better,” Foster said.

In a separate panel, Adobe’s head of global AI strategy Emily McReynolds said there is also a role for companies to play in the conversation around regulation.

Adobe, she said, has committed to not mining the web for data it uses in its AI systems and has instead opted to license information. She positioned the move as one that brings transparency to the company’s work but also ensures it is “really respecting creators,” who tend to use the company’s software.

She said Adobe had chosen to take a proactive approach to issues like data and told other businesses “it's really important to understand that building AI responsibly is not something that comes after.”

This report by The Canadian Press was first published Oct. 2, 2024.

Tara Deschamps, The Canadian Press
