New Toronto company can alert firms to unintended consequences in their AI

TORONTO — Rarely a week goes by without Toronto tech worker Karthik Ramakrishnan seeing another example of artificial intelligence gone wrong.

Systems built with the technology have included a French medical chatbot that suggested a patient commit suicide, a Microsoft bot that tweeted a 9/11 conspiracy theory and an Amazon.com Inc. recruiting tool that downgraded resumes with references to women.

But Ramakrishnan is convinced the pattern can be broken and many of the problems stemming from AI — machine-based technologies that learn from data — can be prevented.

That's why he, Dan Adamson and Rahm Hafiz co-founded Armilla AI, which launched Thursday with $1.5 million in financial backing from investors including AI godfather Yoshua Bengio and Two Small Fish Ventures, a fund run by Wattpad's Alan and Eva Lau.

Armilla is behind a new quality assurance platform that analyzes systems to detect faulty AI and predict its consequences — before troubles arise.

"No system is perfect, but our objective is to make them as perfect as possible," said Ramakrishnan, Armilla's chief business officer.

Armilla's platform delves into the systems clients have created, examining the data that trained their software, their modelling and their outcomes, and runs about 50 tests looking for compliance issues, gender or ethics biases and other unintended consequences.
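Armilla hasn't published the details of those tests, but a minimal sketch of one common style of automated bias check, a demographic parity test on a model's decisions, might look like this in Python (the column names, toy data and threshold here are hypothetical, not Armilla's):

```python
# Hypothetical sketch of one automated fairness test: demographic parity.
# Armilla's actual test suite is not public; the column names, toy data
# and pass/fail threshold below are invented for illustration.
import pandas as pd

def demographic_parity_gap(preds: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Largest difference in positive-prediction rates between any two groups."""
    rates = preds.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Toy output from a hypothetical credit-approval model.
preds = pd.DataFrame({
    "gender":   ["f", "f", "f", "m", "m", "m"],
    "approved": [0,    1,   0,   1,   1,   1],
})

gap = demographic_parity_gap(preds, "gender", "approved")
print(f"approval-rate gap between groups: {gap:.2f}")  # 0.67 on this toy data
if gap > 0.2:  # hypothetical tolerance
    print("FAIL: model approves one group far more often than another")
```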

For example, Armilla used its platform with a public data set filled with information about credit lending in Germany.

The bank behind the data set didn't want its AI to discriminate against new immigrants, so it removed the field recording immigration status.

Armilla found the system discriminated against immigrants anyway: housing information the bank left in, such as residence in multi-tenant apartments, correlated so strongly with immigration status that it reintroduced the bias.

"This is how faults creep into systems, not intentionally, but there are these unintended consequences with the way we run our systems and the second-order correlations that we miss are the kinds of things that Armilla's platform is designed to surface," Ramakrishnan said.

The entire process is a time saver, he said, because while large, sophisticated companies keep teams in the hundreds just to run their systems through a growing number of scenarios, that work is often done manually and either sporadically or on a fixed schedule.

"Banking has been doing models for 20-plus years," he said. "However, that process done manually takes anywhere between six months to a year for a single model and an average-sized bank has about 400-plus models and they're only growing."

Armilla's platform can quickly learn the sensitivities and riskiest parts of any system, so a company can run its tests repeatedly and uncover blind spots that traditional models don't account for.

But the goal really isn't speed; it's safety and ethics.

Both have become pressing issues as organizations in every sector turn to the technology, according to a September report from the University of Toronto's Rotman School of Management.

"Technology and AI systems are not neutral or objective but exist in a social and historical context that can marginalize certain groups, including women, racialized and low-income communities," said the report called "An Equity Lens on Artificial Intelligence."

It found that AI-based systems are a "double-edged sword" because they often help but are only as neutral as the data and algorithms their technology is based on.

For example, it pointed to an AI system for detecting cancerous skin lesions that was less likely to pick up cancers in dark-skinned people because it had been developed from a database made up mostly of light-skinned populations.
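A stratified evaluation is the standard way to catch that kind of gap: overall accuracy can look healthy while accuracy for an underrepresented group collapses. A toy illustration with invented numbers:

```python
# Sketch of a stratified evaluation that would expose the skin-lesion
# failure mode described in the report. All numbers are invented.
import pandas as pd

results = pd.DataFrame({
    "skin_tone": ["light"] * 90 + ["dark"] * 10,              # imbalanced test set
    "correct":   [1] * 81 + [0] * 9 + [1] * 5 + [0] * 5,      # model right/wrong
})

print(f"overall accuracy: {results['correct'].mean():.2f}")   # 0.86 looks fine
print(results.groupby("skin_tone")["correct"].mean())         # light 0.90, dark 0.50
```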

Armilla is hoping to expose such issues and avoid "catastrophic errors."

"There's so many things that could happen in a complex system," Ramakrishnan said.

"We want to ensure we can catch the big things as much as possible."

This report by The Canadian Press was first published Oct. 21, 2021.

Tara Deschamps, The Canadian Press
