One of the godfathers of modern artificial intelligence suggested Thursday that more attention needs to be paid to the moral framework that governs the use of the burgeoning technology.
“In the case of AI, we have scientific leadership. We are trying to also push forward with the AI economy, but we also need to have moral leadership to make sure the technology is used for good for most people,” Yoshua Bengio said during a panel discussion at the Public Policy Forum’s annual Canada Growth Summit.
“It’s really important that we do this right, but I’m not sure that we’re doing the right moves.”
Bengio’s words carry significant weight in the AI community: The University of Montreal professor is a pioneer in the field and was a recent co-winner of the Turing Award for his work developing deep learning technology.
Artificial intelligence is already allowing businesses to drive efficiency and deliver innovative new services, but lately its potential dangers have become an increasingly prominent part of the discussion.
Tim O’Brien, Microsoft’s general manager of AI programs, also spoke at the conference about ethical AI and the risks posed by the technology.
Artificial intelligence generally uses large amounts of data to create algorithms that make automated decisions, and the technology is at the heart of speech recognition, facial recognition, autonomous vehicles and many more emerging technologies.
O’Brien explained that one of the fundamental problems is that the data is biased, and so that bias reproduces itself in the decisions AI systems make.
“We have lots and lots of data to work with, more data than we’ve ever seen. It costs nothing to acquire, it costs next to nothing to store,” he said.
“The problem is this all comes from us. Everything that we do, the places we go … the movies we stream on the internet, our travel patterns, all of our biases are laid bare through the digital exhaust that we leave behind every single day.”
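The dynamic O'Brien describes can be sketched with a toy example (all data and names here are hypothetical, not from any real system): a model trained on biased historical decisions simply automates the same bias.

```python
# Toy illustration (hypothetical data): a model trained on biased
# historical decisions reproduces that bias at prediction time.
from collections import Counter

# Historical hiring records as (group, hired) pairs. Group "A" was
# favoured in the past, so the data itself encodes the bias.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

def train(records):
    # A naive "model": predict whatever outcome was most common
    # for each group in the historical data.
    counts = {}
    for group, label in records:
        counts.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(history)
print(model)  # {'A': 1, 'B': 0} -- the historical bias, now automated
```

Nothing in the training step is malicious; the skew in the "digital exhaust" is enough to produce a skewed decision rule.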
The potential dangers associated with AI go beyond bias, though; there are also concerns about mass surveillance and potential military applications of the technology.
O’Brien said that the rampant excitement around AI has given way to a bit more caution.
“I think the unbridled enthusiasm was part of the risk,” he said.
“So the skepticism, I would describe as healthy skepticism, because you need to have a balance between the opportunity to change the world in a positive way, and the risk that it could lead us toward a world that we don’t want.”
O’Brien said Microsoft has adopted a set of AI principles requiring the company to consider fairness, reliability and safety, privacy and security, inclusiveness, accountability and transparency.
In particular, transparency is a challenge with deep learning.
“There’s an inverse relationship between accuracy and transparency (with deep learning). So the more accurate these models are, the blacker the black box becomes,” he said.
“In a business context, that transparency is important, just for no reason other than being able to explain why you’re operating how you’re operating as a business.”
O’Brien said there are ways to make AI algorithms more transparent, though, and that customers will increasingly demand it.
“I think the market will decide what’s acceptable, and I think it’s changing toward an environment in which transparency will be demanded, and I think that’s a good thing,” he said.
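One common transparency technique, sketched below with hypothetical features and weights, is to use a model simple enough that each prediction decomposes into per-feature contributions a business can explain: the opposite end of the accuracy-transparency trade-off O'Brien describes.

```python
# Hypothetical sketch: a linear scoring model whose prediction can be
# broken down, feature by feature, into an explanation.
weights = {"income": 0.4, "tenure": 0.25, "late_payments": -0.6}
bias = 0.1

applicant = {"income": 1.2, "tenure": 0.5, "late_payments": 2.0}

# Each feature's contribution is just weight * value, so the score
# is fully explainable -- no black box.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = bias + sum(contributions.values())

for feature, value in sorted(contributions.items(),
                             key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{feature:>14}: {value:+.3f}")
print(f"{'score':>14}: {score:+.3f}")
```

Deep learning models gain accuracy by abandoning exactly this kind of decomposability, which is the trade-off behind O'Brien's "blacker the black box" remark.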