UA law professor Jane Bambauer says the current state of artificial intelligence is like the Wild West — and that's why she finds it so intriguing. (Photo: Bob Demers/UANews)

AI Presents Thorny Regulatory Questions for the Future

UA law professor Jane Bambauer thinks a measured approach will be best in taming the "Wild West" of artificial intelligence.
Feb. 20, 2018
Extra Info: 
  • Jan. 22: "Problem Solving With Algorithms," Stephen Kobourov, UA professor of computer science
  • Jan. 29: "The Minds of Machines," Mihai Surdeanu, UA associate professor of computer science
  • Feb. 5: "Working Alongside Thinking Machines," Nirav Merchant, director, UA Data Science Institute
  • Feb. 12: "What Humans Do That Machines Cannot," Luis von Ahn, CEO and co-founder, Duolingo; professor of computer science, Carnegie Mellon University
  • Feb. 26: "There Is No Such Thing as Big Data," Vincent J. Del Casino Jr., vice president, UA Academic Initiatives and Student Success; UA professor, School of Geography and Development
What: College of Science lecture series
When: 7 p.m. Monday, Feb. 26 (final lecture)
Where: Centennial Hall
Jane Bambauer: "My impulse is to let companies figure out what's working and what isn't, before we regulate." (Photo: Bob Demers/UANews)
After her lecture, Jane Bambauer fields questions from science teachers and students in an informal gathering at the Social Sciences Building. (Photo: Bob Demers/UANews)

A self-driving car experiences sudden brake failure as it approaches a crosswalk.

If it continues in its lane, it will plow into several pedestrians. If it swerves into the adjacent lane, it will ram into a concrete barrier. In either case, injuries and even loss of life are likely.

Although the aforementioned scenario currently lives only on Moral Machine, a platform designed by the Massachusetts Institute of Technology to "gather a human perspective on moral decisions made by machine intelligence," it might be coming soon to a street near you. And that has Jane Bambauer both fascinated and concerned by what she describes as the current "Wild West" of artificial intelligence.

"The coordination of a world with both driven cars and driverless cars will be incredibly complicated," Bambauer, a University of Arizona law professor, told a group of science teachers and graduate students after her lecture Monday night on "Machine Influencers and Decision Makers" at Centennial Hall. The lecture was the fifth in the College of Science series on "Humans, Data and Machines," which has focused on the convergence of the digital, physical and biological worlds.

Bambauer said the transportation industry will be turned upside down in the same way that it was when automobiles disrupted a world of horse-drawn carriages, forcing both modes to share the road. She said it will be tempting to rush in and regulate — but much better to go slowly.

"My default position with new technology is not to do much heavy-handed regulation," she told the teachers and students.

"I'm a bit of a contrarian in my field. My impulse is to let companies figure out what's working and what isn't, before we regulate. There are instances where, by trying to regulate in advance, you end up missing out on innovations (that follow)," she said, citing the early World Wide Web as an example.

Bambauer told her lecture audience that it's useless trying to fight the onslaught of algorithms. They're pervasive, they're not going away, and they're assessing our credit scores, career interests, health care and more.

"We interact with machine-learning algorithms almost any time we do anything on the internet," she said.

It's more prudent, then, to ask probing questions such as: Is an algorithm biased? Is it manipulative? Are there hidden moral or political issues?

An example of bias, Bambauer said, is seen in how scores from the computer program COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) are used to create a risk scale for criminal recidivism. Although the program's 137 variables do not include race or ZIP code, the scale still has bias built into its computations, as Bambauer demonstrated through a series of bar graphs.
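The proxy effect Bambauer describes, in which a model that never reads race can still track it, can be sketched in a few lines. The following toy example is not the actual COMPAS model; the feature name, group labels, and distributions are invented for illustration. It scores people on a single "prior arrests" input whose distribution is assumed to differ between two groups:

```python
# Illustrative sketch only -- NOT the COMPAS model. It shows how a score
# that never uses a protected attribute can still differ by group when
# another input is correlated with that attribute.
import random

random.seed(0)

def make_person(group):
    # Hypothetical proxy feature: prior arrests. We assume its
    # distribution differs by group (e.g., through unequal policing).
    priors = random.gauss(3.0 if group == "A" else 1.0, 1.0)
    return {"group": group, "priors": max(priors, 0.0)}

people = [make_person("A") for _ in range(1000)] + \
         [make_person("B") for _ in range(1000)]

def risk_score(person):
    # The score reads only 'priors'; group membership is never an input.
    return min(10, round(person["priors"] * 2))

avg = {}
for g in ("A", "B"):
    scores = [risk_score(p) for p in people if p["group"] == g]
    avg[g] = sum(scores) / len(scores)

print(avg)  # group A averages a markedly higher score than group B
```

Dropping the protected attribute from the inputs removes nothing here, because the remaining feature carries the same group information, which is the pattern Bambauer's bar graphs illustrated.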

Even the perception of bias can erode people's trust in our institutions, she said, showing how basic Google searches can be interpreted by some as an indication of the tech giant's political leanings.

Manipulation can surface in something as innocuous as food reviews or as insidious as "fake news." Facebook's news-feed algorithm "has incredible power," Bambauer said, adding that "the filter bubble is limiting you" in terms of big-picture perspective.

In any event, algorithms are, well, complicated to untangle.

"All of the problems (with algorithms) are interconnected," she said. "Accuracy might increase bias. Data gathering might affect privacy. The problems are with setting priorities among competing goals."

Echoing what previous speakers in the series had said, she noted, "It's easy to blame the algorithms, but algorithms do what we ask."