Can AI Be Trusted?

July 2nd, 2018 | Artificial Intelligence, Trill Takes

Uber immediately suspended testing of its driverless cars in North America after a woman in Tempe, Arizona was struck and killed by one of its self-driving cars earlier this year. The accident raises a few urgent questions. Chief among them: when we give technology complete control and it fails, who is responsible? How comfortable are we with autonomous, AI-driven technology? And how much do we trust it?

Who is responsible?

When technology is operated solely by people, the answer to this question is much more obvious. But who is responsible once people transfer complete operational control to the technology itself?

Ultimately, the simplest answer depends on who agrees to take responsibility. Contracts will likely be drawn up between the parties involved, specifying who will be held liable in the event of failure or malfunction. Culpability could therefore fall on either side: some contracts will place liability on the firm providing the technology, others on the human overseer.

Initially, however, it is likely that in high-risk industries like healthcare, finance, and automotive, responsibility will fall on the shoulders of human overseers. But as the technology improves and proves itself less risky, it's not outlandish to believe that technology firms will begin to take on some of the responsibility. The argument could be made that the more we trust technology in our lives, the more accountable technology must be for its performance and functionality.

The Uber self-driving car is just the start of a bigger conversation on the future of AI-driven technology. Eventually, the discussion of responsibility will extend to the active presence of AI in operating rooms and well beyond.

How comfortable are we with independent, autonomous AI technology?

When it comes to new technology, humans have always been apprehensive at first. Such was the case when personal, in-home computers gained popularity in the late 1970s and early '80s. It's no surprise that the same apprehension accompanies the rise and prevalence of artificial intelligence.

The difficulty with relying on artificial intelligence to perform human tasks is that machines act differently than humans in practice, despite being modeled to mimic human behavior. For example, a machine may perform a task with only a one percent error rate, while a human may perform the same task with a five percent error rate. The dissonance arises because the errors the machine does make may be ones a human would make only about 0.1 percent of the time. A human overseer could likely foresee such an error, but guaranteeing it would be caught is difficult.
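To make that arithmetic concrete, here is a minimal Python sketch using the hypothetical rates from the paragraph above; every number and variable name is illustrative, not drawn from any real measurement.

# A minimal sketch with the hypothetical rates discussed above.
# All figures here are illustrative assumptions, not measured data.

machine_error_rate = 0.01    # machine fails 1% of the time
human_error_rate = 0.05      # a human fails 5% of the time

# Assume humans would make the machine's particular mistakes only
# 0.1% of the time, i.e., the machine errs in ways people rarely do.
human_rate_on_machine_errors = 0.001

tasks = 100_000
machine_failures = tasks * machine_error_rate
human_failures = tasks * human_error_rate

# Failures that a human overseer would almost certainly have avoided:
surprising_failures = machine_failures * (1 - human_rate_on_machine_errors)

print(f"Human failures per {tasks:,} tasks: {human_failures:,.0f}")
print(f"Machine failures per {tasks:,} tasks: {machine_failures:,.0f}")
print(f"Machine failures a human would rarely have made: {surprising_failures:,.0f}")

Under these assumed rates, the machine fails far less often overall (1,000 failures versus 5,000), yet nearly every one of its failures (999 of 1,000) is one a person would not have made, which is why machine mistakes can erode trust out of proportion to their frequency.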

How much do we trust artificial intelligence?

The Uber self-driving car crash played out the worst-case scenario of what can happen when people transfer total responsibility to technology. But we cannot allow worst-case scenarios to stop us from evaluating, improving, and fine-tuning that technology. If that were the case, we would never have been able to travel by airplane, or to return to space after the Challenger explosion. Terrible failures happen, but those failures also teach us how to improve, how to pivot, and how to avoid worst-case scenarios in the future.

There is great promise for artificial intelligence, and a real likelihood of success. Ultimately, trust will come with time and with more data demonstrating the safety of AI-driven technology. Uber's failure is also a reminder that human intelligence is still necessary. Even as our technological capabilities continue to rise, human oversight and intervention remain vitally important.

Image: Unsplash