KANTAR HEALTH BLOG


Coexisting Safely with Artificial Intelligence

by Jessica Santos | May 6, 2019

Artificial Intelligence (AI) is all around us! We talk to Alexa-like devices, increasingly rely on automation, and will soon be driven around by driverless cars. In healthcare, AI is helping to discover new medicines and delivering a host of patient-centered technologies – such as biometric monitors, remote physician consultations and adherence assistance applications. Most importantly, the possibilities for continued AI innovation are limitless.

However, as machines become increasingly "intelligent" – and better than humans at designing even smarter machines – the key question is: what will this mean for mankind? And, critically, what are we doing to ensure a safe and worthwhile coexistence with such machines?

Accountability

In healthcare market research, the inability to explain decisions made by AI programs is a major problem for data quality. The same opacity stops AI from being deployed further in areas such as law, healthcare and enterprises that handle sensitive customer data. Understanding how data is handled, how AI has reached a certain decision and, ultimately, who is accountable are major unsolved challenges.

Furthermore, AI trained on wrong or unfiltered data can certainly make bad decisions. Worse than that, current deep learning systems can sometimes give us confidently wrong answers while providing limited insight into how they reached them. This is what concerns me most as a healthcare market researcher: it's okay to be wrong, but it's not okay to be confidently wrong. The key to solving this dilemma is how we deal with uncertainty – the uncertainty of messy and missing data, and the uncertainty of predicting what might happen next. Uncertainty cannot be debated away or ignored. Whoever ultimately makes decisions under that uncertainty will be held most accountable – but who will that be?
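
To make "confidently wrong" concrete, here is a minimal sketch in plain Python with NumPy. Everything in it – the diagnosis labels, the logit values and the 0.90 review threshold – is invented for illustration, not drawn from any real system. It shows how a model can attach near-certain probability to an answer on an input unlike its training data, and how an explicit abstention rule at least routes uncertain cases to a named human, which is one way to make the accountable decision-maker explicit.

```python
import numpy as np

def softmax(logits):
    """Turn raw model scores into a probability distribution."""
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def classify_with_abstention(logits, labels, threshold=0.90):
    """Return a label only when the top probability clears the
    threshold; otherwise flag the case for human review."""
    probs = softmax(np.asarray(logits, dtype=float))
    top = int(np.argmax(probs))
    if probs[top] < threshold:
        return "REFER_TO_HUMAN", float(probs[top])
    return labels[top], float(probs[top])

labels = ["benign", "malignant"]

# On an input unlike its training data, a model can still emit extreme
# scores, so the 0.98 "confidence" below is meaningless: this is a
# confidently wrong answer waiting to happen.
print(classify_with_abstention([4.2, 0.3], labels))
# -> ('benign', 0.98) (approximately)

# An explicit abstention rule at least routes borderline cases to a
# person, making the accountable decision-maker explicit.
print(classify_with_abstention([0.6, 0.4], labels))
# -> ('REFER_TO_HUMAN', 0.55) (approximately)
```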

Ethics

To address this, we must ask: who is telling AI its narratives? Whose stories, and which stories, will inform how AI interacts with the world? Which novels are being chosen to "teach" AI morality? What kind of writers are being enlisted to script AI–human interaction? Creating more diverse literary and cinematic AI narratives can enhance the research and improve the language and data that feed into actual AI systems. Paying closer attention to what stories are doing, and how they are doing it, doesn't destroy their power – it helps us understand and appreciate that power even more. For example, imagine we want AI to handle resource allocation decisions in our health system, as in the sketch below. It might accomplish this more fairly and efficiently than humans, with immense benefits for patients and taxpayers. To be successful, however, we'd need to specify its goals correctly.
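
To illustrate why that goal specification matters, here is a hypothetical sketch in plain Python. The patients, benefit scores and the 0.1 waiting-time weight are all invented; nothing here is clinically grounded. The same simple allocator produces very different outcomes depending on which objective we hand it.

```python
# Hypothetical sketch of goal misspecification in resource allocation.
patients = [
    {"id": "A", "benefit": 9, "waited_days": 2},
    {"id": "B", "benefit": 8, "waited_days": 3},
    {"id": "C", "benefit": 4, "waited_days": 90},
    {"id": "D", "benefit": 3, "waited_days": 120},
]
budget = 2  # treatment slots available

def allocate(patients, budget, score):
    """Greedily assign the available slots to the highest-scoring patients."""
    ranked = sorted(patients, key=score, reverse=True)
    return [p["id"] for p in ranked[:budget]]

# Naively specified goal: maximize immediate benefit only.
print(allocate(patients, budget, lambda p: p["benefit"]))
# -> ['A', 'B']: long-waiting patients C and D are never treated.

# Adjusted goal: benefit plus a small weight on time already waited.
print(allocate(patients, budget, lambda p: p["benefit"] + 0.1 * p["waited_days"]))
# -> ['D', 'C']: same optimizer, different specification, very
#    different outcome – which is why the goal must be specified correctly.
```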

Sensibility

The biggest promise of AI is self-learning. But how can we be sure that AI will learn only what is good and sensible? Self-learning also means learning bias, selfish demands, unending desires and unhappiness. A society driven by consumerism, celebrity worship, video games and social media gossip – and indifferent to massive social problems – creates human bias that contaminates AI systems. How do we ensure that AI learns only the good from us, instead of everything?

There are two big problems with this utopian vision. One: how do we get the machines started on this journey? Two: what would it mean to reach that destination? The "getting started" problem is that we need to tell the machines what they're looking for with sufficient clarity and precision to be confident they will find it – whatever "it" actually turns out to be. This is a daunting challenge, given that we are confused and conflicted about these ideals ourselves, and different communities may hold different views. The "destination" problem is that, in putting ourselves in the hands of these moral guides and gatekeepers, we might be sacrificing our own autonomy – an important part of what makes us human.

Human Exceptionalism

Maybe what we are wishing for is privacy, security, safety, transparency, reliability and, ultimately, trust in AI development, as well as the convenience and better quality of life it can bring us. If we are to give general intelligence to machines, we'll need to give them moral authority too. That means a radical end to human exceptionalism.

In our latest white paper, "ARTIFICIAL INTELLIGENCE: The End of the World or the Dawn of Limitless Possibility", we explore the potential routes along which AI may progress and how best to harness AI's unbridled power. I encourage you to take some time to review this piece or, as always, feel free to reach out to me directly at Jessica.Santos@kantarhealth.com to discuss healthcare market research and how Kantar Health can help you improve data quality in your organization.
