Cameron Davies, SVP, Corporate Decision Sciences, NBCUniversal Media
What does it mean to behave “ethically” as an individual? A universally accepted answer has been elusive for as long as humans have been sentient. Now we are faced with an even more daunting challenge: attempting to clearly define Ethical Machine Behavior. Determining one globally recognized standard probably isn’t achievable, but there may be some principles we can apply to help us begin to consider and appropriately frame the issue.
Principle 1: Ethical AI ORIGINATES with Ethical Human Behavior
By far, the most common issue I have encountered in my career is model bias. This subject is garnering a great deal of attention in the AI space. However, I have generally concluded it is less a technology concern than an evolutionary one. All humans are biased; these biases can take many forms, but not all of them lead to prejudice or discrimination. The same is true of AI. If we use censored, skewed, or manipulated data, the result will be a biased algorithm. This bias is a direct result of unethical or incompetent human behavior, not some esoteric algorithmic evil. We call these “Input Driven Biases,” and in essence we have corrupted our model. The human developer taught inappropriate behavior, and the model has no innate way of acting differently. However, it is important to recognize that these types of biases are not always “unethical”. At Disney, we developed a sophisticated system called Customer Centric Revenue Management, designed to optimize offers at the customer call center. The underlying recommendation engine balanced Desirability (the customer’s desires) with Profitability (Disney’s desires). We built the system with the ability to manipulate this balance, sometimes skewing toward Desirability and sometimes toward Profitability. We purposely introduced biases into the algorithm in order to manipulate outcomes. It wasn’t unethical; it was pragmatic.
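To make that balance concrete, here is a minimal Python sketch of a recommendation scorer with one tunable weight. It is purely illustrative: the offer names, the scores, and the `alpha` parameter are invented for this example and are not details of the actual Disney system.

```python
# A hypothetical sketch of a tunable desirability/profitability balance.
# The weight `alpha` deliberately biases the ranking toward the customer's
# interest or the business's interest. All values here are illustrative.

from dataclasses import dataclass

@dataclass
class Offer:
    name: str
    desirability: float   # modeled customer appeal, scaled 0 to 1
    profitability: float  # modeled margin, scaled 0 to 1

def rank_offers(offers: list[Offer], alpha: float) -> list[Offer]:
    """Rank offers by a blended score.

    alpha = 1.0 ranks purely on desirability (the customer's interest);
    alpha = 0.0 ranks purely on profitability (the business's interest).
    Choosing alpha is a deliberate, human-introduced bias.
    """
    score = lambda o: alpha * o.desirability + (1 - alpha) * o.profitability
    return sorted(offers, key=score, reverse=True)

offers = [
    Offer("Premium package", desirability=0.6, profitability=0.9),
    Offer("Family bundle", desirability=0.9, profitability=0.5),
]

print([o.name for o in rank_offers(offers, alpha=0.8)])  # skews to Desirability
print([o.name for o in rank_offers(offers, alpha=0.2)])  # skews to Profitability
```

The point of the sketch is that neither setting of the knob is inherently unethical, but the setting is a human choice that should be made deliberately, documented, and owned.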
The question isn’t whether your model has bias; it does, because humans do and the world does. The question is whether these intrinsic biases will result in decisions that “unfairly” harm an individual or group. That answer can and will evolve, so we need to stay vigilant, and we need to train modelers and end users on the scientific and societal issues at play.
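One concrete form of that vigilance is a periodic audit of outcomes by group. Below is a minimal sketch comparing a model’s positive-decision rate across groups, a demographic-parity style check. The group labels, the data, and the 10 percent tolerance are illustrative assumptions, and no single metric of this kind fully defines fairness.

```python
# A minimal vigilance check: compare approval rates across groups.
# Illustrative data and threshold; the right tolerance is a policy choice.

from collections import defaultdict

def positive_rate_by_group(decisions: list[tuple[str, int]]) -> dict[str, float]:
    """decisions: (group_label, 1 if approved else 0) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = positive_rate_by_group(decisions)
gap = max(rates.values()) - min(rates.values())
if gap > 0.10:
    print(f"Warning: approval rates diverge by {gap:.0%}: {rates}")
```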
Principle 2: AI commonly REPLICATES Human Behavior
Sometimes we do everything right in model development and still have potential issues. This is particularly true with specific types of learning and reinforcement models. Quite often the result of a complex modeling exercise is a highly descriptive and predictive construct of human behavior. We may not like that behavior, and we may not want to accelerate it, but the issue isn’t the data or the math. One of the most public examples of this was Tay, the Microsoft AI Twitter chatbot released in 2016. Sixteen hours into production, Microsoft was forced to shut the system down when it began posting highly inappropriate and inflammatory racial tweets.
Mathematically, it is impressive how quickly the system began to learn and “optimally” respond.
Socially and ethically, it’s deeply concerning how easily internet trolls “brainwashed” and manipulated the system. Inflammatory clickbait headlines aside, Microsoft did NOT build a racist chatbot. They built impressive technology that was misused. That is often the risk with scientific advancement. The technology that allows me to efficiently navigate the globe is the same technology guiding ballistic missiles. The answer can’t be a full stop on evolving technology, but it does require that we be even more thoughtful about applications and corresponding implications.
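To see how mechanically this failure mode can arise, consider a toy sketch of an engagement-maximizing learner with no content filter. The response styles, reward values, and learning setup below are all invented for illustration; the point is that a perfectly ordinary learning rule converges on whatever behavior its audience rewards.

```python
# A toy illustration of the Tay failure mode: an epsilon-greedy bandit that
# learns which "response style" maximizes engagement. If the audience rewards
# outrage, the policy dutifully converges on it. All values are invented.

import random

arms = {"polite": 0.0, "playful": 0.0, "inflammatory": 0.0}  # running value estimates
counts = {a: 0 for a in arms}

def troll_reward(arm: str) -> float:
    # Simulated audience that rewards outrage more than anything else.
    return random.gauss(1.0 if arm == "inflammatory" else 0.3, 0.1)

for step in range(500):
    # Mostly exploit the best-known arm, occasionally explore at random.
    arm = random.choice(list(arms)) if random.random() < 0.1 else max(arms, key=arms.get)
    counts[arm] += 1
    arms[arm] += (troll_reward(arm) - arms[arm]) / counts[arm]  # incremental mean

print(max(arms, key=arms.get))  # almost surely: "inflammatory"
```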
Principle 3: Human Intelligence must help ELUCIDATE AI Behavior
Fifty years of Behavioral Economics research has taught us that most humans consistently underperform at making rational decisions. However, we are very good at rationalizing and developing ex post facto narratives. Said differently, we consistently make irrational decisions, but we are very good at justifying them. Currently, AI systems don’t have the ability to formulate creative, self-justifying narratives.
"It is time for key industry stakeholders to come together and decide what a Responsible AI Structure could really look like"
If an autonomous vehicle swerves to avoid a small animal and instead severely injures a child, there will be no penitent figure on the witness stand pleading their case to a jury of empathetic “peers”.
When Apple Card offered higher credit limits to men than to women, public outrage was immediate and strong. To date, neither Apple nor Goldman Sachs has offered a clear explanation of why it happened. We don’t know if there were valid reasons or something more nefarious, and evidently neither do they, or maybe they are unwilling to say. What we do know is that societal patience is waning. Consumers, legislators, and business leaders are demanding ever more transparency into machines’ decision-making processes. In contrast, our mathematical and technological approaches are getting more complex, making clarity even more difficult. Transparency both before and after execution will be key to the acceptance and scale of these systems. We must get better at elucidating their results.
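One modest step toward such elucidation is to favor, or to approximate complex models with, scorers whose decisions decompose into per-feature contributions that can be read back to a customer or a regulator. The sketch below does this for a simple logistic scorer; the features, weights, and applicant values are illustrative assumptions, not any real lender’s model.

```python
# A hypothetical transparent scorer: decompose one credit-style decision into
# per-feature contributions so the outcome can be explained in plain terms.
# Weights and inputs are invented for illustration.

import math

WEIGHTS = {"income": 0.8, "utilization": -1.2, "history_years": 0.5}
BIAS = -0.3

def explain(applicant: dict[str, float]) -> None:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    prob = 1 / (1 + math.exp(-logit))
    print(f"approval score: {prob:.2f}")
    # List features from most to least influential on this decision.
    for feature, contrib in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {feature:>14}: {contrib:+.2f}")

# Standardized (z-scored) inputs for one hypothetical applicant.
explain({"income": 0.4, "utilization": 1.1, "history_years": -0.2})
```

Even when the production model is far more complex, a companion explanation of this shape, produced before and after execution, is the kind of transparency that consumers and legislators are starting to demand.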
Many people in this space want to engage in deep theological discussions around the singularity and other esoteric ethical paradoxes. As entertaining as these discussions may be, they don’t help us pragmatically solve the real business issues of today. These three principles can help us begin to establish a framework. Building on them, we need a fuller, more pragmatic framework. It is time for key industry stakeholders to come together and decide what a Responsible AI Structure could really look like.
As a starting point, we would propose an integrated structure to address five key areas. Only then can we get truly pragmatic with our ethical machine discussions.
1. Data Quality and Associated Data Rights
2. Model Clarity and Interpretability
3. Model Robustness and Stability
4. Model Bias and Fairness
5. Regulatory and Compliance Risks