Humans have many kinds of biases. To name just a few, we suffer from confirmation bias, which means that we tend to focus on information that confirms our preconceptions about a topic; from anchoring bias, where we make decisions by relying mostly on the first piece of information we receive on a subject; and from gender bias, where we tend to associate women with certain traits, activities, or professions, and men with others. When we make decisions, these biases often creep in unconsciously, resulting in decisions that are ultimately unfair and far from objective.
These same types of bias can show up in artificial intelligence (AI), especially when machine learning techniques are used to program an AI system. A commonly used technique called “supervised machine learning” requires that AI systems be trained on a large number of examples of problems and solutions. For example, if we want to build an AI system that can decide whether to accept or reject a loan application, we would train it with many examples of loan applications, and for each application, we would give it the correct decision (accept or reject).
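To make this concrete, here is a minimal sketch of what such supervised training might look like, using scikit-learn on a tiny, entirely hypothetical table of past applications (the column names and values are made up for illustration):

```python
# Minimal sketch of supervised learning for loan decisions.
# Assumes pandas and scikit-learn are installed; the data is hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Each row is a past application; "approved" is the historical decision (1/0).
history = pd.DataFrame({
    "income":      [52000, 31000, 78000, 24000, 64000, 43000],
    "loan_amount": [15000, 12000, 30000, 10000, 25000, 18000],
    "approved":    [1,     0,     1,     0,     1,     0],
})

X = history[["income", "loan_amount"]]
y = history["approved"]

# Train on the labeled examples, then decide on a new application.
model = LogisticRegression().fit(X, y)
new_application = pd.DataFrame({"income": [48000], "loan_amount": [20000]})
print(model.predict(new_application))  # 1 = accept, 0 = reject
```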
The AI system finds useful correlations in these examples and uses them to make (hopefully correct) decisions on new loan applications. After the training phase, a test phase on a separate set of examples checks that the system is accurate enough to deploy. However, if the training dataset is not balanced, inclusive, or representative enough of the dimensions of the problem we want to solve, the AI system may become biased. For example, if all the accepted loan applications in the training dataset come from men and all the rejected ones come from women, the system will pick up the correlation between gender and approval as a form of bias and will use it when making decisions on new applications going forward.
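This danger is easy to reproduce on toy data. In the hypothetical sketch below, gender perfectly separates accepted from rejected applications in the training set, so the model learns to rely on it, and two otherwise identical applicants receive different decisions:

```python
# Hypothetical illustration of a model absorbing a gender/outcome correlation.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

train = pd.DataFrame({
    "income":   [52000, 30000, 64000, 55000, 24000, 43000],
    "gender":   [1,     1,     1,     0,     0,     0],    # 1 = man, 0 = woman
    "approved": [1,     1,     1,     0,     0,     0],    # aligned with gender
})

model = DecisionTreeClassifier(random_state=0).fit(
    train[["income", "gender"]], train["approved"]
)

# Two applicants identical except for gender can get different decisions.
applicants = pd.DataFrame({"income": [50000, 50000], "gender": [1, 0]})
print(model.predict(applicants))  # typically [1, 0]: the bias has been learned
```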
Another way bias creeps into AI training datasets is when we include many more data points for one group than for another. In this case, the AI system’s accuracy will probably differ between the two groups, since the AI can learn better (by exploiting more information) for the better-represented group. In domains with high-stakes decisions, such as finance, healthcare, or the judicial system, using a biased AI system can lead to decisions that favor one group of people over another. This is not acceptable, especially when the decisions may significantly impact lives.
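One simple way to surface this effect is to break a model’s accuracy down by group after testing. The sketch below, with hypothetical test results, shows the pattern: the under-represented group ends up with noticeably lower accuracy:

```python
# Hypothetical test results: far more examples for group 1 than for group 0.
import pandas as pd

results = pd.DataFrame({
    "gender":    [1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
    "actual":    [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
    "predicted": [1, 0, 1, 1, 0, 1, 0, 1, 0, 1],
})

# Accuracy computed separately for each group reveals the disparity.
per_group_accuracy = (
    results.assign(correct=results["actual"] == results["predicted"])
           .groupby("gender")["correct"]
           .mean()
)
print(per_group_accuracy)  # in this toy example: group 1 -> 1.0, group 0 -> 0.0
```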
Currently, there are algorithms that can detect and mitigate bias in AI systems. However, the AI bias space is incredibly complex, and different data types (images, text, speech, structured data) require different techniques for detecting bias in the training dataset. Bias can also be injected in other phases of the AI development pipeline, not just in the training dataset. For example, consider an AI system that is supposed to identify the main reason for a loan request, such as buying a house, paying school fees, or paying legal fees, so that some of those categories can be prioritized above others, as determined by the developers. If the developers omit one of the reasons why people apply for a loan, people with that motivation would be penalized.
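For structured data, one widely used detection check is disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group. Here is a minimal sketch with hypothetical decisions; a ratio far below 1.0 (often, below the common 0.8 rule of thumb) is a warning sign:

```python
# Minimal sketch of the disparate impact metric on hypothetical decisions.
import pandas as pd

decisions = pd.DataFrame({
    "gender":   [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = privileged, 0 = unprivileged
    "approved": [1, 1, 1, 0, 1, 0, 0, 0],
})

rate_privileged   = decisions.loc[decisions["gender"] == 1, "approved"].mean()
rate_unprivileged = decisions.loc[decisions["gender"] == 0, "approved"].mean()

disparate_impact = rate_unprivileged / rate_privileged
print(disparate_impact)  # 0.33 here; values well below 1.0 suggest bias
```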
So, what can we do to fix this growing challenge? Here’s what the top tech companies have done to advance fairer, more transparent, and more accurate AI:
- Create an effective AI ethics board. The top priority is the ethical consideration of the technologies we bring into the world. We believe that to make true and lasting change on this critical issue, companies must support holistic organizational and cultural change. For example, IBM has put in place a centralized and multi-dimensional AI governance framework, centered around the IBM internal AI ethics board, which supports both technical and non-technical initiatives to operationalize the IBM principles of trust and transparency. The board is also responsible for advancing efforts internally under the umbrella of Trusted AI, which seeks to tackle multiple dimensions of this concept, including fairness, explainability, robustness, privacy, and transparency.
- Clearly define the company’s policies around AI. Most of the tech companies working on AI have released their own principles and trust and transparency guidelines, along with policy approaches to AI that promote responsibility, including their views on the regulation of AI. These principles outline a commitment to using AI to augment human intelligence, to a data policy that protects clients’ data and the insights gleaned from that data, and to transparency and explainability as the foundation of a system of trust in AI. The regulation policies recommend that policy makers regulate only high-risk AI applications, and only after a careful analysis of the technology used and its impact on people.
- Work with trusted partners. It is also necessary to establish multi-stakeholder relationships with external partners to advance ethics in AI. For example, in February 2020 IBM became one of the first signatories of the Vatican’s “Rome Call for AI Ethics,” an initiative in partnership with the Vatican that focuses on advancing more human-centric AI aligned with core human values, such as paying more attention to vulnerable parts of the population. Another initiative IBM joined is the European Commission’s (EC) High-Level Expert Group on AI, which was set up to deliver ethical guidelines for trustworthy AI in Europe. Those guidelines are now being used extensively in Europe and beyond to inform possible future regulations and standards for AI.
- Contribute open-source toolkits to the pillars of AI trust. Beyond defining principles, policy, governance, and collaboration, the top tech companies also prioritize the research and release of tangible tools that can move the needle on AI trust. Many companies continuously release open-source toolkits that allow developers to share and receive state-of-the-art code and datasets for AI bias detection and mitigation, as sketched below. These toolkits also let the developer community collaborate and discuss various notions of bias, so they can collectively understand best practices for detecting and mitigating AI bias.
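IBM’s open-source AI Fairness 360 (AIF360) toolkit is one example. The sketch below, which assumes the `aif360` and `pandas` packages and uses a hypothetical dataset, shows a typical pattern: measure bias on a training set, then apply a pre-processing mitigation such as reweighing:

```python
# Sketch of bias detection and mitigation with IBM's AIF360 toolkit.
# Assumes `pip install aif360 pandas`; the dataset below is hypothetical.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "income":   [52000, 31000, 78000, 24000, 64000, 43000],
    "gender":   [1, 0, 1, 0, 1, 0],
    "approved": [1, 0, 1, 0, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)
privileged, unprivileged = [{"gender": 1}], [{"gender": 0}]

# Detect bias in the training data...
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print(metric.disparate_impact())

# ...and mitigate it by reweighing the examples before training a model.
reweighing = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_reweighed = reweighing.fit_transform(dataset)
```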
These efforts, born at various companies, have also led to innovative business solutions for clients. While solutions like these help, they alone are not enough to ensure that deployed AI systems are free of unwanted bias. Often, developers are not even aware of the kind of bias their models have, and they may not have the knowledge to identify what is fair and appropriate for a certain scenario.
To tackle this, there are multiple initiatives businesses can and should focus on:
- Devote resources to education and awareness initiatives for designers, developers, and managers;
- Ensure diverse team composition;
- Consult with relevant social organizations and the impacted communities to identify the most appropriate definition of fairness for the scenarios where the AI system will be deployed, as well as the best way to resolve intersectionality issues, in which various notions of bias (such as gender, age, and racial bias) affect overlapping parts of the population and mitigating one can increase another;
- Define methodology, adoption, and governance frameworks to help developers correctly revise their AI pipeline in a sustainable way. New steps (for example, to detect and mitigate bias) need to be added to the usual AI development process; a clear methodology needs to be defined to integrate those steps, and effort needs to be made to make adopting that methodology as easy as possible. A governance framework also needs to be used to evaluate, facilitate, enforce, and scale adoption; and
- Build transparency and explainability tools to recognize the presence of bias and its impact on the AI system’s decisions (a minimal sketch of one such check follows this list).
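As a concrete example of that last point, a simple transparency check is to measure how much a trained model’s performance depends on a protected attribute. The sketch below uses scikit-learn’s permutation importance on a hypothetical model and dataset; a large importance assigned to `gender` is a red flag that calls for deeper investigation:

```python
# Minimal sketch of an explainability check on a hypothetical loan model.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

data = pd.DataFrame({
    "income":   [52000, 31000, 78000, 24000, 64000, 43000],
    "gender":   [1, 0, 1, 0, 1, 0],
    "approved": [1, 0, 1, 0, 1, 0],
})
X, y = data[["income", "gender"]], data["approved"]

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

for feature, importance in zip(X.columns, result.importances_mean):
    print(feature, round(importance, 3))  # a large value for "gender" is a warning sign
```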
Overall, only a multi-dimensional and multi-stakeholder approach can truly address AI bias, by defining a values-driven approach in which values such as fairness, transparency, and trust are at the center of creation and decision-making around AI. By doing so, not only can we avoid creating AI that replicates or amplifies our own biases, but we can also use AI to help humans themselves be more fair. The ultimate goal, of course, is not to advance AI per se, but to advance human beings and our values through the use of technologies, including AI.