Picture this scenario: you are the owner of a call center. To serve your customers better, you want to shorten wait times. You have your IT department run reports, dig into years’ worth of data, and produce a list of the area codes with the most sales over the past three years. Your developers rig up the system to use this list, along with AI, to place incoming calls in the wait queue based on past results. Sounds great! Your top customers will get priority service.
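The routing logic described above can be sketched in a few lines. Everything here is an illustrative assumption (the sales figures, the `SALES_BY_AREA_CODE` table, the function names), not a real call-center system, but it makes the mechanism concrete: callers from historically high-revenue area codes jump the queue.

```python
# Minimal sketch of area-code-based call prioritization, assuming a
# hypothetical sales report keyed by area code. All data is made up.
import heapq
import itertools

# Assumed output of the IT report: area code -> sales over the past three years.
SALES_BY_AREA_CODE = {"212": 1_450_000, "415": 980_000, "605": 12_000}

_counter = itertools.count()  # tie-breaker so equal priorities stay first-come-first-served

def priority(phone_number: str) -> int:
    """Lower value = served sooner. Unknown area codes get the worst priority."""
    area_code = phone_number[:3]
    # Negate sales so bigger historical spenders sort first in the min-heap.
    return -SALES_BY_AREA_CODE.get(area_code, 0)

def enqueue(queue: list, phone_number: str) -> None:
    heapq.heappush(queue, (priority(phone_number), next(_counter), phone_number))

queue: list = []
for caller in ["6055551234", "2125550000", "4155559999"]:
    enqueue(queue, caller)

# Callers are answered in order of historical area-code revenue.
order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)  # the "212" caller is served first; the "605" caller waits longest
```

Notice the design choice baked into `priority`: a caller from an area code with no sales history always lands at the back of the queue. That single line is where the bias discussed below comes from.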
This is a common and innocent scenario. What you have done, though, is introduce bias into the decision process. On the surface, you are serving your best customers first; these are the people responsible for the most revenue, correct? Makes sense. However, you could miss great customers from less populated areas or from areas you have not previously served. A net new customer, ready to purchase a large volume, could hang up because of the long wait. And what about cell phones? People move and keep their old cell number, and the old area code with it. In these scenarios, you could miss calls from good potential and existing customers.
When you dig in, though, the potential bias of AI starts showing up. Let’s assume that most of the company’s sales come from large urban areas. You are now neglecting customers outside the city. The nature of your business may favor wealthy area codes; you are now disproportionately underserving whole communities. As you can see, even in the most innocent scenarios, you decrease your service level to some customers. In the worst case, the ranking process is discriminatory. The US Congress is considering a bill, the Algorithmic Accountability Act, that would open companies to civil litigation in such cases. Many states have passed or are considering similar bills.
What can we do to avoid bias? The first step is understanding that every AI system will have some bias built in. It’s unavoidable, but it can be mitigated. An AI ethics board should thoroughly review all machine learning and AI algorithms. This board should include members from different areas of the company with diverse backgrounds; a common mistake is to staff it only with technical people. Decisions made by algorithms need periodic review so that unforeseen biases can be discovered proactively. Finally, every decision made by AI needs a human override capability. This ensures proper supervision over automated decisions and helps avoid bias.
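The periodic-review step above can itself be partly automated. Here is one hedged sketch, assuming a hypothetical log of completed calls tagged by caller group and a made-up disparity threshold: it compares average wait times across groups and surfaces outliers for the ethics board to review. The group labels, data, and `DISPARITY_THRESHOLD` policy are all illustrative assumptions.

```python
# Sketch of a periodic bias audit: flag caller groups whose average wait
# time far exceeds the best-served group's. Data and threshold are assumed.
from statistics import mean

# Assumed log of completed calls: (group label, wait time in seconds).
call_log = [
    ("urban", 35), ("urban", 40), ("urban", 30),
    ("rural", 180), ("rural", 210), ("rural", 195),
]

DISPARITY_THRESHOLD = 2.0  # assumed policy: flag groups waiting over 2x longer

def audit_wait_times(log):
    """Return {group: average wait} for every group whose average wait
    exceeds the best-served group's average by the disparity threshold."""
    by_group = {}
    for group, wait in log:
        by_group.setdefault(group, []).append(wait)
    averages = {g: mean(waits) for g, waits in by_group.items()}
    best = min(averages.values())
    return {g: avg for g, avg in averages.items()
            if avg > best * DISPARITY_THRESHOLD}

flagged = audit_wait_times(call_log)
print(flagged)  # rural callers wait far longer, so they are surfaced for review
```

A report like this doesn’t decide anything on its own; it simply hands the ethics board evidence, which is exactly the human-in-the-loop supervision argued for above.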
AI is a groundbreaking technology that has influenced, and will continue to influence, our daily lives. Ever feel like your smartphone is tracking you? It is. Have you ever purchased something extra because Amazon told you that “people who bought X also purchased Y”? AI and machine learning drove that. It’s here to stay, and with deliberate efforts to avoid bias, it will continue to benefit all of us.
Addendum: There are far more factors in mitigating or avoiding bias in AI than can be covered in a short blog post. Two additional resources I found helpful are the US Department of Commerce document “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence” and SAP’s AI Ethics Handbook.