Published Date: 17/07/2025
AI systems are powerful and are reshaping our lives. They are used everywhere: as customer-support chatbots for online retailers, in social media platforms that recommend audio and video content to users, by banks to sanction loans, and by private and government agencies to hire employees. However, these systems also pose significant dangers.
For instance, Amazon abandoned an AI recruiting system after discovering that it was unfairly discriminating against job applicants based on gender. Whenever an AI system fails like this, it is extremely difficult to pinpoint the root cause of the problem. AI systems are black boxes: their inner workings are hard to understand, even for the developers who built them.
So, why is it so difficult to blame an AI system when it fails?
Accountability in Traditional Software Systems
Consider a software program built to calculate electricity charges for customers of an electricity distribution company. Because of faulty logic, the program reported electricity consumption incorrectly. After discovering the mistake, the distribution company approached the developer who had built the program, and the developer investigated immediately. In a conventional system like this, the fault must lie either in the stored data or in the business logic.
The developer first looked up the relevant line item in the database and found that the stored data was correct. They then inspected the source code and discovered the problem. Consumption was stored in the database as a floating-point number with two decimal places; for instance, 100.20 units for the month. In the source code, however, the calculation truncated the value to a whole number, so it treated the consumption as 100 units instead of 100.20. Once the problem was identified, it was fixed immediately: the calculation was changed to handle floating-point values for energy consumption.
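To make the fault concrete, here is a minimal Python sketch of the bug and its fix. The function names, the tariff of 8.50 per unit, and the figures are invented for illustration; this is not the distribution company's actual billing code.

```python
# Minimal sketch of the billing bug described above (hypothetical names and tariff).
RATE_PER_UNIT = 8.50  # assumed tariff, for illustration only

def calculate_charge_buggy(units_consumed: float) -> float:
    # Bug: casting to int silently drops the fractional consumption,
    # so 100.20 units are billed as 100 units.
    whole_units = int(units_consumed)
    return whole_units * RATE_PER_UNIT

def calculate_charge_fixed(units_consumed: float) -> float:
    # Fix: keep the value as a floating-point number, matching how the
    # consumption is stored in the database (two decimal places).
    return round(units_consumed * RATE_PER_UNIT, 2)

consumption = 100.20  # units recorded in the database for the month
print(calculate_charge_buggy(consumption))  # 850.0  (undercharges the customer)
print(calculate_charge_fixed(consumption))  # 851.7
```

Because both the stored data and the source code are open to inspection, the developer can walk from the wrong bill back to this single line of logic and take responsibility for it.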
Why Are AI Systems Black Boxes?
But what about an AI system? An AI system has no hand-written business logic to inspect. It is trained on training data, validated against testing data, and, once training and testing are complete, it performs its computations from the parameters it learned, not from values looked up in a database. If such a system were asked to calculate energy consumption and produced wrong answers, where would you look for the problem? It is almost impossible to say: the fault could lie in the training data, the testing data, or the algorithm itself, and the learned parameters give no direct clue. A difficult nut to crack, indeed!
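The contrast with the billing program can be sketched in code. In the toy example below, the readings, features, and model choice are all assumptions, not a real utility's system; the point is that the prediction comes entirely from the fitted model's learned parameters, so there is no stored row or explicit formula to point to when the answer is wrong.

```python
# A toy illustration of why a trained model is opaque compared with a database query.
# The data and numbers below are invented purely for this example.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical historical readings: [day of month, average temperature] -> units consumed
X_train = np.array([[1, 30.0], [10, 32.5], [20, 28.0], [30, 35.0]])
y_train = np.array([95.4, 102.1, 88.7, 110.3])

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# The prediction is produced entirely from the fitted parameters (the model's
# "memory"), not by looking up a stored consumption value, so there is no single
# database row or line of business logic to inspect when the answer is wrong.
print(model.predict(np.array([[15, 31.0]])))
```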
How Can Transparency Be Established in AI Systems?
In a traditional software system, the data and logic are visible, which makes accountability easier to establish. That is not the case with AI systems. The question is: how can transparency be established in an AI system so that it can be held accountable?
The first step toward transparency is to examine the data used to train and test the AI model. If the data is not clean or contains bias, the model will produce incorrect or unfair results. The next step is to review the algorithm used to develop the model. Understanding the algorithm is challenging even for developers, so good documentation, written in plain language, must be maintained so that non-technical people can also understand how the model performs its calculations or makes its decisions.
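As a hedged illustration of that first step, the snippet below inspects a training set for missing values and for a large gap in historical outcomes across a sensitive attribute. The records and column names (gender, hired) are invented to echo the hiring example above; a real check would run against the actual dataset and its own schema.

```python
# A sketch of the first transparency step: inspecting training data for gaps
# and bias before it reaches the model. The records below are invented.
import pandas as pd

df = pd.DataFrame({
    "gender": ["male", "male", "female", "female", "male", "female"],
    "years_experience": [5, 3, 4, None, 6, 2],   # a missing value has slipped in
    "hired": [1, 1, 0, 0, 1, 0],
})

# 1. Cleanliness: how many values are missing in each column?
print(df.isna().sum())

# 2. Bias check: does the historical outcome differ sharply across a sensitive
#    attribute? A large gap warns that a model trained on this data may
#    reproduce the discrimination.
print(df.groupby("gender")["hired"].mean())
```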
Explainability is crucial for every AI model. If documentation is inadequate, it will be difficult to trace the root cause of an incorrect decision or calculation. Only when an AI model is genuinely transparent about its data and its decision process can accountability be established.
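One common way to add explainability is a model-agnostic technique such as permutation importance, which reports how much each input feature contributes to a model's predictions. The sketch below assumes scikit-learn and uses invented data and feature names; it is one possible approach among several, not a prescribed method.

```python
# A minimal, assumed example of explaining a model with permutation importance.
# The data and feature names are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))                       # [temperature, occupancy, day_of_week]
y = 50 * X[:, 0] + 20 * X[:, 1] + rng.normal(0, 1, 200)    # day_of_week is actually irrelevant

model = RandomForestRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Higher importance means predictions degrade more when that feature is shuffled,
# which gives a documented, human-readable account of what drives the model.
for name, importance in zip(["temperature", "occupancy", "day_of_week"], result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

Reports like this, kept alongside the documentation of the data and the algorithm, give auditors and non-technical stakeholders something concrete to question when a decision looks wrong.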
In conclusion, while AI systems offer immense benefits, their black-box nature presents significant challenges in terms of accountability. Establishing transparency through thorough data examination and clear documentation is essential to ensure that AI systems can be trusted and held accountable for their actions.
Q: What are AI systems used for?
A: AI systems are used in various sectors, including customer support, social media content recommendation, loan sanctioning by banks, and hiring processes by private and government agencies.
Q: Why is it difficult to fix accountability in AI systems?
A: AI systems are like black boxes, making it difficult to understand their inner workings and pinpoint the root cause of issues, even for the developers who created them.
Q: How can transparency be established in AI systems?
A: Transparency in AI systems can be established by thoroughly examining the training and testing data, maintaining clear documentation of the algorithm, and ensuring explainability of the AI model's decisions.
Q: What is the importance of explainability in AI models?
A: Explainability is crucial for ensuring that AI models can be understood and trusted, making it easier to trace the root cause of any incorrect decisions or calculations.
Q: What are the consequences of biased data in AI systems?
A: Biased data in AI systems can lead to incorrect results and unfair decisions, such as discriminatory hiring practices or incorrect financial calculations.