We increasingly entrust our safety and health to artificial intelligence that we are unable to understand – the AI black box. The black box problem stems from the way we train artificial intelligence systems: most of them are trained using backpropagation.
The black box problem
One aspect of the problem comes from backpropagation itself: after training, we cannot explain what the values inside the weight matrices actually represent.
The deeper issue is that in modern artificial intelligence the source code, whether transparent or not, matters less than other factors in how the algorithm behaves. Machine learning algorithms, and deep learning algorithms in particular, are typically built from only a few hundred lines of code. Their logic is learned from training data and is rarely reflected in the source code itself. Some of today’s most effective algorithms are also the most opaque.
We know that a black box model works because we can measure its accuracy on test data, but we cannot explain how it works. This is why such artificial intelligence cannot explain its reasoning.
Why does black box AI exist?
So, what exactly causes the AI black box problem? The tools most often affected are those that use artificial neural networks and deep learning.
Artificial neural networks consist of hidden layers of nodes. Each node processes its input data and passes its output to the next layer of nodes. Deep learning uses very large artificial neural networks with many hidden layers that “learn” on their own by recognizing patterns, and the result can be arbitrarily complicated. We do not see the outputs between the layers, only the final prediction, so we cannot know how the nodes analyze the data, and we end up with an AI black box.
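To make this concrete, here is a minimal, hand-rolled sketch (not taken from any particular framework) of a tiny two-layer network trained with backpropagation on an XOR task. The learned weight matrices W1 and W2 drive every prediction, yet their individual values carry no human-readable meaning, and a caller of predict() only ever sees the final output, never the hidden activations.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 2)).astype(float)     # two binary inputs
y = (X[:, 0] != X[:, 1]).astype(float).reshape(-1, 1)   # XOR target

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros((1, 8))    # input -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros((1, 1))    # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # forward pass: the hidden activations h never leave this loop
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # backpropagation of the cross-entropy error through both layers
    dp = (p - y) / len(X)
    dW2, db2 = h.T @ dp, dp.sum(0, keepdims=True)
    dh = dp @ W2.T * h * (1 - h)
    dW1, db1 = X.T @ dh, dh.sum(0, keepdims=True)

    W2 -= 1.0 * dW2; b2 -= 1.0 * db2
    W1 -= 1.0 * dW1; b1 -= 1.0 * db1

def predict(x):
    """Callers only ever see this final output, not the hidden-layer values."""
    return (sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2) > 0.5).astype(float)

print("accuracy:", (predict(X) == y).mean())
print("learned W1 (no human-readable meaning):\n", W1.round(2))
```

The accuracy is easy to measure, but nothing in the printed weight matrix tells a human why any particular prediction was made.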
Trusting a black box model means trusting not only the model equations but also the entire database from which it was built. Virtually every reasonably complex dataset contains imperfections: large amounts of missing data, measurement errors, systematic errors in collection, or data collection problems that make the distribution of the data different from what we originally assumed.
One common problem with black box models is data leakage, where some information about the label y enters the variables x in a way you may not suspect when looking at the names and descriptions of the variables. The model appears to predict something in the future when it is really detecting something that has already happened: while “forecasting” an outcome, it can pick up information in notes that reveal the result before it is officially recorded, and such predictions are then incorrectly counted as successful forecasts.
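A hedged, synthetic illustration of such leakage is sketched below. The column name case_closed_note is invented for the example and stands in for any field that is only filled in after the outcome is known. With it, the model looks like an excellent forecaster; without it, accuracy drops back to what the genuinely predictive features support.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
genuine_signal = rng.normal(size=(n, 3))                         # legitimate, pre-outcome features
y = (genuine_signal[:, 0] + rng.normal(scale=2.0, size=n) > 0).astype(int)

# Leaky feature: recorded after the fact, it almost perfectly mirrors the outcome.
case_closed_note = y + rng.normal(scale=0.05, size=n)

X_leaky = np.column_stack([genuine_signal, case_closed_note])
X_clean = genuine_signal

for name, X in [("with leaked column", X_leaky), ("without it", X_clean)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    acc = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).score(X_te, y_te)
    print(f"test accuracy {name}: {acc:.2f}")
# Typically: near-perfect accuracy with the leaked column, far lower without it.
```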
In response to widespread concern about the opacity of black box models, some scientists have tried to explain them by forming hypotheses about why they make the decisions they make. Such explanations usually either mimic the predictions of the black box with a completely different model or provide other statistics that give only incomplete information about how the black box computes its output.
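The sketch below shows one common form of such an explanation, a global surrogate: a shallow decision tree is fitted to the black box’s predictions rather than to the true labels. The tree is readable, but its fidelity score makes clear that it only approximates the black box’s behaviour.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The "black box": an ensemble model that is hard to inspect directly.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
bb_predictions = black_box.predict(X)

# Surrogate: an interpretable model trained on the black box's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_predictions)

fidelity = surrogate.score(X, bb_predictions)   # how faithfully the tree mimics the black box
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

Whatever the fidelity score, the printed tree explains the surrogate, not the black box itself, which is exactly the incompleteness the paragraph above describes.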
Technical transparency
Technical transparency, i.e. disclosure of the source code and of the algorithm’s inputs and outputs, can build trust in many situations. However, most algorithms in today’s world are developed and managed by profit-oriented companies, which consider them a very valuable form of intellectual property that must remain hidden. Some have suggested a compromise in which the source code is disclosed to regulators in the event of a serious problem, and the regulators ensure that the process is fair to consumers.
This approach simply shifts the burden of trust from the algorithm itself to the regulatory authorities. In a world where personal and social decisions, large and small, are handed over to algorithms, that becomes less and less acceptable.
Another problem with technical transparency is that algorithms become susceptible to gaming once their source code is known.
Explainable AI – resolving the AI black box problem
Because the AI black box is becoming an increasingly serious problem, artificial intelligence developers are now paying attention to solving it.
The answer is Explainable AI, or XAI for short. XAI is a set of tools and frameworks that help you understand and interpret the predictions made by machine learning models. It lets you debug and improve model performance, and it helps others understand how the models behave.
Explainable artificial intelligence (XAI) is artificial intelligence designed to describe its purpose, rationale and decision-making process in a way that the average person can understand. XAI is often discussed in relation to deep learning and plays an important role in the FAT ML model (fairness, accountability and transparency in machine learning).
XAI provides general information on how an AI system makes a decision by disclosing (a short code sketch follows the list):
- Strengths and weaknesses of the program.
- The specific criteria on which the program bases its decisions.
- Why the program makes a specific decision as opposed to the alternatives.
- The level of trust appropriate for different types of decisions.
- What types of errors the program is susceptible to.
- How to correct errors.
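As a rough illustration of how some of these items can be surfaced in practice (a sketch, not a full XAI framework), permutation importance can reveal which criteria a model’s decisions actually depend on, and predicted class probabilities show how confident it is in a specific decision versus the alternative:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# "The specific criteria on which the program bases its decisions":
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]:<25} importance={result.importances_mean[i]:.3f}")

# "Why the program makes a specific decision as opposed to the alternatives":
proba = model.predict_proba(X_te[:1])[0]
print(f"predicted class: {data.target_names[proba.argmax()]}, "
      f"confidence split: {dict(zip(data.target_names, proba.round(2)))}")
```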
An important goal of XAI is to provide algorithmic accountability. Until recently, AI systems were essentially black boxes: even if the inputs and outputs are known, the algorithms used to reach a decision are often proprietary or hard to understand, even when the internal workings of the code are open and available free of charge.
As more and more companies embed artificial intelligence and advanced analytics in their business processes and automate decisions, we need transparency about how these models make an ever-growing number of decisions.
Because artificial intelligence is becoming more and more common, it is more important than ever to disclose how issues of bias and trust are addressed. For example, the EU General Data Protection Regulation (GDPR) contains a right-to-explanation clause.
In the meantime, it is worth remembering that building confidence in machine learning and analytics will require a whole system of relationships, and a good balance of transparency between auditors and end users may determine whether such a solution is accepted.