Statistics and Data Science, 1 May 2020
Adversarial Machine Learning (AML) examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake. How do AML examples work across different media, and why can it be difficult to protect systems against them?
Adversarial Machine Learning is a technique used to deceive or mislead a machine learning model with malicious input. While AML can be applied in a variety of settings, it is most commonly used to attack or degrade a machine learning system. The same attack instance can often be transferred to models trained on different datasets or with different architectures.
AML attacks can be divided into white-box attacks, where the attacker knows the inner workings of the model being used, and black-box attacks, where the attacker can observe only the model's output. Machine learning models are trained on large datasets of the subject being studied. A malicious attack can exploit this by manipulating the algorithm's inputs so that the model misinterprets the data; for example, a road-sign classifier could be fooled into misidentifying stop signs once deployed in production.
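As a minimal sketch of a white-box attack, consider the fast gradient sign method (FGSM), which nudges an input in the direction that increases the model's loss. The weights, input values, and epsilon below are all hypothetical, and the model is a deliberately tiny logistic-regression classifier, not a real sign detector:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights of an already-trained logistic-regression classifier.
w = np.array([1.5, -2.0])
b = 0.1

def predict(x):
    """Probability that input x belongs to class 1."""
    return sigmoid(x @ w + b)

def fgsm(x, y, eps):
    """Fast gradient sign method: step by eps in the direction that
    increases the loss. For logistic loss the input gradient is (p - y) * w."""
    grad = (predict(x) - y) * w
    return x + eps * np.sign(grad)

x = np.array([2.0, -1.0])        # correctly classified as class 1
x_adv = fgsm(x, y=1.0, eps=1.5)
print(predict(x), predict(x_adv))  # the adversarial copy drops below 0.5
```

The large eps is needed because this toy input has only two features; on high-dimensional inputs such as images, a far smaller, visually imperceptible perturbation per pixel is enough to flip the prediction.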
Types of adversarial attacks on machine learning
AML attacks can be classified as misclassification attacks or data poisoning. Misclassification input is the more common variant, in which attackers hide malicious content from the machine learning algorithm's filters. The purpose of this attack is to make the system misclassify a specific set of inputs. Post-deployment backdoor Trojan attacks can be used for this purpose. Data poisoning occurs when an attacker tampers with the training process by placing inaccurate data in a dataset, making the output less accurate. The aim of this type of attack is to corrupt the learning process and minimize the usefulness of the algorithm. Traditional techniques for increasing the reliability of machine learning models generally do not provide a practical defense. So far, only two methods have provided significant defense.
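The data-poisoning attack described above can be illustrated with a toy sketch. The classifier here is a deliberately simple nearest-centroid model (an illustrative assumption, not any particular production system), and the injected points and their locations are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: two Gaussian clusters (class 0 near -1.5, class 1 near +1.5).
X0 = rng.normal(-1.5, 1.0, size=(100, 2))
X1 = rng.normal(+1.5, 1.0, size=(100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

def fit_centroids(X, y):
    """A deliberately simple 'model': one centroid per class."""
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def accuracy(centroids, X, y):
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float((dists.argmin(axis=1) == y).mean())

clean_model = fit_centroids(X, y)

# Poisoning: the attacker injects 50 mislabeled points deep inside class-1
# territory, dragging the class-0 centroid and shifting the decision boundary.
X_poison = rng.normal(4.0, 0.5, size=(50, 2))
X_train = np.vstack([X, X_poison])
y_train = np.concatenate([y, np.zeros(50, dtype=int)])
poisoned_model = fit_centroids(X_train, y_train)

clean_acc = accuracy(clean_model, X, y)
poisoned_acc = accuracy(poisoned_model, X, y)
print(f"clean: {clean_acc:.2f}  poisoned: {poisoned_acc:.2f}")
```

Even though the poisoned points are only a fifth of the training set, the model's accuracy on correctly labeled data drops, which is exactly the attacker's goal.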
Adversarial Training: This is a brute-force solution where we simply generate many adversarial examples and explicitly train the model not to be fooled by each of them. Open-source implementations of and tutorials for adversarial training are widely available.
Defensive Distillation: This is a strategy where we train a model to output probabilities over the different classes rather than hard decisions about which class to predict. The probabilities are supplied by an earlier model trained on the same task using hard class labels. This produces a model whose decision surface is smoothed in the directions an adversary typically tries to exploit, making it difficult to find the small input perturbations that lead to incorrect classification. However, even these specialized algorithms can be broken by an attacker with more computing power.
Defense against AML
Currently, there is no definitive defense against AML, but several techniques can mitigate this type of attack. These techniques include adversarial training and defensive distillation. Adversarial training is a process in which adversarial examples are fed into the model and explicitly labeled as threatening. This process can help prevent further machine learning attacks, but it requires considerable maintenance.
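The adversarial training loop described above can be sketched as follows. This is illustrative only: the model is a tiny NumPy logistic regression, the data and epsilon are invented, and real systems iterate this process during every training step rather than doing a single round:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, steps=300, lr=0.2):
    """Plain logistic regression via gradient descent (no bias, for brevity)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def fgsm(X, y, w, eps):
    """Generate adversarial examples against the current model."""
    signs = np.sign((sigmoid(X @ w) - y)[:, None] * w[None, :])
    return X + eps * signs

def accuracy(w, X, y):
    return float(((sigmoid(X @ w) > 0.5) == y).mean())

# Toy data: one strongly informative feature and one weakly informative one.
n = 200
X1 = np.column_stack([rng.normal(2.0, 1.0, n), rng.normal(0.5, 1.0, n)])
X = np.vstack([X1, -X1])                     # mirrored cluster for class 0
y = np.concatenate([np.ones(n), np.zeros(n)])

w = train(X, y)
X_adv = fgsm(X, y, w, eps=0.8)
print("accuracy on adversarial inputs:", accuracy(w, X_adv, y))

# One adversarial-training round: give the adversarial examples their
# correct labels and retrain on the augmented dataset.
w_at = train(np.vstack([X, X_adv]), np.concatenate([y, y]))
print("after adversarial training:", accuracy(w_at, fgsm(X, y, w_at, eps=0.8), y))
```

On a linear toy problem like this, a single round mainly shifts weight away from the non-robust feature; the qualitative point is the maintenance burden the text mentions, since the adversarial examples must be regenerated whenever the model changes.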
Defensive distillation aims to make the machine learning algorithm more flexible by having one model predict the outputs of another model that was trained earlier. This approach can even identify unknown threats. The idea is similar to generative adversarial networks (GANs), which pit two neural networks against each other to accelerate machine learning processes, in that two machine learning models are used together.
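The distillation idea, one model learning from another model's probabilities, can be sketched as follows. This is an illustrative simplification, not the exact published defensive-distillation procedure: a teacher is trained on hard labels, its softened probabilities (logits divided by a temperature T) become the student's targets, and the student ends up with smaller weights and therefore smaller input gradients, which is what blunts gradient-based attacks:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, targets, steps=2000, lr=0.5):
    """Logistic regression by gradient descent; `targets` may be soft labels."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (sigmoid(X @ w) - targets) / len(targets)
    return w

# Toy two-class data with hard labels.
X = np.vstack([rng.normal(-1.0, 1.0, (200, 2)), rng.normal(1.0, 1.0, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

teacher = train(X, y)

# Soften the teacher's outputs with temperature T, then train the student on them.
T = 5.0
soft_targets = sigmoid((X @ teacher) / T)
student = train(X, soft_targets)

# The student's input gradient |dp/dx| = p(1-p)|w| is damped because its
# weights approach teacher/T, smoothing the surface an attacker climbs.
print("teacher ||w||:", np.linalg.norm(teacher))
print("student ||w||:", np.linalg.norm(student))
```

The smoothing makes small input perturbations change the output less, but as the text notes, it does not remove the misclassified regions, so an attacker with more compute can still find them.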
The security of artificial intelligence: why defend against adversarial examples?
When we think about AI safety research, we usually think about some of the hardest problems in the field: how can we ensure that sophisticated reinforcement learning agents that are much smarter than humans behave as their designers intended?
Adversarial examples show us that even simple, modern algorithms, for both supervised learning and reinforcement learning, can already behave in surprising ways that we did not intend.
It is difficult to defend against adversarial examples because it is hard to build a theoretical model of the process that creates them. Adversarial examples are solutions to an optimization problem that is non-linear and non-convex for many ML models, including neural networks. Since we do not have good theoretical tools for describing the solutions to these complex optimization problems, it is very tough to make a theoretical argument that a defense will rule out a whole set of sample attacks. Defense is also difficult because it requires machine learning models to produce good outputs for every possible input. Most of the time, machine learning models work very well, but only on a very small fraction of all the possible inputs they may encounter. An important area of research is designing defenses that can protect against powerful, adaptive attackers.
We have looked at the different types of attacks as well as how to defend against them. This is definitely something to keep in mind when deploying machine learning models. Rather than blindly trusting the models to deliver correct results, we must guard against these attacks and always think twice before accepting the decisions those models make.
AML shows that many modern machine learning algorithms can be broken in surprising ways. These machine learning failures demonstrate that even simple algorithms can behave quite differently from what their designers intended. Machine learning researchers are encouraged to get involved and develop prevention methods that close the gap between what designers intend and how algorithms actually behave.