Artificial neural networks (ANNs) are used almost everywhere a problem calls for heuristics rather than an exact algorithm.
History of neural networks
It all began in 1943, when Warren McCulloch and Walter Pitts created the first neural network model. Their model was based purely on mathematics and algorithms and could not be tested, owing to the lack of computing resources. Later, in 1958, Frank Rosenblatt created the perceptron, the first model that could recognize patterns and that could actually be implemented and tested.
The first neural networks that could be tested and that had many layers were published by Alexey Ivakhnenko and Valentin Lapa in 1965. Research then stagnated after Marvin Minsky and Seymour Papert showed in 1969 that single-layer perceptrons could not solve the XOR problem. The stagnation was relatively short, however: six years later, in 1975, Paul Werbos introduced backpropagation, which solved the XOR problem and generally made training neural networks more efficient. In 1992, a new method, max-pooling, was introduced; it helped in the recognition of 3D objects by providing shift invariance and tolerance to deformation. Between 2009 and 2012, recurrent neural networks created by Jürgen Schmidhuber's research group won eight international prizes in pattern recognition and machine learning competitions. In 2011, deep networks began combining convolutional layers with max-pooling layers, whose output was passed to several fully connected layers followed by an output layer. These are called Convolutional Neural Networks.
What is a neural network?
It is a technique for developing a computer program that learns from data, based very loosely on how we think the human brain works. First, a set of software neurons is created and connected together, allowing them to send messages to each other. The network is then asked to solve a problem, which it attempts over and over again, each time strengthening the connections that lead to success and weakening those that lead to failure.
A neural network is a different kind of computational system. Conventional systems are procedural: the program starts with the first line of code, executes it, and moves on to the next, following the instructions in a linear manner. A true neural network does not follow a linear path. Rather, information is processed collectively, in parallel, across a network of nodes.
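The "strengthen connections that lead to success, weaken those that lead to failure" idea can be sketched with the classic perceptron learning rule. This is a minimal illustration, not a production implementation; the task (logical AND), learning rate, and epoch count are all chosen arbitrarily for the example.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def predict(w, b, x):
    # A single "software neuron": weighted sum of inputs plus a bias,
    # thresholded to produce a 0/1 answer.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train_perceptron(samples, epochs=100, lr=0.1):
    # Connections (weights) start random and are strengthened or
    # weakened depending on whether the output was right or wrong.
    w = [random.uniform(-1, 1) for _ in range(2)]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            err = target - predict(w, b, x)   # 0 on success, ±1 on failure
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Logical AND is linearly separable, so one neuron is enough to learn it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, `predict(w, b, x)` reproduces the AND table; a single neuron could not do the same for XOR, which is why multi-layer networks matter.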
There are a few parts that make up the architecture of the basic neural network:
- Units/neurons – The simplest parts of the neural network architecture. These are functions that hold weights and biases and wait for data to arrive. After receiving the data, they perform some calculations and then use an activation function to squash the result into a bounded range.
- Weights/parameters/connections – The most important part of the network; these are the numbers the NN must learn in order to generalize a problem.
- Biases – These numbers represent what the NN "thinks" it should add after multiplying the weights by the data. They are usually wrong at first, but the NN also learns the optimal biases.
- Hyper-parameters – These are values you must set manually, such as the learning rate or the number of layers and units.
- Activation functions – Also known as mapping functions, they take an input value and map it onto a bounded output range. They are used to squash large outputs from units into smaller values, and they introduce nonlinearity into the NN. The choice of activation function can drastically improve or hinder the performance of an NN.
- Layers – They let the network handle complexity in a problem. Adding layers (with their units) increases the nonlinearity that the network's output can express.
All the above elements are needed to build the backbone architecture of the neural network.
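The pieces listed above fit together in a few lines of code. The sketch below shows one unit (weighted sum plus bias, passed through a sigmoid activation) and a layer made of several such units; all the concrete numbers are hypothetical, standing in for values a real network would learn.

```python
import math

def sigmoid(z):
    # Activation ("mapping") function: squashes any real number into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def unit(inputs, weights, bias):
    # A unit multiplies each input by its weight, adds the bias,
    # then limits the result with the activation function.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

def layer(inputs, weight_matrix, biases):
    # A layer is simply several units reading the same inputs in parallel.
    return [unit(inputs, w_row, b) for w_row, b in zip(weight_matrix, biases)]

# Hypothetical numbers: 2 inputs feeding a layer of 3 units.
x = [0.5, -1.0]
W = [[0.1, 0.4], [-0.3, 0.8], [0.7, -0.2]]   # weights (normally learned)
b = [0.0, 0.1, -0.1]                          # biases (normally learned)
out = layer(x, W, b)   # three values, each in (0, 1)
```

Stacking calls to `layer` – feeding one layer's output into the next – is all it takes to build a deeper network.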
What happens when the neural network is learning?
The most common way to teach a neural network to generalize a problem is simple gradient descent. Combined with it, the most common training method is backpropagation: the error at the output layer of the network is propagated backwards using the chain rule from differential calculus. There are many caveats in network training.
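Gradient descent and backpropagation can be shown end to end on a tiny network. The sketch below trains a hypothetical 2-2-1 network on XOR with a squared-error loss: the forward pass computes the output, the chain rule carries the output error back through the layers, and each weight is nudged downhill. With so few units and a random start the network may or may not fully solve XOR, but the error reliably decreases.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A tiny 2-2-1 network; every weight starts as a small random number.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
    return h, y

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

before = mse()
lr = 0.5
for epoch in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Chain rule: error at the output unit (for loss 0.5 * (y - t)^2)...
        dy = (y - t) * y * (1 - y)
        # ...propagated backwards to each hidden unit.
        dh = [dy * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Gradient-descent step: nudge every weight and bias downhill.
        for j in range(2):
            W2[j] -= lr * dy * h[j]
            W1[j][0] -= lr * dh[j] * x[0]
            W1[j][1] -= lr * dh[j] * x[1]
            b1[j] -= lr * dh[j]
        b2 -= lr * dy
after = mse()
```

The same loop, with more layers, more units, and smarter update rules, is essentially what modern deep-learning libraries automate.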
What is the difference between neural networks and conventional calculations?
To better understand artificial neural computation, it is important to first learn how a conventional computer and its software process information. A serial computer has a central processor that can address an array of memory locations in which data and instructions are stored. The processor reads an instruction, along with any data the instruction requires, from memory addresses; the instruction is then executed and the results are written to a specified memory location. In a serial system, the computational steps are deterministic, sequential, and logical, and the state of a given variable can be traced from one operation to the next.
By comparison, neural networks are neither sequential nor necessarily deterministic. There is no complex central processor; instead there are many simple processing units, each usually doing nothing more than computing a weighted sum of its inputs from other units. They do not execute programmed instructions; they respond in parallel to the pattern of input data presented to them. There are also no separate memory addresses for storing data. Instead, information is contained in the overall "state" of the network's activation. "Knowledge" is thus represented by the network itself, which is quite literally more than the sum of its individual components.
What applications should be used in neural networks?
Neural networks are universal approximators, and they work best if the system you are modelling has a high tolerance for error. That is why it is not recommended to use a neural network to balance your chequebook! They do, however, work very well for:
- capturing associations or discovering regularities within a set of patterns;
- problems where the volume, number of variables, or diversity of the data is very large;
- problems where the relationships between variables are only vaguely understood;
- relationships that are difficult to describe adequately with conventional approaches.
What are their limitations?
There are many advantages and limitations to neural network analysis, and to discuss the topic properly we would have to look at specific types of networks, which is not necessary for this general discussion. With respect to backpropagation networks, however, there are some specific problems potential users should be aware of.
- Backpropagation neural networks (and many other types of networks) are in a sense the ultimate "black boxes". Apart from defining the general architecture of the network and perhaps seeding it with random numbers, the user has no role other than to feed it input, watch it train, and await the output. Some freely available software packages let the user sample the network's progress at regular intervals, but the learning itself proceeds on its own. The final product of this activity is a trained network that provides no equations or coefficients defining the relationship beyond its own internal mathematics.
- Backpropagation networks are also slower to train than other types of networks, sometimes requiring thousands of epochs. On a truly parallel computer system this is not an issue, but when the network is simulated on a standard serial machine, training can take a long time: the processor must compute the function of each node and connection separately, which can be problematic in very large networks with large amounts of data. However, most current machines are fast enough that this is usually not a major problem.
What are their advantages over conventional techniques?
Depending on the nature of the application and the strength of the internal data patterns, a network can generally be expected to train well. This applies to problems in which the relationships may be quite dynamic or nonlinear. Neural networks are an analytical alternative to conventional techniques, which are often limited by strict assumptions of normality, linearity, variable independence, and so on. Because neural networks can capture many kinds of relationships, they let the user quickly and relatively easily model phenomena that would otherwise be very difficult or impossible to explain.
It is said that neural networks are experiencing a renaissance today, mainly thanks to their use in analysing ever larger data sets and a constant flow of new variables that conventional programs usually cannot cope with. Neural networks have also found application in many areas of life where people would be overwhelmed by too much data, mainly in decision-making processes. Who knows – maybe soon we will employ advisors, analysts, or fortune-tellers with artificial intelligence based on neural networks.