Interest in machine learning has been growing year after year and exploded about a decade ago. Machine learning comes up everywhere, from IT programmes, through industry conferences, to ordinary newspapers. At its core, machine learning uses algorithms to extract information from raw data and represent it in a model, which we then apply to new data that has not yet been modeled.
The beginnings of deep learning
In the mid-eighties and early nineties many important advances were made in neural network architectures. However, the amount of time and data needed to get good results was daunting. In the early 2000s computing power grew exponentially, and industry began to see the potential in these solutions. Deep learning emerged as a serious contender in the field, winning many important machine learning competitions. Today there is no talking about ML without mentioning deep learning.
The scope of artificial intelligence is wide: deep learning is a subset of machine learning, which in turn is a subarea of artificial intelligence.
Basic network architectures
Deep learning can be defined as neural networks with a large number of parameters and layers, organized in one of four basic network architectures:
- Unsupervised Pre-trained Networks. Unsupervised pre-training is a method that can sometimes help with both optimization and overfitting. Pre-training a neural network has been shown to act as a regularization technique, improving performance and reducing the variance of the model.
- Convolutional Neural Network (CNN) is simply a standard neural network that has been extended across space using shared weights. A CNN is designed to recognize images, using convolutions inside that detect, for example, the edges of the object being recognized (a minimal convolution sketch follows this list).
- Recurrent Neural Network (RNN) is designed to recognize sequences, e.g. speech or text. It has cycles inside, which means the network has a short-term memory.
- Recursive Neural Network is more like a hierarchical network: there is no real time aspect to the input sequence, but the input must be processed hierarchically, in a tree-like fashion.
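To make the convolution idea concrete, here is a minimal sketch (not from the original article; the filter values and toy image are made up for illustration) of a single 2-D convolution with a hand-coded vertical-edge filter. In a real CNN, such filter weights are learned from data rather than fixed by hand.

```python
import numpy as np

# Slide a small kernel over the image and take a weighted sum at each position.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A Sobel-style vertical edge detector; a trained CNN learns such filters itself.
kernel = np.array([[1, 0, -1],
                   [2, 0, -2],
                   [1, 0, -1]], dtype=float)

image = np.zeros((8, 8))
image[:, 4:] = 1.0            # a vertical edge in the middle of the image
print(conv2d(image, kernel))  # strong (here negative) responses along the edge
```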
Deep learning methods
Below are short descriptions of methods that apply across all of the above architectures; a brief illustrative code sketch for each method follows the list:
- Backpropagation is the basic learning algorithm for supervised multilayer feed-forward neural networks. By propagating the network's learning errors, computed at its output, backwards from the output layer to the input layer, backpropagation has become one of the most effective network learning algorithms (a runnable sketch appears after this list).
- Gradient descent. An intuitive way to think about gradient descent is to imagine a river flowing down from a mountaintop. The goal of gradient descent is exactly what the river achieves: to reach the lowest point at the foot of the mountain (sketched in code below).
- Learning Rate Decay. Adapting the learning rate during gradient descent optimization can improve performance and reduce training time. This is sometimes called learning rate annealing or adaptive learning rates. The simplest and perhaps most commonly used adaptation is a schedule that reduces the learning rate over time. This has the advantage of making big changes at the start of training, when larger learning rates are used, and then lowering the rate so that smaller updates are applied to the weights later in the procedure. The result is early learning of good weights followed by fine-tuning of them later (a sample schedule is sketched after the list).
- Dropout. Deep neural networks with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, which makes it difficult to deal with overfitting by combining the predictions of many different large neural networks at test time. Dropout is a technique for addressing this problem (see the sketch below).
- Max Pooling is a sample-based discretization process. The goal is to down-sample an input representation (an image, a hidden-layer output matrix, etc.), reducing its dimensionality and allowing assumptions to be made about the features contained in the sub-regions (a small example follows the list).
- Batch Normalization normalizes the inputs of each layer across a mini-batch, which makes deep networks less sensitive to weight initialization and allows higher learning rates (sketched below).
- Long Short-Term Memory. An LSTM network has three aspects that distinguish it from the usual neuron in a recurrent neural network: it has control over when to let new input into the cell, control over when to remember what was computed in the previous time step, and control over when to pass its output on to the next time step. The beauty of LSTM is that it decides all of this based on the current input itself (a single-step sketch follows the list).
- Skip-gram. The purpose of word-embedding models is to learn a dense multidimensional representation for each term in the dictionary, in which similarity between embedding vectors reflects semantic or syntactic similarity between the corresponding words. Skip-gram is one learning algorithm for word embeddings. The main idea of the skip-gram model (and of many other word-embedding models) is as follows: two dictionary terms are similar if they appear in similar contexts (illustrated below).
- Transfer Learning. The general idea is to take knowledge a model has learned on a task with plenty of labeled training data and apply it to a new task where little data is available. Instead of starting the learning process from scratch, you start from patterns learned while solving a related task (see the sketch after this list).
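First, backpropagation. This is a minimal sketch, assuming a two-layer sigmoid network trained on XOR; the network size, learning rate, and step count are arbitrary choices for illustration.

```python
import numpy as np

# Backpropagation on XOR: the error computed at the output layer is
# propagated backwards to update both weight matrices.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 0.5

for step in range(5000):
    h = sigmoid(X @ W1 + b1)            # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = out - y                     # output error (cross-entropy + sigmoid)
    d_h = (d_out @ W2.T) * h * (1 - h)  # error sent back to the hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```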
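Next, gradient descent in its plainest form, on a one-dimensional function chosen only for illustration:

```python
# Walking downhill on f(x) = (x - 3)^2, whose minimum, the "foot of the
# mountain", is at x = 3.
f_grad = lambda x: 2 * (x - 3)   # derivative of f

x = 10.0                         # starting point, high on the "mountain"
lr = 0.1                         # step size (learning rate)
for _ in range(50):
    x -= lr * f_grad(x)          # step in the direction of steepest descent

print(round(x, 4))               # very close to 3.0
```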
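For learning rate decay, here is one common time-based schedule (the constants are assumptions for illustration; step-wise and exponential schedules are also widely used):

```python
# Time-based decay: lr_t = lr0 / (1 + decay * t).
# Big steps early in training, progressively finer updates later.
lr0, decay = 0.5, 0.05

for epoch in range(0, 101, 20):
    lr = lr0 / (1 + decay * epoch)
    print(f"epoch {epoch:3d}: lr = {lr:.4f}")
# epoch   0: lr = 0.5000  ...  epoch 100: lr = 0.0833
```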
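For dropout, a sketch of the inverted-dropout variant, where activations are rescaled at training time so the network can be used unchanged at test time (the keep probability is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, keep_prob=0.5, training=True):
    if not training:
        return activations             # test time: use the full network
    mask = rng.random(activations.shape) < keep_prob  # randomly keep units
    return activations * mask / keep_prob             # rescale the survivors

h = np.ones((2, 6))
print(dropout(h))   # roughly half the units zeroed, the rest scaled by 2
```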
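For max pooling, a sketch of 2x2 pooling with stride 2, the most common configuration:

```python
import numpy as np

# Each output value is the maximum of a 2x2 sub-region,
# halving each spatial dimension of the input.
def max_pool2x2(x):
    h, w = x.shape
    trimmed = x[:h - h % 2, :w - w % 2]   # drop odd edge rows/columns
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool2x2(x))   # [[ 5.  7.] [13. 15.]]
```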
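For batch normalization, a training-time sketch; a full implementation would also keep running mean and variance estimates for use at test time, and gamma and beta would be learned rather than fixed:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    mean = x.mean(axis=0)                  # per-feature mean over the batch
    var = x.var(axis=0)                    # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta            # learnable scale and shift

batch = np.random.default_rng(0).normal(5.0, 3.0, (32, 4))
out = batch_norm(batch)
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))  # ~0 and ~1
```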
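For LSTM, a sketch of a single cell step showing the input, forget, and output gates described above (the weight shapes and initialization are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

n_in, n_hid = 3, 4
W = rng.normal(0, 0.1, (4 * n_hid, n_in + n_hid))   # all four blocks stacked
b = np.zeros(4 * n_hid)

def lstm_step(x, h_prev, c_prev):
    z = W @ np.concatenate([x, h_prev]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input, forget, output gates
    g = np.tanh(g)                                # candidate cell values
    c = f * c_prev + i * g                        # updated cell memory
    h = o * np.tanh(c)                            # output for this time step
    return h, c

h = c = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):              # run over a 5-step sequence
    h, c = lstm_step(x, h, c)
print(h.round(3))
```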
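For skip-gram, a sketch of how the (target, context) training pairs are generated from a toy sentence; an embedding model such as word2vec would then be trained to predict the context word from the target word:

```python
# Each word predicts the words in a window around it, so words appearing
# in similar contexts end up with similar embeddings.
sentence = "the quick brown fox jumps over the lazy dog".split()
window = 2

pairs = []
for i, target in enumerate(sentence):
    for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
        if j != i:
            pairs.append((target, sentence[j]))

print(pairs[:6])
# [('the', 'quick'), ('the', 'brown'), ('quick', 'the'), ...]
```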
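Finally, for transfer learning, a sketch using PyTorch and torchvision (assumed to be installed, with torchvision >= 0.13 for the weights API; the ResNet-18 backbone and 10-class head are arbitrary choices for illustration):

```python
import torch.nn as nn
from torchvision import models

# Reuse a ResNet-18 pretrained on ImageNet, freeze its learned features,
# and train only a new classification head on a small 10-class dataset.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():
    param.requires_grad = False                     # keep the pretrained patterns

model.fc = nn.Linear(model.fc.in_features, 10)      # new, trainable head
# From here, train only model.fc as usual on the small labeled dataset.
```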
Deep learning is strongly technology-driven. There are not many precise theoretical explanations for each of the new ideas; most appeared together with experimental results demonstrating that they work.