In this tutorial, I will give you an introduction to deep learning with the Raspberry Pi. The Raspberry Pi is a low-cost, **credit-card sized computer** that you can plug into a computer monitor or TV, and that works with a standard keyboard and mouse.

**≡ What is Deep Learning?**

Deep learning is a subfield of machine learning, which is, in turn, a subfield of artificial intelligence (AI). For a graphical depiction of this relationship, picture the three fields as nested circles, with deep learning innermost.

The basic goal of AI is to provide a set of algorithms and techniques that can be applied to solve problems that humans perform intuitively and nearly automatically.

An example of such a class of AI problems is reading and understanding the contents of an image – this task is something that a human can do with little-to-no effort, but it has proven to be extremely difficult for machines to accomplish.

While AI encompasses a large, diverse body of work related to automatic machine reasoning, the machine learning subfield tends to be especially interested in pattern recognition and learning from data.

Artificial Neural Networks (ANNs) are a class of machine learning algorithms that learn from data and specialize in pattern recognition, inspired by the structure and function of the human brain. Deep learning belongs to the family of ANN algorithms, and in most cases the two terms can be used interchangeably.

You may be surprised to learn that the deep learning field has been around for more than 60 years, going by different names and incarnations based on research trends, available hardware and datasets, and the popular opinions of leading researchers at the time.

**≡ History of Neural Networks with Deep Learning**

The history of neural networks and deep learning is a long one. It may amaze you to know that “deep learning” has existed since the 1940s, undergoing various name changes, including cybernetics, connectionism, and the most familiar, Artificial Neural Networks (ANNs).

While motivated by the human brain and how its neurons interact with each other, ANNs are not meant to be realistic models of the brain. They are an inspiration, allowing us to draw parallels between a very basic model of the brain and how we can mimic some of its behavior through artificial neural networks. The first neural network model came from McCulloch and Pitts in 1943.

This network was a binary classifier, capable of recognizing two different categories based on some input. The problem was that the weights used to determine the class label for a given input had to be manually tuned by a human – this type of model clearly does not scale well if a human operator is needed to intervene.

Then, in the 1950s, the first Perceptron algorithm was published by Rosenblatt – this model could automatically learn the weights needed to classify an input.

An example of the Perceptron architecture can be seen in Figure.
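To make the idea concrete, here is a minimal sketch of a Perceptron in plain Python. This is an illustration rather than Rosenblatt's exact 1958 formulation: the learning rate, epoch count, and the choice of the logical AND problem are assumptions made for the example.

```python
# A minimal Perceptron sketch (illustrative; not Rosenblatt's exact formulation).
# It learns the weights for a linearly separable problem (logical AND) from data.

def predict(w, b, x):
    """Step activation: output 1 if the weighted sum exceeds the threshold."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(data, epochs=10, lr=1.0):
    """Perceptron learning rule: nudge weights toward each misclassified example."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            err = y - predict(w, b, x)                     # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)
print([predict(w, b, x) for x, _ in AND])  # -> [0, 0, 0, 1]
```

Because AND is linearly separable, the learning rule converges to a separating line in a handful of epochs – exactly the kind of problem the original Perceptron could solve.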

In fact, this automatic training procedure formed the basis of Stochastic Gradient Descent (SGD), which is still applied to train very deep neural networks today. During this time period, Perceptron-based techniques were all the rage in the neural network community. However, a 1969 publication by Minsky and Papert stagnated neural network research for nearly a decade.

Their work showed that a Perceptron with a linear activation function (regardless of depth) was merely a linear classifier, unable to solve nonlinear problems.

Take a second now to convince yourself that it is impossible to draw a single line that divides the blue stars from the red circles in the classic XOR plot. The authors also argued that we did not have the computational resources needed to construct large, deep neural networks. This single paper alone almost destroyed neural network research. Luckily, the backpropagation algorithm and the research by Werbos (1974), Rumelhart (1986), and LeCun (1998) were able to resuscitate neural networks from what could have been an early demise.

Their research on the backpropagation algorithm enabled multi-layer feedforward neural networks to be trained. Combined with nonlinear activation functions, researchers could now learn nonlinear functions and solve the XOR problem, opening the gates to an entirely new area of research in neural networks.
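To see why the nonlinear activation is the key ingredient, here is a tiny two-layer network that computes XOR. The weights are set by hand purely for illustration – in practice backpropagation would learn a similar solution automatically – and the specific thresholds are assumptions chosen to make the logic readable.

```python
# A hand-wired two-layer network solving XOR (weights chosen by hand for
# illustration; backpropagation would learn an equivalent solution from data).

def step(z):
    """Nonlinear step activation -- the ingredient a purely linear model lacks."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)     # hidden unit 1 acts as OR
    h2 = step(1.5 - x1 - x2)     # hidden unit 2 acts as NAND
    return step(h1 + h2 - 1.5)   # output: AND of the hidden units = XOR

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # -> [0, 1, 1, 0]
```

No single-layer (linear) classifier can reproduce this output table, which is precisely the limitation Minsky and Papert identified; stacking a second layer behind a nonlinearity removes it.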

Further research demonstrated that neural networks are universal approximators, capable of approximating any continuous function. The backpropagation algorithm is the basis of modern-day neural networks, allowing us to efficiently train neural networks and “teach” them to learn from their mistakes. But even so, at this time, due to slow computers and a lack of large, labeled training sets, researchers were unable to (reliably) train neural networks that had more than two hidden layers – it was simply computationally infeasible.
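As a minimal sketch of the gradient-descent update at the heart of this training procedure, the loop below uses SGD to fit a one-parameter linear model; backpropagation extends the same "step each weight against its error gradient" rule to every layer of a deep network via the chain rule. The learning rate, epoch count, and toy dataset are illustrative assumptions.

```python
# A minimal SGD sketch: fit y = 2x + 1 from samples by repeated gradient steps.
# Backpropagation applies this same update to all weights in a multi-layer
# network; the learning rate and data here are illustrative choices.

data = [(x, 2 * x + 1) for x in (0.0, 1.0, 2.0, 3.0)]  # samples of y = 2x + 1
w, b, lr = 0.0, 0.0, 0.05

for _ in range(200):                 # 200 passes over the data
    for x, y in data:
        err = (w * x + b) - y        # derivative of 0.5 * err**2 w.r.t. the prediction
        w -= lr * err * x            # step each parameter against its gradient
        b -= lr * err

print(round(w, 2), round(b, 2))      # converges close to the true values 2 and 1
```

Each misclassified-by-a-little example nudges the parameters a small amount, and over many passes the model "learns from its mistakes" – the same intuition that scales up to deep networks.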

The latest incarnation of neural networks as we know it is called deep learning. What sets deep learning apart from its previous incarnations is that we have faster, specialized hardware and more available training data. We can now train networks with many more hidden layers that are capable of hierarchical learning, where simple concepts are learned in the lower layers and more abstract patterns in the higher layers of the network.

Perhaps the quintessential example of applying deep learning to feature learning is the Convolutional Neural Network (LeCun 1988) applied to handwritten character recognition, which automatically learns discriminating patterns from images by sequentially stacking layers on top of each other. Filters in lower levels of the network represent edges and corners, while higher-level layers use those edges and corners to learn more abstract concepts useful for discriminating between image classes.

More tutorials on Deep Learning with the Raspberry Pi

Visit My Blog
