Simple Guide to Neural Networks and Deep Learning in Python

Machine Learning (http://blog.hackerearth.com/machine-learning) · March 7, 2017

Introduction
Deep learning is one of the most powerful learning algorithms of the digital era. It has found a unique place in various industrial applications, such as fraud detection in credit approval, automated bank loan approval, and stock price prediction. Some of the more recent uses of neural networks are image recognition and speech recognition. In fact, you'd be amazed to know that Google incorporates neural networks into its image search and voice applications. Furthermore, the first successful deep learning model for speech recognition, made by Microsoft, is now used in Cortana.

This article provides a detailed introduction to deep learning methods.

The first section presents a detailed introduction to the perceptron model and a Python implementation of the algorithm. Although the perceptron model is a linear classifier and has limited applications, it forms the building block of multi-layered neural networks; hence, it is imperative for you to learn it.

In the second section, a rather complicated extension of the perceptron model, called the Deep Learning Network (also known as a Multilayered Neural Network), is introduced along with its mathematical and algorithmic development. Additionally, a task of intelligent decision making has been implemented using Deep Learning.

Note: This article is best suited for people having a concrete understanding of mathematical concepts such as differentiation, matrix multiplication, etc.

What is the Perceptron Algorithm?
The human brain is astonishingly smart and powerful. It is capable of recognizing the underlying patterns in the world's massive and noisy information, memorizing those patterns, and generalizing that knowledge to make informed decisions.

The perceptron model is an artificial neural network inspired by biological neural networks and is used to approximate functions that are generally unknown. It is inspired by the behavior of neurons, which convey electrical signals between input (such as from the eyes or nerve endings in the hand), processing, and output from the brain (such as reacting to light, touch, or heat). A biological neuron generates an electrical signal if the strength of the input signal is greater than a threshold value. The biological neural network is a layer of interconnected neurons which receives an external stimulus (such as a sensation of heat); the information propagates all the way to the brain, which in turn generates a response signal that again travels through the neural network.

(http://blog.hackerearth.com/wp-content/uploads/2017/03/Capture-2.png)

The Perceptron model was an attempt by Frank Rosenblatt in 1957 to replicate human learning inside computers. Here, the neural computational units are called perceptrons. The model is composed of two layers: an input and an output layer. The output layer is composed of perceptron units. Each output unit is connected to all the inputs through a weight. A perceptron takes the weighted summation of the input signals and processes this sum through a function called the "activation function". For a given activation function f, if the weighted summation of the input is z, then the output produced by the perceptron is f(z).

The activation function of a perceptron is the Heaviside step function H(z).

(http://blog.hackerearth.com/wp-content/uploads/2017/01/Capture-2.png)

Here, z_i is the weighted summation of the input vector, w_ij is the weight connecting the ith perceptron to the jth input node, and y_i is the output of the ith perceptron:

y_i = H(z_i),   where z_i = Σ_j w_ij x_j

The figure above depicts an input vector with components 1, x1, and x2 connected to a single perceptron unit through weights w0, w1, and w2, respectively. The physical interpretation of a weight is the strength of the connection: a weight with a very high value will prejudice the perceptron's response toward the signal from its corresponding input node, and vice versa. Here, the input node with value 1 is called a bias node. The role and significance of the bias node will be explained later in this section; for now, let's treat the bias as a regular input node.

Let's analyze the figure above by setting x2 = 0 and w0, w1, and w2 equal to 0.5, 1, and 3, respectively.

(http://blog.hackerearth.com/wp-content/uploads/2017/02/Capture-6.png)

So, it's evident that the response of the perceptron is heavily prejudiced toward the values of x1, i.e., there exist values of x1 < x2 for which the perceptron generates a positive signal.
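
To make this concrete, here is a minimal sketch (not from the original article) of a single perceptron evaluating the example above, with weights 0.5, 1, and 3 and x2 = 0:

import numpy

def heaviside(z):
    # Heaviside step activation: 1 if z > 0, else 0
    return 1 if z > 0 else 0

w = numpy.array([0.5, 1.0, 3.0])   # w0 (bias weight), w1, w2
x = numpy.array([1.0, 0.2, 0.0])   # input (1, x1, x2) with x1 = 0.2, x2 = 0

z = numpy.dot(w, x)                # weighted summation: 0.5*1 + 1*0.2 + 3*0 = 0.7
print(heaviside(z))                # 1: the perceptron fires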

Let's consider the task of character recognition. Identifying a character from its image is a complicated, intelligent task. It's almost impossible to explicitly code a bot to identify different letters. Every time we write a letter, say "a," it differs from all previous times in scale, position, color, and orientation. A more sophisticated and intelligent model would be one which itself infers the underlying significance of the character from the training data.

Let's say you are given a bunch of images of the handwritten characters "a" and "b" along with their labels: a label of 1 corresponds to an image of character "a" and 0 to an image of character "b." Using these, we need to train the perceptron network to classify the images into two sets, one of character "a" and another of character "b." Any digital image is basically a matrix of pixel values, which determine the color at specific points on the screen. This matrix of pixel values can be converted into an array of pixels, and this array can then be used as an input for the network.

(http://blog.hackerearth.com/wp-content/uploads/2017/01/Capture-4.png)

We now train the model such that whenever an image of character "a" is given it returns 1, and it returns 0 if an image of character "b" is given. For a single input vector and its output, there exist many solutions for the weights, but the weights have to be updated to values which give the desired output for all the examples in the training set. In doing so, the common underlying features embedded in the examples are captured and assigned higher weight values. Noisy, distinctive features which don't tend to appear in every example are weighted negligibly. So, after training, when a new image is fed to the model, it will look for the heavily weighted features in the signal and judge its class.

Further, a methodology has to be devised to tune the values of the weights for appropriate decision making. Since the output of a perceptron can be +1 or 0, the error function E of the ith perceptron for the mth training example can be +1, -1, or 0:

E_i^m = t_i^m − y_i^m

where E_i^m, t_i^m, and y_i^m are the error, target value (actual desired output), and output value of the ith perceptron for the mth training example. If E_i^m = 0, then the weights associated with that perceptron are appropriate and don't need to be changed. But if E_i^m = +1, the output produced is 0 when +1 was desired, hence the weights associated with that perceptron need an ascent: each weight is increased by a fraction of the corresponding input node. Similarly, the weights are reduced if E_i^m = -1. The equation below does the required updates.

(http://blog.hackerearth.com/wp-content/uploads/2017/01/Capture-5.png)

w_ij ← w_ij + α (t_i^m − y_i^m) x_j^m     (1)

Here, α determines the learning rate. For higher values of α, the model tends to learn faster but has a lower chance of procuring an optimum solution, whereas for lower α, the model learns slowly but has a better chance of procuring an optimum solution. Consequently, a moderate choice of learning rate is made, ranging from 0.1 to 0.8. However, the best strategy to determine α is trial and error.

The above strategy to update the weights might fail if the value of all the input nodes is 0, because then the second term on the right-hand side of equation (1) will be zero. To avoid such a complication, biasing is introduced.

Biasing: an extra input node, called the bias node, is added whose value is ±1. The bias node ensures that the weights get updated when they need to be, even when all the input nodes of a vector are 0. Suppose input (0, 0) has target value 0 and the output given by the model is 1; the weights need to be updated, but using equation (1) it is easily seen that none of the weights would get updated without biasing.
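
As a quick sketch of the point (assuming the update rule of equation (1) with α = 0.5):

import numpy

alpha = 0.5
t, y = 0, 1                               # target 0, output 1: error t - y = -1

x = numpy.array([0.0, 0.0])               # input (0, 0) without a bias node
print(alpha * (t - y) * x)                # [ 0.  0.]: no weight changes at all

xb = numpy.array([-1.0, 0.0, 0.0])        # same input with a bias node of -1
print(alpha * (t - y) * xb)               # [ 0.5  0.  0.]: the bias weight updates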

Algorithmic Development:

Initialization
    Set all weights w_ij to random initial values between 0 and 1.

Training
    for T iterations:
        for all m training examples:
            compute the activation of each neuron i using the activation function H(z):

(http://blog.hackerearth.com/wp-content/uploads/2017/01/Capture-7.png)

            update the weights using:

(http://blog.hackerearth.com/wp-content/uploads/2017/01/Capture-8.png)

Implementation
A perceptron neural network can be used for the classification of a logical function. A logical function takes two input parameters, In1 and In2. The inputs of the logical function are digital, i.e., the input values can be either 0 or 1. There are different types of logical functions; the following is a classification of the binary OR function.

Binary OR

(http://blog.hackerearth.com/wp-content/uploads/2017/01/Capture-9.png)

In1 and In2 are the components of the input vectors, and In1 OR In2 is their corresponding output.

(http://blog.hackerearth.com/wp-content/uploads/2017/03/Capture.png)

There are two different types of outputs of the binary OR function: 1, marked with a dot, and 0, marked with a plus sign in the figure above.

Using the perceptron model, we will try to determine the classification boundary between these two types of output.

Step 1: Import the libraries necessary for the project.

import random
import numpy
from numpy import dot, transpose

Step 2: Initialize the weights to random values. Using a numpy array (rather than a plain list) lets the weight update below work elementwise.

W = numpy.array([random.random() for i in range(3)])

Step 3: Define the input feature vectors and their corresponding targets in the arrays X and t, and set the value of the bias node to -1.

X = numpy.array([[-1, 1, 1], [-1, 1, 0], [-1, 0, 1], [-1, 0, 0]])
t = numpy.array([1, 1, 1, 0])

Step 4: Train the model. Here, z is an array containing the weighted summation for each input vector, and y is an array containing the activation output of the perceptron corresponding to each input. The weighted summation can be calculated using numpy's dot method, which performs matrix multiplication.

(http://blog.hackerearth.com/wp-content/uploads/2017/01/Capture-11.png)

(http://blog.hackerearth.com/wp-content/uploads/2017/01/Capture-12.png)

for i in range(10):
    z = dot(X, transpose(W))          # weighted summation for each input vector
    # Activation: maps sign(z) from {-1, +1} to {0, 1}
    y = 0.5 * (numpy.sign(z) + 1)
    # Weight update: W <- W + alpha * X^T (t - y), with alpha = 0.5
    W += 0.5 * dot(transpose(X), (t - y))

Output

(http://blog.hackerearth.com/wp-content/uploads/2017/01/Capture-13.png)

A linear classification boundary separating the two types of examples is obtained.

(http://blog.hackerearth.com/wp-content/uploads/2017/01/linear.png)

(http://blog.hackerearth.com/wp-content/uploads/2017/01/non-linear.png)
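
To see the boundary yourself, the learned weights can be turned into a line: the boundary lies where the weighted sum is zero, i.e., -W[0] + W[1]*x1 + W[2]*x2 = 0 (the bias input is -1). A sketch, assuming the training loop above has run and matplotlib is available:

import matplotlib.pyplot as plt

x1 = numpy.linspace(-0.5, 1.5, 50)
x2 = (W[0] - W[1] * x1) / W[2]                  # solve the boundary equation for x2

plt.plot(x1, x2)                                # decision boundary
plt.scatter([1, 1, 0], [1, 0, 1], marker='o')   # examples labeled 1
plt.scatter([0], [0], marker='+')               # example labeled 0
plt.show()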

Since all the operations in the model are linear, the model is limited to linear classification. To extend the model to non-linear classification, novel improvements have been introduced. The resulting method is known as the multi-layer perceptron model, or deep learning neural network.

What are Deep Learning Neural Networks?
In contrast to the perceptron network, the activation function of a deep learning neural network is non-linear, enabling it to learn complex and non-linear features of the system. In addition to the input and output layers, a deep learning architecture has a stack of hidden layers between the input and output layer. Deep learning neural networks are capable of extracting deep features out of the data; hence the name Deep Learning.

There are several different types of neural networks.

1. Feed forward neural network: This was the first and arguably the simplest type of artificial neural network devised. In this network, information moves in only one direction: forward. From the input nodes, data goes through the hidden nodes (if any) to the output nodes. There are no cycles or loops in the network.

2. Radial basis function (RBF) networks: These are used to perform interpolation in multidimensional space. An RBF is a function which has a distance criterion with respect to a center built into it. RBF networks have two layers of processing: in the first, the input is mapped onto each RBF in the hidden layer. The activation function chosen in an RBF network is a Gaussian function, contrary to feed forward networks, where the activation function is sigmoidal.

3. Recurrent neural networks (RNNs): These are networks comprising bi-directional data flow, i.e., information in the network can flow from later processing stages back to earlier stages. RNNs can be used as general sequence processors. The Hopfield network, invented by John Hopfield in 1982, is an RNN in which all connections are symmetric. Like similar attractor-based networks, it is of historic interest, although it is not a general RNN, as it is not designed to process sequences of patterns; instead, it requires stationary inputs.

This section presents a basic yet powerful form of deep learning neural network: the Feed Forward Neural Network.

The key idea of the model is that each feature of an input vector is connected to all the neurons of the first hidden layer, all the neurons of this layer are further connected to the next hidden layer, and so on. The last hidden layer is connected to the output neurons; each connection has a weight associated with it. The output neurons generate an output signal carrying information about the corresponding input signal.

(http://blog.hackerearth.com/wp-content/uploads/2017/01/Capture-14.png)

As in the perceptron model, each single neural processing unit takes the weighted summation of the signals arriving from the neurons of the previous layer and produces an output. The point of demarcation from the perceptron model is that the activation function of a neuron is the sigmoid g(z) instead of the Heaviside step function H(z). The advantage of choosing the sigmoid over the step function is that the sigmoid is very close to the step function, and it is also differentiable, which turns out to be an essential criterion for parameter tuning in neural networks.
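
A small sketch of the sigmoid and its derivative (the identity g'(z) = g(z)(1 − g(z)) is what makes the parameter tuning below tractable):

import numpy

def sigmoid(z):
    # Smooth approximation of the Heaviside step; squashes z into (0, 1)
    return 1.0 / (1.0 + numpy.exp(-z))

def sigmoid_prime(z):
    # Derivative g'(z) = g(z) * (1 - g(z)), used later in back propagation
    g = sigmoid(z)
    return g * (1.0 - g)

print(sigmoid(0.0))         # 0.5, the midpoint of the step
print(sigmoid_prime(0.0))   # 0.25, the maximum slope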

(http://blog.hackerearth.com/wp-content/uploads/2017/01/Capture-15.png)

(http://blog.hackerearth.com/wp-content/uploads/2017/01/Capture-16.png)

where y_i is the output of the ith neuron and w_ij is the weight connecting the ith neuron to the jth input node.

Another difference from the perceptron model is that here, multiple hidden layers are stacked between the input and output layers, and each of these layers acts as the input for the subsequent layer. As the signal progresses through each layer, more detailed and finer structures are recovered; the introduction of hidden layers thus enables the network to penetrate into deeply embedded features in the data. This has made learning more comprehensive.

Further, the sigmoid activation adds non-linearity to the model, which enables it to learn non-linear boundaries between different classes, making it a better and more complex brain for machines.

Learning in neural networks progresses in a similar fashion as in the perceptron model. First, training examples are fed to the network and the input signal propagates through each layer, producing an output. This mechanism is known as "Feed Forward." Then, the weights are tuned such that the output produced by the network matches the target values of the corresponding training examples. This tuning is done by an algorithm called "Back Propagation."

Let's consider a network with n hidden layers, and let x_i^m denote the ith component of the mth training example. In the context of this framework, Feed Forward and Back Propagation are illustrated below.

Feed Forward:

(http://blog.hackerearth.com/wp-content/uploads/2017/01/Capture-17.png)

where y_i^L is the output of the ith hidden neuron in the Lth hidden layer and w_ij^L are the weights connecting the ith hidden neuron of the Lth hidden layer to the jth node of the previous layer.

(http://blog.hackerearth.com/wp-content/uploads/2017/01/Capture-18.png)

where y_i^m is the output of the ith output neuron for the mth training example and, since there are n hidden layers in the network, y_j^n represents the output of the jth neuron in the last hidden layer.
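
In code, the feed-forward pass is just a chain of matrix products and sigmoids. A minimal numpy sketch (not from the original article; bias nodes are omitted for brevity):

import numpy

def sigmoid(z):
    return 1.0 / (1.0 + numpy.exp(-z))

def feed_forward(x, weights):
    # weights is a list of matrices; weights[L][i, j] connects neuron i of
    # layer L to node j of the previous layer
    y = x
    for W in weights:
        y = sigmoid(numpy.dot(W, y))   # y_i^L = g(sum_j w_ij^L * y_j^(L-1))
    return y

# Example: 2 inputs -> 3 hidden neurons -> 1 output neuron
rng = numpy.random.default_rng(0)
weights = [rng.uniform(size=(3, 2)), rng.uniform(size=(1, 3))]
print(feed_forward(numpy.array([0.5, -1.0]), weights))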

Back Propagation: The backward propagation of errors, or back propagation, is a common method of training artificial neural networks, used in conjunction with gradient descent optimization. The optimization objective of back propagation is the cost function J.

(http://blog.hackerearth.com/wp-content/uploads/2017/01/Capture-19.png)

Gradient Descent is an iterative procedure to optimize an objective. For the optimization problem above,

(http://blog.hackerearth.com/wp-content/uploads/2017/01/Capture-20.png)

where α determines the learning rate and functions as in the perceptron model.

There are two variants of incorporating gradient descent in back propagation, depending on the choice of the optimization objective J.

Batch Gradient Descent (BGD): J = (1/2) Σ_m Σ_i (t_i^m − y_i^m)²

Stochastic Gradient Descent (SGD): J^m = (1/2) Σ_i (t_i^m − y_i^m)²

Here, t_i^m is the target value of the ith output neuron for the mth training example.

Since the cost function in BGD is the sum of squared errors over all training examples, the solution converges to a local minimum, whereas in SGD, since the weights are updated independently for each training example, the solution doesn't converge to a local minimum; instead, it oscillates near the optimum regime.

However, SGD is preferred over BGD because of its lower per-update time complexity.
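
The contrast in code, on a toy single-neuron least-squares problem (a sketch, not the article's network; the linear activation keeps the gradients short):

import numpy

rng = numpy.random.default_rng(0)
X = rng.uniform(size=(8, 3))              # 8 training examples, 3 features
t = X @ numpy.array([1.0, -2.0, 0.5])     # targets from a known weight vector
alpha = 0.1

# Batch Gradient Descent: one update per pass over all examples
w = numpy.zeros(3)
for epoch in range(500):
    grad = -(t - X @ w) @ X               # dJ/dw summed over every example
    w -= alpha * grad / len(X)

# Stochastic Gradient Descent: one update per training example
w_sgd = numpy.zeros(3)
for epoch in range(500):
    for x_m, t_m in zip(X, t):
        w_sgd -= alpha * -(t_m - x_m @ w_sgd) * x_m

print(w, w_sgd)    # both approach [1.0, -2.0, 0.5]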

To tune the weights to produce the desired output, the cost J has to be optimized. By the chain rule,

∂J/∂w_ij^L = (∂J/∂y_i^L) · (∂y_i^L/∂z_i^L) · (∂z_i^L/∂w_ij^L)     (chain rule)   (1)

where z_i^L = Σ_j w_ij^L y_j^(L−1); therefore

∂z_i^L/∂w_ij^L = y_j^(L−1)   (2)

∂y_i^L/∂z_i^L = g'(z_i^L) = y_i^L (1 − y_i^L)   (3)

∂J/∂y_i^L = Σ_k δ_k^(L+1) w_ki^(L+1)   (4)

from equation (4), together with (3),

δ_i^L = y_i^L (1 − y_i^L) Σ_k δ_k^(L+1) w_ki^(L+1)   (5)

and for the output layer, where ∂J/∂y_i = (y_i − t_i),

δ_i = (y_i − t_i) y_i (1 − y_i)   (6)

∂J/∂w_ij = δ_i y_j^n   (7)

where δ_i^L = ∂J/∂z_i^L.

In back propagation, gradient descent starts from the output layer and propagates all the way back to the input layer. The δ computed in the Lth layer is used in computing δ in the (L−1)th layer.

Algorithm:
n = number of hidden layers.

for T iterations:
    obtain δ_i and ∂J/∂w_ij for the output layer using (6) and (7)
    for k = 1 to n:
        using the δ calculated in the previous step, obtain δ for the
        (n − k + 1)th layer using equation (5)
        from the above result, calculate ∂J/∂w_ij^L for that layer
    end for
    for L ∈ {0, 1, ..., n}:
        update w_ij^L
end for

Repeating the Back Propagation step for several iterations converges the weights to the optimum solution.
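
To tie the equations together, here is a minimal numpy sketch (not from the original article) of Feed Forward plus Back Propagation with SGD, learning the XOR function with one hidden layer and bias nodes as in the perceptron section:

import numpy

def sigmoid(z):
    return 1.0 / (1.0 + numpy.exp(-z))

def add_bias(v):
    # prepend a constant bias node, as in the perceptron section
    return numpy.concatenate(([1.0], v))

# XOR: the classic task a single perceptron cannot solve
X = numpy.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = numpy.array([0.0, 1.0, 1.0, 0.0])

rng = numpy.random.default_rng(1)
W1 = rng.uniform(-1, 1, size=(4, 3))   # (bias + 2 inputs) -> 4 hidden neurons
W2 = rng.uniform(-1, 1, size=(1, 5))   # (bias + 4 hidden) -> 1 output neuron
alpha = 0.5

for epoch in range(5000):
    for x_m, t_m in zip(X, t):
        # Feed Forward
        xb = add_bias(x_m)
        h = sigmoid(W1 @ xb)                 # hidden-layer outputs
        hb = add_bias(h)
        y = sigmoid(W2 @ hb)                 # network output

        # Back Propagation
        delta_out = (y - t_m) * y * (1 - y)                   # eq. (6)
        delta_hid = h * (1 - h) * (W2[:, 1:].T @ delta_out)   # eq. (5)

        # Gradient descent updates, eq. (7) and its hidden-layer analogue
        W2 -= alpha * numpy.outer(delta_out, hb)
        W1 -= alpha * numpy.outer(delta_hid, xb)

def predict(x):
    h = sigmoid(W1 @ add_bias(x))
    return sigmoid(W2 @ add_bias(h))[0]

print([round(float(predict(x)), 2) for x in X])   # typically approaches [0, 1, 1, 0]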

Implementation:
The following is an implementation of non-linear multiclass classification of the Iris data set, written in Python using the Keras library. The task is to identify the class of an iris flower from its physical dimensions.

(http://blog.hackerearth.com/wp-content/uploads/2017/01/Capture-21.png)

Attributes: petal length, petal width, sepal length, sepal width

Classes: Iris is categorized into three different classes: "Iris-setosa," "Iris-versicolor," and "Iris-virginica."

Data source: https://archive.ics.uci.edu/ml/datasets/Iris

Neural Network Architecture: one hidden layer consisting of five neural nodes.

Step 1: Randomly segregate the data into two parts: a training set with 120 examples and a testing set with 30 examples.
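
One way to do the split (a sketch; the article does not show this step, and "Iris_Data_full.txt" is a hypothetical name for the full 150-example file):

import numpy

with open("Iris_Data_full.txt") as f:     # hypothetical file with all examples
    lines = f.readlines()

numpy.random.seed(0)
numpy.random.shuffle(lines)               # randomize the example order

with open("Iris_Data.txt", "w") as f:     # 120 training examples
    f.writelines(lines[:120])
with open("test.txt", "w") as f:          # 30 testing examples
    f.writelines(lines[120:])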

Step 2: Import the required libraries.

import numpy
import keras
from keras.layers import Dense
from keras.models import Sequential
from keras.utils import np_utils
from sklearn.preprocessing import LabelEncoder

Step 3: Load the data from the training set.

X = numpy.genfromtxt("Iris_Data.txt", delimiter=",", usecols=(0, 1, 2, 3))
t = numpy.genfromtxt("Iris_Data.txt", delimiter=",", usecols=(4), dtype=None)

Here, X is an array of input feature vectors and t is an array containing their corresponding target values. dtype=None changes the default data type of the numpy array (i.e., float) to match the contents of each column individually.

Step 4: Since the target values in t are strings, they have to be assigned numerical labels, which can be done using scikit-learn's LabelEncoder class.

encode = LabelEncoder()
encode.fit(t)
encoded_t = encode.transform(t)

The transform method assigns a distinct numerical label to each distinct value in t:

Iris-setosa = 0,  Iris-versicolor = 1,  Iris-virginica = 2

Step 5: When modeling multi-class classification problems using neural networks, it is good practice to reshape the output attributes to binary Euclidean basis (one-hot) vectors, which can easily be done using the to_categorical method from the np_utils module of the Keras library.

"Iris-setosa"    "Iris-versicolor"    "Iris-virginica"
      1                  0                    0
      0                  1                    0
      0                  0                    1

vec_t = np_utils.to_categorical(encoded_t)

vec_t is the array of target vectors in binary Euclidean basis format.

Step 6: Define the model of the network using the Sequential class. Layer objects are then added to it; since our model is a fully connected dense network, Dense layers are used.

model = Sequential()
model.add(Dense(5, input_dim=4, init="uniform", activation="sigmoid"))
model.add(Dense(3, init="uniform", activation="sigmoid"))

A dense network consisting of a hidden layer with five nodes and an output layer with three nodes, with sigmoid activation and uniform random weight initialization, is defined above.

Step 7: Compile and train the model using the compile and fit methods. Set the loss (cost function) to mean squared error (mse) and the optimizer to sgd, and train the model for 500 epochs.

model.compile(loss='mse', optimizer='sgd', metrics=['accuracy'])
model.fit(X, vec_t, nb_epoch=500, batch_size=1)

A training accuracy of about 98% can be achieved.

Step 8: Predict on the test data using the predict method.

X_test = numpy.genfromtxt("test.txt", delimiter=",", usecols=(0, 1, 2, 3))
t_test = numpy.genfromtxt("test.txt", delimiter=",", usecols=(4), dtype=None)
predictions = model.predict(X_test)

A prediction value basically quantifies the certainty of the input belonging to a particular class; choose the class with maximum certainty. A prediction accuracy of around 90% is achieved.
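
A sketch of turning the certainty scores into class labels and an accuracy figure (assuming the arrays from Steps 4 and 8 are in scope):

predicted_classes = numpy.argmax(predictions, axis=1)   # most certain class per row
true_classes = encode.transform(t_test)                 # reuse the Step 4 encoder
accuracy = numpy.mean(predicted_classes == true_classes)
print(accuracy)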

Step 9: The best architecture for prediction can be determined by trial and error.

The above implementation demonstrates the remarkable performance of neural networks. Although neural networks were long overlooked after their inception, the multilayer extension of the model and recent developments in computational technology have made them one of the best learning algorithms. Neural networks have outperformed every other machine learning algorithm in the task of image recognition, with accuracy levels above 95%. Inspired by the collective underlying behavior and performance of different types of biological neurons, many insightful architectures and variations of this model have been devised.

Summary
1. The perceptron model is a neural network capable of linear classification.

2. Deep learning neural networks are capable of learning non-linear decision boundaries.

3. Typically, the activation function used in a feed forward neural network is the sigmoid function.

4. The cost function (the mean squared difference of the desired and predicted values) is optimized to determine appropriate weights.

5. A gradient-based optimization algorithm called back propagation is used to optimize the cost function.

About the Author
Rahul Bansal (http://blog.hackerearth.com/author/rahulbansal?post)
I am an MS student at IISER Mohali, majoring in physics and carrying out thesis research on Machine Learning. My pedagogical philosophy is, "difficulties fade away at the end of a long day".