Deep Learning for NLP
9 October 2017
"… computational linguistics for several years now, but 2015 seems like the year when the full force of the tsunami hit the major Natural Language Processing (NLP) conferences."
— Dr. Christopher D. Manning, Dec 2015
Among other things, it is recommended that we study the following topics:
• Gradient Descent/Ascent
References/Reading
• Andrej Karpathy's Blog
• http://karpathy.github.io/2015/05/21/rnn-effectiveness/
• Colah's Blog
• http://colah.github.io/
Deep Learning vs Machine Learning
• Deep Learning is a subset of Machine Learning.
• Machine Learning is a subset of Artificial Intelligence.
Hand-Designed (Rule-Based) vs. Automatically Inferred (Machine Learning)
Rule-based example:
if contains('menarik'):
    return positive
...
Predicted label: positive
Input: "Buku ini sangat menarik dan penuh manfaat" ("This book is very interesting and full of benefits")
Machine Learning: Hand-Designed Features
Feature Engineering!
A hand-designed feature extractor maps the input into features, and a learned model maps the features to the output.
Example: using TF-IDF representations, syntactic information from a POS tagger, etc.
Predicted label: positive
Input: "Buku ini sangat menarik dan penuh manfaat"
Deep Learning Learns Features!
Deep Learning automatically learns complex/high-level features from the input.
Predicted label: positive
Input: "Buku ini sangat menarik dan penuh manfaat"
History
The Perceptron (Rosenblatt, 1958)
• The Perceptron consists of 3 layers: Sensory, Association, and Response.
Rosenblatt, Frank. "The perceptron: a probabilistic model for information storage and organization in the brain." Psychological Review 65.6 (1958): 386.
Alfan F. Wicaksono, FASILKOM UI
The activation function is a non-linear function. In Rosenblatt's perceptron, the activation function is plain thresholding (a step function).
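To make the thresholding concrete, here is a minimal sketch of perceptron-style inference (the weights and bias are illustrative, not Rosenblatt's training procedure): a weighted sum of the inputs followed by a step function.

```python
# Minimal perceptron inference: weighted sum + step (threshold) activation.

def step(z):
    # Thresholding activation: fires 1 if the weighted sum is non-negative.
    return 1 if z >= 0 else 0

def perceptron(x, w, b):
    # Weighted sum of inputs plus bias, then the step non-linearity.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return step(z)

# Example: a unit that fires only when both inputs are active (AND-like).
print(perceptron([1, 1], [0.5, 0.5], -0.7))  # 1
print(perceptron([1, 0], [0.5, 0.5], -0.7))  # 0
```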
https://www.datarobot.com/blog/a-primer-on-deep-learning/
The Fathers of Deep Learning(?)
• In 2006, these three researchers developed ways to exploit deep neural networks and to overcome the difficulty of training them.
• Before that, many people had given up on the usefulness of neural networks and on how to train them.
The Fathers of Deep Learning(?)
• Automated learning of data representations and features is what the hype is all about!
Why wasn't "deep learning" successful earlier?
• Most of the key ideas had already been discovered long before.
• Even the Long Short-Term Memory (LSTM) network, now heavily used in NLP, was invented in 1997 by Hochreiter & Schmidhuber.
Why wasn't "deep learning" successful earlier?
• "Computers were slow. So the neural networks of past were tiny. And tiny neural networks cannot achieve very high performance on anything."
• It was not yet known how to make deep learning work in practice.

"The success of Deep Learning hinges on a very fortunate fact: that well-tuned and carefully-initialized stochastic gradient descent (SGD) can train LDNNs on problems that occur in practice. […] And yet, somehow, SGD seems to be very good at training those large deep neural networks on the tasks that we care about. The problem of training neural networks is NP-hard, and in fact there exists a family of datasets such that the problem of finding the best neural network with three hidden units is NP-hard. And yet, SGD just solves it in practice."
Ilya Sutskever, http://yyue.blogspot.co.at/2015/01/a-brief-overview-of-deep-learning.html
What Is Deep Learning?
• Deep Learning is essentially (deep) Artificial Neural Networks (ANNs).
• And a Neural Network is really a stack of mathematical functions.

Express the problem as a function F (with parameters θ), then automatically search for the parameters θ such that F produces exactly the desired output:

Y = F(X; θ)

X: "Buku ini sangat menarik dan penuh manfaat"
What Is Deep Learning?
For Deep Learning, the function usually consists of a stack of many functions that are typically similar to one another:

F(X; θ3)
F(X; θ2)
F(X; θ1)

This diagram is often called a Computational Graph; the stack of functions is often called a stack of layers.

"Buku ini sangat menarik dan penuh manfaat"
What Is Deep Learning?
• The most common layer is the Fully-Connected Layer:

Y = F(X) = f(W·X + b)

• "weighted sum of its inputs, followed by a non-linear function"

with W ∈ R^{M×N}, X ∈ R^{N} (N input units), b ∈ R^{M} (M output units), and f a non-linearity; each output unit computes f(Σ_i w_i x_i + b).
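A minimal sketch of one fully-connected layer, with M = 2 output units, N = 3 input units, and the sigmoid as the non-linearity f (the weight values are made up for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fully_connected(W, x, b, f=sigmoid):
    # Each output unit i computes f(sum_j W[i][j] * x[j] + b[i]):
    # a weighted sum of the inputs, followed by a non-linear function.
    return [f(sum(wij * xj for wij, xj in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

W = [[0.1, 0.2, 0.3],
     [0.4, 0.5, 0.6]]   # M x N = 2 x 3
b = [0.0, -0.1]
y = fully_connected(W, [1.0, 2.0, 3.0], b)
print(len(y))  # 2 output units, each in (0, 1)
```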
Why "Deep"?
• Humans organize their ideas and concepts hierarchically.
• Humans first learn simpler concepts and then compose them to represent more abstract ones.
• Engineers break up solutions into multiple levels of abstraction and processing.
Y. Bengio, Deep Learning, MLSS 2015, Austin, Texas
(Bengio & Delalleau 2011)
Neural Networks
Y = f(W1·X + b1)

Neural Networks
Y = f(W2·(f(W1·X + b1)) + b2)
i.e., H1 = f(W1·X + b1), Y = f(W2·H1 + b2)
Neural Networks
Y = f(W3·(f(W2·(f(W1·X + b1)) + b2)) + b3)
i.e., H1 = f(W1·X + b1), H2 = f(W2·H1 + b2), Y = f(W3·H2 + b3)
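A deep network is just function composition, so the stacked formulas above can be written as a loop over (W, b) pairs; a sketch with tanh as f and arbitrary example weights:

```python
import math

def layer(W, b, x):
    # One layer: non-linearity applied to W·x + b (here f = tanh).
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

def forward(params, x):
    # Stack of layers: H1 = f(W1·x + b1), H2 = f(W2·H1 + b2), ...
    for W, b in params:
        x = layer(W, b, x)
    return x

params = [
    ([[0.5, -0.2], [0.1, 0.3]], [0.0, 0.0]),   # (W1, b1)
    ([[0.7, 0.1], [-0.4, 0.2]], [0.1, -0.1]),  # (W2, b2)
    ([[1.0, -1.0]], [0.0]),                    # (W3, b3) -> single output
]
y = forward(params, [1.0, 2.0])
print(len(y))  # 1
```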
A mathematical reason for "deep"?
• Universal approximation: a network with a single hidden layer and enough units can approximate any continuous function arbitrarily well.
A mathematical reason for "deep"?
However...
• "Enough units" can be a very large number. There are functions representable with a small but deep network that would require exponentially many units in a single layer.
• The proof only says that such a shallow network exists; it does not say how to find it.
(e.g., Hastad et al. 1986, Bengio & Delalleau 2011) (Braverman, 2011)
Why is non-linearity needed?
f(Σ_i w_i x_i + b)
Without a non-linear f, a stack of layers collapses into a single linear transformation.
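A tiny numeric check of that collapse, with hand-picked 2×2 weights: two stacked linear layers W2·(W1·x) give exactly the same output as the single layer (W2·W1)·x.

```python
# Without a non-linearity, depth adds no expressive power:
# W2·(W1·x) == (W2·W1)·x for every input x.

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

W1 = [[1.0, 2.0], [0.0, 1.0]]
W2 = [[0.5, 0.0], [1.0, 1.0]]
x = [3.0, 4.0]

two_layers = matvec(W2, matvec(W1, x))   # apply W1, then W2
one_layer = matvec(matmul(W2, W1), x)    # apply the product once
print(two_layers == one_layer)  # True
```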
Training Neural Networks
• Initialize all trainable parameters (W(1), b(1), W(2), b(2), ...) randomly
• Loop: x = 1 → #epoch:
  • Pick a training example, e.g. x = "Buku ini sangat baik dan mendidik"
  • Compute the output by doing feed-forward through the hidden layers h1, h2
  • Compare the predicted label y (e.g. 0.3 for pos, 0.7 for neg) with the true label y′ (1 for pos, 0 for neg) and compute the loss L
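The loop structure on the slide can be sketched as below; to keep the loop visible, the model is a stand-in single linear unit on toy numeric pairs (all values are illustrative), updated with a plain SGD step.

```python
import random

# Sketch of the training loop: random initialization, then per epoch pick a
# training example, run feed-forward, compare with the true label, update.

def feed_forward(w, b, x):
    return w * x + b

random.seed(0)
w, b = random.random(), random.random()      # initialize randomly
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # toy pairs (x, true y): y = 2x

for epoch in range(200):                     # Loop: x = 1 -> #epoch
    x, y_true = random.choice(data)          # pick a training example
    y_pred = feed_forward(w, b, x)           # feed-forward pass
    err = y_pred - y_true                    # compare with the true label
    w -= 0.05 * err * x                      # update parameters (SGD step)
    b -= 0.05 * err

print(abs(feed_forward(w, b, 2.0) - 4.0) < 0.5)  # True
```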
Gradient Descent (GD)
Used to find a parameter configuration for which the cost function becomes optimal, in this case reaching a local minimum.
Not guaranteed to reach the global minimum!
Gradient Descent (GD)
Example: we pick a starting point x = 2.0 and follow the negative gradient downhill to a local minimum.
Gradient Descent (GD)
Algorithm:
x_{t+1} = x_t − α_t · f′(x_t)
If |f′(x_{t+1})| < ε then return "converged on critical point"
If |x_t − x_{t+1}| < ε then return "converged on x value"
Tip: choose a step size α_t that is neither too small nor too large.
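The algorithm above, sketched on an example function f(x) = (x − 1)² with f′(x) = 2(x − 1), starting from x = 2.0:

```python
def gradient_descent(f_prime, x, alpha=0.1, eps=1e-8, max_steps=10000):
    for _ in range(max_steps):
        x_next = x - alpha * f_prime(x)   # x_{t+1} = x_t - a_t * f'(x_t)
        if abs(f_prime(x_next)) < eps:    # converged on critical point
            return x_next
        if abs(x - x_next) < eps:         # converged on x value
            return x_next
        x = x_next
    return x

# f(x) = (x - 1)^2 has its (only) minimum at x = 1.
x_min = gradient_descent(lambda x: 2 * (x - 1), x=2.0)
print(round(x_min, 4))  # 1.0
```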
Gradient Descent (GD)
For a function f(θ) of parameters θ = (θ_1, ..., θ_n):
while not converged:
  θ_1^{(t+1)} = θ_1^{(t)} − α_t · ∂f(θ^{(t)})/∂θ_1
  ...
  θ_n^{(t+1)} = θ_n^{(t)} − α_t · ∂f(θ^{(t)})/∂θ_n
Logistic Regression
For binary classification (e.g., positive or not):
P(y = 1 | x; θ) = h_θ(x) = σ(θᵀx)
P(y = 0 | x; θ) = 1 − P(y = 1 | x; θ)
with the sigmoid function as the activation function:
σ(z) = 1 / (1 + e^{−z})
Logistic Regression
h_θ(x) = σ(θ_0 + θ_1 x_1 + θ_2 x_2 + θ_3 x_3)  (inputs x_1, x_2, x_3 and a bias unit +1)
What if θ has not been determined yet?
We can estimate the parameters θ using training data {(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), …, (x^{(n)}, y^{(n)})}.
Logistic Regression
Learning
Let:
h(x) = σ(θ_0 + θ_1 x_1 + … + θ_n x_n) = σ(θ_0 + Σ_{i=1}^{n} θ_i x_i)
The likelihood of the training data is:
L(θ) = Π_{i=1}^{m} (h(x^{(i)}))^{y^{(i)}} · (1 − h(x^{(i)}))^{1 − y^{(i)}}
Logistic Regression
Learning
Take the log-likelihood as the objective (we want it to be maximal):
J(θ) = Σ_{i=1}^{m} [ y^{(i)} log h(x^{(i)}) + (1 − y^{(i)}) log (1 − h(x^{(i)})) ]
Logistic Regression
Learning
Optimizing J(θ) requires deriving ∂J(θ)/∂θ_j, which gives the update:
while not converged:
  θ_j := θ_j − α Σ_{i=1}^{m} (h(x^{(i)}) − y^{(i)}) x_j^{(i)}   (for every j)
Logistic Regression
Learning
Stochastic variant: update using one example at a time.
initialize θ_1, θ_2, …, θ_n
while not converged:
  for each training example (x^{(i)}, y^{(i)}):
    θ_j := θ_j − α (h(x^{(i)}) − y^{(i)}) x_j^{(i)}
Logistic Regression
Learning
In practice, the gradient is usually computed as the average/sum over a mini-batch of samples (e.g., 32 or 64 samples).
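The batch update rule above can be sketched end-to-end on toy one-feature data (plus a bias feature fixed at 1); the data, learning rate, and epoch count are illustrative:

```python
import math

# Logistic regression trained with the batch update
# theta_j := theta_j - alpha * sum_i (h(x_i) - y_i) * x_ij.

def h(theta, x):
    z = sum(t * xi for t, xi in zip(theta, x))
    return 1.0 / (1.0 + math.exp(-z))

def train(data, alpha=0.1, epochs=500):
    theta = [0.0, 0.0]                   # [theta_0 (bias), theta_1]
    for _ in range(epochs):
        grad = [0.0, 0.0]
        for x, y in data:                # accumulate over the whole batch
            err = h(theta, x) - y
            for j in range(len(theta)):
                grad[j] += err * x[j]
        for j in range(len(theta)):
            theta[j] -= alpha * grad[j]
    return theta

# y = 1 when the feature is large, 0 when small; x[0] is the bias input 1.
data = [([1.0, 0.0], 0), ([1.0, 1.0], 0), ([1.0, 3.0], 1), ([1.0, 4.0], 1)]
theta = train(data)
print(h(theta, [1.0, 4.0]) > 0.5, h(theta, [1.0, 0.0]) < 0.5)  # True True
```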
Multilayer Neural Network (Multilayer Perceptron)
Example: a 3-layer NN with 3 input units, 2 hidden units, and 2 output units, with weights W_{ij}^{(l)} connecting consecutive layers.
In this example there are 2 units in the output layer. This setup is typically used for binary classification: the first unit produces the probability of the first class, and the second unit the probability of the second class.
Multilayer Neural Network (Multilayer Perceptron)
To compute the outputs of the hidden layer:
z^{(2)} = W^{(1)} x + b^{(1)}
a_1^{(2)} = f(z_1^{(2)})
a_2^{(2)} = f(z_2^{(2)})
This is just matrix multiplication!
W^{(1)} = [ W_{11}^{(1)} W_{12}^{(1)} W_{13}^{(1)} ; W_{21}^{(1)} W_{22}^{(1)} W_{23}^{(1)} ],  x = (x_1, x_2, x_3)ᵀ,  b^{(1)} = (b_1^{(1)}, b_2^{(1)})ᵀ
Multilayer Neural Network (Multilayer Perceptron)
In general, for each layer:
z^{(2)} = W^{(1)} x + b^{(1)}
a^{(2)} = f(z^{(2)})
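A sketch of the full feed-forward pass for the 3-2-2 network above (the specific weight values are made up for illustration):

```python
import math

# Feed-forward: z(2) = W(1) x + b(1), a(2) = f(z(2)); then the output layer.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def affine(W, x, b):
    # Matrix-vector product plus bias: exactly the "matrix multiplication"
    # the slide points out.
    return [sum(w * xi for w, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

W1 = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]   # 2 x 3: input -> hidden
b1 = [0.1, 0.1]
W2 = [[0.7, 0.8], [0.9, 1.0]]             # 2 x 2: hidden -> output
b2 = [0.0, 0.0]

x = [1.0, 0.5, -1.0]
a2 = [sigmoid(z) for z in affine(W1, x, b1)]   # hidden activations a(2)
a3 = [sigmoid(z) for z in affine(W2, a2, b2)]  # output activations a(3)
print(len(a2), len(a3))  # 2 2
```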
Multilayer Neural Network (Multilayer Perceptron)
Learning
The cost function, where m is the number of training examples:
J(W, b) = (1/m) Σ_{i=1}^{m} (1/2) ‖h_{W,b}(x^{(i)}) − y^{(i)}‖²  +  (λ/2) Σ_{l=1}^{n_l − 1} Σ_{i=1}^{s_l} Σ_{j=1}^{s_{l+1}} (W_{ji}^{(l)})²
The second term is the regularization term. For a single example:
J(W, b; x^{(i)}, y^{(i)}) = (1/2) ‖h_{W,b}(x^{(i)}) − y^{(i)}‖²
Multilayer Neural Network (Multilayer Perceptron)
Learning
initialize W, b
while not converged:
  W_{ij}^{(l)} := W_{ij}^{(l)} − α ∂J(W, b)/∂W_{ij}^{(l)}
  b_i^{(l)} := b_i^{(l)} − α ∂J(W, b)/∂b_i^{(l)}
How do we compute the gradients??
Multilayer Neural Network (Multilayer Perceptron)
Learning
Averaging the per-example derivatives dJ(W, b; x, y) determines the overall partial derivative dJ(W, b):
∂J(W, b)/∂W_{ij}^{(l)} = (1/m) Σ_{k=1}^{m} ∂J(W, b; x^{(k)}, y^{(k)})/∂W_{ij}^{(l)} + λ W_{ij}^{(l)}
Back-propagation:
1. Run the feed-forward pass.
2. For each output unit i in layer n_l (the output layer):
   δ_i^{(n_l)} = ∂J(W, b; x, y)/∂z_i^{(n_l)} = (a_i^{(n_l)} − y_i) · f′(z_i^{(n_l)})
3. For l = n_l − 1, …, 2, and each unit i in layer l:
   δ_i^{(l)} = ( Σ_{j=1}^{s_{l+1}} W_{ji}^{(l)} δ_j^{(l+1)} ) · f′(z_i^{(l)})
4. Finally:
   ∂J(W, b; x, y)/∂W_{ij}^{(l)} = a_j^{(l)} δ_i^{(l+1)}
   ∂J(W, b; x, y)/∂b_i^{(l)} = δ_i^{(l+1)}
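The four steps can be sketched for a tiny 2-2-1 network with sigmoid units and squared error J = ½(a_out − y)²; the weights are illustrative, and W[i][j] connects unit j of the previous layer to unit i.

```python
import math

def sigmoid(z): return 1.0 / (1.0 + math.exp(-z))
def dsigmoid(z): s = sigmoid(z); return s * (1 - s)

def backprop(x, y, W1, b1, W2, b2):
    # 1. Feed-forward, keeping the pre-activations z for f'(z).
    z2 = [sum(w * xi for w, xi in zip(row, x)) + b for row, b in zip(W1, b1)]
    a2 = [sigmoid(z) for z in z2]
    z3 = [sum(w * ai for w, ai in zip(row, a2)) + b for row, b in zip(W2, b2)]
    a3 = [sigmoid(z) for z in z3]
    # 2. Output deltas: (a_i - y_i) * f'(z_i).
    d3 = [(a - yi) * dsigmoid(z) for a, yi, z in zip(a3, y, z3)]
    # 3. Hidden deltas: (sum_j W_ji * delta_j) * f'(z_i).
    d2 = [sum(W2[j][i] * d3[j] for j in range(len(d3))) * dsigmoid(z2[i])
          for i in range(len(z2))]
    # 4. Gradients: dJ/dW_ij = a_j * delta_i, dJ/db_i = delta_i.
    gW2 = [[a2[j] * d3[i] for j in range(len(a2))] for i in range(len(d3))]
    gW1 = [[x[j] * d2[i] for j in range(len(x))] for i in range(len(d2))]
    return gW1, d2, gW2, d3

gW1, gb1, gW2, gb2 = backprop([1.0, -1.0], [1.0],
                              [[0.5, 0.2], [-0.3, 0.4]], [0.0, 0.0],
                              [[0.6, -0.1]], [0.0])
print(len(gW1), len(gW2))  # 2 1
```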
Multilayer Neural Network (Multilayer Perceptron)
Learning
Back-Propagation: example of computing a gradient at the output layer, for the weight W_{12}^{(2)}.
J(W, b; x, y) = ½(a_1^{(3)} − y_1)² + ½(a_2^{(3)} − y_2)²
z_1^{(3)} = W_{11}^{(2)} a_1^{(2)} + W_{12}^{(2)} a_2^{(2)} + b_1^{(2)},  a_1^{(3)} = f(z_1^{(3)})
By the chain rule:
∂J/∂W_{12}^{(2)} = (∂J/∂a_1^{(3)}) · (∂a_1^{(3)}/∂z_1^{(3)}) · (∂z_1^{(3)}/∂W_{12}^{(2)}) = (a_1^{(3)} − y_1) · f′(z_1^{(3)}) · a_2^{(2)}
Sensitivity: the Jacobian Matrix
The Jacobian J is the matrix of partial derivatives of the network output vector y with respect to the input vector x:
J_{ki} = ∂y_k / ∂x_i
Recurrent Neural Networks
One of the most famous Deep Learning architectures in the NLP community.
Inputs X1 … X5 are processed through hidden states h1 … h5 to produce outputs O1 … O5.
Recurrent Neural Networks (RNNs)
• Generating sequences
• …
• The point: there are sequences involved
Sequence input/output (e.g., Machine Translation); sequence output (e.g., Image Captioning)
http://karpathy.github.io/2015/05/21/rnn-effectiveness/
Recurrent Neural Networks (RNNs)
"RNNs combine the input vector with their state vector with a fixed …"
Suppose there are I input units, K output units, and H hidden (state) units:
x_t ∈ R^{I×1}, h_t ∈ R^{H×1}, y_t ∈ R^{K×1}
h_0 = 0
h_t = f(W^{(xh)} x_t + W^{(hh)} h_{t−1})
y_t = W^{(hy)} h_t
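The recurrence above, sketched with I = 2 inputs, H = 2 hidden units, K = 1 output, f = tanh, and illustrative weight values:

```python
import math

# h_0 = 0; h_t = f(W_xh x_t + W_hh h_{t-1}); y_t = W_hy h_t.

def matvec(W, v):
    return [sum(w * vi for w, vi in zip(row, v)) for row in W]

def rnn_forward(xs, W_xh, W_hh, W_hy):
    h = [0.0] * len(W_hh)                 # h_0 = 0
    ys = []
    for x in xs:
        pre = [a + b for a, b in zip(matvec(W_xh, x), matvec(W_hh, h))]
        h = [math.tanh(p) for p in pre]   # new state from input + old state
        ys.append(matvec(W_hy, h))        # y_t = W_hy h_t
    return ys

W_xh = [[0.5, -0.1], [0.2, 0.3]]   # H x I
W_hh = [[0.1, 0.4], [-0.2, 0.1]]   # H x H
W_hy = [[1.0, -1.0]]               # K x H

ys = rnn_forward([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], W_xh, W_hh, W_hy)
print(len(ys), len(ys[0]))  # 3 1
```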
Recurrent Neural Networks (RNNs)
Back-propagation through time: earlier inputs keep an influence on the output layer through the recurrent connections.
At the output: δ_{i,t}^{(y)} = ∂L_t/∂y_{i,t}
At every step except the last (δ_{i,T+1}^{(h)} = 0):
δ_{i,t}^{(h)} = f′(z_{i,t}^{(h)}) · ( Σ_j W_{j,i}^{(hy)} δ_{j,t}^{(y)} + Σ_n W_{n,i}^{(hh)} δ_{n,t+1}^{(h)} )
Alex Graves, Supervised Sequence Labelling with Recurrent Neural Networks
Recurrent Neural Networks (RNNs)
Summing the deltas over time, we get the derivatives with respect to the network weights:
∂L/∂W_{i,j}^{(hy)} = Σ_{t=1}^{T} δ_{i,t}^{(y)} h_{j,t}
∂L/∂W_{i,j}^{(hh)} = Σ_{t=1}^{T} δ_{i,t}^{(h)} h_{j,t−1}
∂L/∂W_{i,j}^{(xh)} = Σ_{t=1}^{T} δ_{i,t}^{(h)} x_{j,t}
Recurrent Neural Networks (RNNs)
How the state at step k influences the cost at later steps (t > k):
∂L_t/∂W^{(hh)} = Σ_{k=1}^{t} (∂L_t/∂h_t) · (∂h_t/∂h_k) · (∂h_k/∂W^{(hh)})
with ∂h_t/∂h_k = Π_{i=k+1}^{t} ∂h_i/∂h_{i−1}
Recurrent Neural Networks (RNNs)
"… caused by the explosion of the long term components, which can grow exponentially more than short term ones." And: "The vanishing gradients problem refers to the opposite behaviour, when …"
How can this happen? Look at one of the temporal components above: ∂h_t/∂h_k is a product of t − k Jacobians, which can shrink or grow exponentially with t − k.
Bengio, Y., Simard, P., and Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks.
The Sequential Jacobian is commonly used to analyse how RNNs make use of context.
Recurrent Neural Networks (RNNs)
Vanishing & Exploding Gradient Problems
2) Define a new architecture inside the RNN cell, such as the Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997).
Long-Short Term Memory (LSTM)
2. These blocks can be thought of as a differentiable version of the memory chips in a digital computer.
3. Each block contains: one or more self-connected memory cells and three multiplicative gate units (input, output, and forget gates).
Long-Short Term Memory (LSTM)
The gates allow the cells to store and access information over long periods of time, thereby mitigating the vanishing gradient problem.
Alex Graves, Supervised Sequence Labelling with Recurrent Neural Networks
S. Hochreiter and J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735–1780, 1997.
Long-Short Term Memory (LSTM)
Another visualisation of a single LSTM cell.
Computation in the LSTM
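The slide's figure of the LSTM computation was lost in extraction; as a sketch, the commonly used per-step equations (with input gate i, forget gate f, output gate o, cell state c, and ⊙ denoting element-wise product; the slide may show a slightly different variant) are:

```latex
\begin{aligned}
i_t &= \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i) \\
f_t &= \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f) \\
o_t &= \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o) \\
\tilde{c}_t &= \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```

The additive update of c_t is what lets the cell carry information across many steps without the gradient vanishing.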
LSTM + CRF for Semantic Role Labeling (Zhou and Xu, ACL 2015)
Hidden states h1 … h5 over the input "I went to west java".
Attention Mechanism
"… [the] network needs to be able to compress all the necessary information of a source sentence into a fixed-length vector. This may make it difficult for the neural network to cope with long sentences, especially those that are longer than the sentences in the training corpus."
Attention Mechanism
Sutskever, Ilya et al., Sequence to Sequence Learning with Neural Networks, NIPS 2014.
https://blog.heuritech.com/2016/01/20/attention-mechanism/
Attention Mechanism
Sutskever, Ilya et al., Sequence to Sequence Learning with Neural Networks, NIPS 2014.

• "Each time the proposed model generates a word in a translation, it (soft-)searches for a set of positions in a source sentence where the most relevant information is concentrated. The model then predicts a target word based on the context vectors associated with these source positions …"
Attention Mechanism
Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio, Neural Machine Translation by Jointly Learning to Align and Translate, arXiv:1409.0473, 2016.
Attention Mechanism
Each cell represents an attention weight relating a source word to a target word in the translation.
Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio, Neural Machine Translation by Jointly Learning to Align and Translate, arXiv:1409.0473, 2016.
Attention Mechanism
Colin Raffel, Daniel P. W. Ellis, Feed-Forward Networks with Attention Can Solve Some Long-Term Memory Problems, Workshop track, ICLR 2016.
Attention Mechanism
Yang, Zichao, et al., Hierarchical Attention Networks for Document Classification, NAACL 2016.
Attention Mechanism
The attention model is used to link words in the premise with words in the hypothesis.
For example:
• Premise: "A wedding party taking pictures"
Tim Rocktaschel et al., Reasoning about Entailment with Neural Attention, ICLR 2016.
Attention Mechanism
Tim Rocktaschel et al., Reasoning about Entailment with Neural Attention, ICLR 2016.
Recursive Neural Networks
R. Socher, C. Lin, A. Y. Ng, and C. D. Manning. 2011. Parsing Natural Scenes and Natural Language with Recursive Neural Networks. In ICML.

Recursive Neural Networks
Socher et al., Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, EMNLP 2013.
Convolutional Neural Networks (CNNs) for Sentence Classification (Kim, EMNLP 2014)

Recursive Neural Network for SMT Decoding (Liu et al., EMNLP 2014)