Having a fast GPU is very important when one begins to learn deep learning, as it allows for rapid gains in practical experience, which is key to building the expertise needed to apply deep learning to new problems. Without this rapid feedback, it simply takes too much time to learn from one's mistakes, and it can be discouraging and frustrating to keep going with deep learning. With GPUs, I quickly learned how to apply deep learning to a range of Kaggle competitions, and I managed to earn second place in the Partly Sunny with a Chance of Hashtags Kaggle competition using a deep learning approach, where the task was to predict weather ratings for a given tweet. In the competition, I used a rather large two-layer deep neural network with rectified linear units and dropout for regularization, and this deep net barely fit into my 6 GB of GPU memory. The GTX Titan GPUs that powered me in the competition were a main factor in my reaching 2nd place.
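For illustration, here is a minimal PyTorch sketch of a network of that general shape (the competition model was not written in PyTorch, and the layer sizes and label count below are placeholders, not the actual values used):

```python
import torch.nn as nn

# Hypothetical sketch of a two-layer fully connected network with
# rectified linear units and dropout; all sizes are placeholder values.
model = nn.Sequential(
    nn.Linear(10000, 4096),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(4096, 4096),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(4096, 24),  # one output per label; 24 is a placeholder
)
```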
Figure 1: Setup in my main computer: you can see three GPUs and an InfiniBand card. Is this a good setup for doing deep learning?
The user experience of using parallelization techniques in the most popular frameworks is also pretty good now compared to three years ago. Their algorithms are rather naive and will not scale to GPU clusters, but they deliver good performance for up to 4 GPUs. For convolution, you can expect a speedup of 1.9x/2.8x/3.5x for 2/3/4 GPUs; for recurrent networks, the sequence length is the most important parameter, and for common NLP problems one can expect similar or slightly worse speedups than for convolutional networks. Fully connected networks usually have poor performance for data parallelism, and more advanced algorithms are necessary to accelerate these parts of the network.
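As a minimal sketch of what data parallelism looks like in practice, assuming PyTorch as the framework (the frameworks above are not named, so this is just one example):

```python
import torch
import torch.nn as nn

# Minimal data-parallelism sketch: nn.DataParallel splits each batch across
# the visible GPUs, runs the replicas in parallel, and gathers the results.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 10),
)

if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # e.g. 2-4 GPUs
model = model.cuda()

x = torch.randn(256, 3, 32, 32).cuda()  # this batch is split across the GPUs
out = model(x)
```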
So today, using multiple GPUs can make training much more convenient due to the increased speed, and if you have the money for it, multiple GPUs make a lot of sense.
I personally think that using multiple GPUs in this way is most useful because one can quickly search for a good configuration, running a different experiment on each GPU. Once one has found a good range of parameters or architectures, one can then use parallelism across multiple GPUs to train the final network.
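A minimal sketch of this search work-flow, launching one independent training process per GPU; the script name train.py and its --lr flag are hypothetical placeholders for your own training code:

```python
import os
import subprocess

# Run one independent configuration per GPU instead of parallelizing a
# single training run; each process only sees its assigned GPU.
learning_rates = [0.1, 0.01, 0.001]

procs = []
for gpu_id, lr in enumerate(learning_rates):
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
    procs.append(subprocess.Popen(["python", "train.py", "--lr", str(lr)], env=env))

for p in procs:
    p.wait()
```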
So overall, one can say that one GPU should be sufficient for almost any
task but that multiple GPUs are becoming more and more important to
accelerate your deep learning models. Multiple cheap GPUs are also
excellent if you want to learn deep learning quickly. I personally would rather have many small GPUs than one big one, even for my research experiments.
On the other hand, NVIDIA now has a policy that the use of CUDA in data centers is only allowed for Tesla GPUs and not GTX or RTX cards. It
is not clear what is meant by “data centers” but this means that
organizations and universities are often forced to buy the expensive and
cost-inefficient Tesla GPUs due to fear of legal issues. However, Tesla
cards have no real advantage over GTX and RTX cards and cost up to 10
times as much.
That NVIDIA can just do this without any major hurdles shows the
power of their monopoly — they can do as they please and we have to
accept the terms. If you opt for NVIDIA GPUs because of their major advantages in terms of community and support, you will also need to accept that you can be pushed around by them at will.
With the same software, the TPU could be even more cost-efficient, but here also lies the problem: (1) TPUs cannot be used with the fastai library, that is, PyTorch; (2) TPU algorithms rely mostly on the internal Google team; and (3) no uniform high-level library exists which enforces good standards for TensorFlow.
All three points hit the TPU, as it requires separate software to keep up with new additions to the deep learning algorithm family. I am sure the grunt-work has already been done by the Google team, but it is unclear how good the support is for some models. The official repository, for example, only has a single model for NLP, with the rest being computer vision models. All of these models use convolutions, and none of them use recurrent neural networks. This comes together with a now rather old report from February that the TPUv2 did not converge when LSTMs were used. I could not find a source confirming whether the problem has been fixed yet.
On the other hand, one big milestone in NLP was BERT, a large bidirectional transformer architecture which can be fine-tuned to reach state-of-the-art performance on a wide range of NLP tasks. TPUs were critical for training these bidirectional transformers on a lot of data: in total, 256 TPU-hours were needed to train a base model of BERT. How does this compare to GPUs? I wrote a detailed analysis on this and find that new RTX GPUs are important for transformer performance, and that one can expect a run-time of about 400 GPU-hours. This shows that TPUs perform quite well on this task and that TPUs have a big advantage over GPUs for training transformers.
So AWS GPU instances are very useful but they need to be used wisely
and with caution to be cost-efficient. For more discussion on cloud
computing see the section below.
While a good piece of simplified advice would once have been “pay attention to the memory bandwidth,” I would no longer recommend that. This is because GPU hardware and software have developed over the years in such a way that bandwidth on a GPU is no longer a good proxy for its performance. The introduction of Tensor Cores in consumer-grade GPUs complicates the issue further. Now, a combination of bandwidth, FLOPS, and Tensor Cores is the best indicator for the performance of a GPU.
Tensor Cores do not only make computation faster, they also enable computation with 16-bit numbers. This is a big advantage for matrix multiplication because, with numbers being only 16 bits instead of 32 bits large, one can transfer twice as many numbers over the same memory bandwidth. I wrote in detail about how this change from 32-bit to 16-bit affects matrix multiplication performance, but in general, one can hope for speedups of 100-300% when switching from 32-bit to 16-bit, and an additional speedup of about 20% to 60% for LSTMs when using Tensor Cores.
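A small sketch that makes the 32-bit versus 16-bit difference tangible, assuming PyTorch and a CUDA GPU; the measured speedup will depend on the card and the matrix sizes:

```python
import time
import torch

def timed_matmul(a, b, iters=10):
    # Warm up once so library initialization is not part of the measurement,
    # then time the multiplications with explicit synchronization.
    a @ b
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    return (time.time() - start) / iters

a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")

print("32-bit:", timed_matmul(a, b))
print("16-bit:", timed_matmul(a.half(), b.half()))  # Tensor Core eligible on RTX cards
```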
These are some big increases in performance, and 16-bit training should become standard with RTX cards — never use 32-bit! If you encounter problems with 16-bit training, then you should use loss scaling: (1) multiply your loss by a big number, (2) calculate the gradients, (3) divide the gradients by the big number, (4) update your weights. Usually, 16-bit training should be just fine, but if you are having trouble replicating results with 16-bit, loss scaling will usually solve the issue.
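A minimal sketch of this loss-scaling recipe, assuming a PyTorch model, optimizer, and loss function already exist (the fixed scaling factor of 1024 is just an example value):

```python
def training_step(model, optimizer, loss_fn, x, y, scale=1024.0):
    # (1) multiply the loss by a big number, (2) calculate the gradients,
    # (3) divide the gradients by the big number, (4) update the weights.
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    (loss * scale).backward()       # (1) + (2)
    for p in model.parameters():
        if p.grad is not None:
            p.grad.div_(scale)      # (3)
    optimizer.step()                # (4)
    return loss.item()
```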
So overall, the best rule of thumb would be: Look at bandwidth if you use
RNNs; look at FLOPS if you use convolution; get Tensor Cores if you can
afford them (do not buy Tesla cards unless you have to).
Figure 2: Normalized raw performance data of GPUs and TPU. Higher is better. An RTX 2080 Ti is about twice as fast as a GTX 1080 Ti: 0.77 vs 0.4.
Cost Efficiency Analysis
The cost-efficiency of a GPU is probably the most important criterion for
selecting a GPU. I did a new cost performance analysis that incorporated
memory bandwidth, TFLOPs, and Tensor Cores. I looked at prices on
eBay and Amazon and weighted them 50:50; then I looked at performance indicators for LSTMs and CNNs, with and without Tensor Cores. I took these performance numbers and averaged them to obtain average performance ratings, with which I then calculated performance/cost numbers. This is the result:
Figure 3: Normalized performance/cost numbers for convolutional networks (CNN), recurrent networks (RNN), and Transformers. Higher is better. An RTX 2070 is more than 5 times more cost-efficient than a Tesla V100.
From this data (1, 2, 3, 4, 5, 6, 7), we see that the RTX 2070 is more cost-efficient than the RTX 2080 or the RTX 2080 Ti. Why is this so? The ability to do 16-bit computation with Tensor Cores is much more valuable than just having a bigger chip with more Tensor Cores. With the RTX 2070, you get these features for the lowest price.
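A small sketch of how such performance/cost numbers can be computed; the prices and performance scores below are made-up placeholders, not the values behind the chart:

```python
def cost_efficiency(price_ebay, price_amazon, perf_scores):
    # Prices are weighted 50:50 between eBay and Amazon; the performance
    # scores (e.g. LSTM and CNN, with and without Tensor Cores) are averaged.
    price = 0.5 * price_ebay + 0.5 * price_amazon
    avg_perf = sum(perf_scores) / len(perf_scores)
    return avg_perf / price  # higher means more performance per dollar

# Purely illustrative inputs:
print(cost_efficiency(450.0, 500.0, [0.5, 0.6, 0.7, 0.8]))
```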
However, this analysis also has certain biases which should be taken into
account:
(1) Prices fluctuate. Currently, GTX 1080 Ti, RTX 2080 and RTX 2080 Ti
cards seem to be overpriced and they could be more favorable in the
future.
(2) This analysis favors smaller cards. The analysis does not take into
account how much memory you need for networks nor how many GPUs
you can fit into your computer. One computer with 4 fast GPUs is much more cost-efficient than 2 computers with the most cost-efficient cards.
I personally wanted to get an RTX 2080 Ti, but since the RTX 2070 was released it is the much more cost-efficient card, and since its memory used in 16-bit is virtually equivalent to 16 GB in 32-bit, I will be able to run any model that is out there.
However, mind the opportunity cost here: if you learn the skills for a smooth work-flow with AWS instances, you lose time that could be spent doing work on a personal GPU, and you will also not have acquired the skills to use TPUs. If you use a personal GPU, you will not have the skills to expand into more GPUs/TPUs via the cloud. If you use TPUs, you are stuck with TensorFlow, and it will not be straightforward to switch to AWS. Learning a smooth cloud work-flow is expensive, and you should weigh this cost when you make the choice between TPUs and AWS GPUs.
Another question is when to use cloud services. If you are trying to learn deep learning or you need to prototype, then a personal GPU might be the best option, since cloud instances can be pricey. However, once you have found a good deep network configuration and you just want to train the model, using data parallelism on cloud instances is a solid approach. This means that a small GPU will be sufficient for prototyping, and one can rely on the power of cloud computing to scale up to larger experiments.
If you are short on money, cloud computing instances might also be a good solution, but the problem is that you pay for a lot of compute per hour when you only need a little of it for prototyping. In this case, one might want to prototype on a CPU and then roll out to GPU/TPU instances for a quick training run. This is not the best work-flow, since prototyping on a CPU can be a big pain, but it is a cost-efficient solution.
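A minimal sketch of keeping the prototyping code device-agnostic, assuming PyTorch, so that the same script runs on a cheap CPU machine and later on a cloud GPU instance:

```python
import torch

# Select the GPU when one is available, otherwise fall back to the CPU,
# so prototyping and the final cloud training run share the same code.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(128, 10).to(device)
x = torch.randn(32, 128, device=device)
out = model(x)
```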
Conclusion
With the information in this blog post, you should be able to reason about which GPU is suitable for you. In general, I see two main strategies that make sense: firstly, go with an RTX 20 series GPU to get a quick upgrade, or, secondly, go with a cheap GTX 10 series GPU and upgrade once the RTX Titan becomes available. If you are less serious about performance or you just do not need it, for example for Kaggle, startups, prototyping, or learning deep learning, you can also benefit greatly from cheap GTX 10 series GPUs. If you go for a GTX 10 series GPU, make sure that the GPU memory size fulfills your requirements.
TL;DR advice
Best GPU overall: RTX 2070
GPUs to avoid: Any Tesla card; any Quadro card; any Founders Edition
card; Titan V, Titan XP
Cost-efficient but expensive: RTX 2070
Cost-efficient and cheap: GTX Titan (Pascal) from eBay, GTX 1060
(6GB), GTX 1050 Ti (4GB)
I have little money: GTX Titan (Pascal) from eBay, or GTX 1060 (6GB),
or GTX 1050 Ti (4GB)
I have almost no money: GTX 1050 Ti (4GB); CPU (prototyping) +
AWS/TPU (training); or Colab.
I do Kaggle: RTX 2070. If you do not have enough money, go for a GTX 1060 (6GB) or a GTX Titan (Pascal) from eBay for prototyping and AWS for final training. Use the fastai library.
I am a competitive computer vision or machine translation researcher: RTX 2080 Ti with the blower fan design; upgrade to RTX Titan in 2019.
I am an NLP researcher: RTX 2070; use 16-bit.
I want to build a GPU cluster: This is really complicated; you can get some ideas here.
I started deep learning and I am serious about it: Start with an RTX 2070. Buy more RTX 2070s after 6-9 months if you still want to invest more time into deep learning. Depending on which area you choose next (startup, Kaggle, research, applied deep learning), sell your GPUs and buy something more appropriate after about two years.
I want to try deep learning, but I am not serious about it: GTX 1050 Ti
(4 or 2GB). This often fits into your standard desktop. If it does, do not
buy a new computer!
Acknowledgments
I want to thank Mat Kelcey for helping me to debug and test custom
code for the GTX 970; I want to thank Sander Dieleman for making me
aware of the shortcomings of my GPU memory advice for convolutional
nets; I want to thank Hannes Bretschneider for pointing out software
dependency problems for the GTX 580; and I want to thank Oliver
Griesel for pointing out notebook solutions for AWS instances.