A Broadband-Testing Report
By Steve Broadhead, Founder & Director, BB-T
Published by Broadband-Testing
A division of Connexio-Informatica 2007, Andorra
Tel : +376 633010
E-mail : info@broadband-testing.co.uk
Internet : HTTP://www.broadband-testing.co.uk
© 2011 Broadband-Testing
All rights reserved. No part of this publication may be reproduced, photocopied, stored on a retrieval system, or transmitted without the express written consent of the
authors.
Please note that access to or use of this Report is conditioned on the following:
1.
2. The information in this Report, at publication date, is believed by Broadband-Testing to be accurate and reliable, but is not guaranteed. All use of and reliance on this Report are at your sole risk. Broadband-Testing is not liable or responsible for any damages, losses or expenses arising from any error or omission in this Report.
3. NO WARRANTIES, EXPRESS OR IMPLIED, ARE GIVEN BY Broadband-Testing. ALL IMPLIED WARRANTIES, INCLUDING IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT, ARE DISCLAIMED AND EXCLUDED BY Broadband-Testing. IN NO EVENT SHALL Broadband-Testing BE LIABLE FOR ANY CONSEQUENTIAL, INCIDENTAL OR INDIRECT DAMAGES, OR FOR ANY LOSS OF PROFIT, REVENUE, DATA, COMPUTER PROGRAMS, OR OTHER ASSETS, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
4. This Report does not constitute an endorsement, recommendation or guarantee of any of the products (hardware or software) tested or the hardware and software used in testing the products. The testing does not guarantee that there are no errors or defects in the products, or that the products will meet your expectations, requirements, needs or specifications, or that they will operate without interruption.
5. This Report does not imply any endorsement, sponsorship, affiliation or verification by or with any companies mentioned in this report.
6. All trademarks, service marks, and trade names used in this Report are the trademarks, service marks, and trade names of their respective owners, and no endorsement of, sponsorship of, affiliation with, or involvement in, any of the testing, this Report or Broadband-Testing is implied, nor should it be inferred.
© Broadband-Testing 1995-2011
TABLE OF CONTENTS
TABLE OF CONTENTS ........................................................................................ 1
BROADBAND-TESTING ..................................................................................... 3
EXECUTIVE SUMMARY ...................................................................................... 4
INTRODUCTION: OPTIMISING THE WAN MORE IMPORTANT THAN EVER ...... 7
TEST OVERVIEW ............................................................................................... 8
Products In This Report ............................................................................ 8
How The Technology Is Applied ............................................................... 10
PUT TO THE TEST: WAN OPTIMISATION......................................................... 13
Our Test Bed ........................................................................................ 13
Test 1: Video ........................................................................................ 14
Test 2: WAFS/File Transfer Tests ............................................................. 22
Test 3: Email ........................................................................................ 25
Test 4: FTP .......................................................................................... 26
Test 5: Web-based Applications - SharePoint............................................. 27
Test 6: Microsoft Cloud-based SharePoint ................................................. 30
Test 7: Web Application Test - Salesforce.com ........................ 35
SUMMARY & CONCLUSIONS ........................................................................... 37
Figure 16 - Blue Coat Video Test: 100 Streams Bandwidth Reduction Achieved..................................................... 20
Figure 17 - Blue Coat Video Test: 500 Streams Bandwidth Reduction Achieved..................................................... 20
Figure 33 - SharePoint Cloud Service, Branch Office to Cloud via Direct Internet Connection, Cold ............................ 33
Figure 34 - SharePoint Cloud Service, Branch Directly Connected to Cloud via Internet, WARM Run........................... 34
Figure 35 - Branch Directly Connected to Cloud via Internet: Cold Run .................................................................. 35
BROADBAND-TESTING
Broadband-Testing is Europe's foremost independent network testing facility and consultancy
organisation for broadband and network infrastructure products.
Based in Andorra, Broadband-Testing provides extensive test demo facilities. From this base,
Broadband-Testing provides a range of specialist IT, networking and development services to
vendors and end-user organisations throughout Europe, SEAP and the United States.
Broadband-Testing is an associate of the following:
Limbo Creatives (bespoke software development)
Broadband-Testing Laboratories are available to vendors and end users for fully independent
testing of networking, communications and security hardware and software.
Broadband-Testing Laboratories operates an Approval scheme that enables products to be
short-listed for purchase by end users, based on their successful approval.
Output from the labs, including detailed research reports, articles and white papers on the latest
network-related technologies, is made available free of charge on our web site at
HTTP://www.broadband-testing.co.uk
Broadband-Testing Consultancy Services offers a range of network consultancy services
including network design, strategy planning, Internet connectivity and product development
assistance.
EXECUTIVE SUMMARY
As the networking world becomes more distributed, so the need for LAN-type
performance across the WAN becomes ever more important. The reality is that
people are finally working in a more flexible fashion, geographically. But
regardless of where they are based - head office, branch office or even working
from home - they rightly expect to be able to use the same set of applications
and services. With more types of applications travelling on the same WAN link,
connecting more offices and more users, it's inevitable that performance suffers
due to latency, congestion and bandwidth limitations.
Not only do traditional enterprise applications and services need optimising, but
applications such as streaming video and cloud-delivered SaaS applications are
becoming increasingly important in the enterprise and are competing for that
same bandwidth. They need predictable, quality performance as well, video
especially.
To see if the existing WAN optimisation hardware is capable of delivering on these
requirements, we tested comparable Blue Coat MACH5 SG600 and Riverbed
Steelhead 1050 products. We created a test bed using real traffic across a
simulated WAN link (using typical bandwidth and latency settings). Our
application selection for testing was based on typical usage patterns and included
video, CIFS/file transfer, FTP, email, web-based SharePoint collaboration,
Microsoft Business Productivity Online Suite (BPOS) and Salesforce.com. We ran
multiple tests simulating both cold and warm runs. Cold runs mean the data has
never been seen by the optimisation device before; warm runs indicate that the
device has seen that data previously, meaning that various cache mechanisms
are warm and can deliver greater performance benefit.
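The cold/warm distinction can be illustrated with a minimal, hypothetical byte-cache sketch (a gross simplification of what the branch appliances actually do): the first transfer of a payload crosses the WAN in full, while a repeat transfer sends only short chunk references.

```python
import hashlib
import os

CHUNK = 4096  # illustrative fixed chunk size; real appliances use smarter chunking


class ByteCache:
    """Toy dedup cache: stores data chunks by hash, loosely mimicking a
    WAN optimiser's branch-side data store (illustrative only)."""

    def __init__(self):
        self.store = {}

    def send(self, payload: bytes) -> int:
        """Return the number of bytes that must cross the WAN link."""
        wan_bytes = 0
        for i in range(0, len(payload), CHUNK):
            chunk = payload[i:i + CHUNK]
            key = hashlib.sha256(chunk).digest()
            if key in self.store:
                wan_bytes += len(key)        # warm: only a 32-byte reference
            else:
                self.store[key] = chunk
                wan_bytes += len(chunk)      # cold: the full chunk
        return wan_bytes


cache = ByteCache()
document = os.urandom(1_000_000)       # a 1MB "attachment"
cold = cache.send(document)            # cold run: full payload crosses the WAN
warm = cache.send(document)            # warm run: references only
```

On this toy model the warm run moves under 1% of the cold run's bytes, which is the effect the warm-run results in this report measure.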
We found that, when testing with traditional applications such as Microsoft file
shares over CIFS, FTP, and email, performance between both vendors was
relatively even, with some small advantages for each vendor in different
situations. When it came to the fast-growing applications like web-based apps,
video and Cloud/SaaS, however, Blue Coat performed significantly better than
Riverbed.
For SharePoint, tested as an internally deployed web application (SharePoint
operates over HTTP/SSL with an HTML browser interface to the user), the cold
results were relatively even between the two vendors. On warm runs, however,
Blue Coat performed significantly better - an average of 5x better than Riverbed
across the board.
Our email test showed relatively even performance for both parties: 96%
bandwidth reduction for Riverbed and 97% for Blue Coat.
TEST OVERVIEW
Products In This Report
To compare WAN Optimisation performance, we tested comparable products from Blue
Coat (the MACH5 SG600) and Riverbed (the Steelhead 1050). In each test case, both
devices were optimally configured for the tests.
Blue Coat MACH5 SG600
The MACH5 SG600 is just one in a complete family of appliances, virtual appliances and
mobile software clients focused on WAN optimisation.
Blue Coat supports a full range of optimisation for collaboration, file sharing, email,
storage, backup and disaster recovery. In addition, Blue Coat has integrated specialised
technologies to optimise web-based applications and video over HTTP/SSL, as well as
video optimisation for RTMP (Adobe Flash), MMS and RTSP (Microsoft Windows Media
Server).
Riverbed Steelhead 1050
The Steelhead 1050 is part of a range of physical and virtual appliances from Riverbed.
Supported applications for acceleration include file sharing (CIFS and NFS), Exchange
(MAPI), Lotus Notes, web (HTTP- and HTTPS-based applications), database (MS SQL and
Oracle), and disaster recovery. Riverbed claims the Steelhead can cut bandwidth usage
typically by 60-95% while offering real-time visibility into application and WAN
performance.
Our Office test file set (file name and actual size on disk):

-           268KB
1340k.doc   1372KB
7108k.doc   7108KB
1100k.xls   1108KB
-           256KB
500k.ppt    464KB
3500k.ppt   3568KB
Our testing covered video, WAFS/CIFS, email, FTP, collaborative and web applications.
Test 1: Video
For the video testing, we used Microsoft's Windows Media Load Simulator. Setting our
WAN simulator to a T1 (1.544Mbps) bandwidth limit with 100ms latency to create a real-world
scenario, we began by loading to a maximum of 10, then 20 concurrent clients,
then 30, then 50 and finally 100, depending on the ability of the device under test to cope
with that number of video streams. We monitored and measured bandwidth utilisation at
each step. Tests were run over a 350-second period.
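Our WAN simulator is a dedicated device; for reference, a broadly equivalent T1/100ms link can be emulated on a Linux box with `tc`. This is a sketch only: the interface name `eth0` and the burst/buffer values are assumptions, and the commands need root.

```shell
# Add 100 ms of delay in one direction (approx. 100 ms added to the RTT),
# then shape egress to T1 rate (1.544 Mbit/s).
tc qdisc add dev eth0 root handle 1: netem delay 100ms
tc qdisc add dev eth0 parent 1: handle 10: tbf rate 1544kbit burst 32kbit latency 400ms

# Tear the emulation down afterwards:
# tc qdisc del dev eth0 root
```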
[Chart: Baseline Link Utilisation - 20 Clients over 350 seconds]
Moving on to sustaining 20 video streams, our baseline is as above, with a very similar
pattern to our 10-stream test.
Comparing this baseline with our device tests, again we see some very interesting
bandwidth usage patterns. The Riverbed Steelhead is largely pegged at maximum
bandwidth utilisation throughout the test. Blue Coat peaked at between 25-30% utilisation
until all the video was stored in the object cache, after which it was served locally.
Moving to 30 client streams, the pattern continued as before. Blue Coat was able to
rapidly cache and serve the video to multiple clients directly from the branch device,
while Riverbed required constant communication between the head office and branch. With
the clients all pulling the same video at the same time, the WAN link was quickly saturated.
Not so for Blue Coat - WAN bandwidth utilisation was between 20-30% until the entire
video was cached, then dropped to near zero even while continuing to serve the clients.
[Chart: Baseline Link Utilisation - 30 Clients over 350 seconds]
Again we found inbound bandwidth pegged at our WAN limit throughout the test by the
Riverbed device. As we approached 30 streams, the Riverbed 1050 completely filled the
available bandwidth, eventually leaving the load test software straining to establish
new streams. Again we saw Blue Coat's superior performance, with no greater
bandwidth requirement than when running the 10- and 20-stream tests.
[Chart: Baseline Link Utilisation - 50 Clients over 350 seconds]
At 50 clients, we saw huge numbers of errors being recorded by the Riverbed device (see
the illustration of the Riverbed device under test below).
Again, this appears to be due to the bandwidth limits of our WAN link being hit early in
the test, with no ability to optimise that bandwidth significantly enough to support large
numbers of video streams.
At this point only the Blue Coat device was capable of potentially supporting more
streams, so we reran the test with 100 concurrent video clients.
[Chart: Baseline Link Utilisation - 100 Clients over 350 seconds]
The pattern was exactly as before: minimal outbound utilisation, a short period of
30% or so inbound utilisation, followed by almost zero utilisation inbound. We were able to sustain
100 concurrent video streams, and there were no indications that adding more would hurt
performance in any way.
At most, the Blue Coat device used significantly less than half of the available WAN
bandwidth, and effectively none once the entire video had been placed in the object cache.
Figure 16 - Blue Coat Video Test: 100 Streams Bandwidth Reduction Achieved
Finally, we decided to roll the dice and test Blue Coat's SG600 with 500 video streams.
That test saw successful delivery of on-demand video streams to 500 clients. Again,
utilisation was about 30-35% of bandwidth at the start of the video, with almost no
bandwidth used for the last two thirds of the run.
Blue Coat delivered 500 video streams over a T1 with plenty of bandwidth available for
other applications - a significant optimisation result.
Figure 17 - Blue Coat Video Test: 500 Streams Bandwidth Reduction Achieved
Test 2: WAFS/File Transfer Tests
The operations were repeated for the different MS Office file types and sizes we created,
as described earlier, with our default WAN simulator settings in place, over multiple
iterations.
We started with cold run testing, ensuring all caches were cleared before
running the tests: first a file open operation, then a save, then a close. Across the cold
runs, performance was relatively similar between the devices.
On the file open tests, Riverbed delivered significant optimisation, though we did note
that in some additional Excel file tests (not shown) it was significantly slower across all
runs than Blue Coat. On the file save tests, Riverbed tended to edge many tests.
WARM Time to Open Office Document (seconds)

File        Baseline   Riverbed Warm   Blue Coat Warm
1340k.doc   19.3       3.5             3.5
7108k.doc   67.1       4.0             3.6
1100k.xls   21.6       3.4             3.5
500k.ppt    22.7       3.5             3.4
3500k.ppt   50.3       3.7             3.5
WARM SAVE Time for Office Document (seconds)

File        Baseline   Riverbed Warm   Blue Coat Warm
1340k.doc   22.8       5.0             5.4
7108k.doc   -          5.1             7.1
1100k.xls   21.2       4.8             5.1
500k.ppt    20.4       5.0             5.4
3500k.ppt   45.9       5.1             6.4
Our warm test runs again showed a relatively close level of performance between both
vendors, with file saves showing a larger margin than file open. Depending on the
actual file involved, the differences were between 5% and 30%, with most results inside a
10% margin.
Test 3: Email
For our email test we used our Office file set as attachments and monitored a series of
cold then warm runs with each test device, using a pair of Outlook clients and an Exchange
server across our WAN simulator.
We found that both the Riverbed and Blue Coat devices were able to significantly further
optimise the email attachments on a warm run. For the cold, Blue Coat recorded 8.26%
and Riverbed 10% optimisation. On the warm run, Riverbed improved significantly to
96% bandwidth optimisation. However, Blue Coat was still better with an excellent
97.81% bandwidth optimisation.
In terms of bandwidth utilisation - given our 1.544Mbps link - this translates to Riverbed
utilising an average of 6kbps, while Blue Coat required on average 4.9kbps, to send each
of our test documents. Obviously, the native compressibility of each document was the
significant factor in the cold runs, but once a document was cached, the MACH5 was more
efficient at sending the attachments.
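The translation from a quoted reduction percentage to bytes on the wire is simple arithmetic; a worked example (the choice of the 1372KB attachment is illustrative only):

```python
def wan_bytes(original: int, reduction_pct: float) -> float:
    """Bytes actually crossing the WAN after the quoted bandwidth reduction."""
    return original * (1 - reduction_pct / 100)


# Illustrative only: the 1372KB Word attachment from our file set
size = 1372 * 1024
rb_warm = wan_bytes(size, 96.0)    # Riverbed warm run: ~55KB on the wire
bc_warm = wan_bytes(size, 97.81)   # Blue Coat warm run: ~30KB on the wire
```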
[Chart: Email test - percentage of baseline bandwidth used, cold and warm runs (Baseline, Riverbed Cold/Warm, Blue Coat Cold/Warm)]
Test 4: FTP
For our FTP tests we again took our Office file set and our default WAN simulator settings
of 1.544Mbps and 100ms round-trip time, and created an FTP client-server connection. We
downloaded all of the files used for the WAFS benchmark and summarised the results.
We ran a number of cold runs, resulting in a baseline time of 230 seconds to complete the
transfers. We then performed several warm run iterations and averaged these out to
produce the results as shown in the graph below, with transfer times measured in
seconds.
We found Blue Coat to be faster than Riverbed on both the cold and warm runs, about
17% faster cold, 30% faster on the warm. This is classic optimisation territory and each
device provided significant savings over baseline, with Blue Coat taking the ultimate
honours here.
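The "17% faster cold, 30% faster warm" figures follow directly from the transfer times; a quick check of the arithmetic:

```python
def pct_faster(a: float, b: float) -> float:
    """How much faster time a is than time b, as a percentage of b."""
    return 100.0 * (b - a) / b


# Transfer times in seconds from the FTP runs above
baseline = 230.0
riverbed_cold, riverbed_warm = 155.0, 13.0
bluecoat_cold, bluecoat_warm = 128.0, 9.0

cold_margin = pct_faster(bluecoat_cold, riverbed_cold)  # ~17.4% ("about 17%")
warm_margin = pct_faster(bluecoat_warm, riverbed_warm)  # ~30.8% ("about 30%")
```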
Cold and Warm Time to Transfer FTP Files (seconds)

            Cold     Warm
Baseline    230.0    -
Riverbed    155.0    13.0
Blue Coat   128.0    9.0
[Chart: SharePoint transfer tests - 2MB.txt, 11.34MB.doc, 1.1MB.xls]
... scenario we have here and in a live Internet environment (see the BPOS testing
section next, for example), where bandwidth and latency may vary
enormously from user to user involved in the collaborative workspace.
On the warm runs we see significant optimisation from both vendors, but Blue Coat still
dominated with near-instant transfer times in all cases - a good level of performance all
round.
We then switched our attention to SharePoint 2010 (see next page) and reran cold and
warm test runs with the same configuration as before. Again, on the cold runs we found
largely consistent performance, but with Riverbed just shading many of the tests. On
warm runs, Blue Coat again emerged on top - albeit not quite as clearly as with
SharePoint 2007, but consistently so across all tests.
[Chart: Microsoft SharePoint 2010 COLD RUNS - transfer times (seconds) for 13MB.mp3, 300k.docx, 2MB.txt, 1340k.doc, 7108k.doc, 500k.ppt, 3500k.ppt and 1100k.xls; Baseline vs Riverbed Cold vs Blue Coat Cold]
[Chart: Microsoft SharePoint 2010 WARM RUNS - transfer times (seconds) for 13MB.mp3, 300k.docx, 2MB.txt, 1340k.doc, 7108k.doc, 500k.ppt, 3500k.ppt and 1100k.xls; Riverbed Warm vs Blue Coat Warm]
The other approach is a branch office connected directly to the Internet (what we'll refer
to as Direct-to-Internet): each branch office is able to route traffic to the company data
centre as well as maintaining a direct connection to the Internet, provided by its local
ISP or via an MPLS split-tunnel design. Each WAN optimisation vendor integrates
differently with these two approaches.
Riverbed must be deployed in a hub-and-spoke environment, where all traffic from each
branch passes through the single hub of the data centre.
Blue Coat, while supporting the hub-and-spoke topology, also supports an asymmetric
approach where Internet access is provided at the endpoint by the local ISP or an MPLS
split tunnel - a branch-to-Internet solution.
As a consequence of these differing philosophies, each test was run twice: once with
hub-and-spoke, and again with the branch office directly connected to the Internet. See
the graphs on the next two pages.
In the hub-and-spoke topology, while the cold run results were reasonably similar
between our vendors, on several of the warm runs Blue Coat left Riverbed behind. Blue
Coat was significantly quicker than Riverbed when transferring the large PowerPoint file
(3500K.ppt), as well as the large (and highly compressible) 7108k Word document. As
the file sizes increased, Blue Coat was consistently faster.
BPOS Hub-and-Spoke - COLD (seconds)

File        Baseline   Riverbed Cold   Blue Coat Cold
250k.doc    3.0        2.2             2.0
1340k.doc   21.3       9.0             9.3
7108k.doc   116.3      24.7            27.0
1100k.xls   17.3       4.0             3.7
500k.xls    7.0        1.0             1.0
250k.ppt    3.0        1.0             1.0
500k.ppt    13.3       7.0             7.7
3500k.ppt   58.0       57.7            56.0
BPOS Hub-and-Spoke - WARM (seconds)

File        Riverbed Warm   Blue Coat Warm
1340k.doc   2.7             1.0
7108k.doc   17.0            1.5
1100k.xls   1.7             1.0
500k.xls    1.0             1.0
500k.ppt    1.5             1.0
3500k.ppt   8.0             1.5
The results from the Hub-and-Spoke configuration showed both vendors optimising the
download by several factors.
Where the branch offices were connected directly to the Internet, however, the results
were a very different story. Without an upstream device to assist in compressing the file,
each vendor was at the mercy of the uplink speed - again, using a common 512Kbps line
with 100ms latency.
The download speeds on the Direct-to-Internet cold run all hovered around the baseline
times, indicating that neither vendor could improve cold-run performance.
MSFT BPOS, Branch Office to Cloud via Direct Internet Connection - COLD (seconds)

File        Baseline   Riverbed Cold   Blue Coat Cold
1340k.doc   22.0       21.7            21.0
7108k.doc   121.3      116.3           116.7
1100k.xls   17.0       17.7            17.7
500k.xls    6.3        7.0             7.0
500k.ppt    13.0       13.2            13.7
3500k.ppt   58.0       58.2            57.7
Figure 33 - SharePoint Cloud Service, Branch Office to Cloud via Direct Internet Connection, Cold
MSFT BPOS, Branch Office to Cloud via Direct Internet Connection - WARM (seconds)

File        Baseline   Riverbed Warm   Blue Coat Warm
1340k.doc   22.0       22.0            1.0
7108k.doc   121.3      116.0           1.3
1100k.xls   17.0       16.7            1.0
500k.xls    6.3        7.0             1.0
500k.ppt    13.0       14.0            1.0
3500k.ppt   58.0       58.0            1.2
Figure 34 - SharePoint Cloud Service, Branch Directly Connected to Cloud via Internet, WARM Run
Test 7: Web Application Test - Salesforce.com

SFDC, Branch Office to Cloud via Direct Internet Connection - COLD (seconds)

File        Baseline   Riverbed Cold   Blue Coat Cold
300K.doc    5.0        5.0             5.0
2MB.txt     35.0       35.0            2.0
1340k.doc   22.0       22.0            18.0
1100k.xls   17.0       17.0            6.0
500k.ppt    7.0        7.0             3.0
3500k.ppt   58.0       58.0            19.1
Looking at the cold runs first, we see that the Blue Coat device largely dominates in terms
of best results. Obviously the compression options from Blue Coat are working especially
well on the text file here.
Moving on to the warm results, we can see that only the Blue Coat technology was
capable of accelerating beyond the cold runs, with everything being accessed
near-instantly, while Riverbed showed no optimisation whatsoever.
SFDC, Branch Office to Cloud via Direct Internet Connection - WARM (seconds)

File        Baseline   Riverbed Warm   Blue Coat Warm
300K.doc    5.0        5.0             1.0
2MB.txt     35.0       35.0            1.0
1340k.doc   22.0       22.0            1.0
1100k.xls   17.0       17.0            1.0
500k.ppt    7.0        7.0             1.0
3500k.ppt   58.0       58.0            1.0