STORAGE | SEPTEMBER 2017
MANAGING THE INFORMATION THAT DRIVES THE ENTERPRISE
DATA STORAGE GROWTH IS A FACT

Danger! Disaster! Data management!

Data has taken on a life of its own and keeps growing. It's time to take control.

IT HAS ALL the makings of a spine-tingling, modern-day disaster movie. With corporate data growing at the rate of about a gazillion percent per year, companies are drowning in the stuff. And like that great 1958 movie monster, the Blob, much of that binary flotsam is escaping data centers and seeping under doorways and out into the wild, where it's forming great lakes of data.

Soon, giant chunks of data break off and create an even more threatening situation, just like that Delaware-sized iceberg that snapped off Antarctica and is ominously adrift. Data bergs begin to bounce off data centers, spilling social security and credit card numbers, which are scooped up by dark web bottom feeders who sell identities like popcorn at a disaster flick.

Did that send a shiver up your spine? That may be a little over the top, but probably uncomfortably close to home for some of you. Even as the compliance delete-everything, save-everything debate fades into a distant memory, companies seem convinced that there's gold in them thar data points. Hoarding is in, and data deletion simply isn't an option anymore. And if hanging onto everything that naturally finds its way into our data centers isn't enough, we keep coming up with things to collect even more data and cause even more data storage growth: devices you wear or ones that watch you or track everything you do on the internet.

We're addicted to data, even if we're not sure what to do with it.

I expect some companies are spending so much time collecting, curating and cracking open their customers' data that they no longer have time to actually produce products to sell.

DATA CAPACITY IS OUT OF CONTROL

Of course, the problem with the data boom is that you have to have some place to put it, and wherever that is, you also must protect it. Both of those issues mean spending bucks (big bucks, probably) and committing other resources to the care and feeding of all those zeros and ones.

Under normal circumstances, that is, in the pre-data
The superhero universe of data management

Data management is the Superman, not the Batman, of the storage world.

IS IT JUST me, or are meta-humans becoming meta-boring? Summer 2017 seems to have passed by in a blur, punctuated every few days by yet another blockbuster movie based on a Marvel or DC superhero. My kids want to see every one, of course, despite the commonality of the plot lines. We usually end up debating the fidelity of the movie to the comic or the canonicity of this installment to previous films in the series. Frankly, it's getting pretty yawn-inducing.

This is the same feeling I get when a vendor pitches his latest storage wares, which are increasingly contextualized as big wins for data management. Truth be told, most of these products aren't doing much at all in the way of enterprise data management strategy, at least not in the grand sense. Just as most of the Avengers and Batman don't actually have any superpowers, it seems discordant to hear some vendor woo me with his data management riff when he's really talking about storage management or simply flash storage, object storage or a cloud storage service.

Real data management isn't just about moving data among different storage tiers based on its hotness or coldness. No matter how many times IBM says "hot edge, cold center" to describe a managed storage infrastructure, it still sounds like they are describing a Pittsburgh-style steak rather than data management technology. And whether you're Cohesity, Ctera, Nasuni, Panzura or any of a dozen other data management companies, the agenda you're advancing sounds a heck of a lot more like a pitch about the need for more flash, cloud or object storage than a better enterprise data management strategy.

Don't get me wrong: There may be a lot of value in providing a better caching algorithm to position frequently accessed data on an edge NAS device to get to that data faster than in a public cloud, or to let users and apps still tied to file systems, rather than object storage, access it. But this approach is really just adding an armored flying suit to Robert Downey Jr., not gifting us with a meta-human like Superman or the Flash. And while I like Scarlett Johansson's costumery and keen marksmanship as Black Widow, even she can't really wield Thor's hammer or imitate those crazy demigoddess antics of Wonder Woman.
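The "better caching algorithm" mentioned here, keeping frequently accessed data on a fast edge tier and fetching cold data from the cloud on a miss, is at heart a recency policy such as LRU. A toy sketch follows; the class and names are illustrative, not any vendor's API:

```python
from collections import OrderedDict

class EdgeCache:
    """Toy LRU cache: keep the hottest objects on a fast edge tier."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()            # coldest first, hottest last

    def get(self, key, fetch):
        if key in self.items:
            self.items.move_to_end(key)       # hit: mark as recently used
            return self.items[key]
        value = fetch(key)                    # miss: slow path to the cloud store
        self.items[key] = value
        if len(self.items) > self.capacity:   # over capacity: evict coldest entry
            self.items.popitem(last=False)
        return value

cache = EdgeCache(capacity=2)
cache.get("a", lambda k: "A")
cache.get("b", lambda k: "B")
cache.get("a", lambda k: "A")   # "a" is now hotter than "b"
cache.get("c", lambda k: "C")   # evicts "b", the least recently used
```

The point of the column stands, though: this is storage management plumbing, not data management.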
Contain your storage

Here's where enterprise-class persistent data storage for containers is headed.
BY MIKE MATCHETT

Unfortunately, containers weren't originally designed to implement full-stack applications or really any application that requires persistent data storage. The original idea for containers was to make it easy to create and deploy stateless microservice application layers on a large scale. Think of microservices as a form of highly agile middleware with conceptually no persistent data storage requirements to worry about.
PERSISTENCE IN PERSISTING

Because the container approach has delivered great agility, scalability, efficiency and cloud-readiness, and is lower-cost in many cases, people now want to use it for far more than microservices. Container architectures provide such a better way to build modern applications that we see many commercial software and systems vendors transitioning internal development to container form and even deploying them widely, often without explicit end-user or IT awareness. It's a good bet that most Fortune 1000 companies already host third-party production applications in containers, especially inside appliances, converged approaches and purpose-built infrastructure. You might find large, containerized databases and even storage systems. Still, designing enterprise persistent storage for these applications is a challenge, as containers can come and go and migrate across distributed and hybrid infrastructure. Because data needs to be mastered, protected, regulated and governed, persistent data storage acts in many ways like an anchor, holding containers down and threatening to reduce many of their benefits.

Container architectures need three types of storage. The first is image storage. This can be provided with existing shared storage and has requirements much like platforms already built for distributing and protecting virtual machine (VM) images in server virtualization. A benefit is container images are much smaller than golden VM images because they don't duplicate operating system code. Also, running container images are immutable by design, so they can be stored and shared efficiently. There is a consequence, though, as the container image cannot store dynamic application data.

The second required data store is for container management. Again, you can readily provide this with existing storage. Whether you use Docker, Kubernetes, Tectonic, Rancher or another flavor of container management, it will need management storage for things like configuration data and logging.

It's the third type of storage, container application storage, that provides the most difficult challenge. When only supporting true microservice-style programming, container code can write directly over image directories and files. But containers use a type of layered file system that corrals all newly written data into a temporary, virtual layer. The base container image isn't modified. Once a container goes away (and containers are designed to be short-lived compared with VMs) all its temporary storage disappears with it.

If a containerized application needs to persist data, the first option is to explicitly mount a specific system data volume (or persistent volume in Kubernetes) into the container's namespace. This gives the container direct access to the volume, though it's then up to application code to ensure reliable cluster-level sharing. The good news is that expert storage admins can potentially bring existing enterprise storage, such as NAS and SAN, to bear in this new container world. And if they work closely with developers, they can realistically configure high-end storage for container use.

Instead of trying to force legacy storage into new container environments, a growing alternative enlists a new wave of software-defined storage (SDS) to do the job. SDS consists of a storage operating system and services that are fully deployed as a software layer, often as VMs and now, increasingly, as containers.
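The layered, throwaway write behavior that makes container storage ephemeral can be sketched as a toy copy-on-write model. This is pure illustration of the concept, not Docker's actual storage-driver machinery; `collections.ChainMap` conveniently sends writes to the top map only while reads fall through:

```python
from collections import ChainMap

# Read-only base image layer, shared by every container started from it.
image_layer = {"/app/bin": "binary", "/etc/config": "defaults"}

def start_container(image):
    # Each container gets a fresh, empty writable layer stacked on the image.
    # Writes land in the top layer only; reads fall through to the image.
    return ChainMap({}, image)

c = start_container(image_layer)
c["/data/scratch"] = "working data"            # captured by the writable layer

assert c["/etc/config"] == "defaults"          # read falls through to the image
assert "/data/scratch" not in image_layer      # the base image is never modified

# When the container goes away, its writable layer (and the data) goes with it.
c = None
assert image_layer == {"/app/bin": "binary", "/etc/config": "defaults"}
```

A mounted persistent volume escapes this fate precisely because it lives outside the layered stack.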
How containers affect storage

CONTAINERS CREATE SEVERAL challenges for existing storage, including the following:

• Dynamic provisioning of lots of containers. Anything other than cloud-scale storage can be strained by container-scale dynamic provisioning demands.

• Lost, isolated and unknown usage. Ongoing container development and DevOps can end up creating lots of fragmented islands of wasted capacity. Thin provisioning, snapshots and automatic recovery of capacity policies might help.

• Difficult to isolate contention. With so much going on in thousands of containers, resolving deadlocks and contention can be challenging, especially with legacy systems and traditional management tools.

• Data migration, keeping up with container migration and avoiding performance degradation. Containers move a lot, and their data should migrate with them to maintain top performance.

• Network issues. It's not just east-west container talk; sharing remote storage across a cluster of thousands of containers could bring the network to a crawl. Consider affordable 40 Gigabit Ethernet and greater cluster interconnects.

To be sure, there will be more challenges. For example, the wide variety of containerized applications in your future will probably require choosing from a broader catalog of storage services than just the traditional three medal levels of bronze, silver and gold.
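The thin-provisioning remedy for fragmented, wasted capacity boils down to promising more virtual space than physically exists and consuming the pool only as data is written. A minimal sketch of that accounting, with illustrative names rather than any array's real extent-level bookkeeping:

```python
class ThinPool:
    """Toy thin-provisioning ledger: volumes advertise a large virtual size,
    but pool capacity is consumed only as data is actually written."""

    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.volumes = {}                          # name -> [virtual_gb, written_gb]

    def provision(self, name, virtual_gb):
        self.volumes[name] = [virtual_gb, 0]       # instant, consumes nothing yet

    def write(self, name, gb):
        virtual, written = self.volumes[name]
        self.volumes[name][1] = min(virtual, written + gb)
        if self.used_gb() > self.physical_gb:      # oversubscription caught up
            raise RuntimeError("pool exhausted: add physical capacity")

    def used_gb(self):
        return sum(written for _, written in self.volumes.values())

pool = ThinPool(physical_gb=100)
for i in range(10):                                # promise 10 x 50 GB = 500 GB...
    pool.provision(f"container-vol-{i}", virtual_gb=50)
pool.write("container-vol-0", 20)                  # ...but only 20 GB is really used
print(pool.used_gb())                              # 20
```

The risk, as the sidebar implies, is the `RuntimeError` path: oversubscription needs monitoring so physical capacity arrives before the promises come due.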
A benefit of containerized approaches such as Ceph, Torus and GlusterFS is that they bring storage right into container clusters. While managing something like GlusterFS can be a big departure for traditional SAN administrators, containerized storage naturally gains container-world benefits like agility, scalability and availability, while increasing application performance by keeping persistent data storage local to consumption.

If this sounds too complex, pre-converged and hyper-converged container appliances, such as Datera and Diamanti, make it much simpler with native container storage capabilities built in. These inherently employ SDS to gain the flexibility and agility necessary to converge everything into a platform appliance format. We haven't heard much yet from enterprises wanting to seriously invest in converged container hosting for production, but the future of IT infrastructure is going to continue down the convergence path while building out more cloud-like services.

Containerized applications tend to be cloud-oriented by nature, with architectures allowing for independent scaling of different internal services depending on changing external workload profiles or growth. This same cloud approach pervades how modern application developers think of, and want to work with, storage. It's natural many new containerized applications are written for object storage I/O instead of traditional file or block.

Even if most current container environments are fairly modest in practice (except, of course, in public clouds), web-scale object storage from the likes of Hedvig, Qumulo and Scality aligns well with web-scale container visions. Amazon Web Services' Simple Storage Service (S3) and similar public clouds already use object storage as the persistent storage layer when implementing or migrating container applications.
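The object-versus-file distinction comes down to the I/O contract: S3-style stores PUT and GET whole objects by key, with no seek or in-place update, and "directories" are just a key-naming convention. A minimal in-memory sketch of that interface (illustrative only, not the AWS SDK):

```python
class ObjectStore:
    """Minimal S3-style semantics: a flat key -> bytes namespace where
    applications put and get whole objects (no seek, no in-place update)."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = bytes(data)       # replaces the whole object

    def get(self, key):
        return self._objects[key]

    def list(self, prefix=""):
        # Hierarchy is simulated by key prefixes, not real directories.
        return sorted(k for k in self._objects if k.startswith(prefix))

store = ObjectStore()
store.put("app1/logs/2017-09-01", b"started")
store.put("app1/state/checkpoint", b"\x00\x01")
print(store.list("app1/"))   # ['app1/logs/2017-09-01', 'app1/state/checkpoint']
```

An application written against this kind of interface scales naturally across a cluster, which is exactly why it suits containerized services better than shared POSIX file semantics.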
Of course, the trick for IT folks is to judge whether paying for a vendor's unique intellectual property and value on top of free open source is worth the investment and additional lock-in. To benefit from pre-integration, proven enterprise-class features and round-the-clock support, it's often worth making a longer-term commitment to a specific vendor open source distribution or pre-converged stack. In other words, it's not just old-school IT vendor versus vendor, but variations of intellectual property-laden vendors versus open source vendors versus do-it-yourself projects.

FUTURE REQUESTS

We have yet to see the ultimate in persistent data storage for containers. From past experience in storage evolution, expect to see container-aware storage that knows it's being provisioned and used by containers and manages itself appropriately. Much like virtual machine-aware storage, we should see a container storage service that persists container data and follows (or even leads) the container across a cluster and maybe even across clouds. We fully expect to eventually see container-aware caching using server-side flash and emerging persistent memory, such as nonvolatile memory express, hopefully integrated
Snapshot 1

Most critical features in recent hyper-converged infrastructure purchases*
33% Overall capacity
13% Can be optimized for specific apps
11% Data protection type
11% Advanced networking features
*MULTIPLE SELECTIONS ALLOWED

[Chart: Number of server cores in recent hyper-converged infrastructure purchases; response buckets ranged from one to nine cores up to 5,000 or more.]

938 TERABYTES: average storage capacity IT shops added with recent hyper-converged infrastructure deployments
How NVMe over Fabrics will change storage

The future of storage performance and transport lies in this type of connectivity and protocol.
BY CHRIS EVANS

STORAGE NETWORKS STARTED becoming popular in the late 1990s and early 2000s with the widespread adoption of Fibre Channel technology. For those who didn't want the expense of installing dedicated Fibre Channel hardware, the iSCSI protocol provided a credible Ethernet-based alternative a few years later. Both transports rely on the use of SCSI as the storage protocol for communicating between source (initiator) and storage (target). As the storage industry moves to adopt flash as the go-to persistent medium, we're starting to see SCSI performance issues. This has led to the development of NVMe, or nonvolatile memory express, a new protocol that aims to surpass SCSI and resolve performance problems. Let's take a look at NVMe and how it differs from other protocols. We'll also explore how NVMe over Fabrics is changing the storage networking landscape.
Anyone familiar with installing disks into servers will remember ribbon cables. The interface transitioned to a serial one with the development of SAS. The PC counterpart was Advanced Host Controller Interface (AHCI), which developed into SATA. You find both protocols on current hard drives and solid-state disks.

Fibre Channel or Ethernet provides the physical connectivity between servers and storage, with SCSI still acting as the high-level storage communication protocol. However, the industry developed SCSI to work with HDDs, where response times are orders of magnitude slower than system memory and processors. As a result, although we may think SSDs are fast, we see serious performance issues with internal ones. Most SATA drives are still based on the SATA 3.0 specification, with an interface limit of 6 Gbps and 600 MBps throughput. SAS drives have started to move to SAS 3.0, which offers 12 Gbps throughput, but many still use 6 Gbps connectivity.

The issue for both SAS and SATA, however, is the ability to handle concurrent I/O to a single device. Look at the geometry of a hard drive, and it's easy to see that handling multiple concurrent I/O requests ranges from hard to impossible. With some serendipity, the read/write heads may align for multiple requests. And you can use some buffering, but it's not a scalable option. Neither SAS nor SATA was designed to handle multiple I/O queues. AHCI has a single queue with a depth of only 32 commands. SCSI is better and offers a single queue with 128 to 256 commands, depending on the implementation.

Single queues negatively affect latency. As queue size increases, new requests see greater latency because they must wait behind other requests to be completed. This was less of an issue with hard drives, but a single queue is a big bottleneck with solid-state media, where there are no moving parts and individual I/O latency is low.

ENTER NVMe

The industry's answer to the interface problem is NVMe as a replacement for SCSI, both at the device and network levels. Nonvolatile memory express uses the PCIe bus, rather than a dedicated storage bus, to provide greater bandwidth and lower latency connectivity to internally connected disk devices. A PCIe 3.0 four-lane device, for example, has around 4 GBps of bandwidth per device.

The biggest change in NVMe has been the optimization of the storage protocol. The amount of internal locking needed to serialize I/O access has been reduced, while the efficiency of interrupt handling has increased. In addition, NVMe supports up to 65,535 queues, each with a queue depth of 65,535 entries. So rather than having a single queue, NVMe provides for massive parallelism in the I/O to a connected device. In modern IT environments where so much work is done in parallel (just think of the number of cores in modern processors) we can see the benefits of having storage devices capable of processing multiple I/O queues and how this will improve external I/O throughput.

The NVM Express working group, a consortium of about 90 companies, developed the NVMe specification in 2012. Samsung was first to market the next year with an NVMe drive. The working group released version 1.3 of the specification in 2017.
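The queue figures quoted here imply very different ceilings on commands in flight, and a little arithmetic also shows why SATA's 6 Gbps line rate works out to the quoted 600 MBps (8b/10b encoding carries 8 data bits in every 10 line bits):

```python
# Outstanding-command ceiling implied by each interface's queue model,
# using the figures cited above (queues x queue depth).
interfaces = {
    "AHCI/SATA": (1, 32),
    "SCSI":      (1, 256),          # 128-256, depending on implementation
    "NVMe":      (65_535, 65_535),
}
for name, (queues, depth) in interfaces.items():
    print(f"{name:10s} {queues * depth:>13,} commands in flight")

# SATA 3.0: 6 Gbps line rate, 8b/10b encoding, 8 bits per byte.
usable_mbps = 6 * 1000 * (8 / 10) / 8
print(f"SATA 3.0 usable throughput: ~{usable_mbps:.0f} MBps")   # ~600 MBps
```

NVMe's ceiling works out to over four billion outstanding commands per device, which is why the protocol, not the flash, stops being the bottleneck.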
RDMA enables the transfer of data to and from the application memory of two computers without involving the processor, providing low latency and fast data transfer. RDMA implementations include InfiniBand, iWARP and RDMA over Converged Ethernet, or RoCE (pronounced "rocky"). Vendors such as Mellanox offer adapter cards capable of speeds as much as 100 Gbps for both InfiniBand and Ethernet, including NVMe over Fabrics offload.

NVMe over Fibre Channel uses current Fibre Channel technology that can be upgraded to support SCSI and NVMe storage transports. This means customers could potentially use the technology they have in place by simply upgrading switches with the appropriate firmware. At the host level, host bus adapters (HBAs) must support NVMe (typically 16 Gbps or 32 Gbps) and, obviously, storage devices have to be NVMe over Fabrics capable, too.

• SAS replacement. NVMe will supplant SAS as the main internal protocol for storage arrays. In correctly architected products, this change will result in significant performance improvements as the benefits of flash are unlocked.

• Easier transitions. NVMe over Fabrics can coexist with Fibre Channel Gen 6 technology and onward. This allows a transition to storage arrays supporting NVMe over Fabrics, an easier transition than the rip-and-replace approach required for a move to NVMe over Ethernet.
Snapshot 2

Simplicity and price motivate hyper-convergence buys

What's driving new hyper-converged infrastructure purchases?*
37% Easier to purchase
33% Quicker deployment
14% Easier to add future capacity or capability

What do you look for in a hyper-converged infrastructure vendor?*
69% Price
51% Product features, functions and performance
21% Preassembled technology (plug-and-play)

6% of enterprises have hyper-converged-based primary storage deployed; 8% plan to deploy it within a year.

*MULTIPLE SELECTIONS ALLOWED
CDM goes mainstream

Copy data management focuses on both protecting production data and improving the management of production data copies. The goals are to cut storage costs and improve data visibility.
EMC (now Dell EMC) and IBM introduced CDM products last year, and early this year, Veritas came out with Veritas Velocity CDM. It's not surprising that vendors are interested in copy data management. It's a natural extension of their storage and data protection products, and many organizations have moved beyond tire kicking to implementing CDM. A 2017 Taneja Group study found that more than 30% of companies are evaluating CDM products or have implemented them.

So what CDM capabilities do companies value the most? And how are vendors responding to user demand for CDM functionality? The answers to these questions provide an understanding of where the copy data management market is having the greatest impact. They also identify which vendors are leading the effort to provide universal data visibility, instant access to data, automated data protection, and data portability that capitalizes on hybrid and multicloud environments.

CDM PRIORITIES

IT pros responding to the Taneja Group survey listed lowering storage costs; better data visibility, insight and compliance; and the ability to consolidate secondary storage as the top three CDM capabilities beyond data protection.
Lower storage costs by using fewer data copies: 41%
Provide better data visibility and insight to find out-of-place confidential data and ensure compliance: 34%
Consolidate secondary storage use cases, such as backups, file services and object storage: 32%
Improve data lifecycle management using policy-based orchestration: 29%
Enable automated copy creation and management for DevOps workflows: 24%

SOURCE: TANEJA GROUP SURVEY OF MORE THAN 300 IT PROFESSIONALS. *MULTIPLE SELECTIONS ALLOWED
THREE-QUARTERS OF respondents to a recent Taneja Group survey said copy data management, or CDM, was either complementary to data protection or a whole new category of technology that will replace data protection. Interviews revealed that while IT professionals don't necessarily agree on whether CDM is a data protection complement or replacement, they do agree that its functionality should be seamlessly integrated with data protection. This must be done in a way that provides a unified experience whether the goal is to monitor, protect, manage or optimize the secondary data environment.

CDM also enables data lifecycle management through policy-based orchestration. True automation requires spinning up and down an entire infrastructure, which means creating policies that provision data copies; setting network parameters, refresh frequencies and retention periods; and cleaning up copies and VMs as needed.

All major vendors in the copy data management market offer policy-based orchestration, but managing service-level agreement (SLA) compliance and cloud support can set a vendor apart. For example, Dell EMC's eCDM offers comprehensive functionality when it comes to full-lifecycle SLA compliance that monitors SLA quality of service. Also, having a visual workflow builder is beneficial for ease of use. Hitachi Data Instance Director is strong in this area.

Cloud support has become an important aspect of or-
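The policy pieces named in this section (refresh frequency, retention period, cleanup of expired copies) reduce to a simple expiry pass over the copy catalog. The sketch below is illustrative; the field names and policy shape are assumptions, not any vendor's schema:

```python
from datetime import datetime, timedelta

def expire_copies(copies, retention, now):
    """Split copies into those still inside retention and those to reclaim."""
    keep, reclaim = [], []
    for copy in copies:
        age = now - copy["created"]
        (keep if age <= retention else reclaim).append(copy)
    return keep, reclaim

copies = [
    {"id": "crm-snap-01", "created": datetime(2017, 9, 1)},   # 11 days old
    {"id": "crm-snap-02", "created": datetime(2017, 9, 10)},  # 2 days old
]
keep, reclaim = expire_copies(copies, retention=timedelta(days=7),
                              now=datetime(2017, 9, 12))
print([c["id"] for c in reclaim])   # ['crm-snap-01']
```

A real CDM engine layers SLA monitoring and cloud targets on top, but this keep-or-reclaim decision is the core of policy-driven cleanup.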
Time to rethink software-defined storage

Don't get hung up on the software part of SDS at the expense of what really matters.

FEW STORAGE TERMS have been hyped as much in recent years as software-defined storage. It seems that every new storage player has an SDS-based offering in some shape or form. The question remains, though: Despite the hype and the abundance of offerings, why hasn't SDS technology taken over the data center?

It isn't due to a lack of advantages. To better understand the software-defined storage market, Enterprise Strategy Group (ESG) conducted a research study of more than 300 IT professionals responsible for evaluating, purchasing and managing data storage technology. Respondents had to be using, evaluating or at least be interested in SDS as a long-term strategy. This survey provided insight into what's driving IT to the technology, as well as benefits achieved by those who had deployed it, including the following:

• reducing storage operational expenses;
• simplifying or expediting new storage deployments; and
• simplifying storage management.

These are only the top three of a longer list. It appears the software-defined storage market is delivering value to its users, so why hasn't adoption taken off? While on-premises SDS deployments continue to grow, they haven't grown as much as alternatives that take advantage of software-defined storage technology, most notably public cloud services and hyper-converged infrastructure (HCI). One issue could be that the do-it-yourself SDS deployment model hasn't been widely accepted.

THE DO-IT-YOURSELF FALLACY

When vendors introduced SDS technology, the storyline was often about SDS using commodity (or server) hardware to reduce hardware lock-in. The net result touted was a lower capital cost of infrastructure. While this has happened (the fourth SDS benefit identified in ESG's study is reduced capital cost of storage) the ability to use commodity hardware hasn't been enough to sway typical
The power and benefits of IT disruption

IT must become a cutthroat provider and champion of disruptive technologies.

I HAVE A theory that true IT disruption happens when something nonlinear occurs to change the traditional expectation or baseline for a key operating capability. In storage, this could be related to capacity, performance or value. We've seen great market disruption (not to mention data center evolution) with the rise of scale-out vs. scale-up storage architectures, flash vs. disk and big data analytics vs. data warehouse business intelligence, for example. These disruptions have all brought orders-of-magnitude improvement, enabling many new ways to distill more value out of data.

I'm not saying we leave disrupted technologies completely behind, but old top-tier technologies can quickly drop down our perceptual pyramids of perceived value. Some older techs do disappear; think floppy drives and CRT monitors. But usually, they get subsumed as a lower tier inside a larger, newer umbrella and relegated to narrower, less prestigious use cases.

Emerging disruptive storage technologies include nonvolatile memory express server-side flash and persistent memory; in-storage data processing, combining software-defined storage, containerization and in-stream processing; global file systems and databases with global consistency, security, protection and access features; pervasive machine learning; and truly distributed internet of things data processing.

IF YOU KNOW IT'S COMING, IS IT DISRUPTIVE?

You might think it's been the monstrous, growing volume of data, which is certainly growing nonlinearly, that's given birth to these disruptive technologies. That's the story told by all the vendor presentations I've seen in the last few years. I've even presented that line of thinking.

The problem with that interpretation is that it makes the storage industry look reactive instead of proactive. And it also makes them look like heroes, or at least paragons of product management, figuring out exactly what people need and delivering it just in time to save IT from certain disaster.

If we're honest, the truth is more likely that newer generations of storage technologies let us retain massively