REPORT: Power at the edge. 2017 Senza Fili Consulting. www.senzafiliconsulting.com
Table of contents

I. Analyst Report
Introduction. Complexity enables intelligence
A map of the territory: what analytics does and how
Managing complexity, with machine learning and AI
Analytics tradeoffs: time and depth
Drivers to adoption: cost, services, usage and technology
Challenges and benefits: getting over the cultural shift
The ingredients for a healthy business case
Implications
II. Vendor profiles and interviews
Empirix
EXFO
InfoVista
Intel
Glossary
References
Further resources
Introduction. Complexity enables intelligence

Over the last few decades, performance and coverage have improved tremendously, and we increasingly rely on wireless connectivity to keep in touch with each other, to be informed and entertained, and to carry out a growing number of the tasks we do in our daily lives. With IoT, wireless networks are taking on our environment, too: they have started to monitor the things that inhabit our surroundings and to take actions within our world. If we include both cellular and Wi-Fi, wireless has become the dominant way we communicate with each other, as the preferred alternative not only to mail and fixed calls, but also to wireline communications.

Yet, despite their success, our wireless networks are not terribly smart or efficient. In most cases, they still strive to push through as many minutes of voice calls or bits of data as they can, regardless of what those calls or bits are, who is sending them, what service or application they are tied to, how valuable or urgent they are to the operator or the subscriber, and what the network conditions are.
This is about to change. Networks are about to become smarter and more
efficient, and at the center of that transformation is analytics.
The rising complexity of wireless networks and of the traffic they carry is the
fundamental enabler. Complexity provides the necessary ground for
optimization. In a homogeneous system, the scope for optimization is limited;
brute-force management of resources prevails. New technologies and more
powerful processing capabilities are also enablers of the transformation to smart
and efficient networks. But a key driver is the operators' realization that end-to-end network optimization is no longer an option: it is a necessity.
In widening the range of data sources, analytics requires more effort than
traditional optimization, but it also provides a unified and converged platform
for multiple targets of optimization. These targets can be categorized as
infrastructure (the end-to-end network, whether virtual or legacy), customer-
facing components (services and CRM), and third parties and IoT (giving them
access to analytics data to improve their applications or services).
A map of the territory: what analytics does and how

This section draws on input from vendors, operators and other ecosystem players to map what analytics does or is expected to do in the near future.

The ultimate goal of analytics is to improve the subscriber experience or, for IoT, the service quality, in a way that is cost effective and optimizes the use of network resources. Operators' move from a network-based to a customer-centric approach provides the momentum for the adoption of analytics and accounts for many of the features that are emerging as central to analytics.

The most salient feature is the use of real-time, location-based big data: large volumes of raw data from multiple sources (some structured, some unstructured) to drive optimization. The data itself (primarily data from wireless networks and their users) has always been available to operators, but they could not easily collect and store it, let alone analyze it at the depth and time resolution required to make it useful. Network monitoring and optimization are still mostly done on historical data, but this severely limits their scope. What is different now is that operators are finding ways to use big data from their networks and subscribers, and they have started to add external data (e.g., demographic or location data) to enrich their analysis.

As a result, operators have to deal with massive amounts of data, and this may feel overwhelming. In fact, for some time, worries about the sheer amount of data available have been a drag on mobile operators' commitment to adopting analytics. The first challenge that analytics poses is to find ways to take control of the data. Specifically, operators need to:
Clean the data
Correlate data sources
Find the relevant data

[Figure: what analytics does and how. Big data (structured and unstructured data sets, internal and external sources, location based, real time, variable time resolutions from batch analysis to real time) is cleaned, correlated and filtered, then analyzed at multiple levels (drilling down as needed to network element, individual user, location, time, KPIs/KQIs) for multiple targets within an organization: testing; RAN, core and end-to-end optimization; monitoring and root-cause analysis; security; CRM; service assurance; network planning; automation; visualization. Analytics serves to measure and understand network performance and QoE, identify network anomalies and QoE issues, trace the cause of problems, suggest solutions, and predict future disruptions and requirements.]
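The three steps (clean the data, correlate data sources, find what is relevant) can be sketched in a few lines. This is a minimal illustration only: the record layouts, field names and the 1 Mbps relevance floor are assumptions for the example, not any operator's real schema.

```python
# Hypothetical raw feeds: RAN probe samples and CRM records (illustrative).
ran_records = [
    {"imsi": "001", "ts": "2017-03-01T10:00", "dl_mbps": 4.2},
    {"imsi": "001", "ts": "2017-03-01T10:05", "dl_mbps": 0.3},   # degraded
    {"imsi": "002", "ts": "2017-03-01T10:00", "dl_mbps": None},  # malformed
]
crm_records = [
    {"imsi": "001", "plan": "premium"},
    {"imsi": "002", "plan": "basic"},
]

def clean(records):
    """Step 1: drop records with missing measurements."""
    return [r for r in records if r["dl_mbps"] is not None]

def correlate(ran, crm):
    """Step 2: join network data with another source (here, CRM) on subscriber id."""
    plans = {c["imsi"]: c["plan"] for c in crm}
    return [dict(r, plan=plans.get(r["imsi"], "unknown")) for r in ran]

def find_relevant(records, floor_mbps=1.0):
    """Step 3: keep only out-of-range records; in-range data can be
    summarized and discarded rather than stored."""
    return [r for r in records if r["dl_mbps"] < floor_mbps]

relevant = find_relevant(correlate(clean(ran_records), crm_records))
# Only the degraded sample for the premium subscriber survives the pipeline.
```

In a real deployment each step is far richer, but the shape is the same: most raw data is filtered out early, and what remains is already enriched with the context needed for analysis.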
Initially, finding the relevant data is the most difficult step, because we do not know what is relevant and what can be discarded. We want to find anomalies or unexpected correlations in the data, but we do not need to store data that shows that the network is consistently behaving within the expected range: we may want to note this down, delete the raw data and move on. But this is difficult to do, because we do not know a priori which data will give us the insight into the network that we hope for, and because the data set is used for multiple purposes and by different groups within the organization, each with its own relevance criteria.

Yet finding what's relevant is necessary. Storing and analyzing data has become less costly, but it is still expensive, requires much effort, and can lead to false positives or useless recommendations. Some vendors estimate that 90% of data can be safely thrown away. After we learn more about relevance in the data, this figure is likely to rise, and, as a result, the analysis process can be streamlined. Furthermore, data that is needed for real-time or near-real-time tasks needs to be stored only for as long as it is needed.

Analysis

After creating a robust framework to collect, clean, correlate and filter the data, the analysis can be done at different levels, for different purposes. In the pre-analytics framework, operators typically do not have a common data platform that serves many optimization tools, but rather separate data resources, each used for a different task. Inevitably, this creates unnecessary duplication and forces operators to adopt a narrow focus that lets them refine performance only on a specific portion of their network. With analytics, operators can use the same pool of data for all their service and network optimization needs.

They can do so by selecting different depth levels (looking at the high-level, end-to-end network performance or drilling down to the experience of a single subscriber), depending on what they want to accomplish. For instance, high-level data is useful to chart overall network performance through time, but when a subscriber calls in to complain about service, getting real-time information about the network's performance where the subscriber is located, and about the subscriber's activities and device, requires a deeper dive into the data.
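This depth selection can be sketched with a toy example: the same pool of (invented) QoE samples serves both the network-level view and the per-subscriber drill-down. The MOS scores and cell names are illustrative assumptions.

```python
from statistics import mean

# Illustrative per-session QoE samples; not a real schema.
samples = [
    {"imsi": "001", "cell": "A", "mos": 4.1},
    {"imsi": "001", "cell": "B", "mos": 2.0},
    {"imsi": "002", "cell": "A", "mos": 3.9},
    {"imsi": "003", "cell": "B", "mos": 2.2},
]

# High-level view: one number, suitable for charting network QoE over time.
network_mos = mean(s["mos"] for s in samples)

def drill_down(imsi):
    """When a subscriber calls in: their sessions, plus the average QoE of
    the cells they used, pulled from the same shared data pool."""
    sessions = [s for s in samples if s["imsi"] == imsi]
    cells_used = {s["cell"] for s in sessions}
    cell_mos = {}
    for cell in cells_used:
        cell_mos[cell] = mean(s["mos"] for s in samples if s["cell"] == cell)
    return sessions, cell_mos

sessions, cell_mos = drill_down("001")
# Cell B averages a much lower MOS: a likely cause of this subscriber's complaint.
```

The point of the sketch is the shared pool: both views query `samples`, rather than each team maintaining its own copy of the data.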
The same principle extends to the identification of network anomalies that can account for QoE issues, and to root-cause analysis. Operators are moving beyond an element-based approach in which, when an element in the network does not perform as it should, they fix it and expect that the end-to-end network will recover.

While fixing elements that misbehave is still necessary, it is not sufficient to optimize QoE. Every single part of the network may perform as expected, but subscribers may still be unhappy about the quality of the service, or the network may not support the level of QoE that the operator wants to achieve. In this case, analytics provides a way to identify anomalies or even correlations that may explain low QoE or suggest ways to improve it.

Similarly, root-cause analysis benefits from analytics, because analytics can help identify problems with complex sources, which may stem from the interaction of different elements or may not simply be reduced to the malfunctioning of a single element.

The guidance that analytics provides results both in a better and deeper understanding of the subscriber experience and network performance, and in recommendations to optimize QoE and resource utilization.

Analytics is a platform in which data from multiple sources converges to serve multiple audiences within the operator and, as we will see later, within third parties. It can cover the entire life cycle in the end-to-end network, from testing (to ensure that the network is ready for commercial launch) to network planning (where analytics can proactively and precisely identify when and where the network is due for expansion).

During commercial operations, analytics helps operators optimize the overall, end-to-end network and each component within it. It can also assist with monitoring network performance and with root-cause analysis, to solve performance issues that affect the network as they arise or, eventually, to predict them.

Analytics can be a powerful tool for identifying and managing security threats. It can help operators identify security vulnerabilities, as well as detect and isolate suspicious activity in the network that may be caused by malicious attacks.

Another key benefit of analytics, and of the QoE-centric approach it supports, is the ability to tie together the monitoring and management of network performance and service quality. While service quality inevitably depends on network performance, operators still manage them separately, to a large extent, through different units in their organization. Analytics will bring the two together.
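A minimal stand-in for the end-to-end anomaly detection described above: flag intervals whose end-to-end KPI deviates sharply from a trailing baseline, even when each element's own health check passes. The series, window and threshold are invented for illustration; production systems use far richer models.

```python
from statistics import mean, stdev

# Hourly end-to-end call setup time (ms). Each element reports healthy,
# but the end-to-end figure jumps in the last two hours: exactly the kind
# of degradation an element-based view would miss.
setup_ms = [210, 205, 215, 208, 212, 209, 214, 207, 340, 345]

def anomalies(series, window=8, z_threshold=2.5):
    """Flag indices that deviate from a trailing baseline by more than
    z_threshold standard deviations."""
    flagged = []
    for i in range(window, len(series)):
        base = series[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma and abs(series[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

print(anomalies(setup_ms))  # -> [8, 9], the two degraded hours
```

A trailing-window z-score is the simplest possible detector; its value here is to show how an end-to-end KPI, rather than per-element status, surfaces the problem.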
Managing complexity, with machine learning and AI

Today's networks are much more complex. Network architectures continue to evolve, with the addition of Wi-Fi access, small cells and DAS, C-RAN, unlicensed access, carrier aggregation, VoLTE, virtualization, edge computing, network slicing, and eventually 5G.

Managing networks that grow in size and complexity becomes difficult, because new elements and technologies have to be integrated into the existing network in order to benefit from the technological advances.

In parallel with the growth in network complexity, we have seen a growth in traffic heterogeneity. Where voice traffic once dominated, it now accounts for just a few percentage points of overall traffic. Video traffic is becoming dominant, but increasingly video is too coarse a category: we need to distinguish among, say, conversational video, streamed video and downloaded video, each with different requirements. IoT will further increase the heterogeneity and complexity of network architecture and traffic composition.
Complexity creates the fertile ground needed for analytics to grow and prosper,
because complexity creates the opportunity to optimize networks and services
in non-trivial, more sophisticated ways that will make networks smarter, more
efficient, and better at serving subscribers. Complexity gives operators more
flexibility and more choices, but of course those do not come for free. Reaping
the benefits of analytics requires effort and skill.
Data sources
The relevant data sources can be both internal (i.e., collected from the network
by the operator) and external (i.e., generated by third parties such as
government and private entities). The inclusion of multiple data sources reflects
AI has a wider scope: to replicate (or improve on) human intelligence, or some
aspects of it, and other cognitive functions in machines. In this context, functions
such as learning, pattern matching, problem solving and prediction are relevant
to analytics.
Analytics covers a lot of ground, and for many of its tasks (those that are sufficiently mature and well understood) existing deterministic, rule-based algorithms are efficient and well suited, and there is no need to use the more effort-intensive tools that ML and AI offer. But as analytics becomes more widely deployed and deepens our understanding of the network, we can expect ML and AI to expand their reach.
For instance, today we may not need ML and AI to decide where to put a new macro station, because there are many constraints that limit the possible choices. But as we move to more complex network architectures, there will be more solutions available for adding infrastructure at a given location, and ML and AI may become useful.

The potential for ML and AI is in helping vendors and operators address areas which are new (we have no historical data) and too complex to understand with more traditional approaches. ML and AI can correlate multiple sources of data and find what is relevant within the entire data set. Going through this process manually is too labor intensive to get beyond a basic correlation and selection of data that gives only limited insight into network and service performance. ML and AI may uncover correlations that were not previously recognized, because their automated processes can explore data more deeply and more systematically than humans can. Human expertise is still crucially valuable in narrowing the focus to find solutions and to keep complex problems manageable, but it can limit the ability to find novel solutions or insights.

Similarly, ML and AI can play a significant role in identifying anomalies in network and service performance that may point to performance issues, security threats or attacks, or other useful information (e.g., about an unexpected or unplanned event) that in turn may generate a prediction or a recommendation for specific action.
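As a toy illustration of this systematic exploration, even a brute-force pass over every pair of KPI series can surface a strong correlation a manual review might not think to check. The KPI names and values are synthetic, and real ML pipelines go far beyond pairwise Pearson correlation.

```python
from itertools import combinations
from statistics import mean

# Synthetic KPI time series (illustrative names and values).
kpis = {
    "video_stall_rate": [0.1, 0.4, 0.2, 0.8, 0.3, 0.9],
    "cell_load":        [0.2, 0.5, 0.3, 0.9, 0.4, 1.0],
    "sms_volume":       [5.0, 3.0, 6.0, 4.0, 5.5, 3.5],
}

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

# Systematically explore all pairs and keep only the strong correlations.
strong = {
    (a, b): round(pearson(kpis[a], kpis[b]), 2)
    for a, b in combinations(kpis, 2)
    if abs(pearson(kpis[a], kpis[b])) > 0.9
}
# Only the stall-rate/load pair survives the filter.
```

The automated sweep scales to thousands of KPI pairs; the human expert's role shifts to judging which of the surfaced correlations are causal and actionable.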
Analytics tradeoffs: time and depth

Operators can decide how aggressive they want to be, as well as how much risk they are willing to accept and how much effort they want to invest. With the potential for tradeoffs along many dimensions, each operator will chart its own unique path and get analytics to fit its specific requirements.

Time and depth are two of the most important dimensions, and they interrelate in defining how aggressive the approach to analytics is.

Time refers to the temporal resolution of the analysis, ranging from data collected over a period in the past to real-time data, which could be collected on the millisecond scale.

Depth combines network depth (from the end-to-end network, down to a single-element level) and location (of subscribers and infrastructure).

The two dimensions define two boundary cases:

A high-level approach, which requires less effort but provides results only at the network level and for historical data.

A deep approach, with data collected and analyzed in real time and using geolocation, so that optimization can happen at the edge of the network, targeting the RAN, the subscriber or both.
Operators will pick any combination of time and depth they see fit for different
tasks and at different times, depending on the target of their analysis. If their
target is QoE, they may want to look at it at the network level to see how it
changes through time, but they may also look at an individual subscriber to
customize the service offering.
Today, operators are still mostly in the first boundary case (the lower-left corner
in the graph on the next page). But they are moving toward near-time and real-
time analytics, and combining that with geolocation and with the ability to drill
deep into the network.
As operators move toward real time and closer to the subscriber, the volume of data that analytics tools have to crunch grows quickly, increasing the processing load. This has created an incentive for vendors to develop solutions and for operators to deploy them. The increased ability to manage big data at lower cost has made real-time and location-based analytics manageable and affordable.

Drivers to adoption: cost, services, usage and technology

But this is still not enough to justify adoption. Multiple drivers are involved in shifting mobile operators from their earlier caution about big-data analytics toward the realization that deep optimization of services and networks is not only possible, but in the long term necessary. Each operator is moving at a different pace and selecting a different approach, but the overall direction is consistent.

In the table on the next page, we list three groups of drivers that, from different directions, strengthen the case for the adoption of analytics:
Move to a subscriber-centric service model, based on QoE. This creates the need to understand what QoE is, how to quantify it, and how it relates to network KPIs. In addition, other performance metrics are being introduced to capture new components of the user experience that traditional KPIs do not track: for example, metrics like time to content, stalling rate and duration, or frame rate specifically capture the video experience. (Other types of traffic and services have their own new specific metrics.) Analytics provides the framework to incorporate this new data and correlate it to the KPIs and other metrics currently used.
Manage traffic based on service and application. A key element in quantifying the inherently subjective QoE is to analyze network and service
performance at the service and application level, to understand what the subscriber satisfaction level is for each. This is part of the shift to a subscriber-
centric model: subscribers directly care about how well apps work on their devices, and they care more for some applications than for others.
Understanding this enables operators to prioritize traffic management accordingly.
Improve efficiency to retain profit margins. Usage continues to grow fast, but revenue growth lags, so increasing network resource utilization (which is well below capacity) through optimization has become a top priority for operators. Analytics can help, first by providing a more granular understanding of inefficiencies in resource utilization, and then by identifying ways to improve it.
Improve performance and QoE without increasing costs. Related to the need to improve efficiency is the need to avoid an increase in deployment and operational costs as operators enhance performance and QoE. Analytics can strengthen the ability to compare the effectiveness of different infrastructure upgrades (i.e., their impact on performance and QoE) so operators can select the upgrades that are most cost effective.
Keep churn low. Although churn is, as always, a top-of-mind worry for all operators, we still do not fully understand what causes subscribers to move to a
different operator and how to prevent that. Analytics can give operators actionable insight into the causes, and can flag subscribers who are at risk of
defecting.
Expand revenues through new revenue streams. Revenues from subscriber service have flattened in many markets, and IoT is the best opportunity to
unlock a new revenue stream. To enable it, however, operators need to manage the coexistence of IoT and the existing subscriber services, in a way that
keeps users satisfied while also using network resources efficiently. Analytics can help to manage the traffic from subscribers and from IoT applications so
neither suffers.
Differentiated traffic, devices, users. The heterogeneity in traffic types, wireless devices and users keeps increasing as we rely more on the wireless infrastructure, and this trend is accelerating with the growth of IoT. As we have noted before, this complexity is an enabler for analytics, but it is also a driver for adoption, because analytics helps operators manage traffic types, devices and users differently, depending on each one's requirements and the operator's strategy.
Traffic growth outpacing traffic capacity. While operators strive to increase capacity in a cost-effective way, traffic continues to grow faster than capacity.
One way to address this is to intensify resource utilization. This increases the volume of traffic transported by a network without having to increase
capacity. At the same time, analytics can help operators manage traffic more effectively in real time, taking into account network load. This is especially
useful to reduce the incidence and impact of congestion.
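The first driver above mentions video-experience metrics such as time to content and stalling. A short sketch of deriving them from a hypothetical player event log; the event names, timestamps and output fields are assumptions for the example.

```python
# Illustrative player event log (event name, timestamp in seconds).
events = [
    ("request", 0.0), ("first_frame", 1.8),
    ("stall_start", 30.0), ("stall_end", 32.5),
    ("stall_start", 80.0), ("stall_end", 80.9),
    ("session_end", 120.0),
]

def video_qoe(events):
    """Derive video-experience metrics (time to content, stall count and
    duration) from raw playback events."""
    times = {}
    stalls, stall_time, start = 0, 0.0, None
    for name, t in events:
        if name in ("request", "first_frame", "session_end"):
            times[name] = t
        elif name == "stall_start":
            start = t
        elif name == "stall_end":
            stalls += 1
            stall_time += t - start
    return {
        "time_to_content_s": times["first_frame"] - times["request"],
        "stall_count": stalls,
        "stall_duration_s": round(stall_time, 1),
        "stall_ratio": round(stall_time / times["session_end"], 3),
    }

metrics = video_qoe(events)
```

Metrics like these can then be correlated with RAN KPIs (load, throughput, handover rate) to quantify how network conditions translate into the subscriber's video experience.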
Technology drivers
Virtualization. Virtualized and, even more so, hybrid networks need a robust analytics platform to enable the orchestrator to allocate network resources effectively. Here there is also a significant potential role for ML and AI as the ecosystem continues to learn how to operate in a virtualized environment. Conversely, virtualization gives analytics a new direction in which to showcase its value. In a non-virtualized network, core resources are assigned to specific hardware, so the scope for optimization, and hence analytics, is limited. Virtualization transforms the wireless core into a dynamic environment, which has to be optimized in real time to extract the benefits of virtualization.
Edge computing. As wireless networks start to embrace distributed architectures, operators need to balance which functions should be centralized and
which should be pushed to the edge and decide where in the edge those should be located. The more detailed understanding of network and service
usage that new analytics tools make possible may help operators chart their path toward edge computing.
Network slicing. Effective analytics is fundamental to the successful implementation and use of network slicing. Operators have flexibility in how to slice traffic (how many slices, and how to split traffic across them), and analytics can help them figure out the most effective way to do so. The decisions depend on the type of traffic on their network, and this varies by location and time.
Policy and subscriber management. Advanced policy and subscriber management requires analytics insight into subscriber behavior in order to resolve customer support calls, reduce churn, and upsell and customize services. Being able to drill down for a detailed snapshot of the subscriber experience in real time gives operators the opportunity to respond more efficiently to subscribers' calls, or to preempt them by contacting subscribers when the operator notices QoE issues.
5G. 5G will be the culmination of a process of integrating multiple air interfaces and network layers, and of increasing network complexity to
accommodate a wider range of use cases. As we move toward this target, and as operators and vendors continue to refine it, analytics will grow in
prominence and maturity.
Challenges and benefits: getting over the cultural shift

Analytics offers substantial benefits to operators, but it is also challenging. Learning to manage and leverage massive data sets can be a daunting task, and applying the insights from analytics in commercial networks can be risky initially.

The main concerns for operators stem from the difficulty of the tasks that analytics tackles and the amount of effort required to manage a big data platform. In addition, operators have to either train their employees to use analytics or hire new employees to do it, but data specialists are in great demand these days, and difficult to find.

But the greatest challenge is likely to come from the cultural shift that analytics requires within the organization. The combination of real-time operations and automation within an expanded analytics framework causes a loss of direct control over the network: the type of control that operators still have by manually optimizing the network. Giving up that level of control is necessary, because the complexity of networks makes automation unavoidable.

Of course, operators are aware of this, but the cultural and skill shifts that analytics requires are still difficult to introduce. The transition will need commitment from top management, and it will take time to be absorbed. Eventually the transition has to be completed; the open question is how long the process will take and how much internal resistance operators will face.

Despite all that, operators' commitment to analytics has grown swiftly over the past few years, because the benefits outweigh the challenges.

First among the benefits are the lower costs and new revenue streams; we will go over these in the following section on the business case.
In addition, analytics can enable operators to improve their support for existing
services, the creation of new services, and the customization of service offerings.
Analytics can give operators the information they need to optimize QoE for
specific services and applications. The combination of analytics and network
slicing will push this capability even further.
At the same time, a better understanding of what subscribers do individually and within market segments will help operators define new services and applications and estimate their attractiveness. It will also enable operators to offer plans that are better suited to a subscriber's needs, or services that the subscriber may be interested in. Operators are already doing this today, but analytics will provide richer insight about how best to engage the subscriber.

As we mentioned in the previous section, advanced policy and subscriber management drive the adoption of analytics, but analytics also improves subscriber management and the implementation of policy. Once a subscriber calls in, the service representative will immediately get all the information she needs to diagnose the issue and suggest a solution.

[Sidebar. Challenges: hard work; too much data to process; not enough skilled people; less control over the network; a difficult cultural shift. Benefits: …]
The ingredients for a healthy business case

Analytics can deliver the network and services efficiency that operators sorely need now. A substantial investment is required, and a good part of it will go to customizing and integrating the analytics solution within the operator's organization. Nevertheless, the business case is attractive, because the financial benefits extend to the entire network.

Higher resource utilization. One way to increase resource utilization is to manage traffic so as to minimize traffic variability (i.e., raising average throughput by filling in the valleys and reducing variance). Another possibility is to prioritize time-sensitive or higher-value traffic, while delaying traffic where a short delay has no significant impact on the subscriber experience (e.g., app updates, large content downloads, background activity). Yet another way to increase resource utilization is to encourage subscribers to shift some of their activities to off-peak hours (e.g., by exempting some off-peak access from the monthly allowances).
Lower per-valuable-bit cost. Some bits are more valuable than others.
Increasing the utilization of network resources lowers the per-bit cost of a
deployed network, but even more important, it lowers the cost of the most
valuable bits.
For instance, increasing traffic during off-peak times or at lightly used locations,
but leaving it unchanged in high-traffic areas, lowers the cost per bit, but the
financial advantages of that reduction are small. With analytics, operators can
change the traffic composition and distribution in hot zones prone to
congestion.
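The valley-filling idea can be sketched as a toy scheduler that defers delay-tolerant traffic out of congested intervals. The capacity and demand figures are invented, and real traffic management is a policy-driven, per-flow process rather than a batch loop.

```python
# Per-interval demand: (interval, urgent_units, deferrable_units).
# Urgent traffic is always carried; deferrable traffic (app updates,
# background sync) can wait for spare capacity.
capacity = 10  # units per interval (assumed)
demand = [(0, 9, 5), (1, 8, 3), (2, 4, 0), (3, 3, 0)]

def schedule(demand, capacity):
    """Carry urgent traffic immediately; fit deferrable traffic into
    whatever room is left, carrying the rest over to later intervals."""
    carried, deferred = [], 0
    for _, urgent, deferrable in demand:
        pending = deferrable + deferred
        room = capacity - urgent
        sent = min(pending, max(room, 0))
        deferred = pending - sent
        carried.append(urgent + sent)
    return carried, deferred

carried, leftover = schedule(demand, capacity)
# Peaks stay at capacity, valleys absorb the deferred load, nothing is dropped.
```

The effect is exactly the one described above: average utilization rises (the valleys fill in) without the peaks ever exceeding capacity, so no new capacity has to be deployed.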
For instance, the operator may address a performance issue that could affect QoE before the subscriber notices or picks up the phone to call in. When this is not possible, the operator may alert subscribers about service limitations, and offer alternatives or compensation. This can keep subscribers satisfied and reduce the number of calls.

Advertisers may use this information to select the location and type of ads they serve. Retailers may decide where to open the next store. Public agencies may want to understand or predict people's behavior during events that are unplanned or that have an unpredictable impact on traffic or public areas. These are only a few examples of a potentially large market that mobile operators can
what were really focused on: being able to If we want, for example, to take all the data thats
performance provide the right data at the right time. traversing the network, and we want to be able to
correlate it to the actual end user, we will also
A conversation with Robert Laliberte, So were taking that data and correlating it to the need to pull in information from CRM and
VP Marketing, Empirix end-to-end call, for example, so customer support associate it with a user.
will know this customers having a problem at this
point. The key, for us, is working with the service
How can mobile operators deal with all the data provider to understand what are the critical KBOs
they have and optimize their networks, rather We are able to understand the traffic, so the and then building out the appropriate KPIs, KQIs,
than be overwhelmed? How can they get the data service provider can pass along location-based so they get the information that they want to see.
they need, at the right temporal resolution services information, or can find a real-time alert Of course, they also need to be able to drill down
(milliseconds instead of days), and at the right to a network problem thats causing a problem to an individual subscriber level, not just macro
depth (from the end-to-end network, down to the with the quality of experience for an individual level data like this cell tower is out.
individual subscriber)? subscriber.
In this day and age, we really need to be able to
Robert Laliberte, VP of Marketing at Empirix, Thats what were trying to do: take vast amounts instantly pinpoint which subscribers are impacted
shared how Empirix correlates network data to of data and roll that up into the information, by a network challenge. Or if an individual
multiple use cases, both internal to the operator knowledge and wisdom that service providers subscriber calls in to customer care, we need to
and for third parties. need to operate their environments more know whats going on with their account
effectively and efficiently. immediately.
Monica Paolini: Robert, can you tell us what you
do in this area at Empirix? Monica: Operators have always had access to this The biggest shift weve seen from the operators is
data, but theyre only now learning to use it. What much more focus on the customer, many of them
Robert Laliberte: Empirix is a leading provider of has changed in the past few years? refer to it as and more customer centricity; its
customer experience assurance solutions. We whats driven a lot of this activity and the need to
provide all of the data collection probes that go Robert: In the past, there were separate voice and be able to correlate all that data into a single, easy-
out throughout a service providers network, data systems. You had data coming in, but it got to-use interface.
collect all that voice and data traffic, bring that in, stuck in silos, and there was a lot of swivel-chair
correlate it, enrich it, and pass that on for either management: How do we correlate the Monica: Thats challenging, because you need
real-time dashboarding or reporting, ad hoc information between the two disparate systems? granularity of the data about the individual
Monica: If we look forward over the next few years to 5G and more virtualization, how is that …

Robert: For us, what that would mean is that we're going to be deploying a hybrid environment of both physical probes and virtual probes.

NFV is now getting into what Gartner refers to as the trough of disillusionment in its hype cycle. At the beginning everyone loves the idea, they love the concept. They start using it, and they find out there's maybe some limitations, or there is something causing a problem.

This is where it gets really fun. This is where we get to go out and test these environments and push the limits, and understand what works, what doesn't, and what needs to be fixed or changed.

We're entering a phase of rapid development right now, as more organizations and more service providers start deploying these pilot environments and finding out where the virtual environments … All those bugs need to be worked out. There's no doubt they will. The enterprise has already figured this out. They've built fully virtualized environments. It's just a matter of time for the service providers to be able to pull this together, do the testing that they need to do, and then roll forward their solutions.

The smaller service providers will probably roll this out faster, because they have a less complex environment. It'll be simpler for them to deploy and control. As you get to larger operators, it's going to take a little bit longer, just because the environment is more complex.

Monica: In your pilots, what is it you hear from operators? They are used to doing manual, small changes, and having limited data. The more they open up the gate to use more data, the more automation they use, the less control they have. How are mobile operators coping with it, not just …

… We view this as really critical to the development of fully virtualized environments, because there is … It takes time to convert from a manual to a fully automatic mode, but it gives them the sense of control they need as they're learning to trust the automation software.

This is where, as I said, it gets exciting. There's a lot of vendor development going on when things work or maybe things don't work, and they need to change and modify their product.

Again, it's going to be a progression. It's going to take time. People will test the waters. They'll do it in pilots first before they roll it into production. We're still looking at years before all this gets fully adopted, but like I said, this is the exciting time now. This is where a lot of the development is going to happen, a lot of the real testing is going to take place, and finding out what works well.

Certainly, with virtualization at the edge, organizations and service providers don't have to …
(Figure: Subscriber / Service / Network)
That was just one example, which can be extended to other scenarios, as well. MEC also has useful applications in network management within our wireless networks, if you provide the right platform and APIs and expose them to developers.

People do come up with applications that are meaningful and useful to the management of data. It's not just data for horizontal markets. A lot of times, there's a vertical focus to data, for a specific and targeted type of enterprise.

Monica: Let's get to the vertical distinction there, because that's really a huge part of analytics and a huge opportunity for optimizing the networks. And let's look at the horizontal level, too: the centralized versus distributed.

The obvious goal is to optimize the network end to end, and provide the best quality of experience to the end user. But where is the best place to act within the end-to-end network to ensure that? Where do we collect the data? Where do we act from an analytics point of view?

Caroline: We see this as a hybrid model. There are reasons to do centralized analytics, because the cloud has a large amount of compute and storage. And, when it's centralized, the cost does go down. But distributed analytics architectures, like fog computing and Multi-access Edge Computing, have their place as well, because they help extract all the value from data at the IoT endpoint device. It's also a way of taking the cloud-based learnings to the edge, distributed from the data center.

A distributed architecture allows you to spread the data analytics workload over multiple nodes, in all classes of servers, instead of asking one single node to tackle a very big problem. Also, remember, this type of algorithm runs across many of the nodes; it forms a cluster of the data.

In addition, many times, the data is very meaningful locally, whether it is in one node or in several adjacent nodes. We've been talking with some of the machine learning innovators out there about the enablement of this kind of data, and they are experimenting with it.

When discussing distributed architecture, we think it's important to note that some of the data should go to the cloud. Centralized learning technology belongs in the cloud. But there is a rightful place for distributed analytics as well, to get a faster response and faster insights for things like road hazard situations with autonomous-driving cars.

In a driving scenario, you need very low latency and an instantaneous, localized response. This can be done locally. But the big learnings are also needed, and they should be performed in the cloud. Again, we see this as a hybrid model; that's the path we are pursuing.
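The hybrid model described here, fast local decisions at the edge with heavier learning deferred to the cloud, can be sketched as below. The event types, threshold set and class names are illustrative assumptions for the sake of the example, not any vendor's implementation.

```python
# Hypothetical sketch of hybrid edge/cloud analytics: the edge node
# reacts immediately to latency-critical events (e.g., a road hazard),
# while batching everything for centralized learning in the cloud.
CRITICAL = {"road_hazard"}          # events needing an instant, local response

class EdgeNode:
    def __init__(self):
        self.batch = []             # data destined for cloud-side learning

    def handle(self, event):
        self.batch.append(event)    # everything still feeds the big learnings
        if event["type"] in CRITICAL:
            return "act_locally"    # low-latency, localized response
        return "defer_to_cloud"

    def flush_to_cloud(self):
        """Ship the accumulated batch to centralized training; return what was sent."""
        sent, self.batch = self.batch, []
        return sent

node = EdgeNode()
print(node.handle({"type": "road_hazard"}))     # act_locally
print(node.handle({"type": "kpi_sample"}))      # defer_to_cloud
print(len(node.flush_to_cloud()))               # 2
```

The design choice mirrors the interview: the edge never waits for the cloud to respond to a critical event, but it also never withholds data from the centralized learning that only the cloud can do.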
Learning to share. CBRS in the 3.5 GHz band changes how we use spectrum
Power at the edge. Processing and storage move from the central core to the network edge
Improving latency and capacity in transport for C-RAN and 5G. Trends in backhaul, fronthaul, xhaul and mmW
Massively densified networks. Why we need them and how we can build them
Voice comes to the fore, again. VoLTE and Wi-Fi Calling redefine voice
Getting the best QoE: Trends in traffic management and mobile core optimization
The smart RAN. Trends in the optimization of spectrum and network resource utilization
© 2017 Senza Fili Consulting, LLC. All rights reserved. The views and statements expressed in this document are those of Senza Fili Consulting LLC, and they should not be inferred to reflect the position of the
report sponsors, or other parties participating in the interviews. No selection of this material can be copied, photocopied, duplicated in any form or by any means, or redistributed without express written
permission from Senza Fili Consulting. While the report is based upon information that we consider accurate and reliable, Senza Fili Consulting makes no warranty, express or implied, as to the accuracy of the
information in this document. Senza Fili Consulting assumes no liability for any damage or loss arising from reliance on this information. Names of companies and products here mentioned may be the trademarks
of their respective owners. Cover photo by Senza Fili, Heceta Lighthouse, Oregon, USA.