Major technology vendor transforms customer experience with near real-time insights.

This client has a product line built on Reactive Architecture that allows them to provide enhanced customer service. Every piece of storage equipment they ship around the world carries IoT sensors: more than 20 billion sensors deployed in data centers across the globe send trillions of metrics each day, providing analytics on petabytes of telemetry data.

Their system applies Machine Learning algorithms to the sensor data accumulated over the years, driving predictive analytics that enable preventive maintenance so storage is never offline.

Moving to a near real-time user experience required an infrastructure that self-heals, scales massively, and is always available to process streaming workloads: a perfect example of why enterprises choose Reactive Architecture.

The Challenge

Increasing demand from customers and internal stakeholders to deliver insight in real time necessitated a major overhaul of their product architecture, requiring an infrastructure that self-heals, scales massively, and is always available to process streaming workloads, no matter what.

The legacy system was hindered by the time it took to run batch processes against very large, extremely complex data sets. To deliver value faster, they needed to evolve beyond their classic, batch-mode big data architecture.

The Solution

The client turned to YoppWorks for consulting and adopted the Akka Platform from Lightbend for the elasticity and resilient self-healing required to deliver big data at speed. The platform's integration of Apache Kafka, Apache Spark, Lagom, and a multitude of other technologies into a de facto fast data distribution made good business sense.
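As an illustration of what such a fast data pipeline can look like, the sketch below consumes sensor telemetry from Kafka into an Akka Streams flow via the Alpakka Kafka connector. The broker address, consumer group, and topic name are hypothetical placeholders, not details of the client's deployment.

    import akka.actor.ActorSystem
    import akka.kafka.scaladsl.Consumer
    import akka.kafka.{ConsumerSettings, Subscriptions}
    import akka.stream.scaladsl.Sink
    import org.apache.kafka.common.serialization.StringDeserializer

    object TelemetryIngest extends App {
      implicit val system: ActorSystem = ActorSystem("telemetry-ingest")

      // Hypothetical broker address and consumer group for the sensor feed
      val settings = ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
        .withBootstrapServers("kafka:9092")
        .withGroupId("telemetry-ingest")

      // Stream sensor records off Kafka; backpressure keeps the consumer
      // from outrunning slower downstream stages
      Consumer
        .plainSource(settings, Subscriptions.topics("sensor-metrics"))
        .runWith(Sink.foreach(record => println(s"${record.key}: ${record.value}")))
    }

In a real pipeline, the final Sink.foreach stage would be replaced by the streaming and analytics services that do the actual work.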

The Results

YoppWorks' use of the Akka Platform provided the client with microservices frameworks for processing continuous application logic; multiple streaming engines for handling trade-offs between data latency, volume, transformation, and integration; machine learning and deep learning tools for applying algorithms from the data science team; and intelligent management and monitoring tools for reducing the risk of running this always-on system in production.

Data services in a dynamic, streaming application must run forever and be able to scale horizontally across multiple nodes. This is a fundamental advantage of Akka Platform’s concurrency model, which is built around independent, self-contained processes that communicate asynchronously via messages. This means that they can be distributed and run across many machines.
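As a rough sketch of that model (not the client's actual code), the Akka Typed snippet below defines a self-contained actor that processes one sensor reading at a time and keeps its own state; the SensorReading type and the counting logic are assumed purely for illustration. Because actors interact only through asynchronous messages, the same behavior can be distributed across a cluster of machines without changes to the logic itself.

    import akka.actor.typed.{ActorSystem, Behavior}
    import akka.actor.typed.scaladsl.Behaviors

    // Hypothetical message type for a single sensor reading
    final case class SensorReading(sensorId: String, metric: String, value: Double)

    object SensorAggregator {
      // Each actor instance owns its state and handles messages one at a time
      def apply(count: Long = 0): Behavior[SensorReading] =
        Behaviors.receive { (context, reading) =>
          context.log.info(s"Reading ${reading.metric}=${reading.value} from ${reading.sensorId}")
          // State changes by returning a new behavior; no locks or shared memory
          SensorAggregator(count + 1)
        }
    }

    object Main extends App {
      // The ActorSystem hosts the actors; with Akka Cluster the same actor
      // can be sharded across many nodes without changing this code
      val system: ActorSystem[SensorReading] = ActorSystem(SensorAggregator(), "telemetry")
      system ! SensorReading("rack-42-drive-7", "temperature_c", 41.5)
    }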

In addition to scaling elastically, Akka Platform enables the client's streaming applications to recover from failures quickly. Failure is treated as a first-class concept: failures are normal, anticipated, and handled within the application, which enables self-healing.
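To make that concrete, the following sketch uses Akka Typed supervision to restart a hypothetical telemetry-parsing actor with exponential backoff when it encounters malformed input; the TelemetryParser name and the failure scenario are illustrative assumptions rather than the client's implementation.

    import akka.actor.typed.{Behavior, SupervisorStrategy}
    import akka.actor.typed.scaladsl.Behaviors
    import scala.concurrent.duration._

    object TelemetryParser {
      // Hypothetical worker that parses raw readings and may throw on bad input
      def apply(): Behavior[String] =
        Behaviors.receiveMessage { raw =>
          val value = raw.toDouble // throws NumberFormatException on malformed data
          // ... downstream processing of `value` would go here ...
          Behaviors.same
        }

      // Failure is expected and handled: instead of the bad message taking the
      // service down, the supervisor restarts the parser with exponential backoff
      def supervised(): Behavior[String] =
        Behaviors
          .supervise(apply())
          .onFailure[NumberFormatException](
            SupervisorStrategy.restartWithBackoff(
              minBackoff = 1.second, maxBackoff = 30.seconds, randomFactor = 0.2))
    }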

The approach is fast becoming a reference for the client's other business units looking to deliver similar real-time customer value.
