Today’s businesses have big data problems. As IT teams continue to find new ways to incorporate cutting-edge internet of things (IoT) devices within their infrastructures, businesses are left with troves of important data that must be efficiently collected, connected and analyzed. The problem is that many enterprises continue to support these next-generation devices with traditional processing power – and transferring data to a centralized public cloud can be costly, unreliable and slow.
So, the race to “the edge” is on. To boost the efficiency of data processing, IT leaders are starting to push computing power closer to the devices themselves. This allows the data to travel the shortest distance possible, reducing latency (delays). For businesses that heavily rely on real-time analysis of IoT-generated data or real-time responses, edge computing is a more efficient and reliable option for moving this information.
However, every IoT application is different, and the edge is not a silver bullet for success. To optimize edge computing and develop a strong data strategy for new IoT implementations, businesses must first carefully define the role of the edge for their specific needs, and then determine the right technologies that will ensure success.
Think strategically: Identify distinct IoT application needs
The first step for any business looking to equip IoT with edge computing power is to understand the needs of each application. While this may seem like an obvious starting point, many business and IT leaders fail to fully understand their unique IoT applications and their place in the wider tech ecosystem. IT teams need to pinpoint where data is being generated, what users expect from this data, and the critical success factors that allow the device to fulfill the promise of the application.
For example, a surveillance company may be collecting video content of a building or site, and this information may only need to be streamed to an office within that same building. This requires a very different strategy than an industrial inspection company that needs to collect and analyze insight from hundreds of sensors located across the country or the world. In forming a smart IT strategy that makes full use of big data analysis, answering these questions will inform the more tactical aspects of improving IoT implementations.
Assess global workflow needs and optimize for local ingest
A big problem organizations face with IoT applications is the sheer distance data must travel from the device to the data consumer – and every stop it must make in between – before it can trigger a timely alert. Oftentimes, there are latencies associated with long-distance data transfers, and this can have a negative impact on applications that require real-time data for fast decision making.
Take, for example, our industrial inspection use case. If a smart sensor is monitoring the condition of an underwater pipeline and catches an anomaly, it needs to quickly alert a human user or team, who may be located hundreds, even thousands of miles away. To minimize the latency of that data transfer – and allow the necessary parties to receive and address the issue as quickly as possible – IT teams must look to reduce the physical distance between the source of data and the network edge.
By connecting an IoT system to a strong private network with strategically located Points-of-Presence (PoPs) across the world, the company can provide touchpoints along the path of the data. The closer the server is to the data source, the faster the data or content will be delivered. Ultimately, this dramatically reduces the latencies associated with sending information to distant data centers, and helps ensure data is processed and analyzed in real time.
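The physics behind this claim is worth making concrete. The following back-of-the-envelope sketch (the distances and the fiber signal speed are illustrative assumptions, not measurements from any particular network) shows how much round-trip propagation delay distance alone imposes, before any congestion or processing overhead is counted:

```python
# Rough sketch: why physical distance between device and server matters.
# A signal in optical fiber travels at roughly 200,000 km/s (about 2/3 the
# speed of light in a vacuum), so every kilometer adds unavoidable delay
# on top of queuing, processing, and retransmission overhead.

FIBER_SPEED_KM_PER_S = 200_000  # approximate signal speed in fiber

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip propagation delay in milliseconds."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

# A sensor 5,000 km from a centralized cloud region pays ~50 ms of
# round-trip propagation delay; a PoP 50 km away pays ~0.5 ms.
print(f"5,000 km away: {round_trip_ms(5_000):.1f} ms round trip")
print(f"   50 km away: {round_trip_ms(50):.2f} ms round trip")
```

Real-world latency is higher than this floor, but the ratio holds: moving the first hop from a distant region to a nearby PoP removes the bulk of the distance-driven delay.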
This enhanced edge computing benefit isn’t only useful for global use cases. Even if data doesn’t need to travel thousands of miles from device to end-user, the sensor is still generating loads of data. Therefore, supporting IoT devices with a strong infrastructure that has local capacity to efficiently ingest all this information and seamlessly absorb traffic spikes is critical.
Edge compute can also be used to perform the first level of filtering and/or aggregation of raw data from a site or region. This “pre-processed” data, if required, is then forwarded to the centralized cloud, which enables fast and full analysis of multiple data sets.
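As a minimal sketch of what that first level of filtering and aggregation might look like, the snippet below drops readings that fail a basic validity check and condenses the rest into a compact summary for forwarding to the centralized cloud. The sensor values, the valid range, and the summary shape are hypothetical assumptions for illustration, not any vendor's API:

```python
# Hypothetical first-level edge pre-processing: filter obviously invalid
# raw readings at the site, then forward only a compact aggregate upstream.
from statistics import mean

VALID_RANGE = (0.0, 150.0)  # assumed plausible range for this sensor type

def preprocess(readings: list[float]) -> dict:
    """Filter out-of-range readings and aggregate the remainder."""
    valid = [r for r in readings if VALID_RANGE[0] <= r <= VALID_RANGE[1]]
    return {
        "count": len(valid),
        "dropped": len(readings) - len(valid),
        "min": min(valid),
        "max": max(valid),
        "mean": round(mean(valid), 2),
    }

raw = [101.2, 99.8, -3.0, 102.5, 980.0, 100.1]  # -3.0 and 980.0 are noise
summary = preprocess(raw)
print(summary)  # one small payload forwarded instead of every raw sample
```

The design point is that the cloud still receives everything it needs for cross-site analysis, while the long-haul link carries a fraction of the raw volume.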
Prioritize security standards for data transfers
In defining the edge and bolstering the network to support an IoT application, security must remain a top priority. Where edge computing can fail IoT from a security perspective is in relying on the public internet. Critical IoT workflows that deliver sensitive information require secure connections; using the public internet leaves data open to interception and makes IoT systems susceptible to harmful attacks. This is even more important for enterprises that manage critical assets such as insights from smart city sensors, video from surveillance cameras, or traffic carried over IPsec tunnels.
To keep these devices running efficiently and securely, IT teams must ensure they are supported by a strong network that keeps information protected. Leveraging a private, dense network to move insight collected from IoT sensors to the edge will not only improve performance (by bypassing the congestion and bottlenecks of the public internet), but it also isolates data from cybersecurity threats such as interception or malicious distributed denial-of-service (DDoS) attacks.
IoT will impact just about every industry in the coming years – but driving measurable efficiency improvements will require IT leaders to define what the edge means for their business. By taking the steps to address the unique needs of their applications and identify performance and security requirements, enterprises can ensure their edge compute strategy is ready for anything in the innovation age.
About the author
Neil Glazebrook joined Limelight in 2017 and drives the company’s edge compute product strategy to help enterprises increase efficiency, improve customer experiences and drive innovation through global connectivity.