It’s Time to Redefine Real-Time for IoT

The Internet of Things (IoT), for all its promise, remains a technology just shy of realizing its potential. Smart buildings can turn on lights and adjust HVAC systems based on how many people are in a room. They cannot, however, sense fires, activate targeted suppression systems, understand which occupants are in the most danger, and intelligently guide people to safety. Smart retail boasts self-service kiosks, but falls short of offering personalized recommendations as a shopper walks through a store. Smart manufacturing robots can pick up items from an assembly line, but they cannot react in real-time to objects dropped on the floor or to other unexpected situations, because the unknown is precisely what separates real life from simulation.


The reason for these shortcomings is time. IoT was built on the consumer cloud, which runs at an approximation of human reflexes that is inadequate for machines. The version of real-time needed for grocery self-checkout is far slower than the real-time required for facial recognition software to identify a criminal on entry and alert security to the situation. And as the recent Uber disaster in Arizona proved, the real-time of ordering a rideshare is about 1,000 times slower than the real-time analysis required for a car to drive itself and avoid danger.



Cloud-native edge computing is the final bridge between the promise of IoT and its realization. It combines the capital-expenditure, security, and agile-development benefits of the consumer cloud with the machine-compatible speed and scalability of edge compute. As the world becomes more cyber-physical and connected, cloud-native edge computing will shrink the concept of real-time, from an approximation of human reflexes to the microseconds robots require.


Digital Disruption, Phase Two


The way businesses navigate and interact with the physical world is changing at a rapid pace. This is evidenced by self-driving cars, commercial delivery drones and other bleeding-edge, if much over-hyped, technologies making headway, bringing life-changing capabilities, the crux of many a science fiction plot, to fruition.


In large part, these changes are due to the phenomenon known as digital transformation. Businesses are integrating new technologies, including AI and machine learning, into their day-to-day operations to automate laborious processes, improve efficiency, enhance value and optimize their bottom lines. Shifting strategic investment out of dated assets and into emerging technologies is a must for any business that wants to stay competitive.


Updating assets, however, is only Phase One of digital disruption. The next phase involves harnessing intelligence from the billions of IoT sensors, mobile devices and connected machines to shrink time to machine standards, diversify what IoT can do, and enable automation to fulfill its promise.


Human Real-Time is Not Machine Real-Time


Let’s revisit the example of a self-driving car. A self-driving car is essentially a rolling data center: it can contain 200 or more networked and virtualized systems that take in sensor data, process it continuously with microsecond timing, then decide what the car should do, swerving around obstacles and navigating its environment safely. The sensor data can be pushed up to the cloud for deep analysis and learning to improve the “driver” app (or “model”), and the app should then be updated frequently to continually improve decision making. But the actual decision is made by the driver app in the car.
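This split between a local, real-time decision loop and an asynchronous cloud-learning path can be sketched in a few lines. Everything here is a hypothetical stand-in: `decide()` takes the place of the onboard model, and the uploader thread appends to a list where a real system would ship frames over a network.

```python
import queue
import threading

def decide(sensor_frame):
    """Local 'driver' model: pick an action from one sensor frame.
    A hypothetical stand-in for the onboard neural network."""
    return "brake" if sensor_frame.get("obstacle_m", 100.0) < 10.0 else "cruise"

def telemetry_uploader(q, uploaded):
    """Background thread: ship raw frames to the cloud for training.
    Appending to a list stands in for an HTTPS/MQTT upload."""
    while True:
        frame = q.get()
        if frame is None:        # sentinel: shut down
            break
        uploaded.append(frame)

def control_loop(frames):
    """The real-time path: decide locally, queue the upload, never block."""
    q, uploaded = queue.Queue(), []
    t = threading.Thread(target=telemetry_uploader, args=(q, uploaded))
    t.start()
    actions = []
    for frame in frames:
        actions.append(decide(frame))  # microsecond-scale local decision
        q.put(frame)                   # cloud learning stays off the hot path
    q.put(None)
    t.join()
    return actions, uploaded

actions, uploaded = control_loop([{"obstacle_m": 50.0}, {"obstacle_m": 5.0}])
print(actions)  # -> ['cruise', 'brake']
```

The point of the design is that the upload queue decouples the two time scales: the decision path never waits on the network, and the cloud still receives every frame for retraining.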


The industrial setting offers a similarly time-sensitive example. A robot working on a line must continuously absorb and learn from sensor data about its immediate, changing environment to inform its actions; IoT sensors essentially become the robot’s eyes and ears. If a human accidentally steps into the robotic arm’s pathway, the local system must understand the situation and change course, i.e., the robot must stop before accidentally taking the human’s head off. This version of real-time means interpreting, within microseconds, data generated by sensors at rates that can reach 40-50 megabits per second.
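The safety check itself reduces to a very small decision, which is why it belongs on the local system rather than in a distant cloud. A minimal sketch, with illustrative numbers (the 0.5 m keep-out distance and the proximity readings are assumptions, not figures from a real controller):

```python
def safety_monitor(proximity_m):
    """Scan one cycle of proximity readings (in metres) from the arm's
    sensors. Return ('stop', index) at the first reading inside the
    keep-out zone, or ('run', None) if the pathway stayed clear."""
    KEEP_OUT_M = 0.5  # assumed safety radius around the arm's path
    for i, distance in enumerate(proximity_m):
        if distance < KEEP_OUT_M:  # someone stepped into the pathway
            return "stop", i
    return "run", None

print(safety_monitor([2.0, 1.4, 0.3, 1.0]))  # -> ('stop', 2)
print(safety_monitor([2.0, 1.4, 1.1, 1.0]))  # -> ('run', None)
```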


Cloud computing and the IoT are changing our understanding of time significantly, moving it out of the scale of human tolerance and into microseconds. Data needs to move faster than human reflexes by a factor of 1,000. While the cloud-native datacenter supported the first phase of IoT, the digital transformation phase, only cloud-native edge computing can power the next.
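To put a number on that factor of 1,000: assuming a commonly cited ~200 ms figure for human reaction time (an assumption, not a number from this article), the machine-scale budget works out to a few hundred microseconds per decision.

```python
HUMAN_REFLEX_US = 200_000                    # ~200 ms reaction time, in microseconds
machine_budget_us = HUMAN_REFLEX_US // 1000  # 1,000x faster than human reflexes
print(machine_budget_us)                     # -> 200 (microseconds)
```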


The New Machine Reality


Between robot factory lines and fleets of self-driving freight trucks, pervasive computing is coming, and it involves complex, real-time analysis of an influx of signals at all times. In this emerging world, computers issue commands not just to other computers but to actual things. Connecting embedded systems to the consumer cloud is, ultimately, a waste of time. And redefining real-time from a human orientation to the reality of machines is the crucial first step on the path to successful IoT projects.