IoT 101: Latency, or the 50:1 Reason Experts Expect 5G To Drive IoT to New Heights

Latency is one of those things that in some ways doesn't really matter, until it does -- and then it really does.

For example, in Internet of Things (IoT) applications, latency is something engineers will look at when it comes to commands received by your thermostat or washing machine, but not to the point where 1 millisecond vs. 50 milliseconds is going to make a discernible difference. But an autonomous car? Real-time image processing for manufacturing errors on an assembly line? Robotic arms performing surgery on a live patient? In these use cases and others, latency is everything.

And that's one reason edge computing -- moving the processing as close to the data as possible (vs. sending it to the cloud for processing and back) -- has become a linchpin of IoT architecture. And it's become even more difficult, and even more important, with the introduction of artificial intelligence and machine learning capabilities into many IoT programs, which creates even larger programs that need to fit into smaller spaces on the edge.

Real-time processing is the ideal, and where possible, processing takes place within the SoC (system on a chip) itself or on the device. If that's not possible, getting as close to the device as possible helps with latency. This is why size and power so often come into play in IoT -- if it's a battery-powered device, for example, batteries can only power so much (and they can cause latency issues of their own, as devices are often programmed to power down and power back up to save energy when not in use, causing further delays).

And then you have size. There's a reason blockchain just isn't done on IoT devices (really, it isn't -- it's unusual enough that IBM actually filed a patent for a way to do blockchain with IoT and decentralized devices): The footprint of blockchain -- and really of any distributed ledger security -- is simply too large for most IoT applications. In fact, one of the current pushes in IoT security is to try, when possible, to keep everything on the device as a kind of assurance that the data is secure because it isn't being transported at all.

But, of course, not everything can be kept on the device -- not every program is small enough -- so some processing has to take place on the network (even if, perhaps, not all the way back to a centralized cloud or hub). And if it's not a wired device, that data has to be transmitted wirelessly. This is exactly where 4G vs. 5G comes in.

In its specifications for 5G, GSMA, the international association that governs the guidelines for 5G, set a latency target of 1 millisecond -- 50 times better than 4G's current 50 milliseconds.

So that's a 50-times improvement in latency. However, that's just the theoretical recommendation at this point -- it's unclear as yet whether 5G providers will be able to deliver it in the real world, especially at the launch of 5G. (Side note: For a good 4G vs. 5G speed comparison, go here.)
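To put those two numbers in perspective, here's a quick, purely illustrative calculation of how far a vehicle travels during a single network round trip at 4G-class vs. 5G-target latency. The 70 mph highway speed is an assumption for the sake of the example, not a figure from the GSMA guidelines.

```python
# Illustrative only: distance a vehicle covers while waiting out one
# network round trip at 4G-class (50 ms) vs. the 5G target (1 ms).
# The 70 mph speed is an assumption, not part of any 5G specification.
MPH_TO_METERS_PER_SEC = 0.44704

speed_mps = 70 * MPH_TO_METERS_PER_SEC  # ~31.3 m/s at highway speed

for label, latency_ms in (("4G (~50 ms)", 50), ("5G target (1 ms)", 1)):
    distance_m = speed_mps * (latency_ms / 1000)
    print(f"{label}: vehicle travels ~{distance_m:.2f} m before a response arrives")
```

At 4G-class latency the car covers roughly a meter and a half before it hears back; at the 5G target, about three centimeters -- the difference between reacting within a lane and reacting within a tire's width.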

But if the improvement does end up being that significant, reports like this one estimate that the improved latency -- particularly with "short-range" IoT devices -- will help drive the number of IoT devices in the U.S. from 694 million in 2020 to almost 6.3 billion in 2025, a compound annual growth rate of just over 55 percent.
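For the record, the report's figures do work out to that growth rate. Here's a quick back-of-the-envelope check, using only the device counts and years cited above:

```python
# Back-of-the-envelope check of the compound annual growth rate (CAGR)
# implied by the report's figures: 694 million U.S. IoT devices in 2020
# growing to roughly 6.3 billion in 2025.
start_devices = 694_000_000   # 2020 estimate
end_devices = 6_300_000_000   # 2025 estimate
years = 2025 - 2020

cagr = (end_devices / start_devices) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~55.5% -- "just over 55 percent"
```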

About the Author

Becky Nagel is the former editorial director and director of Web for 1105 Media's Converge 360 group, and she now serves as vice president of AI for the company, specializing in developing media, events and training for companies around AI and generative AI technology. She's the author of "ChatGPT Prompt 101 Guide for Business Users" and other popular AI resources with a real-world business perspective. She regularly speaks, writes and develops content around AI, generative AI and other business tech. Find her on X/Twitter @beckynagel.