Deterministic Networking, Or The Divorce of Sensors and Algorithms

Ignacio Chechile
11 Jan 2022
5 min read

Imagine taking the bus to work like any other day, and you are about to pay with your RFID card as you usually do. Sure, nowadays most public transport companies work with apps (glad they do), but until not long ago, cards were the norm. So, you get on the bus, you present the card to the reader, and, to your surprise, the card is declined, signaled by an unfriendly red light and an annoying buzzer. You are sure you topped it up, so what’s the deal?

You try again, and the card is now approved. Weird. Anyway, you go grab a seat. No biggie, you think, can happen. But what has just happened, actually? Well, here is how the system works: when you present your card to the reader, the reader’s near field provides parasitic power to the chip inside your card through the antenna coil embedded in it, and the chip sends your card data back by modulating the load on the field in a way the reader can detect. Most likely (systems vary, though), your data then travels over a 4G/5G connection to a remote backend, where it is checked against a database and validated. No credit? Declined. Credit? Enjoy the journey.

The reason your card was unexpectedly declined is that something went wrong somewhere in the process described above. Maybe a checksum didn’t compute right. Maybe the mobile connection dropped while the bus was moving between cells or under a long bridge. Maybe the network suffered a sudden overload. All in all, the whole thing is just a best-effort system.
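To make that failure mode concrete, here is a minimal sketch of the round trip in Python. The endpoint and field names (fares.example.com, card_id, approved) are hypothetical, not from any real fare system; the point is that over a best-effort network, the only honest answer to a dropped connection or a timeout is “declined, try again”.

```python
# A hypothetical sketch of the reader-to-backend round trip described above.
# Every failure mode in the story -- dropped 4G connection, overloaded
# network, corrupted response -- surfaces here as an exception or a timeout,
# and all the reader can do is buzz red and ask you to try again.

import json
import urllib.request

VALIDATE_URL = "https://fares.example.com/validate"  # hypothetical backend

def validate_card(card_id: str, timeout_s: float = 2.0) -> bool:
    request = urllib.request.Request(
        VALIDATE_URL,
        data=json.dumps({"card_id": card_id}).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(request, timeout=timeout_s) as response:
            return json.load(response).get("approved", False)
    except OSError:   # timeout, dropped connection, unreachable cell tower...
        return False  # best effort failed: decline and let the rider retry
```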

Making an effort is not enough

Best-effort delivery describes a network service in which the network provides no guarantee that data is delivered, or that delivery meets any quality of service. In a best-effort network, all users obtain, well, a best-effort service. Performance characteristics such as delay and packet loss depend on the current traffic load and on the network hardware capacity. When the load increases, this can lead to packet loss, retransmissions, packet delay variation, further delay, or even timeouts and session disconnects.  
To use a familiar analogy: the postal service physically delivers letters using a best-effort approach. The delivery of a given letter or package is not scheduled in advance; no resources are preallocated in the post offices. The service will do the best it can to deliver, but delivery may be delayed if too many letters or packages suddenly arrive at a post office or sorting center. You can never be sure when a package will arrive at your home, only that it will most likely arrive when you’re not there.
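UDP over IP is the textbook example of a best-effort service, so a few lines of Python make the point concrete: the sender hands datagrams to the network and gets no say, and no news, about what happens next. The address used here is an arbitrary placeholder, not a real service.

```python
# A minimal sketch of what "best effort" means in practice: UDP makes no
# delivery promise at all. sendto() returns as soon as the datagram is handed
# to the network; if it is dropped along the way, nobody tells us.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(5):
    payload = f"reading {seq}".encode()
    sock.sendto(payload, ("127.0.0.1", 9999))  # fire and forget
    # No acknowledgement, no retransmission, no ordering, no latency bound.
    # Any guarantee beyond this must be built on top (as TCP does), at the
    # cost of retransmission delay -- the opposite of determinism.
sock.close()
```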

Now, picture that instead of paying for public transport or receiving a postcard from a friend living abroad, we are talking about life-support equipment. Would you connect a pacemaker to the internet, where every heartbeat would depend on a packet being sent somewhere remote and approved before an electrical pulse is delivered to the heart?  
The same goes for other mission-critical systems. Picture an airliner’s autopilot sending aircraft data through the internet so that a remote algorithm can decide what torque to exert on the control surfaces to keep it level. Or the accelerometer signal of a car’s airbag traveling through the Internet before the system decides whether it makes sense to protect a passenger from a crash about to happen at 100 kph.

The Internet Protocol offers a best-effort service for delivering datagrams between hosts: IP datagrams may be lost, arbitrarily delayed, corrupted, or duplicated. The Internet was designed to try “the best it can”. But is that enough?  

It was, for a long while. But with countless machines now connected to the network, exchanging information about industrial processes and critical infrastructure, network determinism is an increasingly pressing need.

The alternative? Deterministic networking

Deterministic networks provide guaranteed latency on a per-flow basis. The data traffic of each deterministic flow is delivered within a guaranteed latency bound and under a low delay-variation constraint. Deterministic networks aim for zero data loss due to congestion for all allocated deterministic flows, and they may reject or degrade new flows in order to maintain the guarantees of the flows already admitted. They support a wide range of applications, each possibly with different Quality of Service (QoS) requirements.  
Engineering deterministic services requires a different paradigm from engineering traditional packet-switched services. The latter exhibit loss, latency, and jitter with wide probability distributions; in traditional networks, achieving lower latency means discarding more packets, or heavy over-provisioning.
A core objective of DetNet is to enable the convergence of sensitive non-IP networks onto a common network infrastructure. This requires accurately emulating currently deployed mission-specific networks, which, for example, rely on point-to-point analog (e.g., 4-20 mA) and serial-digital cables or buses for highly reliable, synchronized, and jitter-free communications. While the latency of analog transmissions is low, legacy serial links are usually slow (on the order of kbps) compared to, say, Gigabit Ethernet, and some latency is usually acceptable. What is not acceptable is the introduction of excessive jitter, which may, for instance, affect the stability of control systems.
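To illustrate the admission-control idea above, here is a minimal, hypothetical sketch, not taken from any DetNet specification: a flow is admitted only if its bandwidth reservation fits on every link of its path, and is rejected otherwise, which is precisely what protects the latency guarantees of the flows already admitted.

```python
# A hypothetical sketch of per-flow admission control, the mechanism that lets
# a deterministic network keep its guarantees. All names and numbers here are
# illustrative.

from dataclasses import dataclass

@dataclass
class Link:
    capacity_bps: int      # total link capacity
    reserved_bps: int = 0  # bandwidth already promised to admitted flows

@dataclass
class Flow:
    name: str
    rate_bps: int          # bandwidth this flow asks to reserve

def admit(flow: Flow, path: list[Link]) -> bool:
    """Admit the flow only if every link on its path can reserve its rate.

    Rejecting a flow up front is what preserves the guarantees of the flows
    already admitted -- unlike best-effort, where everyone is accepted and
    everyone degrades together under load.
    """
    if any(link.reserved_bps + flow.rate_bps > link.capacity_bps for link in path):
        return False       # rejected: admitting it would break existing guarantees
    for link in path:
        link.reserved_bps += flow.rate_bps  # reservation held for the flow's lifetime
    return True

path = [Link(capacity_bps=1_000_000), Link(capacity_bps=1_000_000)]
print(admit(Flow("sensor-feed", 600_000), path))  # True: it fits
print(admit(Flow("video-feed", 600_000), path))   # False: would oversubscribe
```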

Ongoing work

Several emerging standards define the technology building blocks needed to deliver reliable and predictable services over deterministic networks. IEEE 802.1 is working to support deterministic Ethernet services in its Time-Sensitive Networking (TSN) Task Group. 3GPP is working to deliver deterministic 5G in support of Ultra-Reliable Low-Latency Communication (URLLC) usage scenarios. And the IETF is working to deliver deterministic services over IP routers and wireless networks in its Deterministic Networking (DetNet) and Reliable and Available Wireless (RAW) Working Groups, respectively. The standards being developed in the IEEE, 3GPP, and IETF aim to be complementary, so that network operators can combine them into end-to-end converged networks supporting both traditional and deterministic services. The ability to guarantee traffic delivery according to each flow’s particular Quality of Service requirements is critical for all deterministic networking technologies.

How can space help?

Deterministic networks are going to reshape the rules for mission-critical and cyber-physical systems, and space systems will play a fundamental role in providing additional and alternative paths for deterministic flows. With networks going deterministic, we could finally decouple sensing and control, two activities that have historically sat next to each other in every machine and mission-critical system out there. Picture a future where an autopilot algorithm could live in the cloud, with “the cloud” being a collective of ground- and space-based servers. This would not only save mass and power in autonomous vehicles, but also reduce complexity. Sensors and algorithms have been close partners for too long because of our “best-effort” networks. Sensors must stay aboard the machines, for they must “hear” and “see” the environment. But the algorithms need not be there. They could be anywhere, as long as the networks can ensure the data will arrive at the right time, every time.

Authors

Ignacio Chechile
Chief Technology Officer
ReOrbit