Author: adk

  • Sensor to Cloud is only half of the way

    Machine-oriented solutions form the bulk of the interesting new areas that people talk about when discussing M2M and the IoT. Unfortunately, neither of these terms is accurate. These are machines sending readings to a central database or computer that can take action – such as sending an electricity bill or scheduling a garbage truck. This is not really “machine to machine” but rather sensor to database. This may seem pedantic, but there is an important point here: there is little reason for one “machine” – for example, a device such as a smart meter – to talk directly to another. Equally, it is not really an Internet of Things. The Internet implies an interconnected network where one computer can access information on another computer. Instead, typically, only the “owner” of the machine, such as the electricity company, will be able to access its readings and communicate with the device. Consumers who wish to read this data will then retrieve it from a cloud-based server, not directly from the machine. It is more like the “Intranet of Things”, where connectivity is restricted to self-contained groups, than the “Internet”. Again, this may seem pedantic, but it has important implications for installation, security and network architecture.

    Most applications envisaged are self-contained in that the data generated is used just for that application. So, for example, in smart metering the meters report to electricity companies, and in parking the car-park sensors talk to parking applications.

    While the original version of Metcalfe’s law stated that the value of a telecommunications network is proportional to the square of the number of compatible communicating devices (n²), it did not specify what protocol the network should run on.

    Today, many people believe that, for the IoT to be a reality, all devices must run the IP protocol so that they can form an interconnected network where all devices can share and access information. Research shows that the IP protocol may not be the best fit for the Internet of Things. The question is therefore: how do we make a network for PCs, servers and constrained battery-driven devices that allows the IoT vision to become a reality?

    The requirements

    A successful IoT network architecture has the following requirements:

    1. Devices in an IoT network must be able to access and share content in near real-time
    2. Devices must be able to send content to groups of devices
    3. Content must be secured so that only authorized parties can access it
    4. Offline devices must be able to pull content from the network when they wake up
    5. Content must be accessible across different types of physical communication media, whether
      wired or wireless, using the same naming scheme
    6. The communication standard must be open and royalty-free for anyone to implement and use
    7. The transmission rate must be fast enough to support firmware updates, including over sub-GHz links

    Managing the devices, the data and the data security is important for organizations deploying IoT
    networks. A successful IoT networking protocol therefore needs to implement these features.
    The requirements here are:

    1. Devices and security must be centrally managed
    2. It must be possible to centrally manage which devices send content and which devices receive it

    The rapid growth in Internet of Things (IoT) deployment has posed unprecedented challenges to the underlying network design. Tomorrow’s global-scale IoT systems will focus on service-oriented data sharing and processing rather than point-to-point data collection. Requirements such as global reachability, mobility, communication diversity, and security will also develop naturally along with the change in the communication patterns. However, existing (IP-based) networks focus only on locations and point-to-point channels. The mismatch between the dynamic requirements and the functionalities provided by the network renders IoT communication inefficient and inconvenient.

  • Cellular Non-IP Networking (NIN)

    Telecom IoT customers have found that sending a 4-byte floating-point value using MQTT carries an overhead of 702 bytes:

    https://docs.devicewise.com/Content/Products/IoT_Portal_API_Reference_Guide/MQTT_Interface/MQTT-data-usage.htm
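    The MQTT framing itself is only a small part of that figure; most of the 702 bytes comes from the TCP/IP headers, acknowledgements and TLS handshake that MQTT rides on. A minimal sketch (the topic name and payload here are made up) that hand-builds an MQTT 3.1.1 QoS-0 PUBLISH packet makes this concrete:

```python
def mqtt_publish_packet(topic: bytes, payload: bytes) -> bytes:
    """Build a minimal MQTT 3.1.1 QoS-0 PUBLISH packet (framing only)."""
    remaining = 2 + len(topic) + len(payload)  # topic-length field + topic + payload
    assert remaining < 128                     # keeps the "remaining length" varint at 1 byte
    fixed_header = bytes([0x30, remaining])    # packet type PUBLISH, QoS 0, no flags
    return fixed_header + len(topic).to_bytes(2, "big") + topic + payload

# A 4-byte float on a short topic is an 11-byte MQTT frame; the rest of
# the ~702 bytes is TCP/IP and TLS overhead wrapped around it.
pkt = mqtt_publish_packet(b"t/1", b"\x00\x00\x80\x3f")
print(len(pkt))  # 11
```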

    Therefore, the telecoms and standards organisations have come up with Cellular Non-IP Networking (1NCE has even made its own OS to solve this problem!):

    https://www.icann.org/en/system/files/files/octo-026-12jul21-en.pdf

    Similar to the Huawei New IP proposal, the starting point of the ETSI NIN ISG is the claim that TCP/IP is an old protocol, unsuitable for the new types of applications promised by 5G…

    and

    The initial assumption of this effort is that the TCP/IP protocol suite, now 40 years old, is no longer suitable for modern networking.

  • IoT solutions must design for “anarchic scalability”

    In semi-open or closed loops (i.e., value chains, whenever a global finality can be settled) the IoT will often be considered and studied as a complex system due to the huge number of different links, interactions between autonomous actors, and its capacity to integrate new actors. At the overall stage (full open loop) it will likely be seen as a chaotic environment.

    Given widespread recognition of the evolving nature of the design and management of the Internet of Things, sustainable and secure deployment of IoT solutions must design for “anarchic scalability”.

    Source: Wikipedia: Internet of Things

  • Interoperability is the devil

    40% of the total economic value of the Industrial IoT will remain locked

    In any interconnected system, all of its component devices must be able to communicate with each other – to speak the same language, even if these devices work in very different fields. The lack of common software interfaces, standard data formats, and common connectivity protocols is complicating things in IoT-land. For industries, this means that 40% of the total economic value of the Industrial IoT will remain locked because different systems cannot work together.

    Source: Madison.tech

    Standardization creates interoperability

    The lack of standardization creates interoperability issues, which can lead to compatibility problems and limited functionality.

    For example, if two IoT devices cannot communicate with each other because they use different protocols or data formats, they cannot work together to achieve a common goal.

    Source: LinkedIn

  • The problem with TCP/IP for IoT?

    This section discusses the problems with the IP protocol and why it is not a good fit for the IoT.

    Small MTU

    The maximum transmission unit (MTU) is the maximum number of bytes that fit in a single data packet. The MTU can be as little as 64 bytes in many wireless IoT systems. This is in clear contrast to today’s IP networks, which typically assume an MTU of 1500 bytes or higher (IPv6 even mandates a minimum link MTU of 1280 bytes).
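    A quick back-of-envelope calculation shows why a 64-byte MTU clashes with uncompressed IP (the header sizes are the standard minimums from RFC 8200 and RFC 768):

```python
MTU = 64          # small radio frame, as described above
IPV6_HEADER = 40  # fixed IPv6 header size
UDP_HEADER = 8    # UDP header size

payload_room = MTU - IPV6_HEADER - UDP_HEADER
print(payload_room)  # 16 bytes left for actual application data
```

    This gap is the motivation behind header-compression schemes such as 6LoWPAN when IP is run over constrained radios.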

    Multi-link subnets

    Multi-link subnets are the notion that a subnet may span multiple links connected by routers. RFC 4903, “Multi-Link Subnet Issues”, documents the reasons why the IETF decided to abandon the multi-link subnet model in favor of a 1:1 mapping between Layer-2 links and IP subnets. An IoT mesh network, on the other hand, contains a collection of Layer-2 links joined without any Layer-3 devices (i.e., IP routers) in between. This essentially creates a multi-link subnet model that is not anticipated by the original IP addressing architecture.

    Multicast

    Multicast is a group communication model where a data transmission is addressed to a group of destination devices simultaneously. A lot of IP-based protocols make heavy use of IP multicast to achieve one of two functions: notifying all the members of a group, or making a query without knowing exactly whom to ask. Using multicast raises a number of concerns:

    • Sleeping devices will not receive the data transmission
    • Receivers may have different data-transmission rates
    • Broadcasting data is too expensive, so a routing mechanism is necessary
    • There is no widely deployed standard for encrypting IP multicast

    Mesh network routing

    IP-based host routing is a major challenge for constrained IoT devices, as each host needs to maintain a routing table. This consumes memory and causes network overhead whenever the network changes. Also, forwarding traffic may involve decrypting the data from the incoming link and then re-encrypting it on the outgoing link – an expensive operation for battery-driven devices.
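    A rough sketch of the memory argument; both the per-entry size and the RAM budget below are hypothetical numbers chosen for illustration:

```python
ENTRY_BYTES = 16        # assumption: 8-byte destination ID + 8 bytes next hop/metric
RAM_FOR_ROUTING = 4096  # assumption: RAM a constrained node can spare for routes

max_destinations = RAM_FOR_ROUTING // ENTRY_BYTES
print(max_destinations)  # 256 - the routing table alone caps the mesh size
```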

    Transport layer problems

    Due to energy constraints, devices may frequently go into sleep mode, so it is infeasible to maintain a long-lived connection in IoT applications. Also, a lot of the communication involves only a small amount of data, making the overhead of establishing a connection unacceptable.
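    A rough byte count illustrates the point. Assuming minimal IPv4 and TCP headers (20 bytes each), no options, no retransmissions and no TLS, delivering one 4-byte reading over a freshly opened and closed TCP connection costs roughly:

```python
IP_TCP_HEADERS = 20 + 20  # minimal IPv4 + TCP header per segment

handshake = 3             # SYN, SYN-ACK, ACK
data = 2                  # the segment carrying the reading, plus its ACK
teardown = 4              # FIN, ACK, FIN, ACK

control_bytes = (handshake + data + teardown) * IP_TCP_HEADERS
payload_bytes = 4
print(control_bytes, payload_bytes)  # 360 bytes of headers to move 4 bytes
```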

    Unfortunately, the current TCP/IP architecture does not allow applications to embed application semantics into network packets, and thus fails to provide sufficient support for application-level framing, which would give the application more control over data transmission.

    Application layer problems

    Many IoT applications implement a resource-oriented request-response communication model; ZigBee and CHIP/Matter are such examples. Influenced by the web, many IoT protocols have been working on bringing the same REST architecture into IoT applications. CoAP is an example of such a standard, which is also used in the Thread protocol. There are a number of problems with this approach:

    • It usually requires resource discovery, such as DNS or CoRE RD, which in turn uses broadcast
    • It requires that the client (requester) and the server (resource) are online at the same time
    • It requires a fundamental change to the security model in order to make the in-network caches secure and trustworthy

    Security

    In the IP-based host-centric model, TLS/DTLS is used to secure the communication channel between the requester and the resource. However, this model does not fit the requirements of IoT:

    • TLS/DTLS requires two or more exchanges of data to negotiate the communication channel – a resource- and energy-intensive task. Also, both ends have to maintain the state of the channel until it is closed, stopping devices from entering sleep mode
    • There is no widely deployed standard for encrypting IP multicast, so security only works for host-to-host communication

    MQTT and AMQP

    MQTT and AMQP are publish/subscribe protocols that support one-to-many and many-to-one communication models. The underlying protocol is connection-oriented, meaning that every publisher or subscriber has a connection to the broker (message exchange server). Messages are published to the broker, and the broker then sends the message to each of the subscribers in turn, one by one.
    MQTT and AMQP have no real support for sleeping, offline devices, so using a publish/subscribe protocol requires the device to be awake 100% of the time. Battery-driven sensors sleep and are offline most of the time, and are therefore not suited to MQTT and AMQP.
    As these protocols are transport protocols, they do not specify how device management is done. Device management is supposed to be implemented as a layer on top.
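    The fan-out and offline problems can be seen in a toy in-memory broker. This is a simulation, not real MQTT, and it models the simple QoS-0 / no-persistent-session case described above:

```python
class ToyBroker:
    """Toy pub/sub broker (QoS-0, no persistent sessions - not real MQTT)."""
    def __init__(self):
        self.subscribers = {}  # topic -> list of subscribed devices

    def subscribe(self, topic, device):
        self.subscribers.setdefault(topic, []).append(device)

    def publish(self, topic, message):
        # The broker forwards to each subscriber in turn, one by one.
        for device in self.subscribers.get(topic, []):
            if device.awake:   # a sleeping device simply misses the message
                device.inbox.append(message)

class Device:
    def __init__(self, awake):
        self.awake, self.inbox = awake, []

broker = ToyBroker()
display = Device(awake=True)
sensor = Device(awake=False)   # battery-driven, currently sleeping
broker.subscribe("meter/1", display)
broker.subscribe("meter/1", sensor)
broker.publish("meter/1", b"\x00\x00\x80\x3f")
print(len(display.inbox), len(sensor.inbox))  # 1 0 - the sleeper lost the update
```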

    CoAP

    CoAP is a client-server protocol, modelled after HTTP but designed for constrained (battery-driven) devices. It allows a device to send a command to, or retrieve a value directly from, another device – still in a request-response fashion. It requires the sensor (server) to be awake (not sleeping) in order to respond.
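    For reference, this is what the wire format looks like. The sketch below hand-builds a minimal confirmable CoAP GET as defined in RFC 7252 (the resource path "temp" is made up):

```python
def coap_get(message_id: int, path: str) -> bytes:
    """Minimal confirmable CoAP GET (RFC 7252) with an empty token."""
    assert len(path) < 13                    # keep the Uri-Path option length short-form
    header = bytes([0x40,                    # version=1, type=CON, token length=0
                    0x01])                   # code 0.01 = GET
    header += message_id.to_bytes(2, "big")  # message ID
    option = bytes([0xB0 | len(path)])       # option delta 11 = Uri-Path
    return header + option + path.encode()

msg = coap_get(0x1234, "temp")
print(len(msg))  # 9 - compact framing, but still strictly request-response
```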

    CoAP has a feature called “Observe” which, as described in RFC 7641, allows clients to “subscribe” and be notified when a condition occurs. The great thing is that it allows a sensor to sleep until the condition occurs, thereby saving battery; the downside of this one-to-many approach is that the sensor (server) must notify each client individually, each time setting up and tearing down a new connection. These connections are secured with DTLS, an energy- and bandwidth-consuming task, so a better way is clearly needed. So why not just use CoAP multicast? Well, you can, but there is no standard for encrypting CoAP multicast traffic. In fact, there is no widely deployed standard for (DTLS) encryption of IP multicast, meaning that you cannot send encrypted information over the air to other CoAP devices using multicast!

    As for sending data to an offline sensor, the CoAP standard does not specify how this is done; it is left to the device maker to implement this functionality, making the device non-standard and therefore incompatible with other networks.

    As CoAP is just a transport protocol, it does not specify how device management is done. Device management is supposed to be implemented on top of CoAP.

  • IETF believes in ICN for IoT

    The Internet Engineering Task Force (IETF), the body behind the Internet standards (RFCs), has research – via its sister organisation’s Information-Centric Networking research group (IRTF ICNRG) – arguing that Information-Centric Networking (ICN) is much better suited to IoT than current host-based paradigms such as TCP/IP. Here are their findings:

    https://datatracker.ietf.org/doc/html/draft-irtf-icnrg-icniot-03

  • Is Semtech becoming a new Qualcomm?

    Qualcomm has faced multiple lawsuits and regulatory challenges related to its CDMA (Code Division Multiple Access) technology and its broader business practices, particularly concerning its licensing model and patent practices. Here are some key reasons and events surrounding these legal issues:

    1. Patent Licensing Practices: Qualcomm has a significant portfolio of patents related to CDMA and other wireless communication technologies. The company licenses these patents to various manufacturers of mobile devices. Critics have argued that Qualcomm’s licensing practices are anti-competitive, as the company often requires device manufacturers to pay royalties on the entire price of the device, not just the components that use Qualcomm’s technology. This has led to allegations of monopolistic behavior.
    2. Antitrust Allegations: Regulatory bodies in various countries, including the U.S. Federal Trade Commission (FTC) and the European Commission, have investigated Qualcomm for potential antitrust violations. The FTC filed a lawsuit against Qualcomm in 2017, alleging that the company engaged in anti-competitive practices by using its dominant market position to impose unfair licensing terms and by refusing to license its patents to competitors.
    3. Litigation from Competitors: Companies like Apple and other smartphone manufacturers have also sued Qualcomm, claiming that the company’s licensing practices are unfair and that it has engaged in anti-competitive behavior. For example, Apple accused Qualcomm of charging excessive royalties and of using its patent portfolio to stifle competition.
    4. Settlement and Legal Outcomes: Over the years, Qualcomm has reached settlements in some cases, while other cases have resulted in court rulings that have impacted its business practices. For instance, in 2019, Qualcomm reached a settlement with the FTC, which included changes to its licensing practices.
    5. Global Regulatory Scrutiny: Qualcomm’s practices have drawn scrutiny not only in the U.S. but also in other regions, including Europe and Asia. The European Commission fined Qualcomm for anti-competitive practices related to its chipset sales and licensing agreements.

    Qualcomm’s legal challenges related to its CDMA technology and licensing practices stem from allegations of anti-competitive behavior, monopolistic practices, and disputes with competitors over its patent licensing model. These issues have led to significant legal battles and regulatory scrutiny over the years.

    There is no clear evidence to suggest that Semtech is directly copying Qualcomm’s business practices related to CDMA technology. However, one key point stands out with regards to licensing practices:

    • Qualcomm: Gained a 90% market share of the 3G chipset market in 2007 (1) by ignoring its FRAND (Fair, Reasonable, and Non-Discriminatory) commitments to ETSI and other SDOs, demanding discriminatorily higher (i.e., non-FRAND) royalties from competitors and customers using chipsets not manufactured by Qualcomm. It seems that Qualcomm wanted to control the market by selling its own chips.
    • Semtech: Does not license its LoRa technology to other companies; it sells the SX12xx chip for use in System-in-Package (SiP) configurations, which means that other chip manufacturers license the SX12xx die for use inside their own chips. Examples are the ST STM32WL series and the Microchip SAM R34/35 series of chips. It seems that Semtech wants to control the market by selling its own chips.

    In conclusion, Qualcomm and Semtech seem to have the following in common:

    • They do not want to license their IP to other companies
    • They want to control the market by monopolizing the supply of chips

    Are we witnessing the rise of a monopoly selling LoRa chips?

    1) https://casetext.com/case/broadcom-v-qualcomm-2

    2) https://ipwatchdog.com/2017/05/18/how-many-times-qualcomm-paid-old-technology/id=83332/

    3) https://community.cadence.com/cadence_blogs_8/b/breakfast-bytes/posts/cdma

  • Polite Spectrum Access (listen before talk)

    When an IoT device is about to transmit, it should first listen to check whether another IoT device is already transmitting. This is just like humans: we wait until the other person finishes before talking. This is called Polite Spectrum Access. Transmitting without listening first is referred to as Aloha.

    Listen-before-talk (LBT) can increase the spectrum efficiency compared to Aloha by reducing the number of collisions and retransmissions.

    In Aloha, each node transmits its packets without checking if the channel is busy, which can lead to collisions and retransmissions. This can result in a significant waste of bandwidth and reduce the overall spectrum efficiency.

    LBT, on the other hand, requires each node to listen to the channel before transmitting, which can help to reduce the number of collisions and retransmissions. By listening to the channel, a node can determine if the channel is busy and wait until it is clear before transmitting.

    Studies have shown that LBT can increase the spectrum efficiency compared to Aloha by a factor of 2-5, depending on the specific scenario and parameters. For example, a study by the European Telecommunications Standards Institute (ETSI) found that LBT can increase the spectrum efficiency by a factor of 2.5 compared to Aloha in a scenario with a large number of nodes and a high traffic load.

    Here is a rough estimate of the spectrum efficiency gain of LBT compared to Aloha:

    • Aloha: 18% (this is a typical value for Aloha in a scenario with a large number of nodes and a high traffic load)
    • LBT: 45-60% (this is a rough estimate of the spectrum efficiency gain of LBT compared to Aloha, depending on the specific scenario and parameters)
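    The 18% figure is the classical peak throughput of pure Aloha, which can be reproduced from the textbook formulas S = G·e^(−2G) (pure Aloha) and S = G·e^(−G) (slotted Aloha); carrier-sensing schemes such as LBT sit above both, in line with the 45-60% estimate:

```python
import math

def pure_aloha(G):
    return G * math.exp(-2 * G)   # classical throughput vs offered load G

def slotted_aloha(G):
    return G * math.exp(-G)

# Scan the offered load to find the peak throughput of each scheme
grid = [g / 1000 for g in range(1, 5000)]
peak_pure = max(pure_aloha(g) for g in grid)
peak_slotted = max(slotted_aloha(g) for g in grid)
print(round(peak_pure, 3), round(peak_slotted, 3))  # 0.184 0.368
```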

    LoRa is based on a modulation scheme called Chirp Spread Spectrum (CSS), in which the signal frequency is swept up or down during transmission at a rate that depends on the data rate. This makes it almost impossible for CSS to use Polite Spectrum Access.

    The EU regulation (ETSI EN 300 220-2) states that an IoT device using the sub-GHz band (868 MHz) may only transmit 1% of the time, unless it uses Polite Spectrum Access (the US 915 MHz band has analogous channel-dwell restrictions). Being polite allows for a much more efficient use of the frequency spectrum, which is very important as the frequency spectrum is a limited, finite natural resource. Going forward, we will have many more IoT devices using the spectrum, so using it efficiently is very important.

    Almost all commercial off-the-shelf FSK transceivers are able to be polite and listen before talk. Surely we should use the frequency spectrum as efficiently as possible and not use impolite transmitters.

  • LoRa is not frequency spectrum efficient

    LoRa uses a spread spectrum modulation technique to transmit data. While LoRa has some advantages, such as its ability to transmit data over long distances and its low power consumption, it is not considered to be a spectrum-efficient technology.

    There are several reasons why LoRa is not considered to be spectrum-efficient:

    1. Modulation technique: LoRa uses a spread-spectrum modulation technique, which spreads the data signal across a wide frequency band. This leads to lower spectral efficiency, as the signal occupies a wider frequency band than is strictly necessary.
    2. Low data rate: LoRa is designed for low-data-rate applications, such as sensor networks and IoT devices. While this can be beneficial for certain applications, it also means that LoRa is not well suited for higher-data-rate applications, which can be more spectrum-efficient.
    3. Listen-before-talk (LBT): LoRa is not able to use LBT due to its modulation technique. LBT increases spectrum efficiency by reducing the number of collisions and retransmissions.
    4. In-building environments: LoRa (Long Range) was developed for long-range applications (kilometres) but is often used inside buildings, where the long range is not needed. Higher-data-rate modulation techniques, such as FSK or BPSK, allow for a shorter on-air transmission time and are therefore more spectrum-efficient.

    LoRa is not as spectrum-efficient as other modulation schemes such as BPSK or TurboFSK, which can be implemented in a commercial off-the-shelf radio transceiver. Going forward, the number of IoT devices per square metre will only increase, and since the amount of frequency spectrum available is a finite natural resource, we should strive to use it as efficiently as possible.
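    The in-building point can be quantified with the time-on-air formula from the Semtech SX127x datasheet. The sketch below compares a 10-byte payload at SF12/125 kHz against a plain 50 kbit/s FSK frame (the radio parameters are example choices, and the FSK framing overhead is an assumption):

```python
import math

def lora_time_on_air(payload_len, sf=12, bw=125_000, cr=1, preamble=8,
                     crc=1, implicit_header=0, ldro=1):
    """LoRa time-on-air in seconds, per the Semtech SX127x datasheet formula."""
    t_sym = (2 ** sf) / bw
    n = 8 * payload_len - 4 * sf + 28 + 16 * crc - 20 * implicit_header
    n_payload = 8 + max(math.ceil(n / (4 * (sf - 2 * ldro))) * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym

def fsk_time_on_air(payload_len, bitrate=50_000, overhead=11):
    """Assumed FSK frame: 5 B preamble + 3 B sync + 1 B length + payload + 2 B CRC."""
    return 8 * (overhead + payload_len) / bitrate

print(round(lora_time_on_air(10) * 1000))    # ~991 ms on air at SF12/125 kHz
print(round(fsk_time_on_air(10) * 1000, 2))  # ~3.36 ms on air at 50 kbit/s
```

    Under these assumptions a long-range LoRa setting occupies the channel roughly 300 times longer than FSK for the same 10-byte reading, which is exactly the in-building inefficiency argued above.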

    Fun fact: the first wireless transmissions were made by Marconi using spark-gap transmitters, back in 1896. They basically emitted white noise, which could then be detected at long distances. Whenever a spark-gap transmission was in progress it occupied the entire frequency spectrum, so there was only room for one transmission at a time! As time went on, the ability to modulate signals allowed the frequency spectrum to be used much more efficiently.

  • Why is patented infrastructure a problem?

    Patented infrastructure is a problem because it can limit access, stifle competition, increase costs, reduce interoperability, and create dependence on a single entity, ultimately hindering innovation and progress.

    1. Limited access: When infrastructure is patented, it can limit access to the technology and prevent others from using it, even if they have a legitimate need to do so.
    2. High costs: Patent holders can charge high licensing fees to use their patented technology, which can make it difficult for others to access the infrastructure.
    3. Innovation stifling: Patented infrastructure can stifle innovation by preventing others from building upon or improving the existing technology.
    4. Dependence on a single entity: When infrastructure is patented, it can create a dependence on a single entity, which can be a risk if the entity experiences financial difficulties or changes its business model.
    5. Limited interoperability: Patented infrastructure can limit interoperability with other technologies and systems, making it difficult to integrate with other solutions.
    6. Barriers to entry: Patented infrastructure can create barriers to entry for new companies or individuals who want to enter the market, as they may not be able to access the patented technology.
    7. Inequitable distribution of benefits: Patented infrastructure can lead to an inequitable distribution of benefits, as the patent holder may reap most of the benefits while others may not be able to access the technology.

    In the context of infrastructure, patents can be particularly problematic because they can limit access to essential technologies and create barriers to entry for new companies or individuals. This can lead to a lack of competition, innovation, and progress in the field.

    In contrast, open standards and technologies can promote innovation, competition, and progress by allowing multiple companies and individuals to access and build upon the technology.

    Quoting the Linux Foundation report “The 2023 state of open standards”:

    Royalties and patent licenses are seen as a way for organizations to recoup investment costs in developing new technologies and are argued to provide incentives for innovation. In practice, however, this approach can reduce innovation in the market when these fees are cost-prohibitive to new entrants and viable alternatives to the incumbent, royalty-bearing standards are not available.
    Therefore, Royalty Free essential patent licensing standard options are seen as an important way to ensure competitive, democratic access to innovation, greatly reducing the risk of an organization monopolizing the market or controlling access to important market technology.

    Non-royalty free wireless communication standards:

    • WiFi
    • Bluetooth
    • Zigbee
    • Matter / Thread
    • LoRa

    Royalty free wireless communication standards:

    • Z-Mesh
    • FSK / OOK / BPSK
    • Wireless M-Bus