Thursday, February 2, 2017

Low Power Wide Area Networks for IoT: SigFox, LoRa, LTE-M, 5G LP-WAN

[In collaboration with guest blogger Marc Espinosa]

We discussed the important topic of connectivity protocols in our previous IoT post; now it is time to dive deeper into the telecommunications protocols underneath. The fact is that the technologies enabling an IoT architecture need to assume low power consumption and, at the same time, transmission over long distances (which means lower frequencies).

For example, if we need to cover a field, a campus, an entire building, or turn a whole city smart, we will need a specific communication protocol. The truth is that ZigBee and 6LoWPAN do create low-power and low-cost WPANs, but since the assets can be distributed over a pretty wide area, we need to add another variable to the equation: the range.

The following networks/technologies are called Low Power Wide Area Networks (LP-WAN). Even though the power consumption of the devices they connect is low, they cover a wide area, making it easier and better to connect things from one point to another:

As you can appreciate, there are two new players in the network ecosystem: LoRaWAN and SigFox. Both are called LPWAN, and in suburban or rural areas they cover 150 to 500 times (respectively) the maximum range that ZigBee offers.
These two have been competitors in the LPWAN space for several years. Their business models and technologies are different, but their targets are very similar: getting mobile networks to adopt their technology to deploy IoT solutions.

Even though LoRa and SigFox serve similar markets, the first is the better choice if you need bidirectionality, because of its symmetric link (for example, if you need command-and-control functionality, as in electric grid monitoring).

However, for applications that send only small and infrequent bursts of data (like alarms and meters) I would recommend the second one, an ultra-narrow-band technology that can carry a 2-way transport message as well (3*).
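To make "small and infrequent bursts" concrete: SigFox uplink frames are famously tiny (the commonly cited limits are 12 bytes of payload and around 140 messages per day), so applications pack their readings into compact binary frames. Here is a minimal sketch in Python; the field layout and the function name are my own illustrative choices, not part of any SigFox API:

```python
import struct

def pack_sigfox_payload(temp_c, humidity_pct, battery_mv, alarm):
    """Pack sensor readings into a compact uplink frame.

    Temperature is scaled to centi-degrees so one decimal place of
    precision fits in a signed 16-bit field.
    """
    return struct.pack(
        "<hBHB",
        int(temp_c * 100),   # signed 16-bit, centi-degrees Celsius
        humidity_pct,        # unsigned 8-bit, 0-100 %
        battery_mv,          # unsigned 16-bit, millivolts
        1 if alarm else 0,   # unsigned 8-bit flag
    )

payload = pack_sigfox_payload(21.5, 48, 3600, False)
assert len(payload) <= 12  # the whole frame fits the 12-byte uplink budget
```

Four readings in six bytes: that is the kind of frugality these networks push you towards, and it is exactly why they are a poor fit for rich data streams.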

When we talk about mobile networks we can't forget NB-IoT: an LPWAN narrow-band radio technology standard developed to connect a wide range of devices and services using cellular telecommunications bands. It has been designed for the IoT and standardised by the 3rd Generation Partnership Project (3GPP), a collaboration between groups of telecommunications associations.

To sum up all this content, let's group it into a table that qualitatively summarises the LPWAN takeaways (4*).

To conclude, let’s highlight the key takeaways for the 3 low-power networks:
  • SigFox: extremely low power and bandwidth; only kind of an open standard, since you have to use their own network; easy to adopt; limited (but existing) security; and lots of deployments.
  • LoRa: driven by a chip company (Semtech), so they want you to buy the maximum number of chips. Not quite as low power as SigFox, but pretty good too, and it has more bandwidth, enough for control functions and decent data streaming. Not a truly open standard, as commented, because you are forced to use the Semtech chip (in my opinion this is a weakness, because you are forcing your client to buy your product instead of understanding the market's needs). There are a lot of suppliers willing to sell you Semtech chips, pretty good security (they do all the basic authentication), and several deployments.
  • NB-IoT: a technology carried by mobile operators; very low power and bandwidth, similar to SigFox but not as widely deployed as the ultra-narrow-band network. It is an open standard because it is part of 3GPP, with lots of suppliers because it is open, solid security and authentication, and some deployments (Vodafone and Huawei ran the first commercial PoC in Madrid, on Vodafone Spain's network, on September 19th, 2016). There is another network called LTE-M (Long Term Evolution - Machine) that is pretty similar to NB-IoT (also an open standard): not as power efficient as NB-IoT, but with better security. Deployments are few today, but they are going to grow exponentially hand in hand with NB-IoT.

The question is: which of these is the IoT network of the future? Will LoRa and SigFox be able to survive if the 5G standard includes an IoT-WAN? IoT is a big market, and in our opinion there's a place for everyone; we just need to wait and see what happens.

Wednesday, February 1, 2017

Understanding the IoT Protocols: MQTT, CoAP, ZigBee

[In collaboration with the guest blogger, Marc Espinosa]

If you're looking for the right guide to gain a solid perspective on the IoT business, these lines might be just what you need. The IoT can be defined as a system of interrelated devices (such as sensors) that are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.

So, how do all these "Things" talk to each other? What languages/communication protocols do they use, and which one should we choose? The answer might surprise you... it depends!

Let's start with the messaging protocols, MQTT and CoAP, and ask which of these open standard protocols should be considered for your implementation.

There are 2 open standard protocols that work well for small devices:

  • Message Queuing Telemetry Transport (MQTT)
  • Constrained Application Protocol (CoAP)

MQTT and CoAP are two of the most promising protocols for small devices such as sensors. Both are:
  1. Open standard, lightweight protocols well suited to IoT needs
  2. Better suited to constrained environments than HTTP
  3. Provide mechanisms for asynchronous communication
  4. Run on IP
  5. Have a range of implementations

The main differences between these two protocols are shown in the table below:

Neither of these protocols applies to all cases; both have their pros and cons. Choosing the right one depends on your application.
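Before moving on, a small illustration of just how lightweight these protocols are: CoAP's entire fixed header is 4 bytes (per RFC 7252). A sketch of building one in Python follows; the helper name and constants are my own illustrative naming:

```python
import struct

# CoAP message types (RFC 7252)
CON, NON, ACK, RST = 0, 1, 2, 3

def coap_header(msg_type, code_class, code_detail, message_id, token=b""):
    """Build the 4-byte CoAP fixed header, followed by the optional token."""
    version = 1  # the CoAP version field is always 1
    byte0 = (version << 6) | (msg_type << 4) | len(token)
    code = (code_class << 5) | code_detail  # e.g. class 0, detail 1 == GET
    return struct.pack("!BBH", byte0, code, message_id) + token

# A confirmable GET (code 0.01) with message ID 0x1234 is just 4 bytes:
assert coap_header(CON, 0, 1, 0x1234) == b"\x40\x01\x12\x34"
```

Compare that with the dozens or hundreds of bytes of headers an HTTP request drags along, and the appeal for battery-powered sensors is obvious.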

Let's proceed with a second group of protocols: ZigBee and 6LoWPAN for sensor networks.

As a curiosity, the name ZigBee comes from the erratic, zig-zag patterns of the bees' waggle dance, which evokes the invisible networks and their respective connections. ZigBee is a specification based on the IEEE 802.15.4 standard (a technical standard that defines the operation of low-rate WPANs) for high-level communication protocols used to create WPANs. It operates at 2.4 GHz and targets applications that require relatively infrequent data exchanges at low data rates over a restricted area, within a 100m range.

The technology defined by ZigBee is simpler and less expensive than that of other WPANs such as Bluetooth and WiFi. ZigBee is a low-cost, low-power wireless mesh network standard targeted at the wide development of long-battery-life devices in wireless control and monitoring applications. ZigBee devices have low latency, which further reduces average current. The following diagram shows the ZigBee mesh architecture and all sorts of devices on the network.
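The 2.4 GHz band that ZigBee uses is divided by 802.15.4 into 16 channels, numbered 11 to 26 and spaced 5 MHz apart starting at 2405 MHz. A quick sketch of the channel-to-frequency mapping:

```python
def channel_frequency_mhz(channel):
    """Centre frequency (MHz) of an IEEE 802.15.4 channel in the 2.4 GHz band.

    Channels 11-26 sit 5 MHz apart, starting at 2405 MHz.
    """
    if not 11 <= channel <= 26:
        raise ValueError("the 2.4 GHz band uses channels 11-26")
    return 2405 + 5 * (channel - 11)

assert channel_frequency_mhz(11) == 2405  # first channel
assert channel_frequency_mhz(26) == 2480  # last channel
```

This is worth knowing in practice, because these channels share the band with Wi-Fi, and picking a channel away from the busy Wi-Fi ones is a common deployment tweak.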

To sum up, ZigBee was developed to provide a standards-based protocol for interoperability of sensor networks (applications do not need to know the constraints of the physical links that carry their packets). And guess what? A competitor came up in the market: 6LoWPAN.

On the other hand, 6LoWPAN, an acronym for IPv6 over Low-Power Wireless Personal Area Networks, originated from a working group in the IETF.

Its products have entered the marketplace as ZigBee's competition, since 6LoWPAN can also utilise 802.15.4. Moreover, it can run on other PHYs (the physical layer of the OSI model: the circuitry required to connect a link layer, often called MAC, to a physical medium such as a cable).
Let's make a more detailed comparison to see which fits best, which is the most used connectivity protocol nowadays, and why.

On the one hand, ZigBee's wireless interoperability protocol defines the communication between 802.15.4 nodes and then defines new upper layers all the way up to the application. This means ZigBee devices can interoperate with other ZigBee devices, which in my opinion creates a somewhat more constrained network at a glance.

On the other hand, 6LoWPAN offers interoperability with other 802.15.4 wireless devices, in addition to any other IP network link (Wi-Fi or Ethernet) via a simple bridge device, although bridging between non-ZigBee and ZigBee networks requires a more complex application-layer gateway. The key requirement for IPv6 over 802.15.4 is that the maximum transmission unit (MTU) must be at least 1280 bytes.

In terms of security, both connectivity protocols benefit from built-in AES128 encryption, which is part of the IEEE 802.15.4 standard.

Finally, trying to conclude who goes first in the race, we can affirm that all the major players in the semiconductor industry promote and supply 802.15.4 chips, which can be used for either ZigBee or 6LoWPAN, but these same companies even offer free ZigBee stacks. So support for 6LoWPAN stacks seems to be trailing behind ZigBee.

To conclude: 6LoWPAN is pretty attractive, since it is IP-based. Nevertheless, ZigBee appears to be more popular and has been adopted by major players in multiple industries, and the ZigBee Alliance just introduced ZigBee IPv6 end-to-end networking to create a cost-effective and energy-efficient wireless mesh network.

Wednesday, December 21, 2016

Which SDN solution is the right one for me?

This is a question I've been getting A LOT in the last few years, and even though it sounds rather simple, somehow it gets really complex to convince all the parties (Developers, Systems/Virtualization and Network engineers, and the CEO/CTO) why the solution you're proposing is a perfect fit. There are 2 simple explanations for this:

  • A so-called "language barrier" between the different departments.
  • SDN vendors being way too aggressive pushing their solution in the environments where it doesn't fit [understandable when you consider how much money they've invested in SDN, and with how much fear and hesitation the new clients are considering the migration of their production network to SDN].

What I want to do in this post is help you get a more objective, non-vendor-biased picture of the SDN solutions out there, and of the environments each of them should be considered for.
*If you're not sure you understand the difference between Underlay and Overlay please refer to my previous posts.

There are 2 types of SDN solutions at the moment:

  1. SDN as an Overlay (VMware NSX and Nokia Nuage)
  2. Underlay and Overlay controlled via APIs (Cisco ACI and OpenDayLight)

SDN as an Overlay solutions tend to be much easier to understand, and are more graphical and user-friendly. This can be explained by the fact that they only handle the Overlay of the network, completely ignoring the physical network underneath and considering it a "commodity". Even though NSX and Nuage are both great solutions, and there are environments where they would definitely be the SDN solution that I would recommend, there is a pretty serious conceptual problem with this approach, especially if your network isn't 100% virtual and if your physical topology has more than a few switches.

Systems and Virtualization engineers tend to love these kinds of solutions, due to 2 factors:

  • They don't have a deep level understanding of Networking protocols.
  • They kinda get the impression that they will handle both the Compute and the Networking environments in the Data Center, pushing out the Networking department [kinda true, if you ignore the fact that you actually end up with 2 departments handling your network: Systems guys taking care of the Overlay and Networking guys taking care of the physical infrastructure].

Network engineers tend to dislike these kinds of solutions, due to 2 factors:

  • They lose the visibility of what's going on in their Network.
  • They know that when the things don't work, or when there is a performance issue, the CEO will knock on their door, and they will have no idea what to do or where to look.

Why is SDN as an Overlay not as great as they explained in that PowerPoint?

Let me try to explain why SDN as an Overlay should not be considered for environments with a Physical Network Topology of more than just a few switches. Bear with me here, because the explanation might seem a bit complex at first.

The concept of Virtualization is based on optimising physical resources in order to get better performance out of the same hardware. This concept should apply to both Server Virtualization and Network Virtualization. Now imagine Software that handles Server Virtualization "as an Overlay", treating the Physical servers as a "commodity". For example, let's imagine that the 10 physical Servers in the picture below have 16GB of RAM, 4 Cores and 512GB of SSD each. Now let's say that we need to provision 100 VMs, each with 8GB of RAM and 2 Cores. Our Virtualization Software, having no visibility or control of the Physical Servers, will just randomly provision these machines across the physical infrastructure. In this way some of our physical servers will end up with 20+ VMs and start having performance issues due to the insane oversubscription, while the others will run at less than 20% capacity with just a few VMs.
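To make the analogy concrete, here is a toy sketch (numbers taken from the example above) comparing blind, overlay-style placement against capacity-aware placement; everything in it is illustrative, not any real scheduler's algorithm:

```python
import random

HOSTS = 10     # physical servers, as in the example above
VM_RAM_GB = 8  # each VM asks for 8 GB of RAM
VMS = 100

def random_placement(seed=42):
    """Blind, overlay-style placement: no visibility into the hosts."""
    rng = random.Random(seed)
    load = [0] * HOSTS
    for _ in range(VMS):
        load[rng.randrange(HOSTS)] += VM_RAM_GB
    return load

def balanced_placement():
    """Capacity-aware placement: always pick the least-loaded host."""
    load = [0] * HOSTS
    for _ in range(VMS):
        load[load.index(min(load))] += VM_RAM_GB
    return load

# Capacity-aware placement spreads the load perfectly evenly here, while
# blind placement can only match that by pure luck (its busiest host is
# at least at the mean of 80 GB, and typically far above it):
assert balanced_placement() == [80] * HOSTS
assert max(random_placement()) >= 80
```

The total demand is the same in both cases; the only difference is visibility into the physical layer, which is exactly the variable that Overlay-only SDN removes.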

While this seems pretty easy to understand, most Systems departments have trouble understanding that the exact same thing happens to our Network when we assume that our SDN should be treated as an Overlay only. Yes, RAM and the number of Cores are concepts far easier to understand than Switch Throughput, IP Flows and Interface Buffer Capacity, but the concept is the same - if we provision our applications to run over our network while ignoring the importance of the Physical Network, even if our IP network is redundant and highly available like the topology below, some of our Links will show high drop counts while others carry almost no traffic, and some of our Switches will run at 99% CPU while others stay under 10% (this data actually comes from real SDN implementations). What can we do? We have two options. We either over-provision our Network Infrastructure and spend way more money than planned, or we suffer the performance issues and blame the guys who take care of the Physical Network.

If after this paragraph you still don't understand why your traffic wouldn't be magically balanced across the Physical Network, but would instead saturate a single group of Links and Switches, it's yet another sign that you should probably involve your Networking experts in the decision-making process. Let's face it: Overlay is based on VxLAN, and VxLAN is basically a tunnel between two VTEPs, and therefore - a single IP flow. What happens with an IP flow in an IP Network? It's routed via the best IP path, a decision made locally based on every router's routing table. This means that ALL the traffic between any two Hypervisors will always go through the same links and the same Network devices.
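A toy sketch of why this happens: an ECMP-style router hashes a packet's 5-tuple to pick a next hop, so a tunnel that always presents the same outer 5-tuple always lands on the same link (the link names and the hash function here are made up for illustration):

```python
import hashlib

LINKS = ["link-A", "link-B", "link-C", "link-D"]  # made-up link names

def ecmp_pick(src_ip, dst_ip, proto, src_port, dst_port):
    """Toy ECMP: hash the 5-tuple and pick a link. Deliberately
    deterministic - real routers behave the same way so that packets
    of one flow are never reordered."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return LINKS[digest % len(LINKS)]

# To the underlay, the VxLAN tunnel between two VTEPs is one outer UDP
# flow, so every packet between those Hypervisors rides the same link:
tunnel_link = ecmp_pick("10.0.0.1", "10.0.0.2", "udp", 54321, 4789)
assert all(
    ecmp_pick("10.0.0.1", "10.0.0.2", "udp", 54321, 4789) == tunnel_link
    for _ in range(1000)
)
```

Some VxLAN implementations mitigate this by varying the outer UDP source port per inner flow to add entropy to the hash, but that is a capability you have to verify in your gear, not something to assume.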

The worst of all is that none of these problems will show up in the Demo/PoC environment, as we are mostly testing functionalities there. The problems get more and more serious as we add more applications/Network loads and try to scale up the environment. In any case, every wrongly chosen SDN solution that I've seen ended up in the client's complete frustration and a rollback to the Legacy network, at least until SDN is "more mature". No... there are mature SDN solutions; you were just convinced too easily and chose the wrong one.


Before I get to the recommendations of which solution is the perfect one for you, there is one thing most of my clients try to avoid - every SDN solution is a vendor lock-in. Some lock you in with their Hardware, some with Software and Licences, and some with Support (including Upgrades and additional engineering when adding/upgrading other components in your Data Center).

To sum all this up, I'll give you a simple list of advice to help you decide which SDN solution I recommend you consider.

Is VMware NSX a perfect fit for my environment?

If your environment is 100% virtual and 100% VMware (or on the path to becoming 100% virtual in the next few years), and your Data Center Network Topology is rather simple and made of 100% high-end, high-throughput Network Devices - NSX is the way to go! With vRealize Network Insight you'll be able to get a basic picture of what's going on in the Physical Network and, as VMware puts it, do the "Performance optimization across overlay and underlay", and the NSX micro-segmentation just works perfectly. Also keep in mind that Cisco and VMware are the two companies with the greatest number of experts, so you don't have to worry about product support.

*There's a multi-hypervisor version of NSX, called NSX Transformers (previously known as NSX-mh). At the moment (December 2016) this is not something you should consider, as it has a very limited set of functionalities, and there is no way to get your hands on it (not even as a VMware employee or a partner).

Is Nokia Nuage a perfect fit for my environment?

If you have a multi-hypervisor, 100% virtual environment (or are on the path to becoming 100% virtual in the next few years) and your Data Center Network Topology is rather simple and made of 100% high-end, high-throughput Network Devices - Nuage might be the way to go. Within the Nuage VSP (Virtualized Services Platform) there is a product called Nuage VSAP (Virtualized Services Assurance Platform). Keep in mind that VSAP can give you a basic overview of what's going on in your physical network, but it is more of a Monitoring than a Network Management platform. On the Nuage web page you will find that if, for example, a physical link goes down, the triggered action would be sending an email to the Networking department or similar.

If you have many Branch Offices - you should definitely consider Nuage, as Nuage Networks Virtualized Network Services (VNS) solution can literally extend your VxLANs (and therefore your applications) in a matter of hours using a simple Physical or Virtual device.

Also worth mentioning - Nuage GUI is simply awesome, fast and intuitive. Your SDN admins will appreciate this (at least in the migration process, till you migrate to all-API Data Center environment).

Is Cisco ACI a perfect fit for my environment?

ACI is definitely one of my favourites on the market, and probably the only one that gives you control of the Overlay and Underlay as a single Network, out of the box and with a defined Support model. The problem is that the only switch supporting Cisco ACI is the Cisco Nexus 9k. So if you have a serious Network Topology and you're planning a renovation of your Switches (or you already have a significant number of Nexus 9k) - ACI is definitely the way to go. It lets you control your network (Physical and Virtual) from a single controller, and the Troubleshooting tools are just INSANE. You can even do a trace-route including Overlay, Underlay and Security, with a graphical output.

Is OpenDayLight a perfect fit for my environment?

OpenDayLight is an open source solution, which means that if you don't already have a big team of motivated R&D Network Engineers - you should go for one of the distributions out there by a major vendor, such as Ericsson, Huawei, NEC, HP etc.

The advantage of OpenDayLight is its flexibility: it has numerous projects that you can choose whether or not to use in your environment. This allows you to build a perfect-fit custom solution that handles the Overlay and the Underlay using open source projects. There is again the issue of handling the Physical infrastructure with half-engineered protocols such as OpenFlow and OVSDB, but a good system integrator can overcome this, and I've seen it happen.

The disadvantage is that this kind of solution requires a great number of engineering hours, and an update of a certain component in your hardware may require re-engineering part of your SDN solution. There is also the question of customer support, bearing in mind that the only one who knows the details of the personalised solution your system integrator of choice implemented is that very integrator.