Now that the RAN (Radio Access Network) rel.0 specs are starting to take shape, it was time to verify in practice that low-spec hardware could deliver enough processing power and, especially, accuracy to implement them.
The first aspect I decided to tackle, since it appears to be the most critical, is the time offset (Toff). Toff is a measure of how far into the assigned time slot (or out of it) a node is transmitting. This also has implications for reception, since the node needs to wake up at the correct time in order to catch messages destined for it.
Toff, to recall from the RAN.0 specs, is a value expressing how many milliseconds the start of a message is off the centre of the assigned slot. A Toff within ±50 ms yields an ACK of the message, while values outside that range cause a NACK. In open-loop control there is no such feedback: the node just compensates for timing changes it knows about and re-syncs to the BCH when it gets a NACK (or re-syncs often enough to avoid NACKs). In closed-loop mode the AP reports Toff with every ACK/NACK, so the node can adjust its timers.
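The ACK/NACK decision is just a window check. A minimal sketch in C, using the ±50 ms window from the spec; the identifier names are my own invention, not from the spec:

```c
#include <stdbool.h>
#include <stdint.h>

/* The RAN.0 spec only defines the window; the names here are hypothetical. */
#define TOFF_ACK_WINDOW_MS 50

/* Toff: signed offset, in ms, of the message start from the slot centre.
   Within the window the AP ACKs the message, outside it NACKs. */
static bool toff_is_ack(int32_t toff_ms)
{
    return toff_ms >= -TOFF_ACK_WINDOW_MS && toff_ms <= TOFF_ACK_WINDOW_MS;
}
```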
The target processor for this testing was an ATmega168.
I first tested open-loop control of Toff, based purely on the node compensating for errors in the sleep duration and for variations in the duration of the transmit phase. Variations in transmit-phase duration are easy to compensate: during the active phase the MCU runs on a relatively stable crystal clock, so internal timers can be used to gauge the phase duration. The problem is the sleep phase. First of all, internal timers are not running, so we have no idea how long we have been sleeping; secondly, even if they were, the only variation in sleep time is due to clock accuracy, which cannot be corrected using timers driven by that same clock.
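Compensating the active phase then reduces to arithmetic on the crystal-clocked timer reading. A sketch, assuming a fixed slot period; the period value and names are illustrative, not from the spec:

```c
#include <stdint.h>

#define SLOT_PERIOD_MS 1000u  /* illustrative frame period, not from the spec */

/* Given the active (TX/RX) phase duration measured with the crystal-clocked
   timer, return the sleep duration that keeps active + sleep at one period. */
static uint32_t next_sleep_ms(uint32_t active_phase_ms)
{
    if (active_phase_ms >= SLOT_PERIOD_MS)
        return 0;  /* active phase overran the slot; nothing left to sleep */
    return SLOT_PERIOD_MS - active_phase_ms;
}
```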
To test open-loop Toff control I first manually calibrated the sleep cycle to compensate for the error in the sleep-phase duration. This would eventually be feasible in production too, with an automated calibration phase. At a constant room temperature this method works fine. Issues come, as I suspected, with temperature variations. When the ATmega is in power-down mode it runs only an internal 128 kHz RC oscillator which, being an RC oscillator, is very susceptible to temperature drift. In fact, gently warming the node by just 10-15 °C sent the sync all over the place. The node would recover only by re-syncing to the BCH, but with just a 15 °C temperature change the original calibration value was so far off that a re-sync to the BCH was needed every few seconds. Open-loop Toff control could probably be kept for a system that is never in sleep mode, such as a node powered from the grid: such a node would be much more stable in temperature, and a re-sync to the BCH could be afforded at any time. For a node that does sleep, temperature changes would probably require a re-sync with the BCH at every transmission/reception which, in some cases, might even be acceptable but would surely take a toll on battery usage.
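The calibration step itself can be a simple ratio: request a known sleep interval counted in RC-oscillator ticks, measure how long it actually took on the crystal clock, and scale later requests accordingly. A sketch with hypothetical names, and, as the trials above show, only valid at the temperature it was taken at:

```c
#include <stdint.h>

/* Scale a wanted sleep duration by the calibration ratio obtained by
   requesting cal_requested_ms of RC-timed sleep and measuring
   cal_measured_ms of real (crystal-clocked) elapsed time. If the RC
   oscillator runs fast, measured < requested, so we must request
   proportionally more next time, and vice versa. */
static uint32_t calibrated_sleep_request_ms(uint32_t wanted_ms,
                                            uint32_t cal_requested_ms,
                                            uint32_t cal_measured_ms)
{
    return (uint32_t)(((uint64_t)wanted_ms * cal_requested_ms) / cal_measured_ms);
}
```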
I then moved on to test the closed-loop alternative which, as expected, proved much more stable over a large temperature range. A sudden temperature change will still cause a single message to fall out of its slot and be rejected, but the following messages will be back in sync. Closed-loop control of Toff is therefore a viable way of keeping low-power nodes in sync with the network.
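That behaviour can be modelled with a toy simulation: a one-off offset step (the sudden temperature change) plus a constant per-frame drift, with the node subtracting the last reported Toff from its next wake-up. The gain-of-one proportional correction is my assumption; the spec does not mandate a particular control law:

```c
#include <stdint.h>

/* Residual Toff after `frames` frames, given a one-off step (a sudden
   temperature change) and a constant per-frame drift. The AP reports the
   current Toff with each ACK/NACK and the node subtracts it next frame. */
static int32_t toff_after_frames(int32_t step_ms, int32_t drift_ms, int frames)
{
    int32_t toff = step_ms;     /* pending offset before the first frame */
    int32_t last_report = 0;
    for (int i = 0; i < frames; i++) {
        toff = toff + drift_ms - last_report;
        last_report = toff;     /* reported by the AP with the ACK/NACK */
    }
    return toff;
}
```

With, say, an 80 ms step and 3 ms/frame drift, the first message lands at 83 ms, outside the ±50 ms window, and is NACKed; every following frame settles at the per-frame drift, matching the single rejected message observed in the trials.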
Based on the results of these trials I will propose to mandate closed-loop Toff control already in release 0 of the iotp2p RAN, while leaving open the possibility for nodes to ignore the Toff reports if they wish to, and can afford to, re-sync with the BCH at every frame.