IoT Security: Device and firmware encryption options

Jul 31, 2023

By Johny Mattson for DiUS

One of the things we have to navigate as embedded/IoT engineers here at DiUS is the level of device and firmware encryption to recommend to our clients. Contrary to what seems to be a somewhat common belief, best practice isn’t necessarily to “Do All The Things(tm)”. While device and firmware encryption can be extremely valuable, it also comes with significant associated costs. Thus, as with most things, it comes down to what the acceptable trade-offs are. This, of course, depends greatly on the individual scenario, and the choice of degree of encryption should be a business decision, albeit one guided by technical expertise.

One thing to note is that this blog post is all about device and firmware encryption, which is orthogonal to the communications encryption used by the device. There is little justification for not employing encrypted communications channels these days, but that’s a topic for a different post. The level to which to lock down an embedded device depends on many factors, such as:

  • the threat model,
  • regulatory constraints,
  • market expectations,
  • the usability needs, and
  • of course, cost.

To begin with, let’s explore the different degrees of locking down a system, and the benefits and various costs each brings.

Secure boot + signed firmware

These days, most platforms provide some manner of “secure boot” scheme, whereby the hardware guarantees that only firmware signed with the correct key will be allowed to boot on the system. The main implication of using such a scheme is that the device will effectively not be owned by the end user, but rather by the holder of the signing key. This means that the end user cannot modify or customise the firmware to suit their needs or preferences, or install alternative firmware from third-party sources. This may be acceptable or even desirable for some use cases, such as medical devices or industrial controllers, where safety and reliability are paramount, and where regulatory requirements may mandate such a level of control. But for other use cases, such as consumer electronics or devices targeted at the hobbyist market, this may be seen as a limitation or a violation of user rights.

Another often overlooked cost is that firmware verification adds a certain amount of overhead to each boot. In many cases this is negligible, but it may, for example, still not be affordable on a small battery-powered device, where every millisecond of runtime counts.
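
To make the signing side of this a little more concrete, here is a minimal sketch in Python using the cryptography library. It is purely illustrative: real secure boot schemes come with their own key formats, image layouts and signing tools, and the choice of Ed25519 plus the file names below are assumptions, not a recommendation for any particular platform.

```python
# Illustrative sketch of the signing side of a secure boot scheme.
# Real platforms dictate their own key formats, image layouts and signing
# tools; Ed25519 and the file names here are assumptions for illustration.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Generate the signing key pair once. The private key stays with the vendor;
# the public key is provisioned into the device (e.g. burned into eFuses).
signing_key = Ed25519PrivateKey.generate()
public_key_bytes = signing_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)

# Sign the firmware image and keep a detached signature alongside it.
with open("firmware.bin", "rb") as f:  # hypothetical image name
    firmware = f.read()
signature = signing_key.sign(firmware)
with open("firmware.bin.sig", "wb") as f:
    f.write(signature)
```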

Secure boot + signed & encrypted firmware

Depending on your hardware platform, an extension to the typical secure boot scheme may be available which would additionally support using encrypted firmware. With this, not only is firmware restricted to officially signed versions, but the actual firmware itself is encrypted and thereby not open to scrutiny or reverse engineering. This also makes it impossible for unscrupulous competitors to simply clone the firmware onto their own devices, as they’d lack the necessary decryption key to actually boot it.

On top of the benefits and drawbacks of a secure boot scheme, the primary additional benefit is the protection of a company’s intellectual property. This does come at increased runtime overhead, the extent of which will vary from platform to platform. For an embedded Linux device, it is not uncommon to have the boot time extended by up to tens of seconds — the larger the firmware, the longer the boot time. Edge devices using large AI/ML acceleration libraries such as CUDA will notice this in particular.
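
Conceptually, an encrypted and signed image pairs a symmetric cipher for confidentiality with the signature from before. The sketch below shows one way that could look; in practice the platform SDK’s image-packaging tools handle this step, and the algorithm choices and file names are again assumptions.

```python
# Illustrative sketch of producing an encrypted and signed firmware image.
# AES-256-GCM, Ed25519 and the file names are assumptions; platform SDKs
# normally provide dedicated image-packaging tools for this step.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

with open("firmware.bin", "rb") as f:
    firmware = f.read()

# Symmetric image key; the device holds the matching key in secure storage.
image_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(image_key).encrypt(nonce, firmware, None)

# Sign the encrypted image so the bootloader can verify it before decrypting.
signing_key = Ed25519PrivateKey.generate()
signature = signing_key.sign(nonce + ciphertext)

with open("firmware.enc", "wb") as f:
    f.write(nonce + ciphertext)
with open("firmware.enc.sig", "wb") as f:
    f.write(signature)
```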

Full storage encryption

Going a step further, there is full storage encryption.

With this, any data storage areas on the device are encrypted at rest. The primary advantage of this is the protection of sensitive data in the case of physical theft or loss of the device. Sensitive data may include items such as personally identifiable information (PII), health data, or location data. Again, depending on the field, there may be regulations making this effectively mandatory.

As with everything else, this encryption adds additional runtime overhead which has to be taken into account. For storage partitions, best practice wherever possible is to use unique encryption keys for each and every device. This does require the presence of some manner of secure key storage, such as a Secure Enclave (SE) or Trusted Platform Module (TPM).
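
As a rough illustration of the per-device key idea, the sketch below encrypts and decrypts records with a device-unique key. On a real embedded Linux device this would more likely be dm-crypt/LUKS or filesystem-level encryption keyed from the TPM or secure element rather than application code, so treat the names and structure here as assumptions.

```python
# Illustration only: protecting data at rest with a device-unique key.
# On an embedded Linux device this would more likely be dm-crypt/LUKS or
# filesystem-level encryption keyed from a TPM or secure element, rather
# than application code; names here are assumptions.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# A device-unique key generated at provisioning time and held in secure
# key storage (SE/TPM); it is never shared across devices.
device_key = AESGCM.generate_key(bit_length=256)

def store_record(plaintext: bytes) -> bytes:
    """Encrypt a record before it is written to flash."""
    nonce = os.urandom(12)
    return nonce + AESGCM(device_key).encrypt(nonce, plaintext, None)

def load_record(blob: bytes) -> bytes:
    """Decrypt a record read back from flash."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(device_key).decrypt(nonce, ciphertext, None)
```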

A common overhead across all of the above-mentioned solutions is an increase in both initial development effort and ongoing maintenance. Troubleshooting and bug-fixing become more complex at all stages of a project, and deploying quick hot-fixes may become impossible once the firmware is cryptographically signed. These are all things to consider as part of weighing up the trade-offs and business impact.

Common key vs device-unique key

Assuming a decision to use a secure boot scheme has been made, another consideration would be whether to use a common key for all devices or to use device-unique keys. There are advantages and disadvantages to both.

Common key

Using a common signing key across all devices is the comparatively easy option. This means that the firmware signing (and encryption, if used) can be integrated into your CI/CD pipeline (assuming proper care is taken to keep the signing keys safe, of course). At the end of each pipeline run, ready-to-deploy artefacts will be available.
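
As an example of what such a pipeline step might look like, the sketch below pulls a common signing key out of a CI secret and emits a detached signature next to the build artefact. The environment variable name and paths are hypothetical and would depend on your CI system.

```python
# Hypothetical CI signing step: the common signing key is injected as a
# pipeline secret and a detached signature is published with the artefact.
# The environment variable and paths are assumptions for illustration.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key_hex = os.environ["FIRMWARE_SIGNING_KEY_HEX"]  # from the CI secret store
signing_key = Ed25519PrivateKey.from_private_bytes(bytes.fromhex(key_hex))

with open("build/firmware.bin", "rb") as f:  # hypothetical build output
    firmware = f.read()
with open("build/firmware.bin.sig", "wb") as f:
    f.write(signing_key.sign(firmware))
```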

A major disadvantage, of course, is that if the signing key is somehow compromised, all of the devices are exposed as a result. This can be a significant risk, and depending on the scenario it could be a deciding factor.

Another thing to consider is that if using any GPLv3 licensed components within the firmware, the end user has the right under that licence to demand not only the source code for said component, but also the tools required to build and install a replacement version of said component, explicitly including any necessary authorisation keys. On a device locked down with a secure boot scheme, this obviously includes needing to hand over the signing key! Needless to say, if making use of GPLv3 licensed code in devices going to users outside the company, using a common signing key is not the ideal choice.

Device-unique key

Using unique signing keys for each device limits the fallout if any one key or device is compromised. It does, however, present challenges both in terms of performing the actual signing (and encrypting, if used) of the firmware and in terms of key management. Either a significant amount of up-front effort has to be expended to pre-sign individual firmware copies for all devices, or the signing can be deferred until the firmware is deployed to each device.

In the pre-signing case, the signing can be done in a restricted and reasonably isolated environment, which helps keep the signing keys safe. If integrated into the CI/CD pipeline, however, it will likely slow down the turnaround of each pipeline run, possibly to the point of unusability.

However, in the on-demand signing scenario, the signing keys have to be available to the signing system, which may put them at greater risk of compromise, especially if said system is accessible over the Internet.
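
For illustration, the pre-signing case might boil down to a loop over a per-device key store, producing one signature artefact per device. The key store shown below is hypothetical; a real deployment would keep those keys in an HSM or a similarly isolated signing environment.

```python
# Sketch of the pre-signing approach: sign one copy of the firmware per
# device using that device's unique key. The key store layout is
# hypothetical; real deployments would keep these keys in an HSM or a
# similarly isolated signing environment.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

with open("firmware.bin", "rb") as f:
    firmware = f.read()

# Hypothetical key store: {"device-0001": "<hex-encoded private key>", ...}
with open("device_keys.json") as f:
    device_keys = json.load(f)

for device_id, key_hex in device_keys.items():
    key = Ed25519PrivateKey.from_private_bytes(bytes.fromhex(key_hex))
    with open(f"firmware-{device_id}.sig", "wb") as f:
        f.write(key.sign(firmware))
```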

Amid all this complexity, it may be tempting to take shortcuts. However, doing that is arguably the worst option of them all. To reuse a famous quote — “Do, or do not.” This type of firmware security is not something that works unless everything comes together precisely as needed. Attempting to only do bits here and there is far more likely to end up with something which not only does not provide the supposed level of security, but also inflicts most of the development (and on-going) costs — initial development effort and time, ongoing boot and/or runtime overhead, as well as increased complexity throughout the entire cycle.

So what if the above solutions are all too heavyweight for the need?

In that case, the best option may be to do simple firmware signing of the upgrade packages and only verify at installation time. Using asymmetric keys, you are able to embed the verification key in the firmware itself and any remotely provided firmware upgrades can then have their signature verified using that key before being accepted and installed. This can be especially important if firmware upgrades are triggered via a “push” mechanism, rather than being a “pull” action initiated by the device user.
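
A minimal sketch of that install-time check might look like the following, assuming an Ed25519 verification key baked into the running firmware and a detached signature delivered alongside the upgrade package. Both of those are assumptions; your platform and packaging format will dictate the specifics.

```python
# Sketch of an install-time check using a verification key embedded in the
# running firmware. The key value and file names are placeholders.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Placeholder: the real public key would be baked in at build time.
EMBEDDED_PUBLIC_KEY = bytes.fromhex("00" * 32)

def upgrade_is_genuine(package_path: str, signature_path: str) -> bool:
    """Return True only if the package verifies against the embedded key."""
    with open(package_path, "rb") as f:
        package = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        Ed25519PublicKey.from_public_bytes(EMBEDDED_PUBLIC_KEY).verify(
            signature, package
        )
        return True
    except InvalidSignature:
        return False

# The installer should only proceed when upgrade_is_genuine(...) is True.
```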

Note that this approach won’t protect you against local replacement of the firmware, but for a consumer device, that may be exactly the right option. After all, Linksys sold a lot of routers precisely because the end users could swap the default firmware for OpenWRT instead.

Development vs production signing keys

Regardless of which degree of encryption you choose for your embedded device, one other aspect which needs to be managed is how development devices are handled compared to production devices. During development there is usually good justification for being more relaxed with both the key management and what firmware gets signed. Frequently there will be development versions which provide much greater access than is desired on production devices, and if such a firmware were to leak and could be installed on devices in the field, that could be a significant security risk.

The common solution to this dilemma is to use different signing keys for these two environments. By having a separate key set for development, and thereby also having dedicated, separate devices that are purely for development use, a leak of a development firmware is typically less problematic. Due to the differing keys such a firmware would be unusable on production devices. Having separate development devices also means that they are less likely to mistakenly find their way into production units and cause a leak that way.

From a developer perspective, the overhead of managing another key set is well worth it as it provides for more flexibility with regard to testing and debugging, not to mention it removes the risk of having their development units reassigned to production unexpectedly.

When using separate development and production signing keys, generally the development keys will be available locally to each developer for daily use. Access to the production keys, on the other hand, should be restricted as far as possible. Ideally those keys are only accessible for use by your CI/CD pipeline, and managed by a few select people. Special care needs to be taken to avoid accidentally exposing or leaking keys via the pipeline logs or artefacts.
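
One simple way to enforce that split, sketched below with hypothetical variable names and paths, is to have the build tooling select its signing key based on the build type, so that a developer workstation never has the production key available in the first place.

```python
# Sketch: pick the signing key based on build type so that production keys
# are only ever reachable from inside the protected CI environment. The
# variable names and paths are assumptions for illustration.
import os

build_type = os.environ.get("BUILD_TYPE", "development")

if build_type == "production":
    # Injected only into protected CI jobs via the secret store.
    key_hex = os.environ["PROD_FIRMWARE_SIGNING_KEY_HEX"]
else:
    # Development key, distributed to the team for local use.
    with open(os.path.expanduser("~/.keys/dev_signing_key.hex")) as f:
        key_hex = f.read().strip()
```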

It should be noted that using device-unique signing keys is effectively a superset of this approach: the same effect is easily achieved by designating certain devices as development units and sharing the associated keys within the development team. Reassigning a device from development to production, while technically possible in this scenario, should still not be done, as the associated keys no longer provide the expected level of security after having been shared widely for development purposes.

Hopefully this has given you a good overview of the key options and choices to be made in regard to device and firmware encryption.

Author’s meta commentary/disclaimer: I used this blog post as an experiment to try out GPT-4, in the form of Bing’s “Compose” mode. Alas, while very interesting and impressive in its own right, its limitations were too great to be truly useful to me this time. Between not having the ability to imitate my writing style, sometimes veering off on tangents more than even I do, getting details wrong, and having its generation cut off a third of the way through the requested scope, it just wasn’t up to the task. As a result, this post is at best loosely inspired by what it was able to generate across multiple attempts, and only incorporates 3 of the generated sentences, lightly modified (can you spot them?).

Originally published at https://dius.com.au on July 17, 2023.
