Key Takeaways
- OTA DFU is no longer optional for most connected products. The EU Cyber Resilience Act requires manufacturers to deliver security updates throughout the product lifecycle – including, where applicable, automatic updates. For BLE devices without persistent internet connectivity, OTA DFU via a companion app is the most practical path to compliance.
- A bad OTA update is worse than no update at all. A firmware update that fails mid-transfer, installs corrupted code, or allows unauthorized images can brick devices at scale – destroying customer trust and triggering costly field recalls.
- BLE throughput for DFU has improved dramatically – from painfully slow on BLE 4.0 to practical speeds on BLE 5.0+ with 2M PHY, Data Length Extension, and optimized MTU. But achieving those speeds requires tuning on both sides of the link.
- Security means signing, not just encryption. Image signing (ECDSA-P256 or Ed25519) ensures authenticity – the firmware came from you and hasn’t been tampered with. Encryption protects confidentiality. Both matter, but signing is mandatory; encryption is situational. For higher assurance, Arm TrustZone provides hardware-isolated key storage and verification.
- OTA DFU is a system problem, not a chip problem. From cloud-based fleet management and staged rollouts to chain updates across wired buses (I2C, CAN, UART), a real-world OTA architecture extends far beyond the BLE link.
Introduction
Every connected product ships with bugs. This is not a failure of engineering discipline – it’s a reality of software development. What separates a professional product from a liability is whether those bugs can be fixed after the device leaves the factory.
Over-the-air device firmware update (OTA DFU) is the mechanism that makes this possible. It allows new firmware to be delivered to a deployed BLE device wirelessly, without physical access, without returning the product, and without asking the end user to connect a cable and run a desktop tool. For consumer devices – wearables, trackers, audio products – OTA DFU is the difference between a product that improves over time and one that becomes obsolete the day it ships.
But OTA DFU is also one of the most dangerous features in an embedded product. A firmware update that fails mid-transfer can leave a device in an unbootable state. An unsigned update mechanism is an open door for malicious firmware injection. A poorly optimized transfer that takes 30 minutes over BLE will be abandoned by users before it completes – leaving devices stranded on old, vulnerable firmware.
At needCode, OTA DFU is one of our core deliverables. We build both sides: the bootloader and DFU service on the embedded device (custom bootloaders, Nordic’s secure DFU, MCUboot – depending on the project), and the companion app that drives the update on iOS, Android, Flutter, Windows, and Linux. This brief covers what we’ve learned about building OTA systems that are secure, reliable, and fast enough that users actually complete them.
How BLE OTA DFU Works: The Architecture
A BLE OTA DFU system has three components that must work together flawlessly:
The bootloader.
This is a small, hardened piece of code that runs before your application. Its job is to validate the firmware image in flash, decide whether to boot it, and – if a new image is available – verify, install, and activate it. The bootloader is the root of trust for your entire device. If the bootloader is compromised or buggy, no amount of application-level security can save you. The bootloader should be write-protected against OTA overwrite (using hardware flash protection) and should be as small and well-tested as possible.
The DFU transport (GATT service).
The firmware image is transferred from the host (phone, PC, gateway) to the device over BLE using a dedicated GATT service. The two dominant protocols in the Nordic ecosystem are Nordic’s proprietary DFU protocol (using DFU Control Point and DFU Packet characteristics) and MCUmgr/SMP (the Zephyr-native Simple Management Protocol, which is transport-agnostic and works over BLE, UART, and UDP).
Silicon Labs uses its own OTA Bootloader GATT service with the Gecko Bootloader, and Espressif provides a DFU mechanism through its ESP-IDF framework. Regardless of the protocol, the transfer follows the same pattern: the host sends metadata (image size, version, hash); the device validates and enters DFU mode; the host streams firmware chunks via Write Without Response; and the device writes them to a staging slot in flash.
The companion app (DFU initiator).
This is the application running on the phone, tablet, or PC that orchestrates the entire update process. It discovers the device, negotiates BLE connection parameters (MTU, DLE, PHY), authenticates (if required), streams the firmware image, monitors progress, handles errors and retries, and confirms the update.
The companion app is where most DFU failures actually occur – not on the embedded side, but in the interaction between the mobile BLE stack and the device. Platform-specific BLE behavior on iOS, Android, Windows, and Linux means the same DFU protocol can behave differently depending on the host. This is why we build and test the companion app as an integral part of every DFU project – not as an afterthought.
How BLE Spec Evolution Improved OTA DFU
OTA DFU over BLE has gone from barely viable to production-ready, largely thanks to spec improvements that increased throughput and reliability:
BLE 4.0/4.1 – Painfully Slow.
The original BLE specification limited the Link Layer data payload to 27 bytes per packet. With ATT overhead, this left only 20 bytes of application data per packet. Combined with the 1M PHY and the 150 μs inter-frame spacing, throughput was severely constrained. A 256 KB firmware image could take 10–15 minutes to transfer – an eternity for a consumer waiting for their fitness tracker to update. Many early BLE products shipped without OTA DFU entirely, because the transfer time made it impractical.
BLE 4.2 – Data Length Extension (DLE).
BLE 4.2 increased the maximum Link Layer payload from 27 bytes to 251 bytes – nearly a 10x increase per packet. This was the single biggest throughput improvement for OTA DFU. With DLE enabled, fewer packets are needed per connection event, less time is wasted on packet overhead and inter-frame spacing, and the radio can be turned off sooner, saving power. In practice, DLE alone improved DFU throughput by 2–5x depending on the implementation. BLE 4.2 also introduced LE Secure Connections, meaning the DFU transport could now be encrypted with strong ECDH-based key exchange – critical for securing the update channel.
BLE 5.0 – 2M PHY.
The LE 2M PHY doubled the raw radio data rate from 1 Mbps to 2 Mbps. When combined with DLE, throughput for DFU transfers improved by approximately 80–90% over BLE 4.2 with DLE alone. A 256 KB image that took 90 seconds on 1M PHY with default settings could complete in roughly 45 seconds on 2M PHY with optimized MTU and connection interval. BLE 5.0 also introduced the Coded PHY for long-range communication – useful for updating devices that are physically distant (e.g., sensors mounted in hard-to-reach locations), though at significantly reduced throughput.
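A rough sense of why these spec features matter can be had from a back-of-envelope model. This is our own simplification – back-to-back data packets, each answered by an empty acknowledgement, with 150 μs inter-frame spacing and no connection-event scheduling overhead – so the numbers are theoretical upper bounds, not measured speeds:

```python
# Back-of-envelope BLE throughput model: continuous data packets,
# each answered by an empty acknowledgement, separated by T_IFS.
# Real stacks add connection-event scheduling, so these are upper
# bounds, not measured speeds.

T_IFS_US = 150  # inter-frame spacing, microseconds

def packet_air_time_us(payload_bytes: int, phy_mbps: int) -> float:
    # Framing: preamble (1 B on 1M PHY, 2 B on 2M) + access address (4 B)
    # + LL header (2 B) + CRC (3 B), plus the payload itself.
    preamble = 1 if phy_mbps == 1 else 2
    total = preamble + 4 + 2 + 3 + payload_bytes
    return total * 8 / phy_mbps

def max_throughput_kbps(ll_payload: int, app_bytes: int, phy_mbps: int) -> float:
    cycle_us = (packet_air_time_us(ll_payload, phy_mbps) + T_IFS_US
                + packet_air_time_us(0, phy_mbps)  # empty acknowledgement
                + T_IFS_US)
    return app_bytes * 8 / cycle_us * 1000  # kilobits per second

ble40 = max_throughput_kbps(27, 20, 1)    # no DLE, 1M PHY
ble42 = max_throughput_kbps(251, 244, 1)  # DLE, 1M PHY
ble50 = max_throughput_kbps(251, 244, 2)  # DLE, 2M PHY
```

Even this idealized model reproduces the pattern described above: roughly 240 kbps for the original spec, near 790 kbps with DLE on 1M PHY, and about 1.4 Mbps with DLE on 2M PHY.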
BLE 5.3 – Connection Subrating.
Connection subrating allows dynamic adjustment of connection parameters without a full renegotiation. During DFU, you want a short connection interval (e.g., 7.5–15 ms) for maximum throughput. During normal operation, you want a long interval for battery life. Connection subrating makes this transition smoother, reducing the overhead of switching between “DFU mode” and “normal mode.”
BLE 5.4 – Encrypted Advertising Data.
While not directly a DFU feature, EAD enables secure advertisement of DFU availability – a device can broadcast that an update is pending without revealing this to unauthorized observers.
Securing OTA DFU: Signing, Encryption, and Rollback Protection
An unsecured OTA mechanism is one of the most dangerous vulnerabilities a connected product can have. If an attacker can push arbitrary firmware to your device, they own it – completely and permanently. Here’s what a production OTA system must include:
Image signing (mandatory).
Every firmware image must be cryptographically signed by the manufacturer before distribution. The device’s bootloader holds the corresponding public key (burned into flash or secure storage during manufacturing) and verifies the signature before accepting any image. ECDSA with the P-256 curve is the standard choice for constrained devices – it offers strong security with small key and signature sizes. Ed25519 is an increasingly popular alternative with slightly better performance characteristics. The critical rule: never ship a production device with a debug signing key. This is one of the most common and most devastating mistakes we see – a device that accepts any image signed with a well-known debug key is effectively an open door.
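The verification flow can be sketched in a few lines. This is illustrative only: `hashlib` gives integrity, not authenticity – on a real device the authenticity check is an ECDSA-P256 or Ed25519 signature verification over the digest, performed by the bootloader’s crypto library, which is stubbed here as a boolean:

```python
import hashlib

# Illustrative sketch only: the digest binds the transferred bytes,
# but authenticity comes from verifying a signature over that digest
# with the vendor's public key. That step (done by the bootloader's
# crypto library on a real device) is stubbed as `signature_valid`.

def image_digest(image: bytes) -> bytes:
    return hashlib.sha256(image).digest()

def accept_image(image: bytes, expected_digest: bytes,
                 signature_valid: bool) -> bool:
    # Both checks must pass: a tampered image fails the digest check,
    # an unauthorized image fails the signature check.
    return image_digest(image) == expected_digest and signature_valid
```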
Image encryption (situational).
Signing ensures authenticity – the image came from you and hasn’t been modified. Encryption ensures confidentiality – no one can read the firmware binary during or after transfer. Encryption is important if your firmware contains proprietary algorithms, trade secrets, or intellectual property you want to protect against reverse engineering. It adds complexity (key management, decryption on the embedded side) and is not necessary for all products. But for consumer electronics, wearables, and audio products where firmware contains competitive differentiators, encryption is worth the investment.
Version monotonicity (downgrade prevention).
The bootloader must enforce that new firmware has a version number strictly greater than the currently installed image. Without this, an attacker who obtains an older firmware version with a known vulnerability could “update” the device back to the vulnerable version – a downgrade attack. This check is simple to implement but frequently omitted.
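The check itself is nearly a one-liner, which makes its frequent omission all the more striking. A minimal sketch, assuming versions are encoded as (major, minor, patch) tuples in the image header:

```python
def allow_update(installed: tuple[int, int, int],
                 candidate: tuple[int, int, int]) -> bool:
    # Strictly-greater comparison on (major, minor, patch) rejects
    # both replays of the current image and downgrades to older,
    # potentially vulnerable versions.
    return candidate > installed
```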
Rollback protection (dual-bank / revert mechanism).
The most important reliability feature for field deployments. In a dual-bank (A/B slot) architecture, the new image is written to an inactive slot. The bootloader swaps to the new image on the next boot. If the new firmware fails to confirm itself (by calling a confirmation function after successful boot), the bootloader automatically reverts to the previous known-good image on the next reset. This self-healing loop ensures that a bad update never permanently bricks the device. MCUboot and Nordic’s secure DFU both support this pattern. If your bootloader doesn’t have automatic rollback, you are one bad firmware image away from a field recall.
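The confirm-or-revert decision reduces to a small state machine. A minimal sketch, with hypothetical boolean flags standing in for the slot metadata a bootloader like MCUboot keeps in flash:

```python
# Minimal sketch of the dual-bank boot decision. The flags stand in
# for slot metadata the bootloader keeps in flash; the application
# sets the 'confirmed' flag only after it has booted successfully.

def next_boot_slot(update_staged: bool, update_tested: bool,
                   update_confirmed: bool) -> str:
    if update_staged and not update_tested:
        return "new"       # first boot after DFU: test the new image once
    if update_tested and not update_confirmed:
        return "old"       # new image never confirmed itself: revert
    return "current"       # steady state: boot the confirmed image
```

The "old" branch is the self-healing loop: a new image that crashes before confirming itself is automatically abandoned on the next reset.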
Battery check before accepting OTA.
A firmware update that begins at 5% battery and loses power mid-write is a bricking event. The DFU service should check battery level before accepting an update and reject it if the level is below a safe threshold (typically 20% or higher, depending on the flash write energy budget). This check should happen on the device side – don’t rely on the companion app to enforce it.
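A sketch of the device-side gate, with an assumed 20% threshold – the real number should come from your flash-write energy budget:

```python
# Device-side battery gate, sketched with an assumed 20% threshold.
# The threshold belongs on the device, derived from the flash-write
# energy budget; never rely on the companion app to enforce it.

MIN_BATTERY_PCT = 20

def dfu_request_allowed(battery_pct: int, charging: bool) -> bool:
    # A charging device may proceed below the threshold, since the
    # external supply will outlast the flash writes.
    return charging or battery_pct >= MIN_BATTERY_PCT
```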
Bootloader write protection.
The bootloader itself must be protected against OTA overwrite. If an attacker can replace your bootloader, they can replace your root of trust. Use hardware flash protection (MPU regions, BPROT, or equivalent) to make the bootloader region read-only after programming. Some architectures allow bootloader updates through a separate, more restricted path – but this must be designed with extreme care.
Hardware Root of Trust: Arm TrustZone and Secure Bootloader Chains
For products built on Arm Cortex-M33-based SoCs – including Nordic’s nRF5340 and nRF54 series, TI’s CC26x4, and STM32 families – the bootloader security story goes deeper than software-only image verification. These processors include Arm TrustZone, a hardware isolation technology that partitions the processor into a Secure Processing Environment (SPE) and a Non-Secure Processing Environment (NSPE), enforced by hardware – not just memory protection flags that software could bypass.
In a TrustZone-enabled DFU architecture, the bootloader and cryptographic verification logic run in the secure world. Signing keys, key material, and the signature verification code are hardware-isolated from the application. Even if the application firmware is completely compromised, an attacker cannot access the secure bootloader’s keys or tamper with the image verification process. This is a fundamentally stronger guarantee than a software-only bootloader running at the same privilege level as the application.
The practical implementation typically uses a two-stage bootloader chain. The first stage is an immutable bootloader – burned into flash during manufacturing and write-protected. It serves as the hardware root of trust. Its sole job is to verify and boot the second stage. The second stage is an upgradable bootloader (typically MCUboot), which verifies and boots the application image. This two-stage chain allows the upgradable bootloader itself to be updated via OTA – a critical capability for long-lifecycle products where the bootloader may need security patches – while keeping the immutable root of trust permanently protected.
Nordic’s nRF Connect SDK supports this architecture through the nRF Secure Immutable Bootloader (NSIB) as the first stage and MCUboot as the upgradable second stage, with Trusted Firmware-M (TF-M) providing PSA-compliant secure services (key storage, cryptographic operations, attestation) in the secure partition. TI’s SimpleLink SDK offers a similar architecture through TF-M with MCUboot on CC26x4 devices.
For consumer products that will face EU Cyber Resilience Act conformity assessment, a TrustZone-based secure boot chain is a strong answer to the CRA’s “secure by design” requirement. It demonstrates hardware-enforced security that goes beyond what software-only solutions can claim.
Not every product needs TrustZone – a Cortex-M4-based nRF52840 with MCUboot and proper ECDSA signing is already a solid architecture for many applications. But for products with higher security requirements, longer lifecycles, or regulatory scrutiny, TrustZone-based secure boot is the gold standard.
Fleet Management: From Cloud to Device
Building a secure bootloader and a fast BLE transport is necessary – but it’s not sufficient. For a product with thousands or millions of devices in the field, you also need a firmware distribution pipeline that gets the right image to the right device at the right time.
The full OTA pipeline looks like this: your CI/CD system builds the firmware image, the image is signed (ideally in an HSM-backed signing service, not on a developer laptop), the signed image is uploaded to a cloud distribution platform, the companion app (or gateway) checks for available updates, downloads the image, and transfers it to the device over BLE.
Staged rollouts.
Never push new firmware to your entire fleet simultaneously. Roll out to a small percentage first (1–5%), monitor for crashes, increased error rates, or unexpected behavior, then gradually increase the rollout percentage. If something goes wrong, you’ve affected hundreds of devices, not hundreds of thousands. This requires your distribution backend to support targeting – by device group, firmware version, hardware revision, or geographic region. The stakes scale with fleet size: when we worked on a fire protection system for a manufacturer with over 400,000 installed devices – modernizing a product that had been on the market for nearly 20 years – the OTA update architecture was a central design decision, because a failed update to a safety-critical device in someone’s kitchen is a fundamentally different problem than a failed update to a fitness tracker.
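One common targeting scheme – an assumption on our part, not a prescription – hashes each device ID into a stable bucket, so widening the rollout percentage only ever adds devices and never flip-flops a device in and out of the cohort:

```python
import hashlib

# Deterministic rollout bucketing (illustrative scheme): each device
# ID hashes to a stable bucket in 0-99. Raising the rollout
# percentage only adds devices; a device's bucket never changes.

def rollout_bucket(device_id: str) -> int:
    digest = hashlib.sha256(device_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % 100

def update_offered(device_id: str, rollout_pct: int) -> bool:
    return rollout_bucket(device_id) < rollout_pct
```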
Version and hardware compatibility checks.
Not every firmware image is compatible with every hardware revision. Your distribution system must match firmware images to device capabilities – different images for different hardware variants, and validation that the target device meets the minimum hardware requirements before the update is offered.
Update status reporting.
The companion app should report update outcomes back to your backend: update started, progress percentage, update completed successfully, update failed (with error code), update reverted by bootloader. This telemetry is essential for monitoring rollout health and is directly relevant to CRA compliance – you need to demonstrate that security patches are actually reaching devices, not just being distributed.
Offline and delay-tolerant distribution.
BLE devices don’t have a persistent internet connection. The update is delivered through the companion app on the user’s phone. This means you’re dependent on the user opening the app, keeping the phone in range, and completing the transfer. Your distribution system should handle the case where an update is available but the user hasn’t connected in weeks – including nudging the user (via push notification) when a critical security patch is pending.
Several commercial platforms handle the cloud-to-device pipeline – Memfault, Golioth, and Nordic’s nRF Cloud among them. Alternatively, you can build a custom distribution backend. Either way, the firmware signing and the BLE transfer layer are custom engineering that depends on your bootloader, your GATT protocol, and your companion app – which is where needCode’s work typically fits.
Chain Updates: Updating Multiple Devices Over a Wired Bus
Many real-world products are not a single BLE SoC. They’re a system – a main processor with BLE connectivity, connected to one or more secondary processors or co-processors over I2C, SPI, UART, or CAN bus. Think of a wearable with a BLE main MCU and a separate sensor hub. Or an industrial controller with a BLE gateway MCU and multiple CAN-connected field devices. Or an audio product with a BLE SoC and a dedicated DSP or codec chip.
In these architectures, only the main MCU has BLE connectivity. The secondary devices have no direct wireless interface. But they still need firmware updates.
This is what we call a chain update (or multi-image update): the companion app transfers the firmware image for the secondary device to the main MCU over BLE, and the main MCU then forwards it to the secondary device over the wired bus.
Chain updates add significant complexity to every layer of the DFU system:
Image routing.
The companion app (or cloud backend) must know which images are destined for which devices in the chain. A single update session might include a new firmware for the main MCU, a new firmware for the I2C-connected sensor hub, and a new configuration for the CAN-connected motor controller. Each image must be tagged with its target, and the main MCU must route each image to the correct destination.
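A sketch of what such routing metadata might look like – the manifest fields here are illustrative, not a standard format:

```python
# Hypothetical chain-update manifest: each image is tagged with its
# target so the main MCU can route it over the right bus. Field
# names are illustrative, not a standard format.

manifest = {
    "images": [
        {"target": "main-mcu",   "bus": "internal", "version": "2.1.0"},
        {"target": "sensor-hub", "bus": "i2c",      "version": "1.4.2"},
        {"target": "motor-ctrl", "bus": "can",      "version": "3.0.1"},
    ]
}

def route(image_target: str) -> str:
    # The main MCU uses this lookup to forward each received image
    # to the correct wired bus (or install it locally).
    for entry in manifest["images"]:
        if entry["target"] == image_target:
            return entry["bus"]
    raise KeyError(f"unknown target: {image_target}")
```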
Protocol bridging.
The main MCU acts as a protocol bridge – receiving image chunks over BLE GATT and forwarding them over I2C, UART, SPI, or CAN to the secondary device. Each of these wired protocols has different packet sizes, flow control mechanisms, and timing constraints. The bridge must buffer data, handle backpressure from the secondary device, and manage errors on both the BLE and wired sides independently.
Independent verification.
Each device in the chain should independently verify the image it receives. The secondary device’s bootloader should have its own signing key and its own signature verification – don’t rely solely on the main MCU’s verification. If the main MCU is compromised, it should not be able to push unsigned firmware to the secondary devices.
Ordering and dependencies.
In some systems, the update order matters. If the main MCU firmware update changes the wired bus protocol, the secondary devices must be updated first (or the system becomes unable to communicate). If the secondary device update requires new functionality in the main MCU, the main MCU must be updated first. Your DFU system must support explicit ordering and dependency management, or you risk leaving the system in an inconsistent state.
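Dependency-aware ordering is a textbook topological sort. A sketch using Python’s standard library, where each image lists the images that must be installed before it:

```python
from graphlib import TopologicalSorter

# Explicit update ordering: map each image to the set of images that
# must be installed before it. A contradictory set of constraints
# raises graphlib.CycleError, which is exactly the failure you want
# to catch in CI, before anything ships.

def install_order(depends_on: dict[str, set[str]]) -> list[str]:
    return list(TopologicalSorter(depends_on).static_order())
```

For example, if the main MCU’s new firmware changes the wired bus protocol, declaring that "main-mcu" depends on "sensor-hub" forces the sensor hub to be updated first.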
Failure and rollback across the chain.
If the main MCU update succeeds but the secondary device update fails, the system may be in an incompatible state. A robust chain update architecture includes coordinated rollback – if any image in the chain fails to install or verify, the entire chain reverts to its previous known-good state. This is significantly harder to implement than single-device rollback, but it’s essential for system-level reliability.
Chain updates are common in consumer products that most people think of as “simple” – a wireless earbud case that updates both the case MCU and the earbuds themselves, a smart watch that updates both the application processor and a co-processor, or a fitness tracker with a separate sensor front-end. Getting this right requires understanding both the BLE transport and the wired bus protocols – and designing the DFU architecture to treat the entire system as a single update unit.
Optimizing DFU Speed: Why It Matters and How to Do It
For consumer devices, DFU speed is a product experience issue. A user who starts an update on their fitness tracker expects it to complete in minutes, not tens of minutes. If the progress bar stalls or the transfer takes too long, they’ll close the app, walk out of range, or simply give up – leaving the device on old firmware. Every abandoned update is a device that doesn’t get its security patch.
Here’s how to optimize DFU throughput on both sides of the link:
Negotiate the largest possible MTU.
The ATT MTU determines how much data fits in a single GATT write. The default is 23 bytes (20 bytes of payload after the 3-byte ATT header). Most stacks let you negotiate a far larger MTU – but not all values are equally efficient, because each write is fragmented into Link Layer packets of at most 251 bytes (with DLE), including the 4-byte L2CAP header. An MTU of 498 is often optimal: it fills exactly two full Link Layer packets (244 + 251 bytes of application data) with no wasted space. An MTU of 512 spills into a third LL packet carrying only 14 bytes – nearly a full packet wasted. Always negotiate the largest MTU supported by both devices, and choose values that align with LL packet boundaries.
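The fragmentation arithmetic is easy to check directly. A sketch, assuming the framing overheads named above (3-byte ATT header, 4-byte L2CAP header, 251-byte maximum LL payload with DLE):

```python
# Packet-alignment arithmetic for a maximally sized Write Without
# Response. Framing assumptions: the ATT MTU caps the ATT PDU
# (3-byte header + value), the L2CAP header adds 4 bytes, and a DLE
# Link Layer packet carries at most 251 bytes.
ATT_HDR, L2CAP_HDR, LL_MAX = 3, 4, 251

def ll_packets_for_write(att_mtu: int) -> tuple[int, int]:
    """Return (number of LL packets, bytes in the last packet)."""
    total = att_mtu + L2CAP_HDR   # ATT PDU already includes its header
    full, last = divmod(total, LL_MAX)
    return (full + (1 if last else 0), last or LL_MAX)
```

Under these assumptions an MTU of 498 lands on exactly two full packets, while 512 leaks a nearly empty third packet – the kind of alignment detail worth verifying with a sniffer on your actual stack.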
Enable Data Length Extension.
DLE should be enabled on both the peripheral and central sides. On some platforms (especially older Android devices), DLE is not enabled by default – the companion app may need to explicitly request it after connection. Verify DLE is active by checking the negotiated data length in your BLE sniffer traces.
Use 2M PHY when available.
Request the LE 2M PHY via the PHY Update Procedure after connection. Both devices must support it – LE 2M is optional in the Bluetooth specification, so not all chipsets that claim “Bluetooth 5” support it. On the mobile side, iOS has supported 2M PHY since iPhone 8, and Android support varies by device and manufacturer. Your companion app should request 2M PHY, fall back gracefully to 1M if it’s not available, and report the negotiated PHY to your analytics.
Use Write Without Response for data transfer.
Write Without Response (Write Command) allows the host to stream packets without waiting for an acknowledgement after each write. This is essential for DFU throughput – Write With Response cuts throughput by 50% or more because each packet requires a round-trip confirmation. The DFU protocol should use Write Without Response for the image data stream and Write With Response only for control commands (start, confirm, reset).
Tune connection interval.
A shorter connection interval (7.5–15 ms) allows more connection events per second and therefore more packets. During DFU, request the shortest interval the peripheral supports. After DFU, switch back to a longer interval (e.g., 100–500 ms) for battery life. On iOS, the connection interval is managed by the OS and cannot be directly set by the app – but the peripheral can request a specific range via a connection parameter update, and iOS generally honors reasonable requests.
Implement chunked transfer with resume.
BLE connections drop. Phones go out of range. Apps get suspended by the OS. A production DFU system must support resumable transfers – tracking the last successfully written offset and continuing from that point after reconnection. Without resume support, any interruption means starting over from byte zero. For a 256 KB image at 10 KB/s, that’s 25 seconds wasted every time the transfer restarts.
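Host-side resume logic is simple once the protocol exposes the device’s confirmed write position. A sketch, assuming a hypothetical "report offset" command the companion app queries on reconnect:

```python
# Host-side resume sketch. Assumes a hypothetical DFU command that
# returns the device's last safely written offset; on reconnect the
# app queries it and streams only the remaining chunks.

def next_chunks(image: bytes, device_offset: int, chunk_size: int):
    """Yield (offset, chunk) pairs starting at the confirmed offset."""
    for off in range(device_offset, len(image), chunk_size):
        yield off, image[off:off + chunk_size]
```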
The Mobile App: Where Most DFU Failures Actually Happen
The embedded side of DFU gets most of the engineering attention. The mobile side causes most of the failures. Here’s why:
iOS background execution limits.
iOS aggressively suspends backgrounded apps. If the user switches to another app during a DFU transfer, iOS may suspend the BLE connection within seconds. CoreBluetooth provides a background execution mode for BLE, but it has strict limitations – your app must declare specific background modes, and iOS may still throttle connection events. Design your DFU to handle suspension gracefully: save transfer state, detect reconnection, and resume.
Android fragmentation.
BLE behavior on Android varies dramatically across manufacturers, chipsets, and OS versions. Samsung, Google Pixel, Xiaomi, and Huawei devices each have BLE stack quirks that can affect MTU negotiation, DLE support, connection stability, and PHY selection. We test DFU on a matrix of at least 8–10 Android devices across manufacturers before declaring a DFU implementation production-ready.
Flutter abstraction gaps.
Flutter BLE plugins provide a convenient cross-platform abstraction, but DFU pushes the boundaries of what these abstractions can handle. MTU negotiation, PHY selection, Write Without Response queuing behavior, and background execution all differ between the iOS and Android native stacks beneath the plugin. For DFU-critical products, we often implement the DFU transport in native code (Swift/Kotlin) and bridge it to Flutter – accepting the added complexity in exchange for reliable, platform-optimized behavior.
Windows and Linux edge cases.
Windows BLE stack behavior for large data transfers (DLE, MTU negotiation) has improved significantly in recent versions but still has edge cases around connection stability during long transfers. On Linux, BlueZ handles DFU well when properly configured, but the default settings may not request DLE or negotiate optimal MTU – requiring explicit configuration in your initiator application.
Common OTA DFU Mistakes We See in the Field
Shipping with debug signing keys.
The most dangerous mistake. If your production device accepts images signed with the SDK’s default debug key, anyone with the SDK can push arbitrary firmware to your device. Generate your own key pair, store the private key in a hardware security module (HSM) or at minimum a secure, access-controlled location, and burn only the public key into the device during manufacturing.
No rollback mechanism.
A single-bank bootloader that overwrites the running firmware in place. If the new image is corrupted or fails to boot, the device is bricked. Always use a dual-bank (A/B) or swap-based architecture with automatic revert. The flash cost of the second slot is insignificant compared to the cost of a field recall.
Ignoring battery level.
Starting a DFU at low battery and losing power during a flash write. The device is now in an undefined state with a partially written image. Check battery before accepting the update. Reject if below threshold.
Not optimizing throughput.
Shipping with default BLE parameters (23-byte MTU, no DLE, 1M PHY, Write With Response) because “it works.” It works – at 2 KB/s. For a 256 KB image, that’s over two minutes. For a 512 KB image, over four minutes. Users won’t wait. Optimize MTU, DLE, PHY, and use Write Without Response. The difference between a 30-second update and a 5-minute update is the difference between a feature users love and one they avoid.
No transfer resume.
Starting over from byte zero every time the connection drops. On a busy commute, in a crowded office, or when the user’s phone switches between Wi-Fi and cellular, BLE connections drop frequently. Resume from last confirmed offset. Save progress to flash if necessary.
Testing only on one phone.
A DFU that works perfectly on an iPhone 15 and fails silently on a Samsung Galaxy A14. Mobile BLE behavior varies enormously. Test on a representative matrix of devices and OS versions – especially the inexpensive Android phones that your actual users carry.
The Regulatory Reality: OTA DFU Is Now Mandatory
The EU Cyber Resilience Act makes OTA update capability a practical necessity for most connected products sold in the EU:
Automatic security updates. The CRA’s Annex explicitly requires that vulnerabilities be addressed through security updates – including automatic updates installed within an appropriate timeframe. For BLE devices operating without a persistent internet connection, OTA DFU via a companion app is the only practical mechanism to deliver these updates. Manual, cable-based updates at scale are not feasible.
Lifecycle security obligation. Manufacturers must provide security updates for the entire declared support period of the product. If your wearable has a three-year expected lifecycle, you need to be able to patch critical vulnerabilities for all three years. This means your OTA infrastructure – bootloader, DFU protocol, companion app, firmware distribution backend – must be designed for long-term operation, not just the initial product launch.
Secure-by-default configuration. The CRA requires that security fixes not be lost when a device is factory reset. If your device reverts to an old firmware version on reset, it must immediately check for and apply available updates. This has direct implications for your bootloader design and your DFU trigger logic.
Vulnerability reporting. From September 2026, manufacturers must report actively exploited vulnerabilities. The ability to quickly develop, sign, distribute, and install a security patch – the entire OTA pipeline – becomes a compliance timeline, not just an engineering convenience.
Products that ship without a robust OTA mechanism may face significant challenges during CRA conformity assessment – and risk delays or difficulties in EU market access.
Making the Right DFU Decisions: A Practical Checklist
- Sign every image. ECDSA-P256 or Ed25519. Generate your own key pair. Protect the private key. Burn the public key into the bootloader during manufacturing. Never ship with a debug key.
- Use dual-bank with automatic rollback. If the new firmware doesn’t confirm itself after boot, the bootloader reverts to the previous known-good image. Test this path – deliberately flash a broken image and verify the device recovers.
- Enforce version monotonicity. The bootloader rejects any image with a version number less than or equal to the currently installed version. This prevents downgrade attacks.
- Check battery before updating. Reject DFU if battery is below a safe threshold. This single check prevents the most catastrophic failure mode.
- Optimize BLE parameters for throughput. Negotiate the maximum MTU (498 is often optimal, filling exactly two Link Layer packets), enable DLE, request 2M PHY, use Write Without Response for data transfer, and use a short connection interval (7.5–15 ms) during DFU.
- Implement transfer resume. Track the last confirmed offset. Resume from that point after reconnection. Don’t start over.
- Test on real phones. At least 3–4 iOS devices across generations and 5–6 Android devices across manufacturers. Include at least one budget Android phone.
- Encrypt the image if your firmware contains IP. Signing is mandatory. Encryption is an additional layer – add it if your firmware contains proprietary algorithms or trade secrets worth protecting.
- Plan for the full lifecycle. Your DFU infrastructure must work for years after launch – firmware signing pipeline, distribution mechanism, companion app maintenance. Design it for the long term from day one.
What BLE Version Are You On?
For devices still on BLE 4.0/4.1 with a 27-byte payload and no DLE, your DFU transfers are 5–10x slower than they could be – and your users are suffering for it.
When using BLE 4.2+, make sure DLE is actually enabled and negotiated (many implementations have it available but don’t request it).
If you’re on BLE 5.0+, verify that 2M PHY is being used during DFU transfers – not just supported by the chipset.
And if your device doesn’t have OTA DFU at all – the EU Cyber Resilience Act timeline is approaching. Main obligations apply from December 2027, with vulnerability reporting required from September 2026. Now is the time to add OTA capability, not after your first critical CVE.
When to Call needCode
DFU architecture design. You’re building a new product and need to choose the right bootloader (MCUboot, Nordic secure DFU, or custom), signing strategy, slot layout, and companion app approach. We design the full OTA pipeline – device, app, and distribution.
DFU hardening. Your product has OTA but it’s unsecured (debug keys, no rollback, no downgrade prevention) or unreliable (no resume, abandoned transfers, bricked devices). We audit and fix the full chain.
Companion app development. You need the mobile or desktop app that drives the update – optimized for throughput, resilient to disconnections, tested across iOS and Android devices. This is a core needCode deliverable.
Chain update implementation. Your system includes multiple MCUs connected over I2C, UART, SPI, or CAN, and only one has BLE. We design the multi-image update architecture with routing, independent verification, and coordinated rollback.
CRA readiness. You need to demonstrate secure update capability for EU Cyber Resilience Act conformity assessment. We deliver the OTA implementation and the supporting technical documentation.
Bartek Kling – bartek.kling@needcode.io

