
Research Topics Survey


FIRMWARE

2024

Firmware update vulnerabilities

https://www.usenix.org/conference/usenixsecurity24/presentation/wu-yuhao

Problem: Firmware update verification procedures can themselves introduce new vulnerabilities

Solution: ChkUp

  • Resolves firmware update execution paths via cross-language, inter-process control-flow analysis combined with program slicing (slicing is sketched below)
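ChkUp's implementation is not reproduced in these notes, but one of its two building blocks, backward program slicing, can be sketched in a few lines of Python: starting from a verification check, keep only the statements the check depends on. The statement table and variable names below are invented for illustration; ChkUp computes such slices across languages and processes, which this toy single-procedure example does not attempt.

# Minimal backward-slicing sketch (illustrative only, not ChkUp's implementation).
# Each statement is mapped to the variables it defines and the variables it uses.
stmts = {
    1: {"def": {"img"}, "use": set()},           # img = read_update_file()
    2: {"def": {"sig"}, "use": {"img"}},         # sig = extract_signature(img)
    3: {"def": {"ok"},  "use": {"img", "sig"}},  # ok  = verify(img, sig)
    4: {"def": {"led"}, "use": set()},           # led = blink()  (irrelevant to the check)
    5: {"def": set(),   "use": {"ok"}},          # if ok: flash(img)   <- slicing criterion
}

def backward_slice(criterion: int) -> set[int]:
    """Return the statements that the slicing criterion depends on."""
    wanted = set(stmts[criterion]["use"])    # variables that still need a definition
    sliced = {criterion}
    for sid in sorted(stmts, reverse=True):
        if sid >= criterion:
            continue
        if stmts[sid]["def"] & wanted:       # this statement defines a needed variable
            sliced.add(sid)
            wanted |= stmts[sid]["use"]
    return sliced

print(sorted(backward_slice(5)))  # [1, 2, 3, 5] -- statement 4 is sliced away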

Microcontroller Unit-based IoT devices' security

https://www.usenix.org/conference/usenixsecurity24/presentation/nino

Problem: The security of MCU-based IoT devices is far less well understood than that of Unix-based devices

Solution: Build a large dataset of firmware from real MCU-based IoT devices and analyse it with static analysis

CO3: Concolic Co-execution for Firmware

https://www.usenix.org/conference/usenixsecurity24/presentation/liu-changming

Problem: Incomplete & inaccurate concolic execution for MCUs

Solution: CO3 runs the firmware concretely on a real MCU without relying on debugging interfaces. The firmware source code is instrumented to send runtime information over the serial port to a workstation, which performs the symbolic execution and uses it to analyse and detect vulnerabilities in MCU-based firmware.
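A rough sketch of the workstation side of such a co-execution loop, assuming the MCU instrumentation writes one branch record per line over UART; the "BR <site> <taken> <value>" message format, the serial port name, and the single symbolic input byte are all assumptions made for this sketch, not CO3's actual protocol.

# Hypothetical workstation-side loop: replay the MCU's concrete branch trace symbolically.
import serial                              # pip install pyserial
from z3 import BitVec, Solver, Not, sat    # pip install z3-solver

port = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=5)
x = BitVec("input_byte", 8)                # symbolic stand-in for one fuzzed input byte
constraints = []

for _ in range(32):                        # consume a short trace from the MCU
    fields = port.readline().decode(errors="ignore").split()
    if len(fields) != 4 or fields[0] != "BR":
        continue
    _, site, taken, value = fields
    cond = (x == int(value))               # mirror the concrete comparison symbolically
    constraints.append(cond if taken == "1" else Not(cond))

# Flip the last observed branch to compute an input that explores the other path.
if constraints:
    s = Solver()
    s.add(*constraints[:-1], Not(constraints[-1]))
    if s.check() == sat:
        print("next input byte:", s.model()[x])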

FFXE: Dynamic Control Flow Graph Recovery for Embedded Firmware Binaries

https://www.usenix.org/conference/usenixsecurity24/presentation/tsang

Problem: Accurate CFGs for embedded firmware binaries are crucial for adapting many valuable software analysis techniques to firmware, yet they are hard to recover from the binaries themselves

Solution: Dynamic forced execution built on the Unicorn emulation framework
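The Unicorn half of such an approach can be sketched in Python: emulate a code blob and record block-to-block transfers as CFG edges. The Thumb byte sequence and load address below are placeholders, and the sketch omits the forced-execution part (driving both directions of every branch and harvesting dynamically registered callbacks) that FFXE adds on top.

# Minimal Unicorn sketch: emulate a tiny Thumb blob and record block-to-block edges.
from unicorn import Uc, UC_ARCH_ARM, UC_MODE_THUMB, UC_HOOK_BLOCK

CODE_BASE = 0x0800_0000
# Placeholder Thumb code: movs r0, #1; adds r0, #1; b <back to the adds>
CODE = bytes.fromhex("01200130fde7")

edges, last_block = set(), [None]

def on_block(uc, address, size, user_data):
    if last_block[0] is not None:
        edges.add((last_block[0], address))   # record a CFG edge between basic blocks
    last_block[0] = address

mu = Uc(UC_ARCH_ARM, UC_MODE_THUMB)
mu.mem_map(CODE_BASE, 0x1000)
mu.mem_write(CODE_BASE, CODE)
mu.hook_add(UC_HOOK_BLOCK, on_block)
# The low bit of the entry address selects Thumb mode; bound the run by instruction count.
mu.emu_start(CODE_BASE | 1, CODE_BASE + len(CODE), count=50)

for src, dst in sorted(edges):
    print(f"edge {src:#x} -> {dst:#x}")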

MultiFuzz: A Multi-Stream Fuzzer For Testing Monolithic Firmware

https://www.usenix.org/conference/usenixsecurity24/presentation/chesser

Problem: Existing firmware fuzzers feed the target a single flat input, which does not match how real firmware consumes input from multiple peripherals.

Solution: MultiFuzz represents a test case as multiple input streams, one per peripheral, improving coverage and finding more bugs.
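A minimal illustration of the multi-stream idea, with invented stream names and padding behaviour (this is not MultiFuzz's input format): each peripheral consumes bytes from its own stream, so mutating one stream cannot desynchronise reads from another.

# Illustrative multi-stream fuzzing input (invented framing, not MultiFuzz's format).
from collections import defaultdict

class MultiStreamInput:
    def __init__(self, streams: dict[str, bytes]):
        self.streams = dict(streams)
        self.cursor = defaultdict(int)          # per-stream read position

    def read(self, peripheral: str, n: int) -> bytes:
        """Serve the next n bytes of this peripheral's stream, zero-padded when exhausted."""
        pos = self.cursor[peripheral]
        data = self.streams.get(peripheral, b"")[pos:pos + n]
        self.cursor[peripheral] = pos + n
        return data.ljust(n, b"\x00")

inp = MultiStreamInput({"uart0_rx": b"AT+RST\r\n", "adc0": b"\x01\x02\x03\x04"})
print(inp.read("uart0_rx", 4))   # b'AT+R'  -- consumed only from the UART stream
print(inp.read("adc0", 2))       # b'\x01\x02'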

SHiFT: Semi-hosted Fuzz Testing for Embedded Applications

https://www.usenix.org/conference/usenixsecurity24/presentation/mera

Problem: Rehosting firmware for fuzzing causes low fidelity, false positives, and poor compatibility with real hardware.

Solution: SHiFT fuzzes firmware natively on MCUs using a semi-hosted approach, offering high fidelity, faster performance (up to 100×), and real hardware compatibility—discovering 5 new vulnerabilities without false positives.

A Friend's Eye is A Good Mirror: Synthesising MCU Peripheral Models from Peripheral Drivers

https://www.usenix.org/conference/usenixsecurity24/presentation/lei

Problem: Existing MCU rehosting methods struggle with scalability, accuracy, and universality when modelling hardware peripherals.

Solution: Perry automatically synthesises accurate and extendable peripheral models from firmware drivers, improving emulation fidelity and scalability—achieving a 74% test pass rate and uncovering 7 new vulnerabilities.
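In practice, a synthesised peripheral model boils down to MMIO read/write handlers that keep driver code making progress inside an emulator. The register offsets and status bits below mimic a generic UART and are purely illustrative; they are not output produced by Perry.

# Hand-written stand-in for the kind of register-level model Perry synthesises automatically.
class UartModel:
    SR, DR = 0x00, 0x04            # status and data register offsets (hypothetical layout)
    TXE, RXNE = 1 << 7, 1 << 5     # "TX empty" / "RX not empty" status bits

    def __init__(self, rx_bytes: bytes = b""):
        self.rx = bytearray(rx_bytes)

    def read(self, offset: int) -> int:
        if offset == self.SR:
            # Report TX always ready and RX ready only when data is queued, so driver
            # polling loops such as `while (!(SR & TXE));` terminate during emulation.
            return self.TXE | (self.RXNE if self.rx else 0)
        if offset == self.DR:
            return self.rx.pop(0) if self.rx else 0
        return 0

    def write(self, offset: int, value: int) -> None:
        if offset == self.DR:
            print(f"firmware sent byte {value:#04x}")

uart = UartModel(b"ok")
print(bytes([uart.read(UartModel.DR), uart.read(UartModel.DR)]))  # b'ok'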

Operation Mango: Scalable Discovery of Taint-Style Vulnerabilities in Binary Firmware Services

https://www.usenix.org/conference/usenixsecurity24/presentation/gibbs

Problem: Existing firmware analysis tools must limit the number of binaries to stay scalable, causing many taint-style vulnerabilities (like command injection or buffer overflow) to be missed.

Solution: Operation Mango performs binary data-flow analysis that combines value and dependency tracking. It scales static analysis to all binaries in a firmware image, running 27× faster and finding 56 additional real vulnerabilities missed by prior tools.
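The taint-style reasoning behind such bugs can be shown with a toy forward propagation over an invented three-address IR; the statements and source/sink names are made up, and Mango's real analysis works on binary code while also tracking concrete values.

# Toy forward taint propagation from a network source to a command-injection sink.
SOURCES = {"recv"}                  # functions whose results are attacker-controlled
SINKS = {"system"}                  # functions where tainted data becomes dangerous

# (destination, operation, arguments) triples in execution order.
ir = [
    ("buf", "recv",   []),                 # buf = recv()            <- taint source
    ("cmd", "concat", ["ping ", "buf"]),   # cmd = "ping " + buf
    ("len", "strlen", ["cmd"]),
    (None,  "system", ["cmd"]),            # system(cmd)             <- taint sink
]

tainted = set()
for dst, op, args in ir:
    if op in SOURCES:
        tainted.add(dst)
    elif any(a in tainted for a in args):
        if op in SINKS:
            print(f"taint-style bug: {op}() reached with tainted argument(s) {args}")
        elif dst is not None:
            tainted.add(dst)               # propagate taint through the data flow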

2025

AidFuzzer: Adaptive Interrupt-Driven Firmware Fuzzing via Run-Time State Recognition

https://www.usenix.org/conference/usenixsecurity25/presentation/wang-jianqiang

Problem: Interrupt mismanagement in firmware fuzzing leads to crashes and low code coverage.

Solution: AidFuzzer adaptively handles interrupts by recognising firmware run-time states, triggering only necessary interrupts. It improves coverage and discovers 8 new vulnerabilities.
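A hypothetical sketch of the "trigger only necessary interrupts" decision, assuming the fuzzer can observe a coarse firmware run-time state and the set of IRQs the firmware has enabled; the state names and IRQ numbers are invented, not AidFuzzer's model.

# Invented state-aware interrupt-injection policy (not AidFuzzer's actual logic).
from enum import Enum, auto

class FwState(Enum):
    BOOTING = auto()
    WAITING_FOR_INPUT = auto()
    HANDLING_IRQ = auto()
    CRASHED = auto()

def should_inject(state: FwState, irq: int, enabled_irqs: set[int]) -> bool:
    # Firing during boot, inside another handler, or after a crash wastes executions.
    if state in (FwState.BOOTING, FwState.HANDLING_IRQ, FwState.CRASHED):
        return False
    # Only raise interrupts the firmware has actually enabled at this point.
    return irq in enabled_irqs

print(should_inject(FwState.WAITING_FOR_INPUT, 37, {37, 40}))  # True (placeholder IRQ 37)
print(should_inject(FwState.BOOTING, 37, {37, 40}))            # False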

NeuroScope: Reverse Engineering Deep Neural Network on Edge Devices using Dynamic Analysis

https://www.usenix.org/conference/usenixsecurity25/presentation/wu-ruoyu

A DNN binary (deep neural network binary) is an executable file generated when a deep learning model is compiled from a high-level description (e.g., PyTorch, TensorFlow, ONNX) into machine code that runs directly on hardware.

Details:

  • The model is first defined in a high-level framework.

  • A DL compiler (e.g., TVM, XLA, TensorRT) then converts it into optimised machine code for specific hardware such as CPU, GPU, FPGA, or AI accelerators.

  • The output is a binary file containing all model logic, called the DNN binary.
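As a concrete, version-dependent example of that pipeline, TVM's Relay front end can compile an ONNX model into a shared object, which is exactly the kind of DNN binary a reverse-engineering tool then has to work with. The file name and input shape below are placeholders.

# Sketch of producing a DNN binary with TVM (API per recent TVM releases; details vary).
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("model.onnx")                 # hypothetical exported model
shape_dict = {"input": (1, 3, 224, 224)}             # the network's input name and shape
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

with tvm.transform.PassContext(opt_level=3):         # aggressive graph/operator optimisation
    lib = relay.build(mod, target="llvm", params=params)

lib.export_library("model.so")   # the resulting DNN binary: operators + weights, no layer names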

Key characteristics:

  • The source code and model structure are lost—only encoded operators, weights, and functions remain.

  • Highly optimised for hardware efficiency and speed.

  • Difficult to reverse engineer, as layer names, architectures, and activation types are no longer visible.

