STM32H7 Fatigue Detection: 1M Rows → 85k Rows, 512KB RAM, <100ms Inference — Is 4Hz Resampling The Right Move?

Building a real-time fatigue detection system for STM32H7 deployment.

Constraints:

  • 512KB RAM
  • <100ms inference
  • preprocessing on laptop
  • inference on-device only

Dataset:
~1M rows from asynchronous wearable sensors.

Sensor     Native frequency   Notes
ACC        32 Hz              wrist accelerometer
EDA        4 Hz               electrodermal activity
Temp       4 Hz               skin temperature
HR         1 Hz               heart rate
Breathing  1 Hz               respiration
IBI        ~0.59 Hz           irregular inter-beat interval

Labels:

  • fatigue
  • activity
  • baseline

Current preprocessing strategy:
Resample everything to a synchronized 4 Hz grid.

Signal     Strategy
ACC        32→4 Hz, mean over 8 samples
EDA/Temp   native 4 Hz
HR         1→4 Hz, linear interpolation
Breathing  1→4 Hz, linear interpolation
IBI        ~0.59→4 Hz, forward-fill

Result:
~1M rows → ~85k synchronized rows.
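A minimal sketch of the 4 Hz strategy above in pandas, assuming each sensor arrives as a time-indexed Series (the variable names and the synthetic data here are illustrative, not the actual dataset):

```python
import numpy as np
import pandas as pd

t0 = pd.Timestamp("2024-01-01")

# Hypothetical one-minute recordings per sensor
acc = pd.Series(np.random.randn(32 * 60),
                index=pd.date_range(t0, periods=32 * 60, freq="31250us"))  # 32 Hz
hr = pd.Series(60 + np.random.randn(60),
               index=pd.date_range(t0, periods=60, freq="1s"))             # 1 Hz
ibi_times = t0 + pd.to_timedelta(np.cumsum(np.random.uniform(1.2, 2.2, 30)),
                                 unit="s")                                  # irregular
ibi = pd.Series(np.random.uniform(0.7, 1.1, 30), index=ibi_times)

grid = pd.date_range(t0, periods=4 * 60, freq="250ms")  # 4 Hz target grid

sync = pd.DataFrame({
    # 32→4 Hz: mean over each 8-sample / 250 ms bin
    "acc": acc.resample("250ms").mean(),
    # 1→4 Hz: time-based linear interpolation onto the grid
    "hr": hr.reindex(hr.index.union(grid)).interpolate(method="time").reindex(grid),
    # irregular → 4 Hz: forward-fill of the last observed inter-beat interval
    "ibi": ibi.reindex(ibi.index.union(grid)).ffill().reindex(grid),
}, index=grid)
```

This also makes the row reduction concrete: one minute of raw streams collapses to 240 synchronized rows, which is where the ~1M → ~85k ratio comes from.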

Current doubts:

  1. ACC to 4 Hz: using only the mean per 250 ms window feels too lossy. Should I also include, per window:
  • std
  • max/min
  • magnitude
  • energy

  2. IBI: forward-fill feels mathematically dirty for HRV-related information. Would it be better to:
  • keep IBI irregular
  • compute RMSSD/SDNN at native timing
  • feed only HRV features downstream?

  3. HR/Breathing: does interpolating 1 Hz → 4 Hz introduce fake temporal resolution? Would keeping them at 1 Hz be cleaner?
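For doubt 1, the extra per-window statistics are cheap to compute offline. A sketch with NumPy, assuming triaxial ACC as an (N, 3) array (the function name and feature set are illustrative):

```python
import numpy as np

def acc_window_features(acc_xyz, fs=32, win_s=0.25):
    """Per-window ACC features: mean, std, min, max of the magnitude,
    plus signal energy, over non-overlapping 250 ms windows."""
    win = int(fs * win_s)                      # 8 samples per 250 ms window
    n = (len(acc_xyz) // win) * win            # drop a trailing partial window
    mag = np.linalg.norm(acc_xyz[:n], axis=1)  # per-sample magnitude
    w = mag.reshape(-1, win)                   # (n_windows, 8)
    return np.column_stack([
        w.mean(axis=1),
        w.std(axis=1),
        w.min(axis=1),
        w.max(axis=1),
        (w ** 2).sum(axis=1),                  # energy per window
    ])

feats = acc_window_features(np.random.randn(32 * 10, 3))  # 10 s of fake data
```

Five features at 4 Hz instead of one keeps the 85k-row structure but preserves short-burst motion information that a plain mean destroys.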

Considering switching to a multi-rate pipeline:

Signal group      Frequency
ACC               8 Hz
EDA/Temp          4 Hz
HR/IBI/Breathing  1 Hz
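Either way, the HRV side of doubt 2 can stay at native timing: RMSSD and SDNN only need the raw interval sequence, no resampling. A minimal sketch (function name is illustrative; intervals in milliseconds):

```python
import numpy as np

def hrv_features(ibi_ms):
    """RMSSD and SDNN computed directly from an irregular IBI sequence."""
    ibi = np.asarray(ibi_ms, dtype=float)
    diffs = np.diff(ibi)                    # successive differences, native timing
    rmssd = np.sqrt(np.mean(diffs ** 2))    # root mean square of successive diffs
    sdnn = ibi.std(ddof=1)                  # sample std dev of all intervals
    return rmssd, sdnn

rmssd, sdnn = hrv_features([800, 810, 790, 805, 795])
```

These per-window scalars can then be merged into the 1 Hz (or 4 Hz) feature stream, sidestepping the forward-fill question entirely.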

Question:
For embedded ML / TinyML deployment, is multi-rate worth the added pipeline complexity, or is synchronized 4Hz generally the better engineering tradeoff?

Would appreciate advice from anyone working with:

  • wearable signals
  • HRV
  • TinyML
  • embedded inference
  • multimodal physiological data

submitted by /u/Aziz_2002
