Hi everyone 👋,
We’re working on Sentess, an open protocol that transforms raw, multimodal mobile sensor data (camera, LiDAR, GPS, IMU) into structured, annotated datasets designed for spatial intelligence and embodied AI.
🔑 Why it matters:
- AI startups struggle with messy real-world data—it’s noisy, unstructured, and expensive to label.
- Sentess acts as a data infrastructure layer that cleans, structures, and validates real-world sensor streams.
- Our goal is to make datasets permissionless and crypto-incentivized, so anyone can contribute and benefit.
📈 Current Progress:
- Live testnet with 1,200+ early contributors
- Closed alpha web app for capturing verifiable spatial data
- Building a pipeline that outputs AI-ready datasets compatible with robotics and AR/VR startups
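To make "AI-ready datasets" concrete, here is a minimal sketch of what one structured record from such a pipeline could look like. This is purely illustrative: the field names (`timestamp_us`, `gps`, `imu`, `annotations`, etc.) are my assumptions, not Sentess's actual schema or protocol.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical record shape for one captured frame; all field names are
# illustrative assumptions, not the real Sentess format.
@dataclass
class SpatialFrame:
    timestamp_us: int                  # capture time in microseconds
    gps: tuple                         # (latitude, longitude, altitude_m)
    imu: dict                          # e.g. accelerometer / gyroscope readings
    lidar_points: int                  # point count for this LiDAR sweep
    annotations: list = field(default_factory=list)  # semantic labels

frame = SpatialFrame(
    timestamp_us=1_718_000_000_000,
    gps=(37.7749, -122.4194, 12.0),
    imu={"accel": [0.01, -0.02, 9.81], "gyro": [0.0, 0.001, 0.0]},
    lidar_points=28_942,
    annotations=[{"label": "sidewalk", "confidence": 0.93}],
)

# Serialize to JSON, as a downstream robotics/AR consumer might ingest it.
record = json.dumps(asdict(frame))
print(record)
```

Feedback on whether a flat JSON record like this (vs. e.g. ROS bags or point-cloud formats) is actually useful to robotics and AR/VR teams is exactly what we're hoping to learn here.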
💡 Looking for feedback:
- What dataset formats or annotations are most valuable for spatial AI?
- How do you currently source and structure sensor data?
- Would you find a decentralized pipeline for generating structured spatial datasets useful?
We’re still early and would love feedback from this community on how to make this most valuable to dataset builders and users.
Thanks in advance for your thoughts! 🙏
submitted by /u/rasheed106