Machine vision visualization in Nominal Connect
Nominal Connect tackles one of perception’s toughest problems: real-time indexing and sensor fusion of LiDAR, telemetry, video, and other sensor streams.
How modern robots see the world
Modern robotic perception relies on integrated data. Today’s systems routinely combine camera feeds, LiDAR, IMUs, and GPS, all of which must be precisely aligned in both time and space. A single run can produce thousands of synchronized frames that need to be replayed, inspected, and understood within their operational context.
LiDAR point clouds are especially critical because they provide information that cannot be adequately expressed in video. They are essential for accurate object detection, SLAM, and other 3D perception workflows.
But point cloud and machine vision workflows are hitting a breaking point. Hardware development cycles are accelerating, robotic fleets are growing, and engineers are contending with massive, continuous streams of spatial data. They need tools that support rapid iteration and immediate feedback rather than slow, outdated batch processing; they have to move faster than ever with more data than ever.
Without a system capable of efficiently ingesting and analyzing this data, progress stalls. Visualizations choke, engineers lose visibility and insights, and timelines slow across development, production, and operations.
With Nominal, robotics engineers finally have a solution that works for them:
Nominal Core is purpose-built to handle this scale, enabling fast and organized access to years of multimodal sensor information.
Nominal Connect now offers full support for time series point clouds, integrating them seamlessly alongside telemetry, video, and event data.
Nominal equips engineers to leverage high-resolution point clouds at scale – a critical capability to improve perception, validate performance, and debug autonomous systems with precision and context. Let’s dive in.
Scan of tree growth, toggled by different years
The unsolved problem with time series spatial measurements
The data floods in: A single high-performance LiDAR sensor can generate gigabytes of 3D data every minute (see the back-of-envelope sketch after this list). That kind of firehose quickly overwhelms traditional tools.
It’s not like video: Point clouds don’t sit neatly on a grid like pixels. They’re scattered in space: mostly empty, noisy, and lacking the structure most algorithms rely on. Efficient handling means thinking in sparse formats, not dense arrays (a voxel sketch follows below).
Everything must align: Depth sensing is just one piece of the puzzle. To make sense of what a robot “saw,” you need to align 3D scans precisely with the camera feed, inertial measurements, and control logs. Even a few milliseconds of drift between sensors can throw everything off.
Speed matters: Whether you’re navigating live environments or replaying a test run, engineers expect responsive tools. Waiting minutes to load a single scan or scroll a timeline breaks the feedback loop.
No standard format: Point cloud data comes in a zoo of file types (LAS, PCD, E57, and more). Stitching them together requires significant effort, especially in a time series context, and most traditional databases weren’t designed to handle 3D geometry in motion. A format-normalization sketch follows below.
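To put a rough number on that firehose, here is a back-of-envelope calculation. The point rate and bytes-per-point figures are illustrative assumptions, not specs for any particular sensor:

```python
# Back-of-envelope LiDAR data rate, using assumed (not vendor-specific) figures.
points_per_second = 2_400_000   # assumption: a high-end spinning or solid-state unit
bytes_per_point = 16            # assumption: x, y, z as float32 plus intensity/ring/time

bytes_per_minute = points_per_second * bytes_per_point * 60
print(f"{bytes_per_minute / 1e9:.1f} GB per minute")  # ~2.3 GB/min at these rates
```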
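One common way to exploit that sparsity is to bucket points into occupied voxels instead of allocating a dense grid over the whole scene. A minimal NumPy sketch (the voxel size and point counts are arbitrary assumptions):

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.1) -> dict:
    """Map an (N, 3) array of XYZ points to a sparse {voxel_index: count} dict.

    Only occupied voxels are stored; empty space costs nothing, unlike a
    dense 3D array covering the entire scene.
    """
    indices = np.floor(points / voxel_size).astype(np.int64)
    voxels, counts = np.unique(indices, axis=0, return_counts=True)
    return {tuple(v): int(c) for v, c in zip(voxels, counts)}

# Example: 100k random points in a 50 m cube occupy a tiny fraction of the grid.
cloud = np.random.uniform(0, 50, size=(100_000, 3))
occupied = voxelize(cloud, voxel_size=0.5)
print(len(occupied), "occupied voxels vs", 100**3, "cells in a dense grid")
```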
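Normalizing that zoo of formats usually means converting everything into one in-memory representation up front. A sketch using laspy and Open3D (both real libraries, though the helper below is our own illustration; E57 would need an additional reader such as pye57):

```python
import numpy as np
import open3d as o3d   # reads PCD, PLY, XYZ, and more
import laspy           # reads LAS/LAZ

def load_xyz(path: str) -> np.ndarray:
    """Load a point cloud from disk into a plain (N, 3) float array."""
    if path.lower().endswith((".las", ".laz")):
        las = laspy.read(path)
        return np.column_stack([las.x, las.y, las.z])
    # Fall back to Open3D for PCD, PLY, XYZ, etc.
    return np.asarray(o3d.io.read_point_cloud(path).points)
```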
These challenges strain the entire pipeline, from initial collection and storage to in-depth analysis. Without the right tools, engineers spend more time wrangling data than building systems, burning budget on compute rather than delivering features.
Drone taking point cloud scan of warehouse
The solution: Nominal’s unified 4D sensor platform
Nominal now treats temporal scans as first-class citizens, fully integrated into Connect, our edge platform. The system is purpose-built to operate at scale, starting with high-throughput ingestion. Machine vision sources can connect directly, with each scan automatically tagged, timestamped, and streamed into cloud storage, eliminating the need for manual handling or custom scripts.
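Conceptually, that ingestion path is a tag-and-timestamp-on-arrival loop like the one below. This is a generic illustration, not Nominal’s actual API; `read_next_scan` and `upload` are hypothetical stand-ins:

```python
import time
import uuid

def ingest_loop(sensor, storage):
    """Generic sketch: tag, timestamp, and stream scans as they arrive."""
    while True:
        scan = sensor.read_next_scan()       # hypothetical: blocks until the next frame
        record = {
            "scan_id": str(uuid.uuid4()),    # unique tag for later queries
            "timestamp_ns": time.time_ns(),  # capture time on a common clock
            "sensor": sensor.name,
            "points": scan,
        }
        storage.upload(record)               # hypothetical cloud-storage client
```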
Time alignment
Once ingested, point clouds are precisely time-aligned with other sensor streams: video feeds, actuator commands, encoder logs, and more. This synchronized timeline gives engineers a clear window into what the robot ‘experienced’ at any given moment, allowing them to correlate sensor input with behavior or decision-making.
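For a concrete picture of what alignment involves, here is a minimal nearest-timestamp match between two streams using NumPy. Real systems also need clock synchronization and interpolation, which this sketch omits:

```python
import numpy as np

def nearest_indices(scan_times: np.ndarray, telemetry_times: np.ndarray) -> np.ndarray:
    """For each scan timestamp, find the index of the closest telemetry sample.

    Both inputs are sorted 1-D arrays of timestamps (e.g., nanoseconds).
    """
    right = np.searchsorted(telemetry_times, scan_times)
    left = np.clip(right - 1, 0, len(telemetry_times) - 1)
    right = np.clip(right, 0, len(telemetry_times) - 1)
    # Pick whichever neighbor is closer in time.
    pick_left = np.abs(scan_times - telemetry_times[left]) <= np.abs(
        scan_times - telemetry_times[right]
    )
    return np.where(pick_left, left, right)
```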
Differentiated environments
To bring this data into context, Nominal supports visualization of URDF robot models. Point clouds are rendered within the robot’s own frame of reference, giving structure to what would otherwise be unstructured spatial data. Engineers can see exactly which part of the environment produced which cluster of points, making it easier to interpret scenes and validate system responses.
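Rendering points in the robot’s frame boils down to applying the pose transform from the sensor frame (as described by the URDF) to every point. A minimal homogeneous-transform sketch, with an assumed example mounting pose:

```python
import numpy as np

def to_robot_frame(points: np.ndarray, T_robot_sensor: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to an (N, 3) point array."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (T_robot_sensor @ homogeneous.T).T[:, :3]

# Assumed example: LiDAR mounted 1.2 m above the robot base, no rotation.
T = np.eye(4)
T[2, 3] = 1.2
scan = np.random.randn(1000, 3)
scan_in_robot_frame = to_robot_frame(scan, T)
```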
Flexible queries
The platform also supports rich, flexible queries. Users can search by timestamp, location, sensor, or even event tags, then run on-demand analytics such as point density calculations, bounding box occlusion, or comparisons against known object positions. These tools surface insights immediately, without requiring offline post-processing.
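As one example of the kind of on-demand analytic described above, here is a sketch of point density inside an axis-aligned bounding box. The box and cloud are assumed illustrations, not Nominal’s internals:

```python
import numpy as np

def density_in_box(points: np.ndarray, box_min, box_max) -> float:
    """Points per cubic meter inside an axis-aligned bounding box."""
    box_min, box_max = np.asarray(box_min), np.asarray(box_max)
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    volume = float(np.prod(box_max - box_min))
    return inside.sum() / volume

# Assumed example: check density around a pallet-sized region of interest.
cloud = np.random.uniform(0, 10, size=(500_000, 3))
print(density_in_box(cloud, box_min=[2, 2, 0], box_max=[3, 3, 1.5]))
```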
Performance at scale
All of this runs on infrastructure built for performance at scale. Whether you’re working with a brief test run or years of archived missions, Nominal’s indexing engine and parallel compute keep interactions responsive and data accessible. The result is a unified perception workflow that eliminates friction and keeps pace with the speed of development.
Querying & toggling massive datasets in real time
Supporting machine vision today
Nominal’s point cloud support is already transforming how teams interact with complex spatial data across industries.
Flight test engineers can now replay Structure from Motion (SfM) models side by side with inertial data, allowing them to precisely correlate visual reconstructions with real-world motion and system behavior.
Robotics teams use Nominal to investigate failures in detail by inspecting the exact 3D environments their machines encountered, either frame by frame or aggregated over time. This lets them compare what actually happened against known object positions or expected behaviors, improving both debugging and validation.
Fleet operators rely on continuous spatial streams to maintain and refine high-precision maps. With Nominal, they can detect positional drift automatically and keep maps up to date without relying on slow, manual stitching or post-processing workflows (a minimal drift-check sketch follows below).
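One standard way to estimate that drift is to register a fresh scan against the reference map and measure the recovered offset. A minimal sketch using Open3D’s ICP registration; the threshold and tolerance are our assumptions, not Nominal’s internals:

```python
import numpy as np
import open3d as o3d

def drift_meters(current: o3d.geometry.PointCloud,
                 reference: o3d.geometry.PointCloud,
                 max_corr_dist: float = 0.5) -> float:
    """Register the current scan against a reference map and return the
    magnitude of the recovered translation as a drift estimate."""
    result = o3d.pipelines.registration.registration_icp(
        current, reference, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    return float(np.linalg.norm(result.transformation[:3, 3]))

# Flag the region for re-mapping if drift exceeds an assumed 5 cm tolerance:
# if drift_meters(scan, reference_map) > 0.05: ...
```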
Enabling the future of autonomy
As autonomy systems grow more complex, sensor fusion must become repeatable, transparent, and collaborative. Nominal’s tooling helps bridge the last mile between rich data and reliable insight, turning sensor logs into something teams can leverage at scale and helping both robots and engineers understand the world.
If you’re building robots or vehicles and want to explore how time series spatial data management can slot into your workflow, schedule a demo with us and send over your toughest point-cloud dataset.