Physical AI Unit · Spritle Software

Robots that can see and understand.

Plug in real-time Vision AI and your robot arm grasps correctly on the first try: no pre-sorting, no fixed jigs, no cloud dependency, no months-long integration.

above 97.5% accuracy  ·  30 ms latency  ·  30 fps live

01 · Object Detection
Every object.
Identified.
02 · Depth Intelligence
Every distance.
Measured.
03 · Edge Inference
All onboard.
Zero cloud.
30 ms · Latency
above 97.5% · Accuracy
30 fps · Live feed
Object Intelligence

No more failed grasps
on the first pick.

Most arms fail when parts arrive out-of-position or mixed with others. Ours runs real-time object intelligence across cluttered, unstructured environments, classifying, locating, and orienting every object before the arm moves.

  • Real-time bounding box detection at 30fps
  • Multi-class object classification
  • 6DoF pose estimation & tracking
  • Works with partial occlusion
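As a sketch of what consuming the detection stream might look like, filtering per-frame detections by confidence before acting; the `Detection` structure and field names here are illustrative, not the shipped API:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One detected object per frame (illustrative schema, not the real API)."""
    label: str          # object class, e.g. "gear"
    confidence: float   # classifier score in [0, 1]
    bbox: tuple         # (x_min, y_min, x_max, y_max) in pixels

def pickable(detections, threshold=0.9):
    """Keep only detections confident enough to act on."""
    return [d for d in detections if d.confidence >= threshold]

frame = [
    Detection("gear", 0.97, (120, 40, 260, 180)),
    Detection("bolt", 0.62, (300, 90, 340, 150)),
]
print([d.label for d in pickable(frame)])  # ['gear']
```

A downstream grasp planner would typically consume only the filtered list, so a low-confidence object never triggers an arm move.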
Precision Depth

Millimetre-perfect
placement. Zero trial runs.

Sub-millimetre stereo depth means your arm computes the exact grasp on the first attempt: no calibration jigs, no manual positioning, no wasted cycle time.

  • Sub-millimetre spatial resolution
  • Stereo + structured light fusion
  • Occlusion-aware reconstruction
  • Continuous live depth maps
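The stereo half of this rests on the standard disparity-to-depth relation Z = f·B/d (focal length f in pixels, baseline B, disparity d). The numbers below are illustrative, not calibration values from the unit:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Standard pinhole stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 1000 px focal length, 10 cm baseline, 250 px disparity
z = depth_from_disparity(1000.0, 0.10, 250.0)
print(f"{z:.3f} m")  # 0.400 m
```

Note the trade-off the formula encodes: depth resolution degrades with distance (d shrinks as Z grows), which is why structured-light fusion helps at longer ranges.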
Object Detection
Depth Estimation
6DoF Pose Tracking
Edge Inference
ROS2 Integration
Manufacturing Vision AI
Real-time Segmentation
Grasp Planning
Multi-Camera Fusion
Anomaly Detection
NVIDIA Jetson / Hailo
Object Intelligence in Action

Demos that speak
louder than slides.

Six deployments across manufacturing, logistics, and assembly, captured on production hardware, not a demo rig.

Demo 01
Grasp Intelligence

Bin-picking with zero pre-sorting

Flexiv arm identifies, estimates the pose of, and picks randomly oriented gear components from a mixed bin at production speed.

Demo 02
Quality Control

Real-time defect detection

Sub-millimetre anomaly detection on PCBs and machined parts. Flags defects before they reach assembly.

Demo 03
Assembly Assist

Guided assembly with pose feedback

6DoF pose loop guides the arm through multi-step assemblies with sub-mm placement accuracy.

What We Deliver

Everything your robot needs to see, decide, and act.

Six capabilities. One unit. Camera feed to grasp command, entirely onboard, no cloud round-trip.

01

Object Detection

Real-time multi-class detection at 30fps. Identifies and classifies every object in the workspace with bounding boxes and confidence scores.

02

Depth Perception

Sub-millimetre stereo depth maps in real time. Enables grasps and placements that work in production — not just in the lab.

03

6DoF Pose Estimation

Full 6-degree-of-freedom tracking of objects and end-effectors. Tells the robot not just where an object is — but exactly how it's oriented.
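A 6DoF pose is a translation plus an orientation, and "how it's oriented" matters because grasp points are defined in the object's own frame. The sketch below (simplified to yaw only for brevity; names and values are illustrative) transforms a grasp offset from the object frame into the workspace frame:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    """Position plus orientation. Yaw-only here for brevity;
    a full 6DoF pose carries roll and pitch (or a quaternion) too."""
    x: float
    y: float
    z: float
    yaw: float  # rotation about z, radians

def to_workspace(pose: Pose6DoF, offset):
    """Rotate a grasp offset by the object's yaw, then translate it
    into workspace coordinates."""
    ox, oy, oz = offset
    c, s = math.cos(pose.yaw), math.sin(pose.yaw)
    return (pose.x + c * ox - s * oy,
            pose.y + s * ox + c * oy,
            pose.z + oz)

# Illustrative object pose; grasp point 10 units along the object's own x-axis
gear = Pose6DoF(214.3, 88.1, 42.7, math.radians(23))
print(to_workspace(gear, (10.0, 0.0, 0.0)))
```

Without the orientation term, the arm would reach the object's centroid but close the gripper at the wrong angle; the rotation is what turns "where" into "how".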

04

Edge Inference

All models run onboard. No cloud, no latency penalty, no data egress. Optimised for NVIDIA Jetson and Hailo. Works fully air-gapped.

05

ROS2 Integration

Native ROS2 nodes out of the box. Publishes object poses, depth clouds, and detection streams directly to your robot's topic graph.
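In ROS2 terms, each output maps to a topic a downstream node subscribes to. The sketch below approximates that layout in plain Python; the topic names, message fields, and callback are assumptions for illustration, not the shipped interface (in a real deployment these would be `geometry_msgs`/`sensor_msgs` types consumed via `rclpy`):

```python
from dataclasses import dataclass

# Assumed topic layout (illustrative only):
#   /vision/detections -> one ObjectDetection per object per frame
#   /vision/depth      -> depth cloud (sensor_msgs/PointCloud2 in real ROS2)
@dataclass
class ObjectDetection:
    label: str
    confidence: float
    pose_xyz: tuple   # metres, camera frame
    pose_rpy: tuple   # radians

def on_detection(msg: ObjectDetection) -> str:
    """Callback a downstream node might register on the detection topic."""
    x, y, z = msg.pose_xyz
    return f"{msg.label} @ ({x:.2f}, {y:.2f}, {z:.2f}) conf={msg.confidence:.2f}"

print(on_detection(ObjectDetection("gear", 0.97, (0.21, 0.09, 0.42), (0.0, 0.0, 0.40))))
```

The point of the topic-graph design is that motion planning, logging, and QC nodes can all subscribe to the same detection stream without any custom glue.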

06

Last-Mile Integration

Mounts on Flexiv, UR, Fanuc, KUKA and more. Full SDK. From unboxing to running inference on your arm: measured in hours, not months.

How It Works

From camera to command
in four steps.

Most robots are blind. They follow fixed coordinates regardless of what's in front of them. We close the loop: camera feed in, pick command out, in under 30 ms.

01

Perceive

Stereo cameras and structured light build a continuous 3D scene model — capturing geometry, depth, texture, and reflectance in real time. Not just a colour pattern. Full spatial understanding.

Vision Layer
02

Understand

Object Intelligence classifies every item — known or unknown — without prior training. The system analyses an object's shape, orientation, and grasp affordances within milliseconds, adapting on the fly.

Intelligence Layer
03

Decide

Edge inference computes grasp vectors, pick coordinates, and approach angles — all onboard, no cloud round-trip. Structured outputs ready for your robot's control system in under 5ms.

Decision Layer
04
objectmind_sdk — robot_arm_01
> connect --arm robot_01 --stream live
Connecting to arm controller...
✓ Connected. Stream active @ 60Hz
> detect --frame current
Running inference...
✓ 3 objects detected [gear, bolt, casing]
> pick --object "gear" --confidence 0.97
Computing grasp vector...
✓ Grasp: [x:214.3, y:88.1, z:42.7] θ:23°
✓ Arm executing pick sequence...
>
STEP 04 / ACTION INTELLIGENCE
Decisions,
Executed.

Structured outputs feed directly into robot control systems via our low-latency API. Pick coordinates, grasp vectors, and object metadata, streamed at 60Hz, no middleware, no mapping layer.
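To make "structured outputs, no mapping layer" concrete, here is a sketch of turning one streamed frame into a pick command. The JSON shape is an assumption for illustration, echoing the terminal session above, not the documented wire format:

```python
import json

# Assumed shape of one structured output frame (illustrative only;
# the actual API schema may differ).
frame = json.loads("""
{
  "object": "gear",
  "confidence": 0.97,
  "grasp": {"x": 214.3, "y": 88.1, "z": 42.7, "theta_deg": 23}
}
""")

def to_pick_command(msg: dict) -> tuple:
    """Flatten one streamed frame into (x, y, z, theta) for the arm controller."""
    g = msg["grasp"]
    return (g["x"], g["y"], g["z"], g["theta_deg"])

print(to_pick_command(frame))  # (214.3, 88.1, 42.7, 23)
```

Because the payload already carries grasp coordinates rather than raw pixels, the controller side stays this thin: unpack and execute.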

See your part get picked,
live.

We run the demo on a real robot arm. Bring a part number or just curiosity.
No deck. No SDRs. You talk directly with the engineers.

Typically responds within one business day

No spam, ever. We typically respond within 8 hours on business days.