
Technology
RIOS-Net Tech Stack

The superior user experience is powered by RIOS-Net, the technology stack behind our product offerings. RIOS-Net consists of 500+ repositories of proprietary software, simulation backends, and dedicated hardware, which not only deliver this premier experience but also constitute real barriers to entry.
Hardware + Software Innovations
Digital Twin Framework
AI Algorithms
Robots 2.0
WE'VE ARCHITECTED FIRST-OF-ITS-KIND INNOVATIONS
- World’s most advanced tactile sensors for robots – give robots a sense of touch
- Pioneered haptic intelligence for robots
- Built the highest-performance end-of-arm tooling and food-grade grippers
BEST-IN-CLASS AND DEFENSIBLE TECHNOLOGY STACK
RIOS develops and deploys dexterous, AI-powered robots at scale. Our core premise is that embedding human-like capabilities into robots leads to superior machines. An approach based on a central AI platform (brain), an AI-driven perception system (eye), and haptic intelligence (touch), supported by RIOS’ patent-pending dedicated hardware, is the only way to achieve a dexterous, multi-purpose robot. A software-only approach on “generic” hardware that relies on vision alone simply doesn’t cut it – and it’s not how humans interact with the world.
Our hardware-software technology stack is best-in-class, and its performance is unlike anything the world has seen so far. We’ve built sophisticated infrastructure under the hood that allows our robots to perform a diverse and increasingly complex range of manipulation tasks.
Our robots continuously learn on the job, construct models of the world, and extend or adapt these models to perform other tasks. Our machines’ performance continuously improves over time.

HAPTIC INTELLIGENCE
RIOS is the first company developing true haptic intelligence for robots (i.e., the kind of intelligence that allows humans to grasp any object). We’ve quietly built the world’s most advanced tactile sensing platform for robots, powered by advanced AI.
Our proprietary hardware, with circuitry as complex as an iPhone’s, has thousands of miniaturized sensors relaying biomimetic tactile information. This rich data set allows the robot to make sense of the world, to learn correct grasping postures through experiential learning using a few training examples, and to construct models that enable it to generalize to previously unseen objects. With dedicated hardware yielding rich data sets, our machine learning (ML) algorithms learn in ways previously not possible.
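As an illustration of the kind of few-shot learning described above, here is a minimal sketch (not RIOS code) of classifying grasp postures from a handful of labeled tactile feature vectors with a nearest-centroid rule; all feature values, labels, and function names are hypothetical stand-ins for aggregated sensor readings.

```python
import math

# Hypothetical sketch: learn grasp postures from a few labeled tactile
# examples via a nearest-centroid classifier. Feature vectors stand in
# for aggregated readings from a tactile sensor array.

def centroid(vectors):
    """Element-wise mean of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(examples):
    """examples: {posture_label: [feature_vector, ...]} -> centroids."""
    return {label: centroid(vs) for label, vs in examples.items()}

def classify(model, features):
    """Pick the posture whose centroid is closest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda label: dist(model[label], features))

# Two postures, two training examples each (toy pressure-pattern features).
model = train({
    "pinch": [[0.9, 0.1, 0.8], [0.8, 0.2, 0.9]],
    "power": [[0.4, 0.9, 0.3], [0.5, 0.8, 0.4]],
})
print(classify(model, [0.85, 0.15, 0.85]))  # classify a new, unseen pattern
```

A real system would learn far richer models, but the principle is the same: a few examples per posture are enough to place new tactile patterns.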

HARDWARE & SOFTWARE
We’ve developed a novel high-precision, smart end-effector for robots, underpinned by our proprietary human-level tactile sensors. The end-effector was engineered from the ground up for fast response to sensor feedback (a sub-2 ms control loop) and for precision manipulation (sub-0.1 mm tolerances). The overall performance of our smart end-effector exceeds the specs and performance of any commercial off-the-shelf end-effector.
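A sub-2 ms control loop means reading the sensors, computing a correction, and commanding the actuator within a fixed 2 ms period, every period. The sketch below shows the general shape of such a fixed-rate loop in plain Python (real firmware would run on the embedded hardware, not Python); `read_sensor` and `apply_command` are hypothetical stand-ins.

```python
import time

# Illustrative sketch (not RIOS code): a fixed-rate feedback loop of the
# kind a sub-2 ms reaction time implies, run here at 500 Hz.

PERIOD_S = 0.002  # 2 ms control period (500 Hz)

def read_sensor(t):
    return 1.0 if t > 0.01 else 0.0  # toy step input at t = 10 ms

def apply_command(force):
    pass  # placeholder for actuator output

def run_loop(duration_s=0.05, gain=0.5, target=1.0):
    commands = []
    start = time.perf_counter()
    next_tick = start
    while (now := time.perf_counter()) - start < duration_s:
        error = target - read_sensor(now - start)
        cmd = gain * error            # proportional correction
        apply_command(cmd)
        commands.append(cmd)
        next_tick += PERIOD_S
        sleep = next_tick - time.perf_counter()
        if sleep > 0:
            time.sleep(sleep)         # hold the 2 ms cadence
    return commands

cmds = run_loop()
```

The loop sleeps to the next tick rather than for a fixed interval, so timing jitter in one cycle does not accumulate across the run.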
The sensory information from the platform is processed and synthesized by custom AI running on embedded processors (i.e., computing at the edge). Several capabilities are built in, including but not limited to optimal grasping of objects, surface-topography mapping, slip detection, and object discrimination. It is worth noting that slip detection is one of the hardest problems in robotics and has eluded researchers for decades – our platform natively detects slip and its direction.
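To make the slip-detection idea concrete: a slip shows up as the tactile pressure pattern shifting across neighboring sensing elements between frames, and its direction as the direction of that shift. This toy sketch cross-correlates two consecutive 1-D frames at small shifts to decide whether the pattern moved and which way; the threshold and signal shapes are illustrative, not RIOS parameters.

```python
# Hypothetical sketch of slip detection from a tactile array: compare two
# consecutive pressure frames and find the shift that best aligns them.

def detect_slip(frame_a, frame_b, threshold=0.3):
    """Return (slipping, direction); direction is 'left', 'right', or None."""
    def corr(shift):
        # Correlation of frame_a with frame_b displaced by `shift` taxels.
        pairs = [(frame_a[i], frame_b[i + shift])
                 for i in range(len(frame_a))
                 if 0 <= i + shift < len(frame_b)]
        return sum(a * b for a, b in pairs)

    scores = {shift: corr(shift) for shift in (-1, 0, 1)}
    best = max(scores, key=scores.get)
    moved = best != 0 and scores[best] - scores[0] > threshold
    if not moved:
        return False, None
    return True, ("right" if best > 0 else "left")

# Pressure ridge shifts one taxel to the right between frames -> slip right.
before = [0.0, 1.0, 2.0, 1.0, 0.0]
after  = [0.0, 0.0, 1.0, 2.0, 1.0]
print(detect_slip(before, after))
```

Real hardware does this at high frequency over a 2-D array, but the core signal – correlated motion of the contact pattern – is the same.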
We’re building every capability that one would expect from a human hand, and more. The decentralized AI powering the sensors also enables the robot to discriminate between different types of events – such as the gripper closing, arm movement, or a slip event – and can even be used for predictive maintenance of our own hardware.
AI-DRIVEN VISION SYSTEM
The advanced AI platform is responsible for vision perception, tactile perception, and real-time motion planning and control of the AI robot. Like the brain of a child, the platform constructs models of the world and evolves with new experiences – allowing the robot to be rapidly reconfigured to perform different tasks.
RIOS leverages both hardware and software innovation to enable high-speed, high-throughput data collection with minimal human involvement. AI-driven vision systems remain at the heart of robotics systems. Contrary to popular belief, ML algorithms for perceiving and interacting with objects in a 3D world are still in their infancy. Utilizing state-of-the-art deep learning, our robots can quickly learn new objects, adapt to new environments, and seamlessly interact with them.
AI-driven tactile perception for industrial robotics is essentially an undeveloped field, owing to the dearth both of hardware able to create usable data and of the compute resources to process it. Our XTS hardware changes this by producing high-frequency, high-information-density data streams comparable to those utilized by vision AI.
Leveraging distributed computation (edge and cloud) and state-of-the-art ML algorithms, our AI platform synthesizes the data relayed by the perception AI systems for vision and haptics (i.e., sensor fusion). This results in a high-fidelity “hand-eye coordination” that enables our robots to gracefully adapt and respond to dynamic events, and makes them robust to changing environments. Beyond object recognition, our perception system is also used for quality control.
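One simple way to picture sensor fusion is combining independent confidence estimates from vision and touch into a single fused estimate. The sketch below uses a naive-Bayes (product-of-odds) rule as a stand-in for the learned fusion described above; the probabilities are toy values, not measurements.

```python
# Illustrative sensor-fusion sketch: fuse independent vision and tactile
# confidence estimates about a grasp into one estimate, assuming the two
# detectors are conditionally independent given the true state.

def fuse(p_vision, p_tactile, prior=0.5):
    """Naive-Bayes fusion of two conditionally independent detectors."""
    def odds(p):
        return p / (1.0 - p)
    # Each detector contributes a likelihood ratio relative to the prior.
    fused_odds = (odds(prior)
                  * (odds(p_vision) / odds(prior))
                  * (odds(p_tactile) / odds(prior)))
    return fused_odds / (1.0 + fused_odds)

# Vision is fairly sure the grasp is stable; touch agrees strongly.
print(round(fuse(0.7, 0.9), 3))
```

When both modalities agree, the fused confidence exceeds either one alone – which is exactly why adding touch to vision pays off.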

DATA PROCESSING PLATFORM
Data is the centerpiece of intelligent machines. Our robot platform generates terabytes of data (ROS nodes, vision data, tactile data, point clouds, system metadata, etc.) that need to be readily accessible for developing ML pipelines and for parallel processing across multiple cloud servers. We’ve developed a proprietary large-scale data-processing platform for accessing and manipulating data at scale, building engineering pipelines, standardizing ML models, and more.
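The parallel-processing pattern such a platform relies on can be sketched simply: split the record stream into shards and process shards with independent workers, as a cloud pipeline would across servers. This is a hypothetical illustration, not the RIOS platform; the record shape, shard size, and function names are invented.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch: shard a stream of robot-data records and process
# the shards in parallel with a worker pool.

def shard(records, size):
    """Split records into fixed-size shards for independent processing."""
    return [records[i:i + size] for i in range(0, len(records), size)]

def summarize(shard_records):
    """Per-shard aggregate, e.g. the mean tactile reading of the shard."""
    values = [r["tactile_mean"] for r in shard_records]
    return sum(values) / len(values)

def run_pipeline(records, shard_size=4, workers=4):
    shards = shard(records, shard_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each shard is summarized independently, so shards can run anywhere.
        return list(pool.map(summarize, shards))

records = [{"tactile_mean": float(i)} for i in range(8)]
print(run_pipeline(records))
```

Because shards carry no shared state, the same structure scales from a thread pool to a fleet of cloud workers without changing the per-shard logic.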
FULL PHYSICS SIMULATION PLATFORM
We have our own full-physics robot simulation backend. The physics engine allows us to realistically simulate parameters like conveyor-belt speed, gravity, and the joint force/torque of a robotic arm. We’re able to perform realistic simulations and predict to what extent the robot will be able to do a task, without touching the actual robot. This simulation platform enables us to rapidly model our robots in realistic factory environments and test our models and AI algorithms. Thousands of iterations can be performed effortlessly, yielding optimized parameters for specific tasks.
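The parameter-sweep idea can be shown with a deliberately tiny "physics" model: picking from a moving conveyor succeeds only if the object's drift during the arm's reach stays inside the gripper's capture window, and the sweep finds the fastest viable belt speed. The model and all constants below are illustrative, not the RIOS physics backend.

```python
# Toy sketch of a simulation-driven parameter sweep over conveyor speed.

def pick_succeeds(belt_speed_mps, reach_time_s=0.4, gripper_window_m=0.25):
    """The object drifts belt_speed * reach_time while the arm moves;
    the pick works if the drift stays inside the capture window."""
    drift = belt_speed_mps * reach_time_s
    return drift <= gripper_window_m

def max_viable_speed(speeds):
    """Sweep candidate belt speeds and keep the fastest one that works."""
    viable = [s for s in speeds if pick_succeeds(s)]
    return max(viable) if viable else None

speeds = [round(0.1 * k, 1) for k in range(1, 16)]  # 0.1 .. 1.5 m/s
print(max_viable_speed(speeds))
```

A real physics engine replaces `pick_succeeds` with a full rigid-body simulation, but the outer loop – sweep a parameter, keep what works – is the same, and thousands of such iterations are cheap.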
ENHANCED REINFORCEMENT LEARNING
We’re progressively incorporating and adapting many techniques of modern AI on the platform – concepts like imitation learning and reinforcement learning. These endeavors are especially exciting given the capabilities of our dedicated, superior hardware. We’re gradually transitioning to an infrastructure in which the robots learn to grasp and manipulate objects on their own, without any human intervention. Analogous to how a child learns to grasp new objects, the robot is guided by feedback through both vision and tactile sensory data; it then starts to construct models of grasping and extends these models to unseen objects. This is a glimpse of the robots of the future.
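The trial-and-error loop at the heart of this can be sketched with a minimal reinforcement-learning example: an epsilon-greedy agent picks a grip force, a simulated grasp returns a reward (standing in for vision and tactile feedback), and the value estimates improve with experience until the best force dominates. The forces, rewards, and learning rate are toy values, not RIOS parameters.

```python
import random

# Toy sketch of learning to grasp without human supervision: an
# epsilon-greedy bandit over candidate grip forces.

random.seed(0)  # reproducible toy run

FORCES = [2.0, 5.0, 8.0]  # candidate grip forces (N)

def grasp_reward(force):
    """Simulated outcome: too little force drops, too much crushes."""
    return {2.0: 0.1, 5.0: 0.9, 8.0: 0.3}[force]

def learn(episodes=500, epsilon=0.1, lr=0.1):
    q = {f: 0.0 for f in FORCES}  # value estimate per force
    for _ in range(episodes):
        if random.random() < epsilon:
            force = random.choice(FORCES)    # explore
        else:
            force = max(q, key=q.get)        # exploit best so far
        reward = grasp_reward(force)         # stands in for sensory feedback
        q[force] += lr * (reward - q[force]) # incremental value update
    return q

q = learn()
best = max(q, key=q.get)
```

Even this tiny agent discovers the middle force through feedback alone; scaling the same loop to high-dimensional tactile and visual state is what the full platform is built for.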