RIOS DX-1 is a full-stack, multi-purpose dexterous robot that can be reconfigured to perform a wide variety of tasks in unstructured environments, across industries ranging from manufacturing and e-commerce to food services, lab automation, and more. We’ve constructed a library of models for DX-1 to handle tasks ranging from pick-and-place of arbitrary objects to complex component assembly, and DX-1’s infrastructure can handle thousands of different SKUs. DX-1 has been designed to interact with objects in both static and dynamic applications, such as bin picking and moving conveyor belts. These models can be reused, adapted, or extended to enable DX-1 to perform both brand-new tasks and increasingly complex manipulation tasks in any environment.

Below are videos demonstrating a subset of DX-1’s capabilities. These videos are representative of what the robot can do, but they are by no means an exhaustive list. It is important to note that none of these tasks is scripted or pre-programmed; they are underpinned by AI. The robot recognizes the objects and acts according to a built-in model of what should be accomplished (i.e., it understands the task at hand), regardless of the dynamic nature of the environment, where everything can be in motion.


DX-1’s ability to recognize parts in any orientation and precisely align them with sub-0.1 mm tolerances makes it a natural fit for manufacturing.

This video demonstrates assembly of a component onto a moving target. This demo was done for a Tier 2 automotive manufacturer. The robot recognizes an oil filter can, picks the can up, then tracks an oil filter assembly down a conveyor belt to slide the can on top of it, performing a moving assembly with 1 mm tolerances. The robot accurately performs the task even if the speed of the conveyor belt is altered or the oil filter arrives at different positions on the assembly line, which shows its robustness to dynamically changing environments.
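One way to picture the tracking behavior above is a constant-velocity prediction: the robot aims ahead of the assembly by the distance the belt will travel while the arm completes its move, so the same logic works regardless of belt speed. This is an illustrative approximation, not RIOS’s actual control code; the function name and all numeric values are hypothetical.

```python
def intercept_point(target_pos_mm, belt_speed_mm_s, robot_delay_s):
    """Predict where the assembly will be when the robot arrives.

    A constant-velocity lead: offset the current target position by the
    distance the belt moves during the robot's own motion time.
    """
    return target_pos_mm + belt_speed_mm_s * robot_delay_s

# A faster belt simply produces a larger lead; no re-scripting is needed.
slow = intercept_point(200.0, 50.0, 0.4)   # -> 220.0 mm
fast = intercept_point(200.0, 150.0, 0.4)  # -> 260.0 mm
```

In a real closed-loop system this prediction would be refreshed every control cycle from fresh vision estimates, which is what makes the placement robust to speed changes mid-task.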


DX-1’s ability to recognize parts and components in any orientation and precisely align and manipulate them with sub-0.1 mm tolerances makes it a natural fit for lab automation.

This video demonstrates handling of sample bottles (i.e., moving samples around, placing a bottle on a scale, and storing samples in a transport carrier). This demo was part of automating gas chromatography sample preparation for a chemical company. DX-1 has an extensive library of actions it can perform, and this library is constantly evolving. DX-1 can be programmed to handle various lab samples such as test tubes, vials, and beakers, and to perform various tasks like capping/uncapping bottles, inserting test tubes into analytics equipment, pouring liquids, and more.
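One way to picture such an action library is as a set of named primitives that are composed into task sequences. The sketch below is purely illustrative: the action names and data shapes are invented for this example and bear no relation to DX-1’s real API.

```python
# A hypothetical library of lab-handling primitives. Each primitive
# appends its effect to a running task log (a stand-in for world state).
ACTION_LIBRARY = {
    "pick":  lambda item, log: log + [f"picked {item}"],
    "place": lambda item, log: log + [f"placed {item}"],
    "uncap": lambda item, log: log + [f"uncapped {item}"],
}

def run_task(steps):
    """Execute a task described as a sequence of (action, item) pairs."""
    log = []
    for action, item in steps:
        log = ACTION_LIBRARY[action](item, log)
    return log

# Composing primitives into a sample-handling task:
run_task([("pick", "bottle"), ("place", "scale")])
# -> ['picked bottle', 'placed scale']
```

The appeal of this structure is that extending the library (e.g., adding a pouring primitive) immediately enables new task sequences without rewriting existing ones.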


DX-1’s ability to grasp a broad class of objects, including previously unseen objects, also makes it well suited for e-commerce applications.

This video demonstrates pick-and-place of various objects that are randomly placed and randomly oriented on a table. This demo was done for an e-commerce supplier and is representative of the many variants of this pick-and-place task. In this video, the AI robot is instructed to pick up objects of interest and place them in their respective bins. For this task, our algorithms recognize the objects of interest, compute initial grasp points based on imitation learning models, and subsequently finalize the grasp by leveraging haptic signatures. For unseen objects, DX-1 will grasp them either by relying on existing models of similar object classes in its database or by building new models on the fly.
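The described flow — propose a grasp from a learned model, fall back to a similar known class for unseen objects, then refine using haptic feedback — can be sketched at a high level as below. Everything here is hypothetical (class names, model fields, the 5 mm slip correction); it illustrates the control flow, not RIOS’s algorithms.

```python
# Hypothetical per-class grasp models (stand-ins for learned models).
KNOWN_GRASP_MODELS = {
    "mug":    {"grasp_width_mm": 80, "approach": "side"},
    "bottle": {"grasp_width_mm": 60, "approach": "top"},
}

def propose_grasp(object_class, similar_classes=()):
    """Initial grasp: use the object's own model if known, otherwise
    fall back to the most similar known class."""
    if object_class in KNOWN_GRASP_MODELS:
        return KNOWN_GRASP_MODELS[object_class]
    for cls in similar_classes:
        if cls in KNOWN_GRASP_MODELS:
            return KNOWN_GRASP_MODELS[cls]
    return None  # no prior available: a new model would be built on the fly

def finalize_grasp(grasp, haptic_slip_detected):
    """Refine the grasp from haptic feedback: close further on slip."""
    if grasp is None:
        return None
    width = grasp["grasp_width_mm"]
    if haptic_slip_detected:
        width -= 5  # illustrative correction, not a real calibration value
    return {**grasp, "grasp_width_mm": width}
```

For example, an unseen "tumbler" tagged as similar to "mug" would inherit the mug’s side approach, then have its grip width tightened if the haptic signature indicates slip.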


DX-1’s ability to recognize parts and to incorporate tactile perception of hardness, vibrations, and other physical object properties comes in handy for machine tending applications.

This video demonstrates two key complex actions required for machine tending of 3D printers. The 3D printer build head is unscrewed, removed, and placed on the table. The build head is then inserted (with mm tolerances) and tightened into place with the screw. DX-1’s combination of AI-powered vision and tactile perception enables it to recognize and interact with equipment parts (e.g., the build head) and newly fabricated parts. It can manipulate equipment in ways that require visual and tactile perception, enabling long-term unattended operation. In the case of this 3D printer, a technician could perform maintenance (thereby altering the build head and screw state) and DX-1 will simply adapt.
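As a toy example of the tactile side of this, tightening can be declared complete once sensed torque stays above a threshold for several consecutive readings, rather than running a fixed number of turns. The threshold, window, and units below are invented for illustration and are not DX-1 parameters.

```python
def screw_seated(torque_readings_nm, threshold_nm=1.2, window=3):
    """Declare the screw seated once torque holds above the threshold
    for `window` consecutive readings (values are illustrative)."""
    consecutive = 0
    for torque in torque_readings_nm:
        consecutive = consecutive + 1 if torque >= threshold_nm else 0
        if consecutive >= window:
            return True
    return False

screw_seated([0.2, 0.5, 1.3, 1.3, 1.4])  # -> True (torque holds at the end)
screw_seated([0.2, 1.3, 0.5, 1.3])       # -> False (never holds long enough)
```

Conditioning on sensed state rather than a fixed script is what lets the robot adapt when a technician has changed the build head or screw state between cycles.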