---
pretty_name: AgiBot World
size_categories:
- n>1T
task_categories:
- other
language:
- en
tags:
- real-world
- dual-arm
- robotic manipulation
extra_gated_prompt: >-
  ### AgiBot World COMMUNITY LICENSE AGREEMENT

  AgiBot World Alpha Release Date: December 30, 2024. All the data and code
  within this repo are under [CC BY-NC-SA
  4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
extra_gated_fields:
  First Name: text
  Last Name: text
  Email: text
  Country: country
  Affiliation: text
  Phone: text
  Job title:
    type: select
    options:
      - Student
      - Research Graduate
      - AI researcher
      - AI developer/engineer
      - Reporter
      - Other
  Research interest: text
  geo: ip_location
  By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected, stored, processed, and shared in accordance with the AgiBot Privacy Policy: checkbox
extra_gated_description: >-
  The information you provide will be collected, stored, processed and shared in
  accordance with the AgiBot Privacy Policy.
extra_gated_button_content: Submit
---
# Key Features
- **1 million+** trajectories from 100 robots.
- **100+ real-world scenarios** across 5 target domains.
- **Cutting-edge hardware:** visual tactile sensors / 6-DoF dexterous hand / mobile dual-arm robots
- **Tasks involving:**
- Contact-rich manipulation
- Long-horizon planning
- Multi-robot collaboration
# News
- **`[2025/1/3]`** AgiBot World Alpha sample dataset released.
- **`[2024/12/30]`** AgiBot World Alpha released.
# TODO List
- [ ] **AgiBot World Beta**: ~1,000,000 trajectories of high-quality robot data (expected release date: Q1 2025)
- [ ] Complete language annotation of Alpha version (expected release date: Mid-January 2025)
- [ ] **AgiBot World Colosseum**: Comprehensive platform (expected release date: 2025)
- [ ] **2025 AgiBot World Challenge** (expected release date: 2025)
# Table of Contents
- [Key Features](#key-features)
- [News](#news)
- [TODO List](#todo-list)
- [Get started](#get-started)
- [Download the Dataset](#download-the-dataset)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Dataset Structure](#dataset-structure)
- [Explanation of Proprioceptive State](#explanation-of-proprioceptive-state)
- [License and Citation](#license-and-citation)
# Get started
## Download the Dataset
To download the full dataset, use the following commands. If you encounter any issues, please refer to the official Hugging Face documentation.
```bash
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
# When prompted for a password, use an access token with at least read access.
# Generate one from your settings: https://huggingface.co/settings/tokens
git clone https://huggingface.co/datasets/agibot-world/AgiBotWorld-Alpha
# If you want to clone without large files - just their pointers
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/agibot-world/AgiBotWorld-Alpha
```
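If you prefer not to use git, the `huggingface_hub` Python library can fetch the repository as well. A minimal sketch (the `local_dir` path is a placeholder; the dataset is gated, so authenticate first, e.g. via `huggingface-cli login`):
```python
# Minimal sketch: download the full dataset with huggingface_hub instead of git.
# Requires `pip install huggingface_hub`; authenticate first since the dataset is gated.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="agibot-world/AgiBotWorld-Alpha",
    repo_type="dataset",
    local_dir="/path/to/agibotworld/alpha",  # placeholder: choose your own path
)
```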
If you only want to download a specific task, such as `task_327`, use the following commands.
```bash
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
# Initialize an empty Git repository
git init AgiBotWorld-Alpha
cd AgiBotWorld-Alpha
# Set the remote repository
git remote add origin https://huggingface.co/datasets/agibot-world/AgiBotWorld-Alpha
# Enable sparse-checkout
git sparse-checkout init
# Specify the folders and files
git sparse-checkout set observations/327 task_info/task_327.json scripts proprio_stats parameters
# Pull the data
git pull origin main
```
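The same selective download can be done with `huggingface_hub` by filtering paths. A sketch mirroring the sparse-checkout set above (the `local_dir` path is again a placeholder):
```python
# Minimal sketch: fetch only the task_327 files with huggingface_hub,
# mirroring the sparse-checkout list above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="agibot-world/AgiBotWorld-Alpha",
    repo_type="dataset",
    local_dir="/path/to/agibotworld/alpha",  # placeholder: choose your own path
    allow_patterns=[
        "observations/327/**",
        "task_info/task_327.json",
        "scripts/**",
        "proprio_stats/**",
        "parameters/**",
    ],
)
```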
To make it easier to inspect the dataset's internal structure and examples, we also provide a sample dataset of approximately 7 GB; please refer to `sample_dataset.tar`.
## Dataset Preprocessing
Our project relies solely on the [lerobot library](https://github.com/huggingface/lerobot) (dataset `v2.0`); please follow their [installation instructions](https://github.com/huggingface/lerobot?tab=readme-ov-file#installation).
Here, we provide a script for converting the dataset to the lerobot format. **Note** that you need to replace `/path/to/agibotworld/alpha` and `/path/to/save/lerobot` with the actual paths.
```bash
python scripts/convert_to_lerobot.py --src_path /path/to/agibotworld/alpha --task_id 352 --tgt_path /path/to/save/lerobot
```
We would like to express our gratitude to the developers of lerobot for their outstanding contributions to the open-source community.
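Once converted, the data should load like any other lerobot dataset. A minimal sketch, assuming lerobot's `LeRobotDataset` API (the module path and constructor arguments may differ across lerobot versions, and the repo id below is a placeholder):
```python
# Minimal sketch: load the converted data with lerobot and inspect one frame.
# Module path and arguments may vary across lerobot versions.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset(
    repo_id="agibotworld/task_352",  # placeholder id used at conversion time
    root="/path/to/save/lerobot",    # the --tgt_path used above
)
print(f"{dataset.num_episodes} episodes, {len(dataset)} frames")
frame = dataset[0]  # a dict of tensors for a single frame
print(list(frame.keys()))
```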
## Dataset Structure
### Folder hierarchy
```
data
├── task_info
│   ├── task_327.json
│   ├── task_352.json
│   └── ...
├── observations
│   ├── 327                      # This represents the task id.
│   │   ├── 648642               # This represents the episode id.
│   │   │   ├── depth            # A folder of depth information saved in PNG format.
│   │   │   └── videos           # A folder of videos from all camera perspectives.
│   │   ├── 648649
│   │   │   └── ...
│   │   └── ...
│   ├── 352
│   │   ├── 648544
│   │   │   ├── depth
│   │   │   └── videos
│   │   └── 648564
│   │       └── ...
│   └── ...
├── parameters
│   ├── 327
│   │   ├── 648642
│   │   │   └── camera
│   │   ├── 648649
│   │   │   └── camera
│   │   └── ...
│   └── 352
│       ├── 648544
│       │   └── camera           # All the cameras' intrinsic and extrinsic parameters.
│       ├── 648564
│       │   └── camera
│       └── ...
└── proprio_stats
    ├── 327                      # task_id
    │   ├── 648642               # episode_id
    │   │   └── proprio_stats.h5 # All of the robot's proprioceptive information.
    │   ├── 648649
    │   │   └── proprio_stats.h5
    │   └── ...
    ├── 352
    │   ├── 648544
    │   │   └── proprio_stats.h5
    │   └── 648564
    │       └── proprio_stats.h5
    └── ...
```
### JSON file format
In the `task_[id].json` file, we store the basic information of every episode along with the language instructions. Several keywords deserve further explanation.
- **action_config**: The content under this key is a list of all **action slices** in the episode. Each action slice includes a start and end frame, the corresponding atomic skill, and the language instruction.
- **key_frame**: The content under this key consists of keyframe annotations, including the start and end frames of each keyframe and detailed descriptions.
(`action_text` and `description` *are not available yet; they will be released by mid-January 2025.*)
```
[
  {
    "episode_id": 649078,
    "task_id": 327,
    "task_name": "Picking items in Supermarket",
    "init_scene_text": "The robot is in front of the fruit shelf in the supermarket.",
    "lable_info": {
      "action_config": [
        {
          "start_frame": 0,
          "end_frame": 435,
          "action_text": "Pick up onion from the shelf.",
          "skill": "Pick"
        },
        {
          "start_frame": 435,
          "end_frame": 619,
          "action_text": "Place onion into the plastic bag in the shopping cart.",
          "skill": "Place"
        },
        ...
      ],
      "key_frame": [
        {
          "start": 0,
          "end": 435,
          "comment": "Failure recovery"
        }
      ]
    }
  },
  ...
]
```
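For instance, the annotations can be read with Python's standard `json` module. A short sketch using the key names shown above (the file path assumes the dataset root as the working directory):
```python
# Sketch: list every action slice of every episode in task 327.
# Key names follow the example above; run from the dataset root.
import json

with open("task_info/task_327.json") as f:
    episodes = json.load(f)

for ep in episodes:
    print(ep["episode_id"], "-", ep["task_name"])
    for action in ep["lable_info"]["action_config"]:
        print(f"  frames {action['start_frame']}-{action['end_frame']}: {action['skill']}")
```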
### HDF5 file format
In the `proprio_stats.h5` file, we store all the robot's proprioceptive data. For more detailed information, please refer to the [explanation of proprioceptive state](#explanation-of-proprioceptive-state).
```
|-- timestamp
|-- state
|-- effector
|-- force
|-- position
|-- end
|-- angular
|-- orientation
|-- position
|-- velocity
|-- wrench
|-- head
|-- effort
|-- position
|-- velocity
|-- joint
|-- current_value
|-- effort
|-- position
|-- velocity
|-- robot
|-- orientation
|-- orientation_drift
|-- position
|-- position_drift
|-- waist
|-- effort
|-- position
|-- velocity
|-- action
|-- effector
|-- force
|-- index
|-- position
|-- end
|-- orientation
|-- position
|-- head
|-- effort
|-- position
|-- velocity
|-- joint
|-- effort
|-- index
|-- position
|-- velocity
|-- robot
|-- index
|-- orientation
|-- position
|-- velocity
|-- waist
|-- effort
|-- position
|-- velocity
```
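The layout above can be explored directly with `h5py`. A minimal sketch (the file path is a placeholder):
```python
# Sketch: print every dataset in one proprio_stats.h5 file with its shape,
# then read two groups. The file path is a placeholder; adjust to your copy.
import h5py

with h5py.File("proprio_stats/327/648642/proprio_stats.h5", "r") as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))
    timestamps = f["timestamp"][:]            # [N], nanoseconds
    joint_pos = f["state/joint/position"][:]  # [N, 14], rad
```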
## Explanation of Proprioceptive State
### Terminology
*The definitions and data ranges in this section may change with software and hardware versions. Stay tuned.*
**State and action**
1. State
State refers to the monitoring information from the robot's sensors and actuators.
2. Action
Action refers to the commands sent to the hardware abstraction layer, which the controller then executes. As a result, the issued commands may differ from the actually executed state.
**Actuators**
1. ***Effector:*** refers to the end effector, for example, dexterous hands or grippers.
2. ***End:*** refers to the 6-DoF end pose at the robot flange.
3. ***Head:*** refers to the robot's head, which has two degrees of freedom (pitch and yaw).
4. ***Joint:*** refers to the joints of the robot's dual arms, with 14 degrees of freedom (7 DoF per arm).
5. ***Robot:*** refers to the robot's pose in its surrounding environment. The orientation and position give the robot's relative pose in the odometry coordinate system, whose origin is set when the robot is powered on.
6. ***Waist:*** refers to the joints of the robot's waist, which has two degrees of freedom (pitch and lift).
### Common fields
1. Position: Spatial position, encoder position, angle, etc.
2. Velocity: Speed
3. Angular: Angular velocity
4. Effort: Torque of the motor. Not available yet.
5. Wrench: Six-dimensional wrench, i.e., force along the xyz axes and torque about them. Not available yet.
### Value shapes and ranges
| Group | Shape | Meaning |
| --- | :---- | :---- |
| /timestamp | [N] | timestamp in nanoseconds |
| /state/effector/position (gripper) | [N, 2] | left `[:, 0]`, right `[:, 1]`, gripper open range in mm |
| /state/effector/position (dexhand) | [N, 12] | left `[:, :6]`, right `[:, 6:]`, joint angle in rad |
| /state/end/orientation | [N, 2, 4] | left `[:, 0, :]`, right `[:, 1, :]`, flange quaternion with xyzw |
| /state/end/position | [N, 2, 3] | left `[:, 0, :]`, right `[:, 1, :]`, flange xyz in meters |
| /state/head/position | [N, 2] | yaw `[:, 0]`, pitch `[:, 1]`, rad |
| /state/joint/current_value | [N, 14] | left arm `[:, :7]`, right arm `[:, 7:]` |
| /state/joint/position | [N, 14] | left arm `[:, :7]`, right arm `[:, 7:]`, rad |
| /state/robot/orientation | [N, 4] | quaternion in xyzw, yaw only |
| /state/robot/position | [N, 3] | xyz position, where z is always 0 in meters |
| /state/waist/position | [N, 2] | pitch `[:, 0]` in rad, lift `[:, 1]` in meters |
| /action/*/index | [M] | action indexes marking the frames at which the control source is actually sending signals |
| /action/effector/position (gripper) | [N, 2] | left `[:, 0]`, right `[:, 1]`, 0 for fully open and 1 for fully closed |
| /action/effector/position (dexhand) | [N, 12] | same as /state/effector/position |
| /action/effector/index | [M_1] | index when the control source for end effector is sending control signals |
| /action/end/orientation | [N, 2, 4] | same as /state/end/orientation |
| /action/end/position | [N, 2, 3] | same as /state/end/position |
| /action/end/index | [M_2] | same as other indexes |
| /action/head/position | [N, 2] | same as /state/head/position |
| /action/head/index | [M_3] | same as other indexes |
| /action/joint/position | [N, 14] | same as /state/joint/position |
| /action/joint/index | [M_4] | same as other indexes |
| /action/robot/velocity | [N, 2] | velocity along the x axis `[:, 0]`, yaw rate `[:, 1]` |
| /action/robot/index | [M_5] | same as other indexes |
| /action/waist/position | [N, 2] | same as /state/waist/position |
| /action/waist/index | [M_6] | same as other indexes |
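As a worked example of the index fields: since the action arrays are stored at the state timestamps, one might keep only the frames where commands were actually issued by indexing with them. A sketch, under the assumption that each index array addresses rows of its sibling action arrays (the file path is a placeholder):
```python
# Sketch: keep only the joint commands that were actually issued,
# assuming /action/joint/index addresses rows of /action/joint/position.
import h5py

with h5py.File("proprio_stats/327/648642/proprio_stats.h5", "r") as f:  # placeholder path
    joint_actions = f["action/joint/position"][:]  # [N, 14], rad
    joint_index = f["action/joint/index"][:]       # [M_4]

issued = joint_actions[joint_index]                # [M_4, 14]
left_arm, right_arm = issued[:, :7], issued[:, 7:]
```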
# License and Citation
All the data and code within this repo are under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). Please consider citing our project if it helps your research.
```BibTeX
@misc{contributors2024agibotworldrepo,
title={AgiBot World Colosseum},
author={AgiBot World Colosseum contributors},
howpublished={\url{https://github.com/OpenDriveLab/AgiBot-World}},
year={2024}
}
```