The official Python SDK for Cyberwave. Create, control, and simulate robotics with ease.
```bash
pip install cyberwave
```

Get your API token from the Cyberwave platform:
- Log in to your Cyberwave instance
- Navigate to Profile → API Tokens
- Create a token and copy it
```python
from cyberwave import Cyberwave

# Configure with your token
cw = Cyberwave(
    token="your_token_here",
)

# Create a digital twin from an asset
robot = cw.twin("the-robot-studio/so101")

# Change position and rotation in the environment
robot.edit_position(x=1.0, y=0.0, z=0.5)
robot.edit_rotation(yaw=90)  # degrees

# Move the robot arm to 30 degrees
robot.joints.set("1", 30)

# Get current joint positions
print(robot.joints.get_all())
```

```python
from cyberwave import Cyberwave

cw = Cyberwave(
    token="your_token_here"
)

# You can also set your token as an environment variable:
#   export CYBERWAVE_TOKEN=your_token_here
# In that case, you can simply do:
cw = Cyberwave()

# List workspaces
workspaces = cw.workspaces.list()
print(f"Found {len(workspaces)} workspaces")
# Create a project
project = cw.projects.create(
    name="My Robotics Project",
    workspace_id=workspaces[0].uuid,
)

# Create an environment
environment = cw.environments.create(
    name="Development",
    project_id=project.uuid,
)
```

```python
import math

# To instantiate a twin, you can query the available assets from the catalog.
# This query returns both the public assets available at cyberwave.com/catalog and the private assets available to your organization.
assets = cw.assets.search("so101")
robot = cw.twin(assets[0].registry_id)  # registry_id uniquely identifies the asset in the catalog; here it's the-robot-studio/so101
# Move the twin to a specific position
robot.edit_position([1.0, 0.5, 0.0])
# Update scale
robot.edit_scale(x=1.5, y=1.5, z=1.5)
# Move a joint to a specific position using radians
robot.joints.set("shoulder_joint", math.pi / 4, degrees=False)
# You can also use degrees:
robot.joints.set("shoulder_joint", 45, degrees=True)
# You can also do a get_or_create for a specific twin in an environment you created:
robot = cw.twin("the-robot-studio/so101", environment_id="YOUR_ENVIRONMENT_ID")If you are always using the same environment, you can set it as a default with the CYBERWAVE_ENVIRONMENT_ID environment variable:
export CYBERWAVE_ENVIRONMENT_ID="YOUR_ENVIRONMENT_ID"
export CYBERWAVE_TOKEN="YOUR_TOKEN"
python your_script.py
```

And then you can simply do:

```python
from cyberwave import Cyberwave
cw = Cyberwave()
robot = cw.twin("the-robot-studio/so101")
```

This returns the first SO101 twin in your environment, or creates it if it doesn't exist.
Stream camera feeds to your digital twins using WebRTC. The SDK supports both standard USB/webcam cameras (via OpenCV) and Intel RealSense cameras with RGB and depth streaming.
Install FFmpeg if you don't have it:

```bash
# Mac
brew install ffmpeg pkg-config
# Ubuntu
sudo apt-get install ffmpeg
```

Install camera dependencies:

```bash
# Standard cameras (OpenCV)
pip install "cyberwave[camera]"
# Intel RealSense cameras
pip install "cyberwave[realsense]"
```

```python
import asyncio
from cyberwave import Cyberwave
from cyberwave.sensor import CV2CameraStreamer, RealSenseStreamer, Resolution
cw = Cyberwave()
# CV2 Camera (USB/webcam)
streamer = CV2CameraStreamer(
    cw.mqtt,
    camera_id=0,
    resolution=Resolution.HD,
    fps=30,
    twin_uuid="your_twin_uuid",
)

async def stream():
    await streamer.start()
    await asyncio.sleep(60)
    await streamer.stop()

asyncio.run(stream())
```

| Resolution | Size | Description |
|---|---|---|
| QVGA | 320×240 | Quarter VGA - low bandwidth |
| VGA | 640×480 | Standard (default) |
| SVGA | 800×600 | Super VGA |
| HD | 1280×720 | 720p HD |
| FULL_HD | 1920×1080 | 1080p Full HD |
```python
from cyberwave.sensor import CV2VideoTrack, CV2CameraStreamer, CameraConfig, Resolution
# Check supported resolutions for a camera
supported = CV2VideoTrack.get_supported_resolutions(camera_id=0)
print(f"Supported: {[str(r) for r in supported]}")
# Get camera info
info = CV2VideoTrack.get_camera_info(camera_id=0)
print(f"Camera: {info}")
# Using CameraConfig
config = CameraConfig(resolution=Resolution.HD, fps=30, camera_id=0)
streamer = CV2CameraStreamer.from_config(cw.mqtt, config, twin_uuid="...")
```
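Putting these helpers together, here is a small sketch that prefers HD but falls back to VGA when the camera can't provide it. It assumes `get_supported_resolutions` returns `Resolution` values (as the print above suggests) and reuses the `from_config` constructor shown above:

```python
from cyberwave import Cyberwave
from cyberwave.sensor import CV2VideoTrack, CV2CameraStreamer, CameraConfig, Resolution

cw = Cyberwave()

# Ask the camera which resolutions it supports, then prefer HD with a VGA fallback
supported = CV2VideoTrack.get_supported_resolutions(camera_id=0)
resolution = Resolution.HD if Resolution.HD in supported else Resolution.VGA

config = CameraConfig(resolution=resolution, fps=30, camera_id=0)
streamer = CV2CameraStreamer.from_config(cw.mqtt, config, twin_uuid="your_twin_uuid")
```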
The SDK supports dynamic discovery of RealSense device capabilities:

```python
from cyberwave.sensor import (
    RealSenseDiscovery,
    RealSenseConfig,
    RealSenseStreamer,
    Resolution,
)

# Check if RealSense SDK is available
if RealSenseDiscovery.is_available():
    # List connected devices
    devices = RealSenseDiscovery.list_devices()
    for dev in devices:
        print(f"{dev.name} (SN: {dev.serial_number})")

    # Get detailed device info with all supported profiles
    info = RealSenseDiscovery.get_device_info()
    print(f"Color resolutions: {info.get_color_resolutions()}")
    print(f"Depth resolutions: {info.get_depth_resolutions()}")
    print(f"Sensor options: {info.sensor_options}")

# Auto-detect and create streamer from device capabilities
streamer = RealSenseStreamer.from_device(
    cw.mqtt,
    prefer_resolution=Resolution.HD,
    prefer_fps=30,
    enable_depth=True,
    twin_uuid="your_twin_uuid",
)

# Or use manual configuration with validation
config = RealSenseConfig(
    color_resolution=Resolution.HD,
    depth_resolution=Resolution.VGA,
    color_fps=30,
    depth_fps=15,
    enable_depth=True,
)

# Validate against device
is_valid, errors = config.validate()
if not is_valid:
    print(f"Config errors: {errors}")

streamer = RealSenseStreamer.from_config(cw.mqtt, config, twin_uuid="...")
```
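Once configured, the RealSense streamer can be driven the same way as the CV2 example above. A minimal sketch, assuming `RealSenseStreamer` exposes the same async `start()`/`stop()` interface as `CV2CameraStreamer`:

```python
import asyncio

async def stream_rgbd():
    await streamer.start()   # begins streaming RGB (and depth, if enabled)
    await asyncio.sleep(60)  # stream for one minute
    await streamer.stop()

asyncio.run(stream_rgbd())
```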
Query detailed device capabilities:

```python
info = RealSenseDiscovery.get_device_info()
# Check if a specific profile is supported
if info.supports_color_profile(1280, 720, 30, "BGR8"):
    print("HD @ 30fps with BGR8 is supported")

# Get available FPS for a resolution
fps_options = info.get_color_fps_options(1280, 720)
print(f"Available FPS for HD: {fps_options}")
# Get sensor options (exposure, gain, laser power, etc.)
for sensor_name, options in info.sensor_options.items():
    print(f"\n{sensor_name}:")
    for opt in options:
        print(f"  {opt.name}: {opt.value} (range: {opt.min_value}-{opt.max_value})")
```

```
cyberwave/sensor/
├── __init__.py # Base classes + exports
├── config.py # Resolution, CameraConfig, RealSenseConfig, RealSenseDiscovery
├── camera_cv2.py # CV2VideoTrack, CV2CameraStreamer
└── camera_rs.py # RealSenseVideoTrack, RealSenseStreamer
```

You can change a specific joint's actuation, using either degrees or radians:

```python
robot = cw.twin("the-robot-studio/so101")
# Set individual joints (degrees by default)
robot.joints.set("shoulder_joint", 45, degrees=True)
# Or use radians
import math
robot.joints.set("elbow_joint", math.pi/4, degrees=False)
# Get current joint position
angle = robot.joints.get("shoulder_joint")
# List all joints
joint_names = robot.joints.list()
# Get all joint states at once
all_joints = robot.joints.get_all()
```

To check out the available endpoints and their parameters, you can refer to the full API reference here.
By default, the SDK will send data marked as arriving from the real world. If you want to send data from a simulated environment using the SDK, you can initialize the SDK as follows:
```python
from cyberwave import Cyberwave
cw = Cyberwave(source_type="sim")You can also use the SDK as a client of the Studio editor - making it appear as if it was just another editor on the web app. To do so, you can initialize it as follows:
from cyberwave import Cyberwave
cw = Cyberwave(source_type="edit")Lastly, if you want to have your SDK act as a remote teleoperator, sending commands to the actual device from the cloud, you can init the SDK as follows:
from cyberwave import Cyberwave
cw = Cyberwave(source_type="tele")Check the examples directory for complete examples:
- Basic twin control
- Multi-robot coordination (see the sketch after this list)
- Real-time synchronization
- Joint manipulation for robot arms
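For instance, a minimal multi-robot coordination sketch could compose the calls shown above. This is only a sketch: the second registry ID is a placeholder, and it assumes `joints.get_all()` returns a name-to-angle mapping.

```python
from cyberwave import Cyberwave

# Assumes CYBERWAVE_TOKEN and CYBERWAVE_ENVIRONMENT_ID are set
cw = Cyberwave()

# Get or create one twin per asset (the second registry ID is a placeholder)
leader = cw.twin("the-robot-studio/so101")
follower = cw.twin("your-org/your-other-arm")

# Place the twins side by side
leader.edit_position(x=0.0, y=0.0, z=0.0)
follower.edit_position(x=1.0, y=0.0, z=0.0)

# Mirror the leader's joint angles onto the follower
# (assumes get_all() returns a mapping of joint name to angle)
for name, angle in leader.joints.get_all().items():
    follower.joints.set(name, angle)
```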
Run basic import tests:

```bash
poetry install
poetry run python tests/test_imports.py
```

- Documentation: docs.cyberwave.com
- Issues: GitHub Issues
- Community: Discord