Complete reference for the RNX command-line interface, including all commands, options, and examples.
Options available for all commands:
--config <path> # Path to configuration file (default: searches standard locations)
--node <name> # Node name from configuration (default: "default")
--help, -h # Show help for command
--version, -v # Show version information
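These flags can be combined with any command; for example (the config path and node name below are placeholders for your own setup):

```bash
# Use an explicit config file and a non-default node for a single command
rnx --config=./rnx-config.yml --node=staging list

# Check which client version is installed
rnx --version
```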
RNX searches for configuration in this order:
1. `./rnx-config.yml`
2. `./config/rnx-config.yml`
3. `~/.rnx/rnx-config.yml`
4. `/etc/joblet/rnx-config.yml`
5. `/opt/joblet/config/rnx-config.yml`

### rnx run

Execute a command on the Joblet server.
rnx run [flags] <command> [args...]
| Flag | Description | Default |
|---|---|---|
| `--max-cpu` | Maximum CPU usage percentage (0-10000) | 0 (unlimited) |
| `--max-memory` | Maximum memory in MB | 0 (unlimited) |
| `--max-iobps` | Maximum I/O bytes per second | 0 (unlimited) |
| `--cpu-cores` | CPU cores to use (e.g., "0-3" or "1,3,5") | "" (all cores) |
| `--network` | Network mode: bridge, isolated, none, or custom | "bridge" |
| `--volume` | Volume to mount (can be specified multiple times) | none |
| `--upload` | Upload file to workspace (can be specified multiple times) | none |
| `--upload-dir` | Upload directory to workspace | none |
| `--env`, `-e` | Environment variable (KEY=VALUE, visible in logs) | none |
| `--secret-env`, `-s` | Secret environment variable (KEY=VALUE, hidden from logs) | none |
| `--schedule` | Schedule job execution (duration or RFC3339 time) | immediate |
| `--workflow` | YAML workflow file for workflow execution | none |
# Simple command
rnx run echo "Hello, World!"
# With resource limits
rnx run --max-cpu=50 --max-memory=512 --max-iobps=10485760 \
    python3 intensive_script.py
# CPU core binding
rnx run --cpu-cores="0-3" stress-ng --cpu 4 --timeout 60s
# Multiple volumes
rnx run --volume=data --volume=config \
    python3 process.py
# Environment variables (regular - visible in logs)
rnx run --env="NODE_ENV=production" --env="PORT=8080" \
    node app.js
# Secret environment variables (hidden from logs)
rnx run --secret-env="API_KEY=dummy_api_key_123" --secret-env="DB_PASSWORD=secret" \
    python app.py
# Mixed environment variables
rnx run --env="DEBUG=true" --secret-env="SECRET_KEY=mysecret" \
    python app.py
# File upload
rnx run --upload=script.py --upload=data.csv \
    python3 script.py data.csv
# Directory upload
rnx run --upload-dir=./project \
    npm start
# Scheduled execution
rnx run --schedule="30min" backup.sh
rnx run --schedule="2025-08-03T15:00:00" maintenance.sh
# Custom network
rnx run --network=isolated ping google.com
# Workflow execution
rnx run --workflow=ml-pipeline.yaml # Execute full workflow
rnx run --workflow=jobs.yaml:ml-analysis # Execute specific job from workflow
# Complex example
rnx run \
    --max-cpu=200 \
    --max-memory=2048 \
    --cpu-cores="0,2,4,6" \
    --network=mynet \
    --volume=persistent-data \
    --env=PYTHONPATH=/app \
    --upload-dir=./src \
    --workdir=/work/src \
    python3 main.py --process-data
When using --workflow, Joblet performs comprehensive pre-execution validation:
$ rnx run --workflow=my-workflow.yaml
🔍 Validating workflow prerequisites...
✅ No circular dependencies found
✅ All required volumes exist
✅ All required networks exist
✅ All required runtimes exist
✅ All job dependencies are valid
🎉 Workflow validation completed successfully!
Validation Checks:
- No circular dependencies among job dependencies
- All required volumes exist
- All required networks exist
- All required runtimes exist
- All job dependency references are valid

Error Example:
Error: workflow validation failed: network validation failed: missing networks: [non-existent-network]. Available networks: [bridge isolated none custom-net]
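The workflow file format itself is beyond this reference, but as a rough sketch of the kind of structure these checks operate on (job names, dependencies, and references to volumes, networks, and runtimes) it might look like the following. The field names here are illustrative assumptions, not the authoritative schema:

```yaml
# Hypothetical sketch only - consult the workflow documentation for the real schema
jobs:
  setup-data:
    command: "python3"
    args: ["setup.py"]
    volumes: ["persistent-data"]   # must already exist (volume check)
    network: "bridge"              # must already exist (network check)
  process-data:
    command: "python3"
    args: ["process_data.py"]
    requires: ["setup-data"]       # dependency references are validated and checked for cycles
```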
### rnx list

List all jobs on the server.
rnx list [flags]
| Flag | Description | Default |
|---|---|---|
| `--json` | Output in JSON format | false |
Table Format (default): Shows job ID, name, status, start time, and command in aligned columns (see the example output below).
JSON Format: Outputs a JSON array with detailed job information including all resource limits, volumes, network, and scheduling information.
# List all jobs (table format)
rnx list
# Example output:
# ID NAME STATUS START TIME COMMAND
# ---- ------------ ---------- ------------------- -------
# 1 setup-data COMPLETED 2025-08-03 10:15:32 echo "Hello World"
# 2 process-data RUNNING 2025-08-03 10:16:45 python3 script.py
# 3 - FAILED 2025-08-03 10:17:20 invalid_command
# 4 - SCHEDULED N/A backup.sh
# JSON output for scripting
rnx list --json
# Example JSON output:
# [
# {
# "id": "1",
# "name": "setup-data",
# "status": "COMPLETED",
# "start_time": "2025-08-03T10:15:32Z",
# "end_time": "2025-08-03T10:15:33Z",
# "command": "echo",
# "args": ["Hello World"],
# "exit_code": 0
# },
# {
# "id": "2",
# "name": "process-data",
# "status": "RUNNING",
# "start_time": "2025-08-03T10:16:45Z",
# "command": "python3",
# "args": ["script.py"],
# "max_cpu": 100,
# "max_memory": 512,
# "cpu_cores": "0-3",
# "scheduled_time": "2025-08-03T15:00:00Z"
# }
# ]
# Filter with jq
rnx list --json | jq '.[] | select(.status == "FAILED")'
rnx list --json | jq '.[] | select(.max_memory > 1024)'
### rnx status

Get detailed status of a specific job or workflow (unified command).
rnx status [flags] <id>
The status command automatically detects whether the ID refers to a job or a workflow, so a single command covers both.

Workflow Status Features: For workflows, the output includes the overall workflow state, a progress count of completed jobs, and a per-job table with status, exit code, and dependencies (see the example output below).
| Flag | Description | Default |
|---|---|---|
| `--workflow` | Explicitly get workflow status | false |
| `--json` | Output in JSON format | false |
# Get job status (human-readable format)
rnx status 42
# Get workflow status (automatic detection)
rnx status 1
# Explicitly get workflow status
rnx status --workflow 5
# Get status in JSON format (works for both jobs and workflows)
rnx status --json 42 # Job JSON output
rnx status --json 1 # Workflow JSON output
# Check multiple jobs/workflows
for id in 1 2 3; do rnx status $id; done
# JSON output for scripting
rnx status --json 42 | jq .status # Job status
rnx status --json 1 | jq .total_jobs # Workflow progress
# Example workflow status output:
# Workflow ID: 1
# Workflow: data-pipeline.yaml
# Status: RUNNING
# Progress: 2/4 jobs completed
#
# Jobs in Workflow:
# -----------------------------------------------------------------------------------------
# JOB ID JOB NAME STATUS EXIT CODE DEPENDENCIES
# -----------------------------------------------------------------------------------------
# 42 setup-data COMPLETED 0 -
# 43 process-data COMPLETED 0 setup-data
# 0 validate-results PENDING - process-data
# 0 generate-report PENDING - validate-results
# Example JSON output for individual job:
# {
# "id": "42",
# "name": "process-data",
# "command": "python3",
# "args": ["process_data.py"],
# "maxCPU": 100,
# "cpuCores": "0-3",
# "maxMemory": 512,
# "maxIOBPS": 0,
# "status": "COMPLETED",
# "startTime": "2025-08-03T10:15:32Z",
# "endTime": "2025-08-03T10:18:45Z",
# "exitCode": 0,
# "scheduledTime": ""
# }
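For scripting, the JSON output can be polled until a job reaches a terminal state. A minimal sketch, assuming COMPLETED and FAILED are the terminal statuses (as in the examples above):

```bash
#!/bin/bash
# Wait for a job to finish, then print its exit code
JOB_ID=42
while true; do
  STATUS=$(rnx status --json "$JOB_ID" | jq -r '.status')
  case "$STATUS" in
    COMPLETED|FAILED) break ;;
  esac
  sleep 5
done
echo "Job $JOB_ID finished with status $STATUS"
rnx status --json "$JOB_ID" | jq '.exitCode'
```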
### rnx log

View or stream job logs.
rnx log [flags] <job-id>
| Flag | Description | Default |
|---|---|---|
| `--follow`, `-f` | Stream logs in real-time | false |
| `--tail` | Number of lines to show from end | all |
| `--timestamps` | Show timestamps | false |
# View complete logs
rnx log 42
# Stream logs in real-time
rnx log -f 42
# Show last 100 lines
rnx log --tail=100 42
# With timestamps
rnx log --timestamps 42
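Because logs are written to stdout, they compose with ordinary shell tools:

```bash
# Save the complete logs to a file
rnx log 42 > job-42.log

# Follow a live job and highlight errors (Ctrl+C to stop)
rnx log -f 42 | grep -i error
```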
### rnx stop

Stop a running or scheduled job.
rnx stop <job-id>
# Stop a running job
rnx stop 42
# Stop multiple jobs
rnx list --json | jq -r '.[] | select(.status == "RUNNING") | .id' | xargs -I {} rnx stop {}
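Since stop also applies to scheduled jobs, the same jq pattern can cancel anything still waiting to run (SCHEDULED is the status shown for pending scheduled jobs, assuming the JSON output uses the same value as the table view):

```bash
# Cancel all scheduled (not yet started) jobs
rnx list --json | jq -r '.[] | select(.status == "SCHEDULED") | .id' | xargs -I {} rnx stop {}
```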
### rnx volume create

Create a new volume for persistent storage.
rnx volume create <name> [flags]
| Flag | Description | Default |
|---|---|---|
| `--size` | Volume size (e.g., 1GB, 500MB) | required |
| `--type` | Volume type: filesystem or memory | "filesystem" |
# Create 1GB filesystem volume
rnx volume create mydata --size=1GB
# Create 512MB memory volume (tmpfs)
rnx volume create cache --size=512MB --type=memory
# Create volumes for different purposes
rnx volume create db-data --size=10GB --type=filesystem
rnx volume create temp-processing --size=2GB --type=memory
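Once created, a volume is attached to a job with `--volume` and shows up inside the job's filesystem; the CI example later in this guide reads from `/volumes/<name>`, so a write-then-read round trip looks roughly like this (assuming a shell is available inside the job environment):

```bash
# Write into the volume from one job...
rnx run --volume=mydata sh -c 'echo "hello" > /volumes/mydata/hello.txt'

# ...and read it back from a later job
rnx run --volume=mydata cat /volumes/mydata/hello.txt
```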
### rnx volume list

List all volumes.
rnx volume list [flags]
| Flag | Description | Default |
|---|---|---|
| `--json` | Output in JSON format | false |
# List all volumes
rnx volume list
# JSON output
rnx volume list --json
# Check volume usage
rnx volume list --json | jq '.[] | select(.size_used > .size_total * 0.8)'
### rnx volume remove

Remove a volume.
rnx volume remove <name>
# Remove single volume
rnx volume remove mydata
# Remove all volumes (careful!)
rnx volume list --json | jq -r '.[].name' | xargs -I {} rnx volume remove {}
### rnx network create

Create a custom network.
rnx network create <name> [flags]
| Flag | Description | Default |
|---|---|---|
| `--cidr` | Network CIDR (e.g., 10.10.0.0/24) | required |
# Create basic network
rnx network create mynet --cidr=10.10.0.0/24
# Create multiple networks for different environments
rnx network create dev --cidr=10.10.0.0/24
rnx network create test --cidr=10.20.0.0/24
rnx network create prod --cidr=10.30.0.0/24
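Jobs are attached to one of these networks with the `--network` flag on `rnx run` (the script names below are placeholders):

```bash
# Run environment-specific work on the matching network
rnx run --network=dev ./integration-tests.sh
rnx run --network=prod ./smoke-tests.sh
```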
### rnx network list

List all networks.
rnx network list [flags]
| Flag | Description | Default |
|---|---|---|
| `--json` | Output in JSON format | false |
# List all networks
rnx network list
# JSON output
rnx network list --json
### rnx network remove

Remove a custom network. Built-in networks cannot be removed.
rnx network remove <name>
# Remove network
rnx network remove mynet
# Remove all custom networks (keep built-in networks)
rnx network list --json | jq -r '.networks[] | select(.builtin == false) | .name' | xargs -I {} rnx network remove {}
### rnx runtime list

List all available runtime environments.
rnx runtime list [flags]
| Flag | Description | Default |
|---|---|---|
| `--json` | Output in JSON format | false |
# List all runtimes
rnx runtime list
# JSON output
rnx runtime list --json
### rnx runtime info

Get detailed information about a specific runtime environment.
rnx runtime info <runtime-spec>
# Get runtime details
rnx runtime info python:3.11-ml
rnx runtime info java:17
rnx runtime info nodejs:18
### rnx runtime test

Test a runtime environment to verify it's working correctly.
rnx runtime test <runtime-spec>
# Test runtime functionality
rnx runtime test python:3.11-ml
rnx runtime test java:17
### rnx monitor

Monitor system metrics in real-time.
rnx monitor [subcommand] [flags]
`status` - Show current system status

| Flag | Description | Default |
|---|---|---|
| `--interval` | Update interval in seconds | 2 |
| `--json` | Output in JSON format | false |
# Real-time monitoring
rnx monitor
# Update every 5 seconds
rnx monitor --interval=5
# Get current status
rnx monitor status
# JSON output for metrics collection
rnx monitor status --json
# Continuous JSON stream
rnx monitor --json --interval=10
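For longer-term collection, the JSON status output can be compacted and appended to a file on an interval; a minimal sketch:

```bash
#!/bin/bash
# Append one compact JSON sample per minute
while true; do
  rnx monitor status --json | jq -c '.' >> metrics.jsonl
  sleep 60
done
```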
### rnx nodes

List configured nodes from the client configuration file.
rnx nodes [flags]
| Flag | Description | Default |
|---|---|---|
| `--json` | Output in JSON format | false |
# List all nodes
rnx nodes
# JSON output
rnx nodes --json
# Use specific node for commands
rnx --node=production list
rnx --node=staging run echo "test"
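The node list can also drive a loop over every configured node; this sketch assumes the JSON output is an array of objects with a `name` field, which may differ from the actual format:

```bash
# Check system status on every configured node (the .name field is an assumption)
for node in $(rnx nodes --json | jq -r '.[].name'); do
  echo "== $node =="
  rnx --node="$node" monitor status
done
```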
### rnx help

Show help information.
rnx help [command]
# General help
rnx help
# Command-specific help
rnx help run
rnx help volume create
# Show configuration help
rnx help config
#!/bin/bash
# Batch processing script
# Process files in parallel with resource limits
for file in *.csv; do
  rnx run \
    --max-cpu=100 \
    --max-memory=1024 \
    --upload="$file" \
    python3 process.py "$file" &
done
# Wait for all jobs
wait
# Collect results
rnx list --json | jq -r '.[] | select(.status == "COMPLETED") | .id' | \
  while read job_id; do
    rnx log "$job_id" > "result-$job_id.txt"
  done
# GitHub Actions example
- name: Run tests in Joblet
  run: |
    rnx run \
      --max-cpu=400 \
      --max-memory=4096 \
      --volume=test-results \
      --upload-dir=. \
      --env=CI=true \
      npm test
# Check job status
JOB_ID=$(rnx list --json | jq -r '.[-1].id')
rnx status $JOB_ID
# Get test results
rnx run --volume=test-results cat /volumes/test-results/report.xml
# Monitor job failures
while true; do
  FAILED=$(rnx list --json | jq '[.[] | select(.status == "FAILED")] | length')
  if [ "$FAILED" -gt 0 ]; then
    echo "Alert: $FAILED failed jobs detected"
    rnx list --json | jq '.[] | select(.status == "FAILED")'
  fi
  sleep 60
done
version: "3.0"
nodes:
default:
address: "prod-server:50051"
cert: |
-----BEGIN CERTIFICATE-----
...
key: |
-----BEGIN PRIVATE KEY-----
...
ca: |
-----BEGIN CERTIFICATE-----
...
staging:
address: "staging-server:50051"
cert: |
-----BEGIN CERTIFICATE-----
...
# ... rest of credentials
viewer:
address: "prod-server:50051"
cert: |
-----BEGIN CERTIFICATE-----
# Viewer certificate with OU=viewer
...
# ... rest of credentials
# Production jobs
rnx --node=default run production-task.sh
# Staging tests
rnx --node=staging run test-suite.sh
# Read-only access
rnx --node=viewer list
rnx --node=viewer monitor status
Use `rnx monitor` to track resource usage.

See the Troubleshooting Guide for common issues and solutions.