This reference documents the RNX command-line interface: complete command syntax, available options, configuration parameters, and practical usage examples for all supported operations.
The following options are available across all RNX commands:
--config <path> # Path to configuration file (default: searches standard locations)
--node <name> # Node name from configuration (default: "default")
--json # Output in JSON format
--version, -v # Show version information for both client and server
--help, -h # Show help for command
RNX resolves configuration files using the following precedence hierarchy:
./rnx-config.yml
./config/rnx-config.yml
~/.rnx/rnx-config.yml
/etc/joblet/rnx-config.yml
/opt/joblet/config/rnx-config.yml
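That precedence is a simple first-match lookup; the sketch below mirrors it in shell (illustrative only, not the actual rnx implementation):

```shell
# First existing file in the documented precedence order wins.
# Illustrative sketch of rnx config resolution, not the rnx source itself.
resolve_rnx_config() {
  for path in ./rnx-config.yml \
              ./config/rnx-config.yml \
              "$HOME/.rnx/rnx-config.yml" \
              /etc/joblet/rnx-config.yml \
              /opt/joblet/config/rnx-config.yml; do
    if [ -f "$path" ]; then
      printf '%s\n' "$path"
      return 0
    fi
  done
  return 1
}

resolve_rnx_config || echo "no config file found; pass --config <path> explicitly"
```

The `--config` flag always overrides this search order.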
rnx job run
Submits and executes a command or workflow on the target Joblet server instance.
rnx job run [parameters] <command> [arguments...]
| Parameter | Description | Default Value |
|---|---|---|
| `--max-cpu` | Maximum CPU usage percentage (0-10000) | 0 (unlimited) |
| `--max-memory` | Maximum memory in MB | 0 (unlimited) |
| `--max-iobps` | Maximum I/O bytes per second | 0 (unlimited) |
| `--cpu-cores` | CPU cores to use (e.g., "0-3" or "1,3,5") | "" (all cores) |
| `--gpu` | Number of GPUs to allocate to the job | 0 (none) |
| `--gpu-memory` | Minimum GPU memory required (e.g., "8GB", "4096MB") | none |
| `--network` | Network mode: bridge, isolated, none, or custom | "bridge" |
| `--volume` | Volume to mount (can be specified multiple times) | none |
| `--upload` | Upload file to workspace (can be specified multiple times) | none |
| `--upload-dir` | Upload directory to workspace | none |
| `--runtime` | Use a pre-built runtime (e.g., openjdk-21, python-3.11-ml) | none |
| `--env`, `-e` | Environment variable (KEY=VALUE, visible in logs) | none |
| `--secret-env`, `-s` | Secret environment variable (KEY=VALUE, hidden from logs) | none |
| `--schedule` | Schedule job execution (duration or RFC3339 time) | immediate |
| `--workflow` | YAML workflow file for workflow execution | none |
# Simple command
rnx job run echo "Hello, World!"
# With resource limits
rnx job run --max-cpu=50 --max-memory=512 --max-iobps=10485760 \
python3 intensive_script.py
# CPU core binding
rnx job run --cpu-cores="0-3" stress-ng --cpu 4 --timeout 60s
# Multiple volumes
rnx job run --volume=data --volume=config \
python3 process.py
# Environment variables (regular - visible in logs)
rnx job run --env="NODE_ENV=production" --env="PORT=8080" \
node app.js
# Secret environment variables (hidden from logs)
rnx job run --secret-env="API_KEY=dummy_api_key_123" --secret-env="DB_PASSWORD=secret" \
python app.py
# Mixed environment variables
rnx job run --env="DEBUG=true" --secret-env="SECRET_KEY=mysecret" \
python app.py
# File upload
rnx job run --upload=script.py --upload=data.csv \
python3 script.py data.csv
# Directory upload
rnx job run --upload-dir=./project \
npm start
# Scheduled execution
rnx job run --schedule="30min" backup.sh
rnx job run --schedule="2025-08-03T15:00:00" maintenance.sh
# Custom network
rnx job run --network=isolated ping google.com
# Workflow execution
rnx job run --workflow=ml-pipeline.yaml # Execute full workflow
rnx job run --workflow=jobs.yaml:ml-analysis # Execute specific job from workflow
# Using runtime
rnx job run --runtime=python-3.11-ml python -c "import torch; print(torch.__version__)"
rnx job run --runtime=openjdk-21 java -version
# GPU acceleration
rnx job run --gpu=1 python gpu_script.py
rnx job run --gpu=2 --gpu-memory=8GB python distributed_training.py
rnx job run --gpu=1 --gpu-memory=16GB --max-memory=32768 python llm_inference.py
# Complex example with GPU
rnx job run \
--max-cpu=400 \
--max-memory=8192 \
--cpu-cores="0,2,4,6" \
--gpu=1 \
--gpu-memory=8GB \
--network=mynet \
--volume=persistent-data \
--env=PYTHONPATH=/app \
--upload-dir=./src \
--runtime=python-3.11-ml \
python3 gpu_training.py --epochs=100
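The `--schedule` flag's absolute form takes a timestamp like the one in the examples above; with GNU date (the `-d` option is GNU-specific, so this sketch assumes a Linux host) such a timestamp can be computed rather than typed by hand:

```shell
# Build a timestamp 30 minutes from now in the form the examples above use.
# GNU date only: the -d option is not available on BSD/macOS date.
when=$(date -d '+30 minutes' '+%Y-%m-%dT%H:%M:%S')
echo "$when"

# Then schedule against it, e.g.:
# rnx job run --schedule="$when" maintenance.sh
```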
When using `--workflow`, Joblet performs comprehensive pre-execution validation:
$ rnx job run --workflow=my-workflow.yaml
🔍 Validating workflow prerequisites...
✅ No circular dependencies found
✅ All required volumes exist
✅ All required networks exist
✅ All required runtimes exist
✅ All job dependencies are valid
🎉 Workflow validation completed successfully!
Validation checks cover circular dependencies, required volumes, networks, runtimes, and job dependencies, as shown above.

Error Example:
Error: workflow validation failed: network validation failed: missing networks: [non-existent-network]. Available networks: [bridge isolated none custom-net]
rnx job list
List all jobs or workflows on the server.
rnx job list [flags] # List all jobs
rnx job list --workflow [flags] # List all workflows
| Flag | Description | Default |
|---|---|---|
| `--json` | Output in JSON format | false |
| `--workflow` | List workflows instead of jobs | false |
Table Format (default):
JSON Format: Outputs a JSON array with detailed job information including all resource limits, volumes, network, and scheduling information.
# List all jobs (table format)
rnx job list
# Example output:
# UUID NAME NODE ID STATUS START TIME COMMAND
# ------------------------------------ ------------ ------------------------------------ ---------- ------------------- -------
# f47ac10b-58cc-4372-a567-0e02b2c3d479 setup-data 8f94c5b2-1234-5678-9abc-def012345678 COMPLETED 2025-08-03 10:15:32 echo "Hello World"
# a1b2c3d4-e5f6-7890-abcd-ef1234567890 process-data 8f94c5b2-1234-5678-9abc-def012345678 RUNNING 2025-08-03 10:16:45 python3 script.py
# b2c3d4e5-f6a7-8901-bcde-f23456789012 - - FAILED 2025-08-03 10:17:20 invalid_command
# c3d4e5f6-a7b8-9012-cdef-345678901234 - - SCHEDULED N/A backup.sh
# List all workflows (table format)
rnx job list --workflow
# Example output:
# UUID WORKFLOW STATUS PROGRESS
# ------------------------------------ -------------------- ----------- ---------
# a1b2c3d4-e5f6-7890-1234-567890abcdef data-pipeline.yaml RUNNING 3/5
# b2c3d4e5-f6a7-8901-2345-678901bcdefg ml-pipeline.yaml COMPLETED 5/5
# JSON output for scripting
rnx job list --json
# Example JSON output:
# [
# {
# "id": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
# "name": "setup-data",
# "status": "COMPLETED",
# "start_time": "2025-08-03T10:15:32Z",
# "end_time": "2025-08-03T10:15:33Z",
# "command": "echo",
# "args": ["Hello World"],
# "exit_code": 0
# },
# {
# "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
# "name": "process-data",
# "node_id": "8f94c5b2-1234-5678-9abc-def012345678",
# "status": "RUNNING",
# "start_time": "2025-08-03T10:16:45Z",
# "command": "python3",
# "args": ["script.py"],
# "max_cpu": 100,
# "max_memory": 512,
# "cpu_cores": "0-3",
# "scheduled_time": "2025-08-03T15:00:00Z"
# }
# ]
# Filter with jq
rnx job list --json | jq '.[] | select(.status == "FAILED")'
rnx job list --json | jq '.[] | select(.max_memory > 1024)'
rnx job status
Get detailed status of a specific job or workflow.
rnx job status [flags] <job-uuid> # Get job status
rnx job status --workflow <workflow-uuid> # Get workflow status
rnx job status --workflow --detail <workflow-uuid> # Get workflow status with YAML content
Workflow Status Features:

- Use the `--detail` flag to view the original workflow YAML content

| Flag | Description | Default | Notes |
|---|---|---|---|
| `--workflow`, `-w` | Explicitly get workflow status | false | Required for workflow operations |
| `--detail`, `-d` | Show original YAML content | false | Only works with `--workflow` |
| `--json` | Output in JSON format | false | Available for jobs and workflows |
# Get job status (readable format)
rnx job status f47ac10b-58cc-4372-a567-0e02b2c3d479
# Get workflow status
rnx job status --workflow a1b2c3d4-e5f6-7890-1234-567890abcdef
# Get workflow status with original YAML content
rnx job status --workflow --detail a1b2c3d4-e5f6-7890-1234-567890abcdef
# Get status in JSON format
rnx job status --json f47ac10b-58cc-4372-a567-0e02b2c3d479 # Job JSON output
rnx job status --workflow --json a1b2c3d4-e5f6-7890-1234-567890abcdef # Workflow JSON output
rnx job status --workflow --json --detail a1b2c3d4-e5f6-7890-1234-567890abcdef # Workflow JSON with YAML content
# Check multiple jobs/workflows
for uuid in f47ac10b-58cc-4372-a567-0e02b2c3d479 a1b2c3d4-e5f6-7890-1234-567890abcdef; do rnx job status $uuid; done
# JSON output for scripting
rnx job status --json f47ac10b-58cc-4372-a567-0e02b2c3d479 | jq .status # Job status
rnx job status --workflow --json a1b2c3d4-e5f6-7890-1234-567890abcdef | jq .total_jobs # Workflow progress
rnx job status --workflow --json --detail a1b2c3d4-e5f6-7890-1234-567890abcdef | jq .yaml_content # Extract YAML content
# Example workflow status output:
# Workflow UUID: a1b2c3d4-e5f6-7890-1234-567890abcdef
# Workflow: data-pipeline.yaml
# Status: RUNNING
# Progress: 2/4 jobs completed
#
# Jobs in Workflow:
# -----------------------------------------------------------------------------------------
# JOB ID JOB NAME STATUS EXIT CODE DEPENDENCIES
# -------------------------------------------------------------------------------------------------------------
# f47ac10b-58cc-4372-a567-0e02b2c3d479 setup-data COMPLETED 0 -
# a1b2c3d4-e5f6-7890-abcd-ef1234567890 process-data COMPLETED 0 setup-data
# 0 validate-results PENDING - process-data
# 0 generate-report PENDING - validate-results
# Example JSON output for individual job:
# {
# "uuid": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
# "name": "process-data",
# "nodeId": "8f94c5b2-1234-5678-9abc-def012345678",
# "command": "python3",
# "args": ["process_data.py"],
# "maxCPU": 100,
# "cpuCores": "0-3",
# "maxMemory": 512,
# "maxIOBPS": 0,
# "status": "COMPLETED",
# "startTime": "2025-08-03T10:15:32Z",
# "endTime": "2025-08-03T10:18:45Z",
# "exitCode": 0,
# "scheduledTime": ""
# }
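Since the status JSON carries RFC3339 startTime/endTime values, job duration can be derived client-side. A small sketch with GNU date (epoch-second subtraction), using the timestamps from the example above:

```shell
# Duration of the example job above: endTime minus startTime, in seconds.
# Uses GNU date (-d) to convert RFC3339 timestamps to epoch seconds.
start="2025-08-03T10:15:32Z"
end="2025-08-03T10:18:45Z"
duration=$(( $(date -u -d "$end" +%s) - $(date -u -d "$start" +%s) ))
echo "${duration}s"   # 193s
```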
# rnx job status --workflow --json --detail a1b2c3d4-e5f6-7890-1234-567890abcdef
{
"uuid": "a1b2c3d4-e5f6-7890-1234-567890abcdef",
"workflow": "data-pipeline.yaml",
"status": "RUNNING",
"total_jobs": 4,
"completed_jobs": 2,
"failed_jobs": 0,
"created_at": {
"seconds": 1691234567,
"nanos": 0
},
"yaml_content": "jobs:\n setup-data:\n command: \"python3\"\n args: [\"extract.py\"]\n runtime: \"python-3.11-ml\"\n process-data:\n command: \"python3\"\n args: [\"transform.py\"]\n requires:\n - setup-data: \"COMPLETED\"\n",
"jobs": [
{
"id": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
"name": "setup-data",
"status": "COMPLETED",
"exit_code": 0
},
{
"id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"name": "process-data",
"status": "RUNNING",
"dependencies": ["setup-data"]
}
]
}
Key Features:

- The `yaml_content` field contains the original workflow YAML when the `--detail` flag is used

rnx job log
Stream job logs in real-time.
rnx job log <job-uuid>
Streams logs from running or completed jobs. Use Ctrl+C to stop following the log stream.
# Stream logs from a job
rnx job log f47ac10b-58cc-4372-a567-0e02b2c3d479
# Use standard Unix tools for filtering
rnx job log f47ac10b-58cc-4372-a567-0e02b2c3d479 | tail -100
rnx job log f47ac10b-58cc-4372-a567-0e02b2c3d479 | grep ERROR
# Save logs to file
rnx job log f47ac10b-58cc-4372-a567-0e02b2c3d479 > output.log
rnx job stop
Stop a running or scheduled job.
rnx job stop <job-uuid>
# Stop a running job
rnx job stop f47ac10b-58cc-4372-a567-0e02b2c3d479
# Stop multiple jobs
rnx job list --json | jq -r '.[] | select(.status == "RUNNING") | .id' | xargs -I {} rnx job stop {}
rnx job delete
Delete a job completely from the system.
rnx job delete <job-uuid>
Permanently removes the specified job including logs, metadata, and all associated resources. The job must be in a completed, failed, or stopped state - running jobs cannot be deleted directly and must be stopped first.
# Delete a completed job
rnx job delete f47ac10b-58cc-4372-a567-0e02b2c3d479
# Delete using short UUID (if unique)
rnx job delete f47ac10b
rnx job delete-all
Delete all non-running jobs from the system.
rnx job delete-all [flags]
Permanently removes all jobs that are not currently running or scheduled. Jobs in completed, failed, or stopped states will be deleted. Running and scheduled jobs are preserved and will not be affected.
Complete deletion includes job logs, metadata, and all associated resources.

Flags:

- `--json`: Output results in JSON format

# Delete all non-running jobs
rnx job delete-all
# Delete all non-running jobs with JSON output
rnx job delete-all --json
Example JSON Output:
{
"success": true,
"message": "Successfully deleted 3 jobs, skipped 1 running/scheduled jobs",
"deleted_count": 3,
"skipped_count": 1
}
Note: This operation is irreversible. Once deleted, job information and logs cannot be recovered. Only non-running jobs are affected.
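Scripts can act on those counters. With jq, `rnx job delete-all --json | jq .deleted_count` suffices; for hosts without jq, a sed extraction can be sketched (the `response` variable below stands in for the actual command output):

```shell
# Pull the counters out of the delete-all JSON response without jq.
# $response stands in for: rnx job delete-all --json
response='{"success": true, "message": "Successfully deleted 3 jobs, skipped 1 running/scheduled jobs", "deleted_count": 3, "skipped_count": 1}'
deleted=$(printf '%s' "$response" | sed -n 's/.*"deleted_count": *\([0-9][0-9]*\).*/\1/p')
skipped=$(printf '%s' "$response" | sed -n 's/.*"skipped_count": *\([0-9][0-9]*\).*/\1/p')
echo "deleted=$deleted skipped=$skipped"   # deleted=3 skipped=1
```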
rnx volume create
Create a new volume for persistent storage.
rnx volume create <name> [flags]
| Flag | Description | Default |
|---|---|---|
| `--size` | Volume size (e.g., 1GB, 500MB) | required |
| `--type` | Volume type: filesystem or memory | "filesystem" |
# Create 1GB filesystem volume
rnx volume create mydata --size=1GB
# Create 512MB memory volume (tmpfs)
rnx volume create cache --size=512MB --type=memory
# Create volumes for different purposes
rnx volume create db-data --size=10GB --type=filesystem
rnx volume create temp-processing --size=2GB --type=memory
rnx volume list
List all volumes.
rnx volume list [flags]
| Flag | Description | Default |
|---|---|---|
| `--json` | Output in JSON format | false |
# List all volumes
rnx volume list
# JSON output
rnx volume list --json
# Check volume usage
rnx volume list --json | jq '.[] | select(.size_used > .size_total * 0.8)'
rnx volume remove
Remove a volume.
rnx volume remove <name>
# Remove single volume
rnx volume remove mydata
# Remove all volumes (careful!)
rnx volume list --json | jq -r '.[].name' | xargs -I {} rnx volume remove {}
rnx network create
Create a custom network.
rnx network create <name> [flags]
| Flag | Description | Default |
|---|---|---|
| `--cidr` | Network CIDR (e.g., 10.10.0.0/24) | required |
# Create basic network
rnx network create mynet --cidr=10.10.0.0/24
# Create multiple networks for different environments
rnx network create dev --cidr=10.10.0.0/24
rnx network create test --cidr=10.20.0.0/24
rnx network create prod --cidr=10.30.0.0/24
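When sizing CIDRs like those above, standard IPv4 arithmetic applies (this is general networking math, not an rnx-specific feature): a /24 leaves 2^(32-24) - 2 = 254 usable host addresses once the network and broadcast addresses are excluded, and a gateway typically consumes one more.

```shell
# Usable IPv4 host addresses for a prefix length: 2^(32 - prefix) - 2
# (network and broadcast addresses excluded). General IPv4 math.
usable_hosts() {
  awk -v p="$1" 'BEGIN { printf "%d\n", 2 ^ (32 - p) - 2 }'
}

usable_hosts 24   # 254  (e.g., 10.10.0.0/24)
usable_hosts 16   # 65534
```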
rnx network list
List all networks.
rnx network list [flags]
| Flag | Description | Default |
|---|---|---|
| `--json` | Output in JSON format | false |
# List all networks
rnx network list
# JSON output
rnx network list --json
rnx network remove
Remove a custom network. Built-in networks cannot be removed.
rnx network remove <name>
# Remove network
rnx network remove mynet
# Remove all custom networks (keep built-in networks)
rnx network list --json | jq -r '.networks[] | select(.builtin == false) | .name' | xargs -I {} rnx network remove {}
rnx runtime list
List all available runtime environments.
rnx runtime list [flags]
| Flag | Description | Default |
|---|---|---|
| `--json` | Output in JSON format | false |
| `--github-repo` | List runtimes from GitHub repository (owner/repo/tree/branch/path) | none |
# List locally installed runtimes
rnx runtime list
# List available runtimes from GitHub repository
rnx runtime list --github-repo=owner/repo/tree/main/runtimes
# JSON output
rnx runtime list --json
rnx runtime info
Get detailed information about a specific runtime environment.
rnx runtime info <runtime-spec>
# Get runtime details
rnx runtime info python-3.11-ml
rnx runtime info openjdk:21
rnx runtime install
Install a runtime environment from GitHub or local files.
rnx runtime install <runtime-spec> [flags]
| Flag | Short | Description | Default |
|---|---|---|---|
| `--force` | `-f` | Force reinstall by deleting existing runtime | false |
| `--github-repo` | | Install from GitHub repository (owner/repo/tree/branch/path) | none |
The install command downloads and executes platform-specific setup scripts in a secure builder chroot environment. It automatically detects the host platform (Ubuntu, Amazon Linux, RHEL) and architecture (AMD64, ARM64) to run the appropriate setup script.
When using `--force`, the command will:

- Delete the existing runtime at /opt/joblet/runtimes/<runtime-name> if it exists

# Install from local codebase
rnx runtime install python-3.11-ml
rnx runtime install openjdk-21
# Install from GitHub repository
rnx runtime install openjdk-21 --github-repo=ehsaniara/joblet/tree/main/runtimes
rnx runtime install python-3.11-ml --github-repo=owner/repo/tree/branch/path
# Force reinstall (delete existing runtime first)
rnx runtime install python-3.11-ml --force
rnx runtime install openjdk-21 -f
rnx runtime test
Test a runtime environment to verify it’s working correctly.
rnx runtime test <runtime-spec>
# Test runtime functionality
rnx runtime test python-3.11-ml
rnx runtime test openjdk:21
rnx runtime remove
Remove a runtime environment.
rnx runtime remove <runtime-spec>
# Remove a runtime
rnx runtime remove python-3.11-ml
rnx runtime remove openjdk-21
rnx runtime validate
Validate a runtime specification format and check if it’s supported.
rnx runtime validate <runtime-spec>
# Validate basic spec
rnx runtime validate python-3.11-ml
# Validate spec with variants
rnx runtime validate openjdk:21
rnx version
Display version information for both RNX client and Joblet server.
rnx version [flags]
| Flag | Description | Default |
|---|---|---|
| `--json` | Output version info as JSON | false |
# Show version information
rnx version
# Output:
# RNX Client:
# rnx version v4.3.3 (4c11220)
# Built: 2025-09-14T05:17:17Z
# Commit: 4c11220b6e4f98960853fa0379b5c25d2f19e33f
# Go: go1.24.0
# Platform: linux/amd64
#
# Joblet Server (default):
# joblet version v4.3.3 (4c11220)
# Built: 2025-09-14T05:18:24Z
# Commit: 4c11220b6e4f98960853fa0379b5c25d2f19e33f
# Go: go1.24.0
# Platform: linux/amd64
# Show version as JSON
rnx version --json
# Use --version flag (alternative)
rnx --version
Version strings follow vMAJOR.MINOR.PATCH[+dev], where the +dev suffix indicates a development build after the tagged release.

rnx monitor
Monitor remote Joblet server metrics, including CPU, memory, disk, network, processes, and volumes.
rnx monitor <subcommand> [flags]
Subcommands:

- `status` - Display comprehensive remote server status with detailed resource information
- `top` - Show current remote server metrics in a condensed format with top processes
- `watch` - Stream real-time remote server metrics with configurable refresh intervals

| Flag | Description | Default |
|---|---|---|
| `--json` | Output in UI-compatible JSON format | false |
| `--interval` | Update interval in seconds (watch only) | 5 |
| `--filter` | Filter metrics by type (top/watch only) | all |
| `--compact` | Use compact display format (watch only) | false |
Filter types for `--filter`:

- `cpu` - Server CPU usage, load averages, per-core utilization
- `memory` - Server memory and swap usage with detailed breakdowns
- `disk` - Server disk usage for all mount points and joblet volumes
- `network` - Server network interface statistics with live throughput
- `io` - Server I/O operations, throughput, and utilization
- `process` - Server process statistics with top consumers
# Comprehensive remote server status
rnx monitor status
# JSON server data for dashboards/APIs
rnx monitor status --json
# Current server metrics with top processes
rnx monitor top
# Filter specific server metrics
rnx monitor top --filter=cpu,memory
# Real-time server monitoring (5s intervals)
rnx monitor watch
# Faster server monitoring refresh rate
rnx monitor watch --interval=2
# Monitor specific server resources
rnx monitor watch --filter=disk,network
# JSON server streaming for monitoring tools
rnx monitor watch --json --interval=10
# Compact format for server monitoring
rnx monitor watch --compact
# Monitor specific joblet server node
rnx --node=production monitor status
The `--json` flag produces UI-compatible output with the following structure:
{
"hostInfo": {
"hostname": "server-name",
"platform": "Ubuntu 22.04.2 LTS",
"arch": "amd64",
"uptime": 152070,
"cloudProvider": "AWS",
"instanceType": "t3.medium",
"region": "us-east-1"
},
"cpuInfo": {
"cores": 8,
"usage": 0.15,
"loadAverage": [0.5, 0.3, 0.2],
"perCoreUsage": [0.1, 0.2, 0.05, 0.3, ...]
},
"memoryInfo": {
"total": 4100255744,
"used": 378679296,
"percent": 9.23,
"swap": { "total": 0, "used": 0, "percent": 0 }
},
"disksInfo": {
"disks": [
{
"name": "/dev/sda1",
"mountpoint": "/",
"filesystem": "ext4",
"size": 19896352768,
"used": 11143790592,
"percent": 56.01
},
{
"name": "analytics-data",
"mountpoint": "/opt/joblet/volumes/analytics-data",
"filesystem": "joblet-volume",
"size": 1073741824,
"used": 52428800,
"percent": 4.88
}
]
},
"networkInfo": {
"interfaces": [...],
"totalRxBytes": 1234567890,
"totalTxBytes": 987654321
},
"processesInfo": {
"processes": [...],
"totalProcesses": 149
}
}
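The percent fields are derivable from the raw byte counts, which is handy when post-processing the JSON. Recomputing the /dev/sda1 figure from the sample above:

```shell
# Recompute the /dev/sda1 "percent" field from the sample JSON above:
# percent = used / size * 100.
awk 'BEGIN {
  size = 19896352768   # "size" in bytes
  used = 11143790592   # "used" in bytes
  printf "%.2f\n", used / size * 100
}'
# Prints 56.01, matching the reported "percent" value.
```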
rnx nodes
List configured nodes from the client configuration file.
rnx nodes [flags]
| Flag | Description | Default |
|---|---|---|
| `--json` | Output in JSON format | false |
# List all nodes with details
rnx nodes
# Example output:
# Available nodes from configuration:
#
# * default
# Address: localhost:50051
# Node ID: 8f94c5b2-1234-5678-9abc-def012345678
# Cert: ***
# Key: ***
# CA: ***
#
# production
# Address: prod.example.com:50051
# Node ID: a1b2c3d4-5678-9abc-def0-123456789012
# Cert: ***
# Key: ***
# CA: ***
# JSON output
rnx nodes --json
# Use specific node for commands
rnx --node=production job list
rnx --node=staging job run echo "test"
rnx admin
Launch the Joblet Admin UI server.
rnx admin [flags]
| Flag | Description | Default |
|---|---|---|
| `--port`, `-p` | Port to run the admin server | 5173 |
| `--bind-address` | Address to bind the server to | "0.0.0.0" |
# Start admin UI with default settings
rnx admin
# Use custom port
rnx admin --port 8080
# Bind to all interfaces
rnx admin --bind-address 0.0.0.0 --port 5173
rnx config-help
Show configuration file examples with embedded certificates.
rnx config-help
# Show configuration examples
rnx config-help
rnx help
Show help information.
rnx help [command]
# General help
rnx help
# Command-specific help
rnx help run
rnx help volume create
# Show configuration help
rnx help config
#!/bin/bash
# Batch processing script
# Process files in parallel with resource limits
for file in *.csv; do
rnx job run \
--max-cpu=100 \
--max-memory=1024 \
--upload="$file" \
python3 process.py "$file" &
done
# Wait for all jobs
wait
# Collect results
rnx job list --json | jq -r '.[] | select(.status == "COMPLETED") | .id' | \
while read job_uuid; do
rnx job log "$job_uuid" > "result-$(echo $job_uuid | cut -c1-8).txt"
done
# GitHub Actions example
- name: Run tests in Joblet
run: |
rnx job run \
--max-cpu=400 \
--max-memory=4096 \
--volume=test-results \
--upload-dir=. \
--env=CI=true \
npm test
# Check job status
JOB_UUID=$(rnx job list --json | jq -r '.[-1].id')
rnx job status $JOB_UUID
# Get test results
rnx job run --volume=test-results cat /volumes/test-results/report.xml
# Monitor job failures
while true; do
FAILED=$(rnx job list --json | jq '[.[] | select(.status == "FAILED")] | length')
if [ $FAILED -gt 0 ]; then
echo "Alert: $FAILED failed jobs detected"
rnx job list --json | jq '.[] | select(.status == "FAILED")'
fi
sleep 60
done
version: "3.0"
nodes:
default:
address: "prod-server:50051"
cert: |
-----BEGIN CERTIFICATE-----
...
key: |
-----BEGIN PRIVATE KEY-----
...
ca: |
-----BEGIN CERTIFICATE-----
...
staging:
address: "staging-server:50051"
cert: |
-----BEGIN CERTIFICATE-----
...
# ... rest of credentials
viewer:
address: "prod-server:50051"
cert: |
-----BEGIN CERTIFICATE-----
# Viewer certificate with OU=viewer
...
# ... rest of credentials
# Production jobs
rnx --node=default job run production-task.sh
# Staging tests
rnx --node=staging job run test-suite.sh
# Read-only access
rnx --node=viewer job list
rnx --node=viewer monitor status
Use rnx monitor to track resource usage.

See the Troubleshooting Guide for common issues and solutions.