This comprehensive API reference provides detailed technical documentation for the Joblet gRPC interface, including complete service definitions, message schemas, authentication protocols, authorization frameworks, and practical implementation examples for client development.
The Joblet API utilizes gRPC as its communication protocol with Protocol Buffers for efficient message serialization. The API implements enterprise-grade security through mutual TLS authentication and provides comprehensive role-based access control for organizational deployment scenarios.
Server Address: <host>:50051
TLS: Required (mutual authentication)
Client Certificates: Required for all operations
Platform: Linux server required for job execution
The Joblet API enforces mutual TLS authentication for all client connections, requiring valid X.509 client certificates issued by the same Certificate Authority (CA) that signed the server certificate.
Client Certificate Subject Format:
CN=<client-name>, OU=<role>, O=<organization>
Supported Roles:
- OU=admin → Full access (all operations)
- OU=viewer → Read-only access (get, list, stream)
certs/
├── ca-cert.pem # Certificate Authority
├── client-cert.pem # Client certificate (admin or viewer)
└── client-key.pem # Client private key
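For programmatic clients, the same three files are used to build gRPC transport credentials. The Go sketch below is a minimal, illustrative helper using the standard crypto/tls and google.golang.org/grpc/credentials packages; the package name is arbitrary and the paths match the layout above.

```go
package jobletclient

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"

	"google.golang.org/grpc/credentials"
)

// newMTLSCredentials builds gRPC transport credentials from the certificate
// layout shown above (ca-cert.pem, client-cert.pem, client-key.pem).
func newMTLSCredentials(caPath, certPath, keyPath string) (credentials.TransportCredentials, error) {
	// Client certificate and key presented to the server during the handshake.
	clientCert, err := tls.LoadX509KeyPair(certPath, keyPath)
	if err != nil {
		return nil, fmt.Errorf("load client keypair: %w", err)
	}

	// CA used to verify the server certificate (the same CA signs the client cert).
	caPEM, err := os.ReadFile(caPath)
	if err != nil {
		return nil, fmt.Errorf("read CA cert: %w", err)
	}
	caPool := x509.NewCertPool()
	if !caPool.AppendCertsFromPEM(caPEM) {
		return nil, fmt.Errorf("parse CA certificate: invalid PEM")
	}

	return credentials.NewTLS(&tls.Config{
		Certificates: []tls.Certificate{clientCert},
		RootCAs:      caPool,
		MinVersion:   tls.VersionTLS12,
	}), nil
}
```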
| Role | RunJob | GetJobStatus | StopJob | ListJobs | GetJobLogs |
|------|--------|--------------|---------|----------|------------|
| admin | ✅ | ✅ | ✅ | ✅ | ✅ |
| viewer | ❌ | ✅ | ❌ | ✅ | ✅ |
syntax = "proto3";
package joblet;
service JobletService {
// Create and start a new job
rpc RunJob(RunJobReq) returns (RunJobRes);
// Get job information by ID
rpc GetJobStatus(GetJobStatusReq) returns (GetJobStatusRes);
// Stop a running job
rpc StopJob(StopJobReq) returns (StopJobRes);
// List all jobs
rpc ListJobs(EmptyRequest) returns (Jobs);
// Stream job output in real-time
rpc GetJobLogs(GetJobLogsReq) returns (stream DataChunk);
}
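With the server address and mTLS credentials in place, a client dials once and creates a stub for this service. The Go sketch below assumes bindings generated from this proto by protoc-gen-go; the `pb` import path is a placeholder, not something defined by this reference.

```go
package jobletclient

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"

	// Bindings generated from joblet.proto; this import path is a placeholder --
	// use whatever go_package your protoc setup produces.
	pb "example.com/joblet/gen/joblet"
)

// newJobletClient dials the server with mutual TLS and returns a JobletService
// stub. The caller is responsible for closing the returned connection.
func newJobletClient(addr string, creds credentials.TransportCredentials) (pb.JobletServiceClient, *grpc.ClientConn, error) {
	conn, err := grpc.Dial(addr, grpc.WithTransportCredentials(creds))
	if err != nil {
		return nil, nil, err
	}
	return pb.NewJobletServiceClient(conn), conn, nil
}
```

Each RPC described below is then a single method call on the returned stub; per-method sketches follow the corresponding descriptions.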
Creates and starts a new job with specified command and resource limits. Jobs execute on the Linux server with complete process isolation.
Authorization: Admin only
rpc RunJob(RunJobReq) returns (RunJobRes);
Request Parameters:
- command (string): Command to execute (required)
- args (repeated string): Command arguments (optional)
- maxCPU (int32): CPU limit percentage (optional, default: 100)
- maxMemory (int32): Memory limit in MB (optional, default: 512)
- maxIOBPS (int32): I/O bandwidth limit in bytes/sec (optional, default: 0 = unlimited)

Job Execution Environment:
Response:
Example:
# CLI
rnx job run --max-cpu=50 --max-memory=512 python3 script.py
# Expected Response
Job started:
ID: f47ac10b-58cc-4372-a567-0e02b2c3d479
Command: python3 script.py
Status: INITIALIZING
StartTime: 2024-01-15T10:30:00Z
MaxCPU: 50
MaxMemory: 512
Network: host (shared with system)
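The same call from a Go client might look like the sketch below. It reuses the assumed `pb` bindings and connected stub from the connection example above; field names follow the RunJobReq schema documented later in this reference as protoc-gen-go generates them.

```go
package jobletclient

import (
	"context"
	"log"
	"time"

	pb "example.com/joblet/gen/joblet" // placeholder import path for generated bindings
)

// runPythonScript starts "python3 script.py" with 50% CPU and 512 MB memory
// limits, mirroring the CLI example above.
func runPythonScript(ctx context.Context, client pb.JobletServiceClient) (*pb.RunJobRes, error) {
	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()

	res, err := client.RunJob(ctx, &pb.RunJobReq{
		Command:   "python3",
		Args:      []string{"script.py"},
		MaxCPU:    50,  // percent of one core
		MaxMemory: 512, // MB
		// MaxIOBPS left at 0 = unlimited I/O
	})
	if err != nil {
		return nil, err
	}
	log.Printf("job started: %v", res) // RunJobRes mirrors the CLI output above
	return res, nil
}
```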
Retrieves detailed information about a specific job, including current status, resource usage, and execution metadata.
Authorization: Admin, Viewer
rpc GetJobStatus(GetJobStatusReq) returns (GetJobStatusRes);
Request Parameters:
- id (string): Job UUID (required)

Response:
Example:
# CLI
rnx job status f47ac10b-58cc-4372-a567-0e02b2c3d479
# Expected Response
Id: f47ac10b-58cc-4372-a567-0e02b2c3d479
Command: python3 script.py
Status: RUNNING
Started At: 2024-01-15T10:30:00Z
Ended At:
MaxCPU: 50
MaxMemory: 512
MaxIOBPS: 0
ExitCode: 0
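Clients commonly poll this RPC until a job reaches a terminal state. A minimal Go sketch, again assuming the placeholder `pb` bindings and a connected stub from the earlier examples:

```go
package jobletclient

import (
	"context"
	"time"

	pb "example.com/joblet/gen/joblet" // placeholder import path for generated bindings
)

// waitForJob polls GetJobStatus every two seconds until the job reaches a
// terminal state (COMPLETED, FAILED, or STOPPED) or the context expires.
func waitForJob(ctx context.Context, client pb.JobletServiceClient, id string) (*pb.GetJobStatusRes, error) {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()

	for {
		res, err := client.GetJobStatus(ctx, &pb.GetJobStatusReq{Id: id})
		if err != nil {
			return nil, err
		}
		switch res.Status {
		case "COMPLETED", "FAILED", "STOPPED":
			return res, nil
		}
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-ticker.C:
			// poll again
		}
	}
}
```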
Terminates a running job using graceful shutdown (SIGTERM) followed by force termination (SIGKILL) if necessary.
Authorization: Admin only
rpc StopJob(StopJobReq) returns (StopJobRes);
Request Parameters:
- id (string): Job UUID (required)

Termination Process:
1. SIGTERM sent to the process group
2. SIGKILL sent if the process is still alive
3. Job status set to STOPPED
Response:
Example:
# CLI
rnx job stop f47ac10b-58cc-4372-a567-0e02b2c3d479
# Expected Response
Job stopped successfully:
ID: f47ac10b-58cc-4372-a567-0e02b2c3d479
Status: STOPPED
ExitCode: -1
EndTime: 2024-01-15T10:45:00Z
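From a Go client this is a single RPC; the SIGTERM/SIGKILL escalation happens server-side. The sketch below assumes the placeholder `pb` bindings and that StopJobReq carries the id parameter listed above:

```go
package jobletclient

import (
	"context"
	"log"

	pb "example.com/joblet/gen/joblet" // placeholder import path for generated bindings
)

// stopJob asks the server to terminate a running job; the server performs the
// graceful-then-forceful termination sequence described above.
func stopJob(ctx context.Context, client pb.JobletServiceClient, id string) error {
	res, err := client.StopJob(ctx, &pb.StopJobReq{Id: id})
	if err != nil {
		return err
	}
	log.Printf("job stopped: %v", res) // StopJobRes mirrors the CLI output above
	return nil
}
```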
Lists all jobs with their current status and metadata. Useful for monitoring overall system activity.
Authorization: Admin, Viewer
rpc ListJobs(EmptyRequest) returns (Jobs);
Request Parameters: None
Response:
Example:
# CLI
rnx job list
# Expected Response
f47ac10b-58cc-4372-a567-0e02b2c3d479 COMPLETED StartTime: 2024-01-15T10:30:00Z Command: echo hello
6ba7b810-9dad-11d1-80b4-00c04fd430c8 RUNNING StartTime: 2024-01-15T10:35:00Z Command: python3 script.py
6ba7b811-9dad-11d1-80b4-00c04fd430c8 FAILED StartTime: 2024-01-15T10:40:00Z Command: invalid-command
Streams job output in real-time, including historical logs and live updates. Supports multiple concurrent clients streaming the same job.
Authorization: Admin, Viewer
rpc GetJobLogs(GetJobLogsReq) returns (stream DataChunk);
Request Parameters:
- id (string): Job UUID (required)

Streaming Behavior:
- Historical output is sent first, followed by live updates
- Multiple clients can stream the same job concurrently

Response:
- DataChunk messages containing raw stdout/stderr output

Example:
# CLI
rnx job log f47ac10b-58cc-4372-a567-0e02b2c3d479
# Expected Response (streaming)
Logs for job f47ac10b-58cc-4372-a567-0e02b2c3d479 (Press Ctrl+C to exit if streaming):
Starting script...
Processing item 1
Processing item 2
...
Script completed successfully
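In a Go client, the server stream is consumed with Recv() until io.EOF, which the server sends once the job finishes and the stream is closed. The sketch assumes the same placeholder `pb` bindings as the earlier examples and that GetJobLogsReq carries the job id as its only field:

```go
package jobletclient

import (
	"context"
	"io"
	"os"

	pb "example.com/joblet/gen/joblet" // placeholder import path for generated bindings
)

// followLogs receives DataChunk messages from the GetJobLogs server stream and
// writes the raw payload to stdout until the stream ends or ctx is cancelled.
func followLogs(ctx context.Context, client pb.JobletServiceClient, id string) error {
	stream, err := client.GetJobLogs(ctx, &pb.GetJobLogsReq{Id: id})
	if err != nil {
		return err
	}
	for {
		chunk, err := stream.Recv()
		if err == io.EOF {
			return nil // job finished; server closed the stream
		}
		if err != nil {
			return err
		}
		if _, err := os.Stdout.Write(chunk.Payload); err != nil {
			return err
		}
	}
}
```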
Core job representation used across all API responses.
message Job {
string id = 1; // Unique job UUID identifier
string name = 2; // Readable job name (from workflows, empty for individual jobs)
string command = 3; // Command being executed
repeated string args = 4; // Command arguments
int32 maxCPU = 5; // CPU limit in percent
string cpuCores = 6; // CPU core binding specification
int32 maxMemory = 7; // Memory limit in MB
int32 maxIOBPS = 8; // IO limit in bytes per second
string status = 9; // Current job status
string startTime = 10; // Start time (RFC3339 format)
string endTime = 11; // End time (RFC3339 format, empty if running)
int32 exitCode = 12; // Process exit code
string scheduledTime = 13; // Scheduled execution time (RFC3339 format)
string runtime = 14; // Runtime specification used
map<string, string> environment = 15; // Regular environment variables (visible)
map<string, string> secret_environment = 16; // Secret environment variables (masked)
// Additional fields
string nodeId = 20; // Unique identifier of the Joblet node that executed this job
}
INITIALIZING - Job created, setting up isolation and resources
RUNNING - Process executing in isolated namespace
COMPLETED - Process finished successfully (exit code 0)
FAILED - Process finished with error (exit code != 0)
STOPPED - Process terminated by user request or timeout
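Clients that poll or stream usually only need to distinguish terminal from non-terminal states. A trivial Go helper based on the values above:

```go
package jobletclient

// isTerminal reports whether a job status value from the list above
// represents a finished job.
func isTerminal(status string) bool {
	switch status {
	case "COMPLETED", "FAILED", "STOPPED":
		return true
	default: // INITIALIZING, RUNNING
		return false
	}
}
```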
Default values when not specified in configuration (joblet-config.yml):
DefaultCPULimitPercent = 100 // 100% of one core
DefaultMemoryLimitMB = 512 // 512 MB
DefaultIOBPS = 0 // Unlimited I/O
message RunJobReq {
string command = 1; // Required: command to execute
repeated string args = 2; // Optional: command arguments
int32 maxCPU = 3; // Optional: CPU limit percentage
int32 maxMemory = 4; // Optional: memory limit in MB
int32 maxIOBPS = 5; // Optional: I/O bandwidth limit
}
Response message for job status requests, including node identification.
message GetJobStatusRes {
string uuid = 1; // Job UUID
string name = 2; // Job name (from workflows, empty for individual jobs)
string command = 3; // Command being executed
repeated string args = 4; // Command arguments
int32 maxCPU = 5; // CPU limit in percent
string cpuCores = 6; // CPU core binding specification
int32 maxMemory = 7; // Memory limit in MB
int64 maxIOBPS = 8; // IO limit in bytes per second
string status = 9; // Current job status
string startTime = 10; // Start time (RFC3339 format)
string endTime = 11; // End time (RFC3339 format, empty if running)
int32 exitCode = 12; // Process exit code
string scheduledTime = 13; // Scheduled execution time (RFC3339 format)
string runtime = 14; // Runtime specification used
map<string, string> environment = 15; // Regular environment variables (visible)
map<string, string> secret_environment = 16; // Secret environment variables (masked)
string network = 17; // Network configuration
repeated string volumes = 18; // Volume names
string workDir = 19; // Working directory
repeated FileUpload uploads = 20; // File uploads
repeated string dependencies = 21; // Job dependencies
string workflowUuid = 22; // Workflow UUID if part of workflow
int32 gpuCount = 23; // Number of GPUs allocated
repeated int32 gpuIndices = 24; // GPU indices allocated
int64 gpuMemoryMB = 25; // GPU memory in MB
string nodeId = 26; // Unique identifier of the Joblet node that executed this job
}
Used for streaming job output with efficient binary transport.
message DataChunk {
bytes payload = 1; // Raw output data (stdout/stderr merged)
}
| Code | Description | Common Causes |
|------|-------------|---------------|
| UNAUTHENTICATED | Invalid or missing client certificate | Certificate expired, wrong CA |
| PERMISSION_DENIED | Insufficient role permissions | Viewer trying admin operation |
| NOT_FOUND | Job not found | Invalid job UUID |
| INTERNAL | Server-side error | Job creation failed, system error |
| CANCELED | Operation canceled | Client disconnected during stream |
| INVALID_ARGUMENT | Invalid request parameters | Empty command, invalid limits |
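In Go, these codes are recovered from the returned error with status.FromError and can drive retry or reporting logic. The sketch below wraps GetJobStatus as an example; the `pb` import remains a placeholder for the generated bindings.

```go
package jobletclient

import (
	"context"
	"log"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"

	pb "example.com/joblet/gen/joblet" // placeholder import path for generated bindings
)

// getStatusOrExplain calls GetJobStatus and maps the gRPC status codes from
// the table above to client-side log messages; it returns nil on any error.
func getStatusOrExplain(ctx context.Context, client pb.JobletServiceClient, id string) *pb.GetJobStatusRes {
	res, err := client.GetJobStatus(ctx, &pb.GetJobStatusReq{Id: id})
	if err == nil {
		return res
	}
	st, _ := status.FromError(err)
	switch st.Code() {
	case codes.NotFound:
		log.Printf("job %s does not exist: %s", id, st.Message())
	case codes.PermissionDenied:
		log.Printf("current role cannot perform this operation: %s", st.Message())
	case codes.Unauthenticated:
		log.Printf("client certificate rejected: %s", st.Message())
	case codes.InvalidArgument:
		log.Printf("bad request parameters: %s", st.Message())
	default:
		log.Printf("rpc failed (%s): %s", st.Code(), st.Message())
	}
	return nil
}
```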
{
"code": "NOT_FOUND",
"message": "job not found: f47ac10b-58cc-4372-a567-0e02b2c3d479",
"details": []
}
# Missing certificate
Error: failed to extract client role: no TLS information found
# Wrong role (viewer trying to run job)
Error: role viewer is not allowed to perform operation run_job
# Invalid certificate
Error: certificate verify failed: certificate has expired
# Job not found
Error: job not found: f47ac10b-58cc-4372-a567-0e02b2c3d479
# Job not running (for stop operation)
Error: job is not running: 6ba7b810-9dad-11d1-80b4-00c04fd430c8 (current status: COMPLETED)
# Command validation failed
Error: invalid command: command contains dangerous characters
# Resource limits exceeded
Error: job creation failed: maxMemory exceeds system limits
# Linux platform required
Error: job execution requires Linux server (current: darwin)
# Cgroup setup failed
Error: cgroup setup failed: permission denied
# Namespace creation failed
Error: failed to create isolated environment: operation not permitted
--server string Server address (default "localhost:50051")
--cert string Client certificate path (default "certs/client-cert.pem")
--key string Client private key path (default "certs/client-key.pem")
--ca string CA certificate path (default "certs/ca-cert.pem")
Create and start a new job with optional resource limits.
rnx job run [flags] <command> [args...]
Flags:
--max-cpu int Max CPU percentage (default: from config)
--max-memory int Max memory in MB (default: from config)
--max-iobps int Max I/O bytes per second (default: 0=unlimited)
Examples:
rnx job run echo "hello world"
rnx job run --max-cpu=50 python3 script.py
rnx job run --max-memory=1024 java -jar app.jar
rnx job run bash -c "sleep 10 && echo done"
Get detailed information about a job by UUID.
rnx job status <job-uuid>
Example:
rnx job status f47ac10b-58cc-4372-a567-0e02b2c3d479
List all jobs with their current status.
rnx job list
Example:
rnx job list
Stop a running job gracefully (SIGTERM) or forcefully (SIGKILL).
rnx job stop <job-uuid>
Example:
rnx job stop f47ac10b-58cc-4372-a567-0e02b2c3d479
Stream job output in real-time or view historical logs.
rnx job log <job-uuid>
Streams logs from running or completed jobs. Use Ctrl+C to stop following.
Examples:
rnx job log f47ac10b-58cc-4372-a567-0e02b2c3d479 # Stream logs
rnx job log f47ac10b | grep ERROR # Filter output
# Connect to remote Linux server from any platform
rnx --server=prod.example.com:50051 \
--cert=certs/admin-client-cert.pem \
--key=certs/admin-client-key.pem \
job run echo "remote execution on Linux"
export JOBLET_SERVER="prod.example.com:50051"
export JOBLET_CERT_PATH="./certs/admin-client-cert.pem"
export JOBLET_KEY_PATH="./certs/admin-client-key.pem"
export JOBLET_CA_PATH="./certs/ca-cert.pem"
rnx job run python3 script.py
Resource limits and timeouts are configured in /opt/joblet/joblet-config.yml:
joblet:
defaultCpuLimit: 100 # Default CPU percentage
defaultMemoryLimit: 512 # Default memory in MB
defaultIoLimit: 0 # Default I/O limit (0=unlimited)
maxConcurrentJobs: 100 # Maximum concurrent jobs
jobTimeout: "1h" # Maximum job runtime
cleanupTimeout: "5s" # Resource cleanup timeout
grpc:
maxRecvMsgSize: 524288 # 512KB max receive message
maxSendMsgSize: 4194304 # 4MB max send message
keepAliveTime: "30s" # Connection keep-alive
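Client-side dial options should be compatible with these server settings: the client's receive limit must cover the server's 4 MB maxSendMsgSize, and anything the client sends must fit within the server's 512 KB maxRecvMsgSize. The Go sketch below is illustrative only; the keep-alive values are assumptions and must also satisfy the server's keep-alive enforcement policy.

```go
package jobletclient

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
	"google.golang.org/grpc/keepalive"
)

// dialOptions returns client options sized to the server configuration above.
func dialOptions(creds credentials.TransportCredentials) []grpc.DialOption {
	return []grpc.DialOption{
		grpc.WithTransportCredentials(creds),
		grpc.WithDefaultCallOptions(
			grpc.MaxCallRecvMsgSize(4*1024*1024), // matches server maxSendMsgSize (4 MB)
			grpc.MaxCallSendMsgSize(512*1024),    // matches server maxRecvMsgSize (512 KB)
		),
		grpc.WithKeepaliveParams(keepalive.ClientParameters{
			Time:    30 * time.Second, // illustrative; roughly mirrors keepAliveTime
			Timeout: 10 * time.Second, // assumed value, not taken from the config above
		}),
	}
}
```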
The server provides detailed, structured logging at the following levels:
# Structured logging with fields
DEBUG - Detailed execution flow and debugging info
INFO - Job lifecycle events and normal operations
WARN - Resource limit violations, slow clients, recoverable errors
ERROR - Job failures, system errors, authentication failures
# Example log entry
[2024-01-15T10:30:00Z] [INFO] job started successfully | jobId=f47ac10b-58cc-4372-a567-0e02b2c3d479 pid=12345 command="python3 script.py" duration=50ms
# Check server health
rnx job list
# Verify certificate and connection
rnx --server=your-server:50051 job list
# Monitor service status (systemd)
sudo systemctl status joblet
sudo journalctl -u joblet -f
Job resource limits are applied through cgroups created under /sys/fs/cgroup/joblet.slice/.
Joblet provides comprehensive workflow orchestration through YAML-defined job dependencies. Workflows enable complex multi-job execution with dependency management, resource isolation, and comprehensive monitoring.
Workflow jobs declare dependencies with requires clauses.

The API provides multiple services with distinct responsibilities:
JobService handles regular user jobs with production isolation:
service JobService {
// Job execution with production isolation
rpc RunJob(RunJobReq) returns (RunJobRes);
rpc GetJobStatus(GetJobStatusReq) returns (GetJobStatusRes);
rpc StopJob(StopJobReq) returns (StopJobRes);
rpc ListJobs(EmptyRequest) returns (Jobs);
rpc GetJobLogs(GetJobLogsReq) returns (stream DataChunk);
// Workflow execution
rpc RunWorkflow(RunWorkflowRequest) returns (RunWorkflowResponse);
rpc GetWorkflowStatus(GetWorkflowStatusRequest) returns (GetWorkflowStatusResponse);
rpc ListWorkflows(ListWorkflowsRequest) returns (ListWorkflowsResponse);
rpc GetWorkflowJobs(GetWorkflowJobsRequest) returns (GetWorkflowJobsResponse);
}
RuntimeService handles runtime building with builder chroot access:
service RuntimeService {
// Runtime installation and management
rpc InstallRuntime(InstallRuntimeRequest) returns (InstallRuntimeResponse);
rpc ListRuntimes(ListRuntimesRequest) returns (ListRuntimesResponse);
rpc GetRuntimeInfo(GetRuntimeInfoRequest) returns (GetRuntimeInfoResponse);
rpc TestRuntime(TestRuntimeRequest) returns (TestRuntimeResponse);
}
Key Differences:
- JobType: "standard" → minimal chroot with production isolation
- JobType: "runtime-build" → builder chroot with host OS access

The WorkflowJob message represents a job within a workflow with dependency information.
message WorkflowJob {
string jobId = 1; // Actual job UUID for started jobs, "0" for non-started jobs
string jobName = 2; // Job name from workflow YAML
string status = 3; // Current job status
repeated string dependencies = 4; // List of job names this job depends on
Timestamp startTime = 5; // Job start time
Timestamp endTime = 6; // Job completion time
int32 exitCode = 7; // Process exit code
}
Job ID Behavior:
- Started jobs: jobId contains the actual job UUID assigned by Joblet (e.g., "f47ac10b-58cc-4372-a567-0e02b2c3d479", "6ba7b810-9dad-11d1-80b4-00c04fd430c8")
- Non-started jobs: jobId shows "0" to indicate the job hasn't been started yet

The GetWorkflowStatusResponse message provides comprehensive workflow status with job details.
message GetWorkflowStatusResponse {
WorkflowInfo workflow = 1; // Overall workflow information
repeated WorkflowJob jobs = 2; // Detailed job information with dependencies
}
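The request message for this RPC is not detailed in this reference, so the Go sketch below only shows how a client might consume an already-fetched response; field names follow the WorkflowJob schema above, and `pb` is the usual placeholder for the generated bindings.

```go
package jobletclient

import (
	"fmt"

	pb "example.com/joblet/gen/joblet" // placeholder import path for generated bindings
)

// printWorkflowJobs renders per-job detail from a GetWorkflowStatusResponse,
// roughly mirroring the status display shown below.
func printWorkflowJobs(res *pb.GetWorkflowStatusResponse) {
	for _, j := range res.Jobs {
		id := j.JobId
		if id == "0" {
			id = "(not started)" // job has not been scheduled onto a node yet
		}
		fmt.Printf("%-40s %-15s %-10s deps=%v\n", id, j.JobName, j.Status, j.Dependencies)
	}
}
```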
Workflow jobs have job names derived from their YAML job keys:
# workflow.yaml
jobs:
setup-data: # Job name: "setup-data"
command: "python3"
args: ["setup.py"]
process-data: # Job name: "process-data"
command: "python3"
args: ["process.py"]
requires:
- setup-data: "COMPLETED"
Job ID vs Job Name:
- Job ID: the UUID assigned by Joblet when the job starts ("0" until then)
- Job Name: the key defined in the workflow YAML (e.g., "setup-data")
Status Display:
JOB ID JOB NAME STATUS EXIT CODE DEPENDENCIES
---------------------------------------------------------------------------------------------------------------------
f47ac10b-58cc-4372-a567-0e02b2c3d479 setup-data COMPLETED 0 -
6ba7b810-9dad-11d1-80b4-00c04fd430c8 process-data RUNNING - setup-data
Workflow status commands automatically display job names for better visibility:
# Get workflow status with job names and dependencies
rnx job status --workflow a1b2c3d4-e5f6-7890-1234-567890abcdef
# List workflows
rnx job list --workflow
# Execute workflow
rnx job run --workflow=pipeline.yaml