Sloth Runner - AI-Powered GitOps Task Orchestration Platform¶
The world's first AI-powered task orchestration platform with native GitOps capabilities. Sloth Runner combines intelligent optimization, predictive analytics, automated deployments, and enterprise-grade reliability into a single, powerful platform.
Quick Start with GitOps¶
Get started with a complete GitOps workflow in under 5 minutes:
1. Install Sloth Runner¶
curl -sSL https://raw.githubusercontent.com/chalkan3-sloth/sloth-runner/main/install.sh | bash
2. Run the GitOps Example¶
# Clone the repository
git clone https://github.com/chalkan3-sloth/sloth-runner.git
cd sloth-runner
# Execute the complete GitOps workflow
sloth-runner run -f examples/deploy_git_terraform.sloth -v examples/values.yaml deploy_git_terraform
3. Watch the Magic Happen¶
- Repository cloned successfully
- Terraform initialized automatically
- Infrastructure planned and validated
- Deployment completed successfully
Featured Examples: Real-World Power¶
Example 1: Deploy Web Cluster with Incus + Goroutines¶
Deploy a complete web cluster in parallel using containers:
task("deploy-web-cluster")
:description("Deploy complete web cluster with Incus")
:delegate_to("incus-host-01")
:command(function()
-- Create isolated network
incus.network({
name = "web-dmz",
type = "bridge"
}):set_config({
["ipv4.address"] = "10.10.0.1/24"
}):create()
-- Deploy 3 web servers in parallel
goroutine.map({"web-01", "web-02", "web-03"}, function(name)
incus.instance({
name = name,
image = "ubuntu:22.04"
}):create()
:start()
:wait_running()
:exec("apt install -y nginx")
end)
log.info("Web cluster deployed!")
return true
end)
:build()
What's happening here:
- Parallel execution with goroutine.map()
- Network isolation with an Incus bridge
- Fluent API for chaining operations
- Remote execution via :delegate_to()
Full Incus Documentation →
Example 2: Intelligent Deploy with Facts¶
Make smart deployment decisions based on real-time system state:
task("intelligent-deploy")
:description("Intelligent deployment based on system facts")
:command(function()
-- Collect system information
local info = facts.get_all({ agent = "prod-server-01" })
log.info("Analyzing: " .. info.hostname)
log.info(" Platform: " .. info.platform.os)
log.info(" Memory: " .. string.format("%.2f GB",
info.memory.total / 1024 / 1024 / 1024))
-- Validate requirements
if (info.memory.total / 1024 / 1024 / 1024) < 4 then
error("Need at least 4GB RAM")
end
-- Check Docker installation
local docker = facts.get_package({
agent = "prod-server-01",
name = "docker"
})
if not docker.installed then
log.info("Installing Docker...")
pkg.install({ packages = {"docker.io"} })
:delegate_to("prod-server-01")
end
-- Conditional deploy based on architecture
local image_tag = info.platform.architecture == "arm64"
and "latest-arm64"
or "latest-amd64"
log.info("Deploying: myapp:" .. image_tag)
return true
end)
:build()
What's happening here:
- System discovery with the facts module
- Requirement validation before deploy
- Conditional logic based on architecture
- Auto-installation of dependencies
- Global modules - no require() needed!
Full Facts Documentation →
New: Unified Module API¶
All modules now use a modern, consistent, table-based API for maximum clarity and flexibility:
-- Package Management
task("setup_web_server", {
description = "Setup web server on remote host",
command = function()
-- Update package database
pkg.update({ delegate_to = "web-server" })
-- Install packages
pkg.install({
packages = {"nginx", "certbot", "postgresql"},
delegate_to = "web-server"
})
-- Configure systemd service
systemd.enable({
service = "nginx",
delegate_to = "web-server"
})
systemd.start({
service = "nginx",
delegate_to = "web-server"
})
-- Verify installation
infra_test.service_is_running({
name = "nginx",
delegate_to = "web-server"
})
infra_test.port_is_listening({
port = 80,
delegate_to = "web-server"
})
return true, "Web server configured successfully"
end
})
Key Benefits:
- Named parameters for self-documenting code
- Consistent API across all modules
- Remote execution via delegate_to
- Built-in testing with the infra_test module
- Parallel execution with goroutines
See Complete API Examples →
Revolutionary Features¶
Modern DSL for GitOps¶
Clean, powerful Lua-based syntax designed for infrastructure workflows
-- Complete GitOps workflow in clean, readable syntax
local clone_task = task("clone_infrastructure")
:description("Clone Terraform infrastructure repository")
:workdir("/tmp/infrastructure")
:command(function(this, params)
local git = require("git")
log.info("Cloning infrastructure repository...")
local repository = git.clone(
values.git.repository_url,
this.workdir.get()
)
return true, "Repository cloned successfully", {
repository_url = values.git.repository_url,
clone_destination = this.workdir.get()
}
end)
:timeout("5m")
:retries(3, "exponential")
:build()
local deploy_task = task("deploy_terraform")
:description("Deploy infrastructure using Terraform")
:command(function(this, params)
local terraform = require("terraform")
-- Terraform init runs automatically
local client = terraform.init(this.workdir:get())
-- Load configuration from values.yaml
local tfvars = client:create_tfvars("terraform.tfvars", {
environment = values.terraform.environment,
region = values.terraform.region,
instance_type = values.terraform.instance_type
})
-- Plan and apply infrastructure
local plan_result = client:plan({ var_file = tfvars.filename })
if plan_result.success then
return client:apply({
var_file = tfvars.filename,
auto_approve = true
})
end
return false, "Terraform plan failed"
end)
:timeout("15m")
:build()
-- Define the complete GitOps workflow
workflow.define("infrastructure_deployment")
:description("Complete GitOps: Clone + Plan + Deploy")
:version("1.0.0")
:tasks({ clone_task, deploy_task })
:config({
timeout = "20m",
max_parallel_tasks = 1
})
:on_complete(function(success, results)
if success then
log.info("Infrastructure deployed successfully!")
end
end)
Native GitOps Integration¶
Built-in support for Git and Terraform operations
-- Git operations with automatic credential handling
local git = require("git")
local repo = git.clone("https://github.com/company/infrastructure", "/tmp/infra")
git.checkout(repo, "production")
git.pull(repo, "origin", "production")
-- Terraform lifecycle management
local terraform = require("terraform")
local client = terraform.init("/tmp/infra/terraform/") -- Runs 'terraform init'
local plan = client:plan({ var_file = "production.tfvars" })
local apply = client:apply({ auto_approve = true })
-- Values-driven configuration
local config = {
environment = values.terraform.environment or "production",
region = values.terraform.region or "us-west-2",
instance_count = values.terraform.instance_count or 3
}
External Configuration Management¶
Clean separation of code and configuration using values.yaml
values.yaml:
terraform:
environment: "production"
region: "us-west-2"
instance_type: "t3.medium"
enable_monitoring: true
git:
repository_url: "https://github.com/company/terraform-infrastructure"
branch: "main"
workflow:
timeout: "30m"
max_parallel_tasks: 2
Access in workflows:
-- Load configuration from values.yaml
local terraform_config = {
environment = values.terraform.environment,
region = values.terraform.region,
instance_type = values.terraform.instance_type
}
Parallel Execution with Goroutines¶
GAME CHANGER! Execute multiple operations simultaneously and cut deploy times from minutes to seconds!
- 10x Faster: Deploy to 10 servers in parallel instead of sequentially. Before: 5 minutes. Now: 30 seconds.
- Worker Pools: Control concurrency with worker pools to process large volumes. Perfect for rate-limited APIs.
- Async/Await: A modern asynchronous programming pattern in Lua. Clean, easy-to-understand code.
- Built-in Timeouts: Protection against stuck operations with automatic timeouts. Safe and reliable.
Real-World Example: Parallel Deploy¶
local deploy_task = task("deploy_multi_server")
:description("Deploy to 10 servers in parallel - 10x faster!")
:command(function(this, params)
local goroutine = require("goroutine")
-- List of servers to deploy
local servers = {
"web-01", "web-02", "web-03", "api-01", "api-02",
"api-03", "db-01", "db-02", "cache-01", "cache-02"
}
log.info("Starting parallel deployment to " .. #servers .. " servers...")
-- Create an async handle for each server
local handles = {}
for _, server in ipairs(servers) do
local handle = goroutine.async(function()
log.info("Deploying to " .. server)
-- Simulate the deploy (upload, install, restart, health check)
goroutine.sleep(500)
return server, "deployed", os.date("%H:%M:%S")
end)
table.insert(handles, handle)
end
-- Wait for ALL deploys to complete
local results = goroutine.await_all(handles)
-- Process results
log.info("All " .. #results .. " servers deployed successfully!")
return true, "Parallel deployment completed in ~3 seconds!"
end)
:timeout("2m")
:build()
workflow.define("parallel_deployment")
:description("Deploy to multiple servers in parallel")
:tasks({ deploy_task })
Real-World Performance:
Operation | Sequential | With Goroutines | Speedup |
---|---|---|---|
Deploy to 10 servers | 5 minutes | 30 seconds | 10x |
Health check 20 services | 1 minute | 5 seconds | 12x |
Process 1000 items | 10 seconds | 1 second | 10x |
Complete Goroutines Documentation | More Examples
Core Features¶
Stack Management¶
Pulumi-style stack management with persistent state, exported outputs, and execution history tracking.
- Persistent stack state with SQLite in /etc/sloth-runner/
- Exported outputs captured from the pipeline, with JSON support
- Complete execution history tracking with duration metrics
- Environment isolation by stack name
- Unique task and group IDs for enhanced traceability
- Task listing with detailed relationship view
- Stack deletion with confirmation prompts
- Multiple output formats: basic, enhanced, modern, json
# Create and run a stack with enhanced output
sloth-runner stack new my-production-stack -d "Production deployment" -f pipeline.sloth
sloth-runner run my-production-stack -f pipeline.sloth --output enhanced
# Run with JSON output for CI/CD integration
sloth-runner run my-stack -f workflow.sloth --output json
# List all stacks with status and metrics
sloth-runner stack list
# Show stack details with outputs and execution history
sloth-runner stack show my-production-stack
# List tasks with unique IDs and dependencies
sloth-runner list -f pipeline.sloth
# Delete stacks with confirmation
sloth-runner stack delete old-stack
sloth-runner stack delete old-stack --force # skip confirmation
Distributed by Design¶
Native master-agent architecture with real-time streaming, automatic failover, and intelligent load balancing.
- gRPC-based agent communication
- Real-time command streaming
- Automatic failover and recovery
- Intelligent load balancing
- Scalable architecture for enterprise workloads
- TLS-secured communication
# Start master server
sloth-runner master --port 50053 --daemon
# Start and manage agents
sloth-runner agent start --name worker-01 --master localhost:50053
sloth-runner agent list --master localhost:50053
sloth-runner agent run worker-01 "docker ps" --master localhost:50053
Web Dashboard & UI¶
Modern web-based dashboard for comprehensive workflow management and monitoring.
- Real-time monitoring dashboard
- Agent management interface
- Performance metrics visualization
- Centralized logging system
- Team collaboration features
# Start web dashboard
sloth-runner ui --port 8080
# Access at http://localhost:8080
# Run as daemon
sloth-runner ui --daemon --port 8080
AI/ML Integration¶
Built-in artificial intelligence capabilities for smart automation and decision making.
- OpenAI integration for text processing
- Automated decision making
- Code generation assistance
- Intelligent analysis of workflows
- Smart recommendations
-- AI-powered workflow optimization
local ai = require("ai")
local result = ai.openai.complete("Generate Docker build script")
local decision = ai.decide({
cpu_usage = metrics.cpu,
memory_usage = metrics.memory
})
Advanced Scheduling¶
Enterprise-grade task scheduling with cron-style syntax and background execution.
- Cron-style scheduling syntax
- Background execution daemon
- Recurring task management
- Event-driven triggers
- Schedule monitoring
# Enable scheduler
sloth-runner scheduler enable --config scheduler.yaml
# List scheduled tasks
sloth-runner scheduler list
# Delete a scheduled task
sloth-runner scheduler delete backup-task
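The scheduler reads its configuration from a YAML file, but this page does not show the file's schema. The following is a purely illustrative sketch of what a cron-style entry could look like; every field name here is an assumption, not the documented schema:

```yaml
# Hypothetical scheduler.yaml sketch - field names are assumptions, not the documented schema
scheduled_tasks:
  - name: backup-task
    schedule: "0 2 * * *"        # standard five-field cron: daily at 02:00
    workflow_file: backup.sloth
```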
Advanced State Management¶
Built-in SQLite-based persistent state with atomic operations, distributed locks, and TTL support.
- Distributed locking mechanisms
- Atomic operations support
- TTL-based data expiration
- Pattern-based queries
- State replication across agents
-- Advanced state operations
local state = require("state")
state.lock("deploy-resource", 30) -- 30 second lock
state.set("config", data, 3600) -- 1 hour TTL
state.atomic_increment("build-count")
Project Scaffolding¶
Template-based project initialization similar to Pulumi new or Terraform init.
- Multiple templates (basic, cicd, infrastructure, microservices, data-pipeline)
- Interactive mode with guided setup
- Complete project structure generation
- Configuration files auto-generated
# List available templates
sloth-runner workflow list-templates
# Create new project from template
sloth-runner workflow init my-app --template cicd
# Interactive mode
sloth-runner workflow init my-app --interactive
Multi-Cloud Excellence¶
Comprehensive cloud provider support with advanced automation capabilities.
- Native AWS, GCP, and Azure integration
- Advanced Terraform & Pulumi support
- Infrastructure as Code automation
- Built-in security & compliance
- Cost optimization tools
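For Terraform specifically, the lifecycle API shown elsewhere on this page (terraform.init, client:plan, client:apply) can be combined with values-driven settings to target different regions or providers. A minimal sketch; the per-region var-file naming is an assumption for illustration:

```lua
-- Sketch: values-driven Terraform deploy targeting a configurable region
local terraform = require("terraform")

-- Region comes from values.yaml, with a fallback (as in this page's examples)
local region = values.terraform.region or "us-west-2"

local client = terraform.init("/tmp/infra/terraform/")
-- Assumption for illustration: one tfvars file per region, e.g. us-west-2.tfvars
local plan = client:plan({ var_file = region .. ".tfvars" })
if plan.success then
    client:apply({ auto_approve = true })
end
```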
Enterprise Security¶
Built-in security features for enterprise compliance and data protection.
- Certificate management
- Secret encryption and storage
- Vulnerability scanning
- Compliance checking
- Audit logging system
Enhanced Output System¶
Pulumi-style rich output formatting with configurable styles, progress indicators, and structured displays.
- Multiple output styles (basic, enhanced, rich, modern, json)
- Real-time progress indicators
- Structured output sections
- Rich color formatting
- Metrics visualization
- JSON output for automation and CI/CD integration
# Enhanced Pulumi-style output
sloth-runner run my-stack -f workflow.sloth --output enhanced
# JSON output for automation
sloth-runner run my-stack -f workflow.sloth --output json
# List tasks with unique IDs
sloth-runner list -f workflow.sloth
Rich Module Ecosystem¶
Extensive collection of pre-built modules for common automation tasks.
- Network & HTTP operations
- Database integrations (MySQL, PostgreSQL, MongoDB, Redis)
- Notification systems (Email, Slack, Discord)
- Python/R integration with virtual environments
- Advanced GitOps workflows
- Testing frameworks and quality assurance
Quick Start Examples¶
Stack Management with Pulumi-Style Output¶
# Create a new project from template
sloth-runner workflow init my-cicd --template cicd
# Deploy to development environment
sloth-runner run dev-app -f my-cicd.sloth --output enhanced
# Deploy to production with stack persistence
sloth-runner run prod-app -f my-cicd.sloth -o rich
# Check deployment status and outputs
sloth-runner stack show prod-app
Stack with Exported Outputs & JSON Output¶
local deploy_task = task("deploy")
:command(function(params, deps)
-- Deploy application
local result = exec.run("kubectl apply -f deployment.yaml")
-- Export important outputs to stack
runner.Export({
app_url = "https://myapp.example.com",
version = "1.2.3",
environment = "production",
deployed_at = os.date(),
health_endpoint = "https://myapp.example.com/health"
})
return true, result.stdout, { status = "deployed" }
end)
:build()
workflow.define("production_deployment", {
tasks = { deploy_task }
})
Run with JSON output for automation:
# Get structured JSON output for CI/CD integration
sloth-runner run prod-deployment -f deploy.sloth --output json
# Example JSON output:
{
"status": "success",
"duration": "5.192ms",
"stack": {
"id": "abc123...",
"name": "prod-deployment"
},
"tasks": {
"deploy": {
"status": "Success",
"duration": "4.120ms"
}
},
"outputs": {
"app_url": "https://myapp.example.com",
"version": "1.2.3",
"environment": "production"
},
"workflow": "production_deployment",
"execution_time": 1759237365
}
CLI Commands Overview¶
Stack Management (NEW!)¶
# Execute with stack persistence (NEW SYNTAX)
sloth-runner run {stack-name} --file workflow.sloth
# Enhanced output styles
sloth-runner run {stack-name} --file workflow.sloth --output enhanced
sloth-runner run {stack-name} --file workflow.sloth --output json
# Manage stacks
sloth-runner stack list # List all stacks
sloth-runner stack show production-app # Show stack details with outputs
sloth-runner stack delete old-env # Delete stack
# List tasks with unique IDs
sloth-runner list --file workflow.sloth # Show tasks and groups with IDs
Project Scaffolding¶
# Create new projects
sloth-runner workflow init my-app --template cicd
sloth-runner workflow list-templates # Available templates
Distributed Agents & Web UI¶
# Start master server
sloth-runner master --port 50053 --daemon
# Start distributed agents
sloth-runner agent start --name web-builder --master localhost:50053
sloth-runner agent start --name db-manager --master localhost:50053
# Start web dashboard
sloth-runner ui --port 8080 --daemon
# Access dashboard at http://localhost:8080
# List connected agents
sloth-runner agent list --master localhost:50053
# Execute commands on specific agents
sloth-runner agent run web-builder "docker ps" --master localhost:50053
Advanced Scheduling¶
# Enable background scheduler
sloth-runner scheduler enable --config scheduler.yaml
# List and manage scheduled tasks
sloth-runner scheduler list
sloth-runner scheduler delete backup-task
Distributed Deployment with Monitoring¶
local monitoring = require("monitoring")
local state = require("state")
-- Production deployment with comprehensive monitoring
local deploy_task = task("production_deployment")
:command(function(params, deps)
-- Track deployment metrics
monitoring.counter("deployments_started", 1)
-- Use state for coordination
local deploy_id = state.increment("deployment_counter", 1)
state.set("current_deployment", deploy_id)
-- Execute deployment
local result = exec.run("kubectl apply -f production.yaml")
if result.success then
monitoring.gauge("deployment_status", 1)
state.set("last_successful_deploy", os.time())
log.info("Deployment " .. deploy_id .. " completed successfully")
else
monitoring.gauge("deployment_status", 0)
monitoring.counter("deployments_failed", 1)
log.error("Deployment " .. deploy_id .. " failed: " .. result.stderr)
end
return result
end)
:build()
Multi-Agent Distributed Execution¶
local distributed = require("distributed")
-- Execute tasks across multiple agents
workflow.define("distributed_pipeline", {
tasks = {
task("build_frontend")
:agent("build-agent-1")
:command("npm run build")
:build(),
task("build_backend")
:agent("build-agent-2")
:command("go build -o app ./cmd/server")
:build(),
task("run_tests")
:agent("test-agent")
:depends_on({"build_frontend", "build_backend"})
:command("npm test && go test ./...")
:build(),
task("deploy")
:agent("deploy-agent")
:depends_on({"run_tests"})
:command("./deploy.sh production")
:build()
}
})
Advanced State Management¶
local state = require("state")
-- Complex state operations with locking
local update_config = task("update_configuration")
:command(function(params, deps)
-- Critical section with automatic locking
return state.with_lock("config_update", function()
local current_version = state.get("config_version") or 0
local new_version = current_version + 1
-- Atomic configuration update
local success = state.compare_and_swap("config_version", current_version, new_version)
if success then
state.set("config_data", params.new_config)
state.set("config_updated_at", os.time())
log.info("Configuration updated to version " .. new_version)
return { version = new_version, success = true }
else
log.error("Configuration update failed - version mismatch")
return { success = false, error = "version_mismatch" }
end
end)
end)
:build()
CI/CD Pipeline with GitOps¶
local git = require("git")
local docker = require("docker")
local kubernetes = require("kubernetes")
-- Complete CI/CD pipeline
workflow.define("gitops_pipeline", {
on_git_push = true,
tasks = {
task("checkout_code")
:command(function()
return git.clone(params.repository, "/tmp/build")
end)
:build(),
task("run_tests")
:depends_on({"checkout_code"})
:command("cd /tmp/build && npm test")
:retry_count(3)
:build(),
task("build_image")
:depends_on({"run_tests"})
:command(function()
return docker.build({
path = "/tmp/build",
tag = "myapp:" .. params.git_sha,
push = true
})
end)
:build(),
task("deploy_staging")
:depends_on({"build_image"})
:command(function()
return kubernetes.apply_manifest({
file = "/tmp/build/k8s/staging.yaml",
namespace = "staging",
image = "myapp:" .. params.git_sha
})
end)
:build(),
task("integration_tests")
:depends_on({"deploy_staging"})
:command("./run-integration-tests.sh staging")
:build(),
task("deploy_production")
:depends_on({"integration_tests"})
:condition(function() return params.branch == "main" end)
:command(function()
return kubernetes.apply_manifest({
file = "/tmp/build/k8s/production.yaml",
namespace = "production",
image = "myapp:" .. params.git_sha
})
end)
:build()
}
})
Module Reference¶
Core Modules
- exec - Command execution with streaming
- fs - File system operations
- net - Network utilities
- data - Data processing utilities
- log - Structured logging
State & Monitoring
- state - Persistent state management
- metrics - Monitoring and metrics
- monitoring - System monitoring
- health - Health check utilities
Cloud Providers
- aws - Amazon Web Services
- gcp - Google Cloud Platform
- azure - Microsoft Azure
- digitalocean - DigitalOcean
Infrastructure
- kubernetes - Kubernetes orchestration
- docker - Container management
- terraform - Infrastructure as Code
- pulumi - Modern IaC
- salt - Configuration management
Integrations
- git - Git operations
- python - Python script execution
- notification - Alert notifications
- crypto - Cryptographic operations
Why Choose Sloth Runner?¶
Enterprise Ready
- Distributed execution across multiple agents
- Production-grade security with mTLS
- Comprehensive monitoring and alerting
- Reliable state management with persistence
- Circuit breakers and fault tolerance
Developer Experience
- Rich Lua-based DSL for complex workflows
- Real-time command output streaming
- Interactive REPL for debugging
- Comprehensive documentation
- Intuitive task dependency management
Performance & Reliability
- High-performance parallel execution
- Automatic retry and error handling
- Built-in performance monitoring
- Resource optimization and throttling
- Robust error recovery mechanisms
Operational Excellence
- Prometheus-compatible metrics
- Distributed tracing support
- Structured audit logging
- Flexible alerting mechanisms
- GitOps workflow integration
Get Started in Minutes¶
Installation¶
One-line installer for Linux/macOS:
curl -sSL https://raw.githubusercontent.com/chalkan3-sloth/sloth-runner/main/install.sh | bash
Download and install for Linux AMD64:
Download and install for macOS (Apple Silicon):
Create Your First Workflow¶
Create a file called hello.sloth:
task("hello")
:description("My first Sloth Runner task")
:command(function()
log.info("Hello from Sloth Runner!")
return true
end)
:build()
workflow.define("greeting")
:description("Simple greeting workflow")
:tasks({"hello"})
Run Your Workflow¶
sloth-runner run -f hello.sloth
Learn More¶
-
Quick Tutorial
Get up and running with practical examples in 5 minutes
-
Advanced Examples
Production-ready workflows and real-world use cases
-
Core Concepts
Deep dive into Sloth Runner's architecture and features
-
API Reference
Complete documentation of all modules and functions
-
Modern DSL
Learn the modern task definition syntax
-
GitHub Repository
Source code, issues, and contributions
State Management & Persistence¶
Production Ready
SQLite-based persistent state with enterprise features
Features:
- Atomic operations: increment, compare-and-swap, append
- Distributed locks with automatic timeout handling
- TTL support for automatic data expiration
- Pattern matching for bulk operations
- WAL mode for high performance
Persistent State Example
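These features map onto the state calls already used in this page's examples; a compact sketch combining them:

```lua
local state = require("state")

-- Atomic counter (atomic operations feature)
state.atomic_increment("build-count")

-- TTL support: value expires automatically after 1 hour
state.set("config", { replicas = 3 }, 3600)

-- Distributed lock around a critical section, with compare-and-swap
state.with_lock("config_update", function()
    local v = state.get("config_version") or 0
    state.compare_and_swap("config_version", v, v + 1)
end)
```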
Metrics & Monitoring¶
Production Ready
Comprehensive monitoring with Prometheus integration
Capabilities:
- System metrics: CPU, memory, disk, network monitoring
- Custom metrics: gauges, counters, histograms, timers
- Health checks with configurable thresholds
- Prometheus endpoints for external monitoring
- Real-time alerting based on conditions
Monitoring Example
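The counter and gauge calls below are the ones used in this page's deployment example; a minimal sketch of recording a deploy outcome:

```lua
local monitoring = require("monitoring")

-- Count every attempt
monitoring.counter("deployments_started", 1)

local ok = true  -- outcome of the deploy step (illustrative)
-- Gauge reflects the last deployment's status: 1 = success, 0 = failure
monitoring.gauge("deployment_status", ok and 1 or 0)
if not ok then
    monitoring.counter("deployments_failed", 1)
end
```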
Distributed Agent System¶
Production Ready
Master-agent architecture for distributed execution
Features:
- Master-agent architecture with gRPC communication
- Real-time streaming of command output
- Automatic agent registration and health monitoring
- Load balancing across available agents
- TLS encryption for secure communication
Distributed Execution
Documentation by Language¶
-
English
Complete documentation in English
-
Português
Complete documentation in Portuguese
-
中文
Complete documentation in Chinese
Module Reference¶
Built-in Modules¶
-
State
Persistent state management
-
Metrics
Monitoring and observability
-
Exec
Command execution
-
FS
File system operations
-
Net
Network operations
-
Data
Data processing utilities
-
Log
Structured logging
-
Pkg
Package management
Cloud Provider Modules¶
-
AWS
Amazon Web Services integration
-
GCP
Google Cloud Platform
-
Azure
Microsoft Azure
-
DigitalOcean
DigitalOcean cloud
Infrastructure Modules¶
-
Docker
Container management
-
Pulumi
Modern Infrastructure as Code
-
Terraform
Infrastructure provisioning
-
Systemd
Service management
Get Started Today¶
Ready to streamline your automation? Install Sloth Runner now!
# One-line install
curl -sSL https://raw.githubusercontent.com/chalkan3-sloth/sloth-runner/main/install.sh | bash
# Create your first workflow
cat > hello.sloth << 'EOF'
task("greet")
:command(function()
log.info("Hello World!")
return true
end)
:build()
workflow.define("hello")
:tasks({"greet"})
EOF
# Run it!
sloth-runner run -f hello.sloth
Community & Support¶
-
GitHub
Source code, issues, and contributions
-
Discussions
Community Q&A and feature discussions
-
Issues
Bug reports and feature requests
-
Enterprise
Commercial support and services
Ready to streamline your automation?
Join developers using Sloth Runner for reliable, scalable task orchestration.