Ratio1 Deeploy blog #1: Decentralized Managed Container Orchestration

Education

What is containerization? Containerization is a way to package software so it works the same everywhere. Imagine a container like a little box that holds your app, plus everything it needs to run - its code, tools, and settings. No matter where you run the box - on your laptop, in the cloud, or on another computer - it behaves the same. It’s like putting your app in a lunchbox so it doesn’t get mixed up with anything else. This helps avoid the common “it worked on my machine” problem, makes apps start faster, and saves space compared to older methods like virtual machines.

What is orchestration? Orchestration is the process of automatically managing how software containers are deployed, started, stopped, updated, and scaled across multiple machines. Think of it like a conductor leading an orchestra - making sure each container (or “instrument”) plays its part at the right time and in harmony with the others. In large applications, there could be dozens or hundreds of containers that need to work together. Orchestration tools like Kubernetes handle the behind-the-scenes complexity, deciding where containers should run, restarting them if they fail, and scaling them up or down based on demand. It ensures your apps stay available, efficient, and easy to manage - even at large scale.

Container orchestration is vital in cloud-native apps – tools like Kubernetes have become the “de-facto standard” for automating the deployment, scaling, and management of containers. However, traditional orchestrators assume a centrally managed cluster. Ratio1 reimagines this by running orchestration on a decentralized, trustless compute fabric. In Ratio1’s network, any device (server, laptop, edge gateway) can run an “Edge Node” agent that contributes CPU/GPU resources. These Edge Nodes form a peer-to-peer cluster with no single point of control. Smart contracts and on-chain oracles govern task distribution, payments, and security, so applications (AI models or containerized services) can be deployed across pooled, non-custodial compute without requiring users to trust any single operator. In effect, Ratio1 Deeploy serves as the managed API gateway for container/AI workloads: you submit a request to Deeploy, and it securely provisions that container across the Ratio1 Edge Node network.

Decentralized orchestration solves challenges unique to edge/IoT environments. In a distributed edge network there is no central authority and devices can dynamically join or leave. Edge nodes are highly heterogeneous (CPU types, GPUs, memory), and payments or incentives must flow peer-to-peer. Ratio1 Deeploy’s design reflects these realities: it leverages blockchain-based identity and consensus so that no one has to trust a central scheduler, yet each job is verified and routed correctly. This model improves scalability and resilience: as more nodes join, the mesh grows; there is no single failure point, and idle hardware is utilized efficiently. By automating network-wide workload placement and using token‑based incentives, Ratio1 aims to “turn everyday devices into AI servers” and drastically cut AI cloud costs.

Decentralized orchestration through Ratio1 oracle network across multiple nodes eliminates reliance on traditional central orchestrator. This in turn makes the ecosystem dramatically more robust to attacks, outages or other disruptive events, as the failure one ore more controller nodes will not hinder the orchestration process. Concurrently the workers across the ecosystem are formed from a totally heterogeneous infrastructure rather than being limited to a quasi-homogeneous resources. Finally, we have to emphasise that this fully aligns with the anti-censorship and own-your-data Ratio1 policy.

Low-Level Example: Deploying a Containerized Web API

Before diving into the full Deeploy flow, it helps to see a basic hands-on example. The Ratio1 Python SDK lets you target a single pre-authorized node and deploy a container there. For instance, a toy demo ex18_deploy_container_webapi.py does the following:

  1. Initialize a Session. The script starts by creating a ratio1.Session, which sets up trustless authentication to the Ratio1 network (via dAuth).


  2. Specify the Edge Node. You point the SDK to an Edge Node address (in a dev network this node is already “whitelisted” to accept requests).


  3. Create a Container App. Using a method like session.create_container_web_app, the script packages a Docker image or command. For example, it might spin up a simple HTTP server inside a Python container.


  4. Deploy to the Node. Calling app.deploy() sends the container to the node. The node then launches the containerized web API and returns a public URL.


  5. Wait/Monitor (optional). The script can pause and monitor logs or results. It then prints out something like “Web API deployed at: http://” so you can test the service.

```python
from ratio1 import Session

if __name__ == '__main__':
  session = Session(silent=True)
  node_address = "0x1234...EdgeNodeAddress"
  app, _ = session.create_container_web_app(  # this uses CONTAINER_APP_RUNNER
    node=node_address,
    name="ratio1_simple_container_webapp",
    image="tvitalii/flask-docker-app:latest",
    port=5000
  )
  try:
    url = app.deploy()
    print("Webapp deployed at: ", url)
  except Exception as e:
    print("Error deploying webapp: ", e)
  session.wait(seconds=120)  # optional: wait a bit for startup/logs
```

This low-level script bypasses the full Deeploy machinery (no on-chain escrow, no multi-node routing, no blockchain authentication, no managed orchestration). It’s not production-grade, but it’s ideal for learning and prototyping. It shows how the Ratio1 SDK automatically handles container packaging and node communication. In practice, you’d use this in a dev setting or experimentation phase. Once you’re ready to scale or secure the deployment, you would move to the full Deeploy API flow (next section).

Production-Grade Flow: The Deeploy API and Secure Job Routing

In a production deployment, partners and developers use the Deeploy API to run workloads across the Oracle/Edge subnetwork. Conceptually, Deeploy acts like a decentralized “cloud controller” built on blockchain oracles. Clients submit their job request (container image, config, etc.) to Deeploy, and behind the scenes a chain of checks and consensus steps ensures security and validity. The high-level flow is:

  1. Job Submission. The developer calls the Deeploy REST API (or UI) with their job parameters. This request must be properly formatted (e.g. JSON with image name, resource limits, etc.).


  2. Identity & Signing. Deeploy verifies an EVM-compatible digital signature attached to the request. This signature (from your private key) proves you authorized this job. It also ties the request to your on-chain account for payment and auditing. See the JSON request below for a more in-depth understanding.


  3. Escrow (PoAI) Verification. The system checks that you have locked up sufficient funds in a smart contract (often called a Proof-of-Intent or escrow transaction). This ensures nodes will be paid for the work. Deeploy queries the blockchain to confirm the escrow tx is in place before proceeding.


  4. Job Validation. Deeploy’s validator nodes inspect the request for correctness. They might check that the container image is from a trusted registry, that required on-chain licenses or approvals exist, and that the requested resources are reasonable. Any safety filters (e.g. data privacy restrictions) are enforced here.


  5. Scheduling & Consensus. Once the job is signed, paid, and validated, Deeploy broadcasts it to the Oracle/Edge node subnetwork. An oracle node, chosen from a set of supervisor/oracle nodes that use a practical Byzantine Fault Tolerance (pBFT) consensus protocol to agree on worker node availability, selects or directly delivers the job to the targeted nodes. In essence, these nodes determine which Edge Node(s) should run the container (based on current load, capabilities, or geographic location). Multi-phase consensus then ensures that even if some nodes are malicious or failing, the job routing remains correct and trust-minimized.


  6. Deployment & Runtime. The selected Edge Node(s) pull the container and launch it. They report back to the network (and the original client) with status updates or results. Throughout execution, nodes monitor the container and handle issues like restarts or load balancing – all under the broader governance of the Ratio1 network. Importantly, because the entire process is anchored on-chain, each step is auditable and tamper-resistant.
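The request-construction side of the steps above can be sketched in Python. This is an illustrative sketch, not the official SDK: the field names mirror the JSON example shown below, while the nonce format and the canonical message that gets signed are assumptions, and the actual EVM signature would be produced with an Ethereum wallet or a library such as eth_account.

```python
import json
import time

def build_deeploy_request(app_alias, image, port, target_nodes):
    """Assemble an (unsigned) Deeploy job request.

    Field names mirror the Deeploy JSON example in this post; the
    nonce scheme (millisecond timestamp as hex) is an assumption
    made for illustration.
    """
    return {
        "app_alias": app_alias,
        "plugin_signature": "CONTAINER_APP_RUNNER",
        "nonce": hex(int(time.time() * 1000)),
        "target_nodes": target_nodes,
        "target_nodes_count": 0,
        "app_params": {"IMAGE": image, "PORT": port},
        "pipeline_input_type": "void",
    }

def message_to_sign(request):
    # Deterministic serialization so that signer and verifier hash
    # identical bytes (the exact canonical form Deeploy uses is assumed).
    return json.dumps(request, sort_keys=True, separators=(",", ":"))

req = build_deeploy_request(
    app_alias="demo_webapi",
    image="ratio1/test-web-app",
    port=8000,
    target_nodes=["0xai_A0RAld97-xHTXm57jf1QB9fEHBi8_mEkE5d2oTafu9k8"],
)
msg = message_to_sign(req)
# The resulting signature and sender address would then be attached as
# EE_ETH_SIGN / EE_ETH_SENDER, plus EE_ETH_ESCROW_TX from the escrow step.
```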

Looking under the hood and comparing with the SDK example above, the equivalent production-grade Deeploy request looks like the following:

```json
{
  "app_alias": "demo_webapi",
  "plugin_signature": "CONTAINER_APP_RUNNER",
  "return_request": true,
  "nonce": "0x196aa1f7aaf",
  "target_nodes": [
    "0xai_A0RAld97-xHTXm57jf1QB9fEHBi8_mEkE5d2oTafu9k8"
  ],
  "target_nodes_count": 0,
  "app_params": {
    "CONTAINER_RESOURCES": {
      "cpu": 1,
      "memory": "512m"
    },
    "CR": "docker.io",
    "CR_PASSWORD": "password",
    "CR_USER": "user",
    "RESTART_POLICY": "always",
    "IMAGE_PULL_POLICY": "always",
    "NGROK_EDGE_LABEL": null,
    "NGROK_USE_API": false,
    "IMAGE": "ratio1/test-web-app",
    "PORT": 8000
  },
  "pipeline_input_type": "void",
  "pipeline_input_uri": null,
  "chainstore_response": false,
  "EE_ETH_SIGN": "0xccc09ff40e3d9b973491fe0487f29477923a262d40b610d7ba8d88cf861decee6bd7a0a47eb2532511678f4963e980ac0318b155a9406dc85bf55a1b4f7361091c",
  "EE_ETH_SENDER": "0x464579c1Dc584361e63548d2024c2db4463EdE48",
  "EE_ETH_ESCROW_TX": "0x23131229477923a262dc1c1222123229477923a262d83884829477923a262d29477923a262d",
  "IMAGE": "repo/image:tag",
  "CR": "docker.io"
}
```

The example above illustrates the simplicity of the request format: beyond the standard app parameters, only the signature (EE_ETH_SIGN), sender address (EE_ETH_SENDER), and escrow transaction (EE_ETH_ESCROW_TX) fields are added to anchor the request on-chain.

Partners and customers can interact with Deeploy in two main ways. For ease-of-use, Ratio1 provides a reference web dApp/UI that lets you submit jobs with a few clicks. Alternatively, enterprise users and platforms can call the Deeploy API directly from their own dashboards or CI/CD pipelines. In either case, the underlying protocol flow – signature check, escrow, consensus scheduling – is the same.
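Calling the API directly from a dashboard or CI/CD pipeline boils down to an HTTP POST of the signed JSON. The sketch below uses only the Python standard library; the endpoint URL is a placeholder, since the actual Deeploy base URL depends on your network and environment.

```python
import json
import urllib.request

# Placeholder endpoint: substitute the real Deeploy API base URL
# for your target network.
DEEPLOY_URL = "https://deeploy.example.invalid/create_pipeline"

def make_deeploy_post(signed_request: dict) -> urllib.request.Request:
    """Wrap a signed Deeploy request payload in an HTTP POST."""
    body = json.dumps({"request": signed_request}).encode("utf-8")
    return urllib.request.Request(
        DEEPLOY_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = make_deeploy_post({"app_alias": "demo_webapi", "EE_ETH_SIGN": "0x..."})
# urllib.request.urlopen(req) would submit it; omitted here because the
# URL above is only a placeholder.
```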

Command Sending via Deeploy: A Two-Tier Approach

Deeploy implements a sophisticated command sending system that operates at two distinct levels, providing both precision and convenience for managing distributed applications.

Instance-Level Commands

The instance-level command system (send_instance_command) offers granular control over specific application instances. This approach is ideal when you need to target particular nodes or instances with precise commands.

Key Features:

  • Direct targeting of specific instances

  • Requires detailed knowledge of:

    • Target nodes

    • Plugin signatures

    • Instance IDs

    • Application IDs

  • No automatic discovery process

  • Suitable for precise, targeted operations
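Mirroring the request format used by the CLI tutorial later in this post, an instance-level command payload has to carry every coordinate of the target explicitly. A minimal sketch (all identifier values are placeholders):

```python
def build_instance_command(app_id, node, plugin_signature,
                           instance_id, command):
    """Build a send_instance_command payload; every coordinate of the
    target instance must be supplied explicitly (no discovery)."""
    return {
        "request": {
            "app_id": app_id,
            "target_nodes": [node],
            "plugin_signature": plugin_signature,
            "instance_id": instance_id,
            "instance_command": command,
        }
    }

payload = build_instance_command(
    app_id="my_app_id",
    node="0xai_target_node_1",
    plugin_signature="CONTAINER_APP_RUNNER",
    instance_id="PLUGIN_INSTANCE_ID",
    command="RESTART",
)
```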

Application-Level Commands

The application-level command system (send_app_command) provides a more streamlined approach to command distribution. This higher-level abstraction simplifies the management of distributed applications by handling the discovery and routing of commands automatically.

Key Features:

  • Simplified interface requiring only the application ID

  • Automatic discovery of:

    • Running nodes

    • Relevant plugin instances

    • Instance locations

  • Intelligent command distribution

  • Reduced management overhead
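By contrast, an app-level payload (matching the send_app_command example later in this post) needs only the application ID; discovery of nodes and instances happens on the Deeploy side. A minimal sketch:

```python
def build_app_command(app_id: str, command: str) -> dict:
    """Build a send_app_command payload. Unlike instance-level commands,
    only the app_id is needed; Deeploy discovers the running nodes and
    plugin instances and routes the command to all of them."""
    return {"request": {"app_id": app_id, "app_command": command}}

payload = build_app_command("my_app_id", "RESTART")
```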

Practical Use Cases: Container Management in Deeploy

Container Lifecycle Management

Deeploy's command system is particularly powerful when managing containerized applications. Let's explore how to handle common container operations like restarting or stopping containers across your distributed system.

Stop Command

The stop command provides a direct way to terminate a container instance. When executed, it immediately halts the container's operation, stops the associated log collection process, and terminates any active ngrok tunnel. The command also ensures that all container logs are saved to disk before shutdown. After stopping, the container won't automatically restart unless explicitly commanded to do so.

Restart Command

The restart command offers a way to refresh a container instance. It performs a complete cycle of stopping and starting the container. When executed, it first stops the container (including log collection and ngrok tunnel), saves the logs, and then starts a fresh instance of the container. During the restart, the system follows the image pull policy that was specified during container creation - if the policy is set to "always", it will check for and pull any new versions of the container image before starting the new instance. This ensures that your container can be updated with the latest image version during the restart process.

Both commands can be sent either to specific instances or to all instances of an application, providing flexibility in managing container lifecycles across your distributed system.
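The stop/restart semantics described above can be sketched as plain Python to make the ordering explicit. This is a behavioral model of the prose only, not Deeploy's actual implementation; the class and method names are invented for illustration.

```python
class ContainerInstanceModel:
    """Toy model of the stop/restart lifecycle described above."""

    def __init__(self, image_pull_policy="always"):
        self.image_pull_policy = image_pull_policy
        self.running = False
        self.events = []  # ordered record of lifecycle actions

    def stop(self):
        # Halt the container, stop log collection, close the ngrok
        # tunnel, and persist logs to disk -- in that order.
        self.events += ["halt_container", "stop_log_collection",
                        "close_ngrok_tunnel", "save_logs_to_disk"]
        self.running = False

    def restart(self):
        # A restart is a full stop/start cycle; the image pull policy
        # chosen at creation time decides whether a newer image is
        # fetched before the fresh instance starts.
        self.stop()
        if self.image_pull_policy == "always":
            self.events.append("pull_latest_image")
        self.events.append("start_container")
        self.running = True

c = ContainerInstanceModel(image_pull_policy="always")
c.restart()
```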

Sending a request to Deeploy via CLI

The Ratio1 SDK provides a powerful way to interact with the Deeploy API, enabling developers to manage AI pipelines and applications in a secure and authenticated manner. The ex19_deeploy_example.py provides a ready-to-use CLI tool that enables developers to interact with the Deeploy API. It's not just a learning resource but a practical utility that can be used in real-world scenarios to manage AI pipelines and applications.

Key Features

The tutorial showcases several important capabilities:

  1. Building and signing messages for Deeploy API requests

  2. Sending authenticated requests to the Deeploy API

  3. Handling responses

  4. Managing pipelines and applications

Prerequisites

  • Python 3.x

  • Ratio1 SDK

  • A private key file in PEM format

API Endpoints

The tutorial covers five main endpoints:

  1. get_apps - Retrieve list of applications

  2. create_pipeline - Create new AI pipelines

  3. delete_pipeline - Remove existing pipelines

  4. send_app_command - Send commands to applications

  5. send_instance_command - Send commands to specific instances

Input parameters Guide

Required Parameters

1. --private-key

  • Type: String (file path)

  • Required: Yes

  • Description: Path to your PEM format private key file

  • Example: --private-key /path/to/your/private-key.pem

  • Purpose: Used for authenticating requests to the Deeploy API

  • Security Note: Must be a valid PEM file containing your Ethereum private key

2. --endpoint

  • Type: String

  • Required: Yes

  • Default: 'create_pipeline'

  • Choices:

    • create_pipeline

    • delete_pipeline

    • get_apps

    • send_app_command

    • send_instance_command

  • Description: Specifies which API endpoint to call

  • Example: --endpoint get_apps

Optional Parameters

1. --key-password

  • Type: String

  • Required: No

  • Default: None

  • Description: Password for the private key file (if the key is encrypted)

  • Example: --key-password "your_password"

  • Note: Only needed if your private key file is password-protected

2. --request

  • Type: String (file path)

  • Required: No

  • Default: None

  • Description: Path to the JSON file containing the request data

  • Example: --request /path/to/request.json

  • Note: Required for all endpoints except get_apps
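The parameter guide above maps directly onto a standard argparse definition. Below is a sketch of how the CLI surface of ex19_deeploy_example.py could be declared; the exact wiring in the tutorial script may differ slightly (for instance, here --endpoint simply defaults to create_pipeline rather than being mandatory).

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """CLI surface mirroring the parameter guide above."""
    p = argparse.ArgumentParser(description="Deeploy API example client")
    p.add_argument("--private-key", required=True,
                   help="Path to your PEM-format private key file")
    p.add_argument("--endpoint", default="create_pipeline",
                   choices=["create_pipeline", "delete_pipeline", "get_apps",
                            "send_app_command", "send_instance_command"],
                   help="Which Deeploy API endpoint to call")
    p.add_argument("--key-password", default=None,
                   help="Password for the private key file, if encrypted")
    p.add_argument("--request", default=None,
                   help="Path to the JSON request file "
                        "(required for all endpoints except get_apps)")
    return p

args = build_parser().parse_args(
    ["--private-key", "key.pem", "--endpoint", "get_apps"]
)
```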

Example Usage:

1. Get list of apps:

   ```bash
   python3 ex19_deeploy_example.py --private-key path/to/private-key.pem --endpoint get_apps   
   ```

2. Create a pipeline:

   ```bash
   python3 ex19_deeploy_example.py --private-key path/to/private-key.pem --request path/to/request.json --endpoint create_pipeline
   ```

Example request.json for pipeline creation:

   ```json
   {
     "request": {
       "app_alias": "app_deployed_from_tutorials",
       "plugin_signature": "PLUGIN_SIGNATURE",
       "target_nodes": [
         "0xai_target_node_1"
       ],
       "target_nodes_count": 0
       "pipeline_input_type": "void"
     }
   }
   ```

3. Delete a pipeline:

   ```bash
   python3 ex19_deeploy_example.py --private-key path/to/private-key.pem --request path/to/request.json --endpoint delete_pipeline
   ```

Example request.json for pipeline deletion:

   ```json
   {
     "request": {
       "app_id": "target_app_name_id_returned_by_get_apps_or_create_pipeline",
       "target_nodes": [
         "0xai_target_node_1"
       ]
     }
   }
   ```

4. Send app command:

    ```bash
    python3 ex19_deeploy_example.py --private-key path/to/private-key.pem --request path/to/request.json --endpoint send_app_command
    ```

Example request.json for sending app command:

    ```json
    {
      "request": {
        "app_id": "target_app_name_id_returned_by_get_apps_or_create_pipeline",
        "app_command": "RESTART"
      }
    }
    ```

5. Send instance command:

    ```bash
    python3 ex19_deeploy_example.py --private-key path/to/private-key.pem --request path/to/request.json --endpoint send_instance_command
    ```

Example request.json for sending instance command:

    ```json
    {
      "request": {
        "app_id": "target_app_name_id_returned_by_get_apps_or_create_pipeline",
        "target_nodes": [
          "0xai_target_node_1"
        ],
        "plugin_signature": "PLUGIN_SIGNATURE",
        "instance_id": "PLUGIN_INSTANCE_ID",
        "instance_command": "RESTART"
      }
    }
    ```

In summary, this hybrid architecture delivers the best of both worlds: cloud-like ease of deploying containers and decentralized trust and scalability. By distributing orchestration across a blockchain-secured network of nodes, Deeploy avoids single points of failure and leverages idle resources worldwide. It’s especially well-suited for AI and IoT use cases: AI inference services can run on edge servers close to data sources (reducing latency), and any device owner can monetize spare capacity in a transparent way. As one expert notes, oracle-based networks can perform off-chain computation with trust guarantees, offering more performance than on-chain-only logic. Ratio1’s approach turns every eligible machine into a potential “AI server” and makes large-scale app hosting decentralized, secure, and cost-efficient.

Key Takeaways: The Ratio1 Deeploy model (the API, underlying mechanics and the dApp UX) enables scalable, secure, decentralized cloud hosting. It automates containerized workloads across many edge machines, while using blockchain for identity, payment, and consensus. This means developers get the familiar orchestration workflow they need, but without relying on a centralized cloud provider. As edge computing grows (5G, IoT, AI at the edge), such decentralized orchestration platforms are increasingly important for meeting demand in a flexible, resilient way.

Ratio1’s architecture builds on concepts from edge orchestration and blockchain. These principles underpin how Deeploy brings Kubernetes‑style deployment to a distributed, trust-minimized network.

Andrei Ionut Damian

May 8, 2025

The Ultimate AI OS Powered by Blockchain Technology

©Ratio1 2025. All rights reserved.