Never settle: build, simplify, optimize and repeat


If you’ve ever tried harnessing kaizen or pomodoro principles, you know their true value: meaningful progress and enjoying the ride! At their core, these two powerful methodologies enable steady, incremental gains and champion genuine innovation, rewarding you for iterating, adjusting, and never settling for "just good enough."

Yet all too often, I’ve seen these methodologies get co-opted and twisted: trying to make nine women deliver a baby in one month because "it’s taking forever", "dislike"-ing products, dismissing a complex initiative for "sucking the energy", or treating life-long learning as a burden. The real culprit? Bad habits and diluted philosophies that equate "finishing quickly" with "finishing well." We’ve all heard the excuses, but the truth is that bad habits and misguided beliefs spread like viruses, undermining genuine creativity and meaningful product evolution. But enough of that!

At Ratio1, we’re committed to flipping the script: "never settling" isn’t just another buzzword. We’re not here to cram; we’re here to innovate. It’s a journey of continuous refactoring, relentless simplification, and incremental innovation - punctuated by the occasional dramatic leap - because real progress demands a willingness to reimagine, recalibrate, and push forward again.

With an eye on the bigger prize, we recently overhauled our Edge Node onboarding and user experience, making it more seamless for command-line enthusiasts and UI lovers alike. Below, we walk you through the latest updates so you can see how our "never settle" approach translates into real-world features.

Embracing the edge: the Ratio1 Edge Node

Think of the Ratio1 Edge Node as a meta AI Operating System that you can deploy on everyday hardware, transforming your machine into a contributor (and why not even a vital one) within the Ratio1 network. With each Edge Node, you can provide and manage your device's resources and execute decentralized AI tasks without dealing with a swarm of complex, manual configurations. Plus, you can take advantage of the advanced Ratio1 core libraries (formerly known as the Naeural Edge Protocol libraries), such as naeural_core and naeural_client (renamed to ratio1), which bring you out-of-the-box ease starting in 2025.

And don't forget: you can be a user providing compute by simply running a node, a developer building no-code apps, or both!

Why This Matters

  • No local subscriptions or manual tenant setups

  • No fiddling with user accounts or secrets

  • No DevOps overhead - just launch and go

It’s all about simplification and optimization, letting you channel your mental energy into building and innovating, not debugging environment variables.

Just a few clicks (or commands) away

Our constant streamlining means that deploying a Ratio1 Edge Node is now as simple as running a Docker command. Before you dive in, make sure you’ve covered the basic steps:

A. Install Docker on your machine (Docker Desktop for Windows and macOS).
    Note: Docker must be running on your machine at all times.

B. Make sure you meet the minimum requirements per Ratio1 Edge Node.

  • A 64-bit CPU

  • 2 cores (vCores are fine)

  • 6GB of RAM

  • 20GB of storage
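As a quick sanity check, the core and RAM minimums above can be verified from a Linux shell. This is a minimal sketch assuming `nproc` and `/proc/meminfo` are available; on macOS or Windows you'd use the platform's own tools instead:

```shell
#!/bin/sh
# Sketch: compare this machine's cores and RAM against the Edge Node minimums.
min_cores=2
min_ram_gb=6

cores=$(nproc)                                                        # logical cores
ram_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)  # total RAM in GB, rounded down

if [ "$cores" -ge "$min_cores" ] && [ "$ram_gb" -ge "$min_ram_gb" ]; then
  echo "meets minimums (${cores} cores, ${ram_gb}GB RAM)"
else
  echo "below minimums (${cores} cores, ${ram_gb}GB RAM)"
fi
```

Storage can be checked the same way with `df -h` against the 20GB requirement.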

C. Deploy your node(s)

D. Check node identity and fine-tune your config if you want

C.1. Single node deployment

Easy peasy: run this command and you’re off:

docker run -d --rm --name r1node --pull=always -v r1vol:/edge_node/_local_cache/ ratio1/edge_node:testnet

Key Flags

  • -d: Runs the container in the background.

  • --rm: Removes the container when you stop it.

  • --pull=always: Fetches the latest image version on each run.

  • -v r1vol:/edge_node/_local_cache/: Mounts a persistent volume so your node data isn’t lost between restarts.

The image tag specifies the version of the image and sets the network (Mainnet or Testnet):

  • :develop and :testnet start a node on Testnet

  • :latest and :mainnet start a node on Mainnet
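That mapping can be captured in a tiny helper, shown here as a sketch (the `tag_for` function is hypothetical; the tag names come from the list above):

```shell
# Map a network name to one of its Edge Node image tags.
tag_for() {
  case "$1" in
    testnet) echo "testnet" ;;   # :develop also targets Testnet
    mainnet) echo "mainnet" ;;   # :latest also targets Mainnet
    *) echo "unknown network: $1" >&2; return 1 ;;
  esac
}

echo "ratio1/edge_node:$(tag_for testnet)"   # → ratio1/edge_node:testnet
```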

Pro Tip: If your system runs on ARM and you experience compatibility issues with Docker images, use --platform linux/amd64 to ensure you're running the correct x86_64 architecture.

docker run -d --rm --name r1node --platform linux/amd64 --pull=always -v r1vol:/edge_node/_local_cache/ ratio1/edge_node:testnet

Add --gpus all so your node can handle training and inference jobs requiring GPU acceleration.

docker run -d --rm --name r1node --gpus all --pull=always -v r1vol:/edge_node/_local_cache/ ratio1/edge_node:testnet

C.2. Multiple node deployment

Need more than one node on the same machine? Tweak the container names and volumes to avoid conflicts:

docker run -d --rm --name r1node1 --pull=always -v r1vol1:/edge_node/_local_cache/ ratio1/edge_node:testnet
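For more than a couple of nodes, a small loop keeps the names and volumes consistent. This sketch only prints the commands so you can review them before running anything:

```shell
# Print one docker run command per node; unique names and volumes avoid conflicts.
for i in 1 2 3; do
  echo "docker run -d --rm --name r1node${i} --pull=always -v r1vol${i}:/edge_node/_local_cache/ ratio1/edge_node:testnet"
done
```

Pipe the output to `sh` only once you're happy with it.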

C.3. Going compose-style

For streamlined scaling and management - especially if you’re running multiple nodes - docker-compose can be your best friend. Here’s a quick snippet you can drop into a docker-compose.yml file:

services:
  r1node1:
    image: ratio1/edge_node:develop
    container_name: r1node1
    platform: linux/amd64
    restart: always
    volumes:
      - r1vol1:/edge_node/_local_cache
    labels:
      - "com.centurylinklabs.watchtower.enable=true"         
      - "com.centurylinklabs.watchtower.stop-signal=SIGINT"          
  r1node2:
    image: ratio1/edge_node:develop
    container_name: r1node2
    platform: linux/amd64
    restart: always
    volumes:
      - r1vol2:/edge_node/_local_cache
    labels:
      - "com.centurylinklabs.watchtower.enable=true"         
      - "com.centurylinklabs.watchtower.stop-signal=SIGINT"          
volumes:
  r1vol1:
  r1vol2:

Just run:

docker-compose up -d

…and you’ll have multiple nodes running in no time. (The watchtower labels above prepare the containers for auto-updates; the Watchtower agent itself is added in the Pro Tips below.)

Please note that running multiple nodes on the same box is not advised for machines that lack multiple GPUs and plenty of RAM.

If you want to always pull the latest image, add the --pull=always flag to the docker-compose up command:

docker-compose up -d --pull=always

You can stop the nodes by running in the same folder:

docker-compose down

Pro Tips:

  • If you want to enable automatic updates for your edge nodes, add Watchtower to your docker-compose.yml file.

Note: Watchtower does not fully function on ARM systems. We are working on a solution and will update both the documentation and this article accordingly.

services:
  r1node1:
    image: ratio1/edge_node:develop
    container_name: r1node1
    platform: linux/amd64
    restart: always
    volumes:
      - r1vol1:/edge_node/_local_cache
    labels:
      - "com.centurylinklabs.watchtower.enable=true"         
      - "com.centurylinklabs.watchtower.stop-signal=SIGINT"          
  r1node2:
    image: ratio1/edge_node:develop
    container_name: r1node2
    platform: linux/amd64
    restart: always
    volumes:
      - r1vol2:/edge_node/_local_cache
    labels:
      - "com.centurylinklabs.watchtower.enable=true"         
      - "com.centurylinklabs.watchtower.stop-signal=SIGINT"          
  watchtower:
    image: containrrr/watchtower
    platform: linux/amd64
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_CLEANUP=true
      - WATCHTOWER_POLL_INTERVAL=60
      - WATCHTOWER_CHECK_NEW_IMAGES=true
      - WATCHTOWER_LABEL_ENABLE=true 
volumes:
  r1vol1:
  r1vol2:

  • If you want your node to run on Mainnet, use the :mainnet or :latest tags; if you want it to run on Testnet, use :testnet or :develop.

services:
  r1node1:
    image: ratio1/edge_node:mainnet
    container_name: r1node1
    platform: linux/amd64
    restart: always
    volumes:
      - r1vol1:/edge_node/_local_cache
    labels:
      - "com.centurylinklabs.watchtower.enable=true"         
      - "com.centurylinklabs.watchtower.stop-signal=SIGINT"          
  r1node2:
    image: ratio1/edge_node:testnet
    container_name: r1node2
    platform: linux/amd64
    restart: always
    volumes:
      - r1vol2:/edge_node/_local_cache
    labels:
      - "com.centurylinklabs.watchtower.enable=true"         
      - "com.centurylinklabs.watchtower.stop-signal=SIGINT"          
  watchtower:
    image: containrrr/watchtower
    platform: linux/amd64
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_CLEANUP=true
      - WATCHTOWER_POLL_INTERVAL=60
      - WATCHTOWER_CHECK_NEW_IMAGES=true
      - WATCHTOWER_LABEL_ENABLE=true 
volumes:
  r1vol1:
  r1vol2:

  • If your node supports training and inference jobs requiring GPU acceleration and you want to allocate GPU resources, specify the deploy section in your docker-compose.yml file.

services:
  r1node1:
    image: ratio1/edge_node:develop
    container_name: r1node1
    platform: linux/amd64
    restart: always
    volumes:
      - r1vol1:/edge_node/_local_cache
    labels:
      - "com.centurylinklabs.watchtower.enable=true"         
      - "com.centurylinklabs.watchtower.stop-signal=SIGINT" 
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all   # Change to a specific number (e.g., `1`) if needed
              capabilities: [ gpu ]
  r1node2:
    image: ratio1/edge_node:develop
    container_name: r1node2
    platform: linux/amd64
    restart: always
    volumes:
      - r1vol2:/edge_node/_local_cache
    labels:
      - "com.centurylinklabs.watchtower.enable=true"         
      - "com.centurylinklabs.watchtower.stop-signal=SIGINT"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all   # Change to a specific number (e.g., `1`) if needed
              capabilities: [ gpu ]
volumes:
  r1vol1:
  r1vol2:
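Before relying on the GPU reservation above, it's worth confirming that the host actually exposes an NVIDIA driver. A minimal sketch (Docker additionally needs the NVIDIA container toolkit installed for `driver: nvidia` to work):

```shell
# Check whether the NVIDIA driver (via its nvidia-smi CLI) is visible on the host.
if command -v nvidia-smi >/dev/null 2>&1; then
  echo "NVIDIA driver present"
else
  echo "no NVIDIA driver found - GPU reservations will fail"
fi
```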

D. Check node identity and fine-tune your config if you want

Once your node(s) are running, easily inspect their status:

docker exec r1node get_node_info

You’ll see a JSON payload with details such as the node’s self-assigned protocol address and EVM (as in “ETH”) address, alias, version, and more. Example:

{
  "address": "0xai_A2pPf0lxZSZkGONzLOmhzndncc1VvDBHfF-YLWlsrG9m",
  "alias": "5ac5438a2775",
  "eth_address": "0xc440cdD0BBdDb5a271de07d3378E31Cb8D9727A5",
  "version_long": "v2.5.36 | core v7.4.23 | SDK 2.6.15",
  ...
}

  • address: used for inter-node communication

  • eth_address: used for license linking
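If you want to script against this payload, the fields can be pulled out with standard tools. A sketch using the sample JSON above; in practice you'd pipe `docker exec r1node get_node_info` instead of the hard-coded string:

```shell
# Extract eth_address from a get_node_info-style JSON payload (no jq required).
info='{"address": "0xai_A2pPf0lxZSZkGONzLOmhzndncc1VvDBHfF-YLWlsrG9m", "eth_address": "0xc440cdD0BBdDb5a271de07d3378E31Cb8D9727A5"}'
eth=$(printf '%s' "$info" | sed -n 's/.*"eth_address": *"\([^"]*\)".*/\1/p')
echo "$eth"   # → 0xc440cdD0BBdDb5a271de07d3378E31Cb8D9727A5
```

If jq is available, `docker exec r1node get_node_info | jq -r .eth_address` is the more robust option.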

Note: The node will shut down after 15 minutes unless an ND (License) is linked to it.

Testnet ND (License) Linking Tutorial

If you want to test ND (License) linking to your node, here is a Testnet tutorial that guides you through the entire process, allowing you to practice before moving to Mainnet:

https://ratio1.ai/blog/from-zero-to-node-runner-testnet

Mainnet ND (License) Linking Tutorial

If you’re ready to learn the complete ND (License) linking process on Mainnet, this tutorial will teach you how to link a license to your node on Mainnet:

https://ratio1.ai/blog/from-zero-to-node-runner-mainnet

Allowing specific addresses

If you want to trust a particular SDK client (did we mention that each SDK client has its own internal and EVM address?) or another node to send you computation tasks, simply whitelist its address:

docker exec r1node add_allowed <address> [<alias>]

Example:

docker exec r1node add_allowed 0xai_AthDPWc_k3BKJLLYTQMw--Rjhe3B6_7w76jlRpT6nDeX some-node-alias

Performance metrics

Want a peek at CPU usage, GPU memory, or overall load over time?

docker exec r1node get_node_history

This returns a JSON snapshot of what your node’s been up to - perfect for diagnosing performance bottlenecks or seeing if you’re hitting your resource limits.

Reset node keys and alias

If you ever need to reset your node’s identity:

docker exec r1node reset_node_keys
docker restart r1node

Need to change the alias?

docker exec r1node change_alias <new_alias>
docker restart r1node

Stop & remove

When you’re ready to take a break:

docker stop r1node

This gracefully shuts down the container and (thanks to the --rm flag) removes it from your system.

Why simplicity matters

The continuous pursuit of simplification and optimization might seem like a never-ending quest - but that’s the point. Our dedication to the kaizen and pomodoro spirit means we’ll always look for ways to reduce friction, whether that’s consolidating Docker images under one repository or bundling multi-node setups into a single docker-compose file. Each improvement - big or small - pushes us a step closer to a seamlessly decentralized AI network for everyone.

Key Philosophy:

  • Refactor, simplify, refine: Our code and processes never "arrive" at perfection.

  • Learn, adapt, innovate: We thrive on incremental gains - alongside the occasional massive leap.

  • Stay unstoppable: Even if the path seems long, we never confuse slow, steady progress with a lack of victory.

Conclusion: never-settling

From single-container setups to multi-node heterogeneous cluster extravaganzas, Ratio1 has been evolving for 3+ years, constantly becoming more accessible, more robust, and more user-friendly. Our Edge Node improvements speak to our core mantra: never settle. Whether you’re adopting a new approach to your workflow, reevaluating your project timeline and running costs, or streamlining AI deployment, the cycle of build, simplify, optimize, innovate, repeat is what propels us forward.

So go ahead - give these new features and Docker commands a spin, get your endpoints served via ngrok, train your PyTorch models, serve your TensorRT binaries, and write your complex neuro-symbolic rulesets. Let us know how they make your life simpler and give life to your products, and keep your eyes peeled for even more improvements on the horizon!

After all, nothing stokes creativity like the freedom to experiment without the anchor of unnecessary complexity.

So, race on, build on, and never settle. 

This is the Ratio1 way.

For more detailed documentation or community-driven support, check out our GitHub and join the conversation in our forums and social channels.

Andrei Ionut Damian


Jan 28, 2025

The Ultimate AI OS Powered by Blockchain Technology

©Ratio1 2025. All rights reserved.
