Introducing Multi Node Launcher (R1setup) - GPU Deployment at Scale Made Simple
Education
Deploying GPU-enabled nodes across multiple machines can be a complex and time-consuming task. From installing NVIDIA drivers and Docker on each server to ensuring consistent configurations, scaling up a GPU cluster often demands significant effort and expertise. R1setup (Multi Node Launcher) is a new tool designed to eliminate these hurdles. It streamlines multi-node GPU deployment with an easy-to-use CLI, so anyone can set up a fleet of GPU nodes using just an IP address and SSH access.
R1setup is a free, open-source command-line tool by Ratio1 that automates the provisioning of GPU nodes. With simple interactive prompts, it installs all necessary components (Docker, NVIDIA drivers, and more) on any number of remote machines. In this article, we'll introduce R1setup and walk through how to use it - from one-line installation to a fully configured multi-node deployment - all in a matter of minutes.
Key features of R1setup include:
One-line installer: Set up the R1setup CLI environment with a single shell command.
Interactive configuration: Define your cluster setup (node addresses, credentials, network type) through guided prompts.
Automated GPU setup: Install Docker, NVIDIA drivers, and configure GPU container support on each node automatically.
Scalability: Deploy to one or dozens of nodes in one go - perfect for bootstrapping GPU clusters or Ratio1 edge node networks.
Step 1: Installing R1setup with a One-Line Command
R1setup can be installed on your control machine (e.g., your PC or a master server) using a one-liner script provided in the GitHub repository. Simply run the installation command from the mnl_factory setup instructions on GitHub (which downloads the tool and its dependencies). This will set up R1setup in a Python virtual environment and make the r1setup command available system-wide.
For example, to install R1setup you might run:
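The exact one-liner lives in the repository's mnl_factory setup instructions and may change between releases, so treat the command below as an illustrative sketch only - the script URL here is a hypothetical placeholder, not the canonical path:

    # Illustrative only: download and run the R1setup installer in one step.
    # Copy the current one-liner from the mnl_factory setup instructions before running.
    curl -fsSL https://raw.githubusercontent.com/Ratio1/multi-node-launcher/main/mnl_factory/setup.sh | bash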
The script installs the prerequisites (such as Python and Ansible) and then sets up the Multi Node Launcher Ansible collection. Once it completes, you should see a message confirming that installation is done and that the r1setup command is ready to use.

The one-line installation script outputs progress and confirms successful setup of the R1setup CLI. In the example above, the installer updates package repositories, sets up a Python virtual environment, and installs the required Ansible collection. The output concludes with an "Installation complete!" message, indicating you can now run the r1setup command to begin configuring your nodes.

Step 2: Running R1setup and Creating a Configuration
After installing, you're ready to launch the tool. Running r1setup for the first time will bring up an interactive prompt to create a new configuration. A configuration in R1setup is a file that stores your deployment settings: which nodes (machines) you want to set up, how to access them, and which network environment they will join.

When you execute r1setup, it detects that you have no configuration yet and asks to create one. Just follow the prompts:
Name your configuration: Give it a descriptive name (e.g., "production-cluster", "gpu-farm-1"). This helps identify different clusters or environments later.
Choose network environment: Select whether these nodes will join mainnet, testnet, or devnet. (For Ratio1 edge nodes, this corresponds to the blockchain network; in our example we pick option 1 for mainnet.)
Number of GPU nodes: Enter how many nodes you want to configure in this batch. You can start with 1, or specify multiple if you plan to deploy to several servers at once.
Enter node details: For each node, R1setup will prompt for a node alias (a friendly name), the host address (IP or hostname), SSH username, and authentication method (password or SSH key). Provide the SSH password or key as prompted so that R1setup can connect to the machine. You can also specify a sudo password if it's different from the SSH password.

Creating a new configuration in R1setup is straightforward. In the screenshot above, we start a configuration named "my-super-configuration" and choose the mainnet network. We then configure 1 GPU node, providing its alias ("smart-node"), IP address (192.168.0.110), SSH username, and password. R1setup summarizes the info for confirmation (host, user, auth method) and saves the configuration file. The configuration is now active and ready for deployment.
Notably, you can add as many nodes as you need in a single configuration. For instance, if you input "5" for the number of nodes, the tool will repeat the node detail prompts five times (one for each server). This makes it easy to set up a whole cluster in one go. (You can always manage configurations later to add or remove nodes, as we'll see.)
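If you prefer key-based authentication over passwords, it helps to have your public key on each node before you reach those prompts. A minimal sketch using standard OpenSSH tools - the username below is a placeholder and the IP is the example node from above:

    # Generate a key pair if you don't already have one
    ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
    # Install the public key on the node (placeholder username, example IP)
    ssh-copy-id -i ~/.ssh/id_ed25519.pub youruser@192.168.0.110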

Step 3: Navigating the R1setup Main Menu
Once a configuration is created, R1setup drops you into its main menu interface. This menu is a central hub that shows your current configuration status and available actions.

The R1setup main menu interface after creating a configuration (above). It displays the active configuration name, network (mainnet in this case), and status indicators for configuration and deployment. Since we have not deployed yet, "Deployment" is marked with an ✕ ("Never deployed"). The menu presents various options: for example, (1) to configure nodes (add/edit) or (5) to deploy the full setup. Other options allow viewing the configuration, testing node connectivity, deploying only Docker, removing a node deployment, updating the tool, and more. This intuitive menu makes it easy to perform any action with a numeric selection.
At this point, you can review your settings or even add more nodes:
View configuration: Option 3 will display the details of the currently active configuration (useful to double-check IPs and settings).
Test node connectivity: Option 4 lets you quickly verify that SSH access to each configured node is working before deployment (a manual equivalent is sketched after this list).
Add or edit nodes: By selecting option 1 (Configure nodes), you can add additional nodes to this configuration or update existing ones. This is handy for scaling up later: you don't need to start over, just add the new node details and save.
Switch configurations: If you have multiple saved configurations (for different clusters or environments), option 2 allows you to manage and switch between them.
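If you want to sanity-check access outside the tool as well, the check that option 4 automates can be approximated with plain ssh - a minimal sketch using the example node and a placeholder username:

    # Verify the node is reachable and SSH login works (5-second timeout)
    ssh -o ConnectTimeout=5 youruser@192.168.0.110 'echo SSH OK && hostname'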
For now, we'll proceed with the full deployment on our configured node(s).
Step 4: Deploying Docker and NVIDIA Drivers (Full Deployment)
The real power of R1setup is in its automated deployment. With our nodes configured, we can deploy all necessary software to each machine by simply selecting the "Deploy full setup" option from the menu (option 5). This triggers the installation of:
Docker Engine & Docker Compose: for container management.
NVIDIA GPU drivers and CUDA toolkit: enabling GPU acceleration.
NVIDIA Container Toolkit: so Docker containers can access the GPUs.
Ratio1 edge node service (if applicable): to run the node software on the machine once the environment is prepared.
When you choose the full deployment, R1setup will first show a summary of what it's about to do and which nodes it will target.

Before proceeding, R1setup provides a deployment summary. As shown above, the tool confirms the action ("Docker + NVIDIA drivers + GPU setup"), the chosen network (mainnet), and lists the target node(s) with their alias and SSH info. In our case, it's deploying to 1 node (smart-node at 192.168.0.110). The summary outlines that Docker, NVIDIA drivers, and GPU configuration will be installed, and the Ratio1 edge node service will be deployed. This gives you a final chance to review and ensure everything is correct. When you confirm by typing 'y', R1setup will begin the deployment process automatically.
After confirmation, R1setup uses Ansible under the hood to connect to each node and run all the required setup tasks. You will see a series of status messages as it installs packages and configures the system on the remote machine(s). This may take several minutes per node, depending on what needs to be installed (NVIDIA drivers can be sizable). R1setup handles each step, so you don't have to manually log into any server or run any commands yourself.
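For a sense of what that automation replaces, the sketch below shows the kind of commands you would otherwise run by hand on every Ubuntu node. It is a simplified illustration, not the actual playbook R1setup executes, and it assumes NVIDIA's apt repository is already configured:

    # Docker Engine via Docker's official convenience script
    curl -fsSL https://get.docker.com | sh
    # NVIDIA driver (lets Ubuntu pick a suitable driver version)
    sudo ubuntu-drivers autoinstall
    # NVIDIA Container Toolkit so containers can reach the GPUs
    sudo apt-get install -y nvidia-container-toolkit
    sudo nvidia-ctk runtime configure --runtime=docker
    sudo systemctl restart docker

Multiply that by every node in the cluster, plus reboots and version pinning, and the appeal of a single menu option becomes obvious.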
Step 5: Deployment Complete - GPU Nodes Ready to Go
Once the deployment finishes, R1setup will indicate that the process is done and whether it succeeded on each node. It provides a detailed log of tasks performed and highlights any issues (failed tasks will be reported, but in a successful run you should see all tasks completed with "ok" or "changed" statuses).

A successful deployment output from R1setup. In this example, all tasks have completed (ok=69 tasks with 0 failures) for the node smart-node. The final lines show a "Full Deployment completed successfully!" message with a green checkmark. The active configuration is updated to record that a deployment has been run. The remote machine now has Docker and the NVIDIA stack installed, and the Ratio1 edge node container is running (with details like node address and version info shown above).
At this stage, your GPU node is fully configured and live. If this was a Ratio1 edge node deployment, the node would now be active on the specified network (mainnet in our case) and ready to contribute computing power. For general use, the machine is prepared to run GPU-accelerated containers immediately. R1setup has essentially condensed dozens of manual setup steps into one automated run - what might have taken hours per machine is now handled for you in minutes.
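A quick way to confirm the GPU stack on a freshly deployed node is to check the driver and run a throwaway CUDA container - the image tag below is only an example; any recent nvidia/cuda tag will do:

    # Driver check: should list the GPU(s) and driver/CUDA versions
    nvidia-smi
    # Container check: the same output from inside a GPU-enabled container
    docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi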
You can also extend your deployment at any time. For example, if you add new servers to the configuration and run "Deploy full setup" again, R1setup will automatically install the required software on those new machines. It's truly a rinse-and-repeat process that scales with your needs, whether you're adding one node or twenty.
Why It Matters
Managing a GPU cluster or deploying blockchain-based AI nodes doesn't have to be a headache. R1setup simplifies the process to the point where anyone with basic SSH access can get a multi-node GPU environment up and running. This is especially useful for scalability - whether you're an individual setting up a few machines or an organization provisioning dozens of nodes, the same tool and process apply. No special DevOps knowledge is required; R1setup abstracts away the complexity of driver installations, Docker setup, and configuration management.
Another important aspect is that R1setup is free and open source. The project is available on GitHub, which means you can inspect the code, suggest improvements, or even tailor it to your own needs. Being open source also ensures transparency in what the tool is doing on your machines.
For the Ratio1 community, R1setup provides a fast path to becoming a node operator. If you want to join the Ratio1 network by running edge nodes, this launcher automates the entire bootstrap. And even beyond Ratio1, the ability to quickly equip servers with Docker and NVIDIA drivers has broad utility in AI and GPU computing projects.
Get Started Now: If you’re ready to try R1setup, head over to the Multi Node Launcher repository for the latest instructions. The one-liner installation script in the mnl_factory directory makes setup a breeze. In just a few minutes, you can go from zero to a full GPU cluster - no more wrestling with individual server configs. Give R1setup a try and simplify your GPU deployments today!

Vitalii Toderian
May 21, 2025