Free Tool

Can Your GPU Rig Earn on RunPod?

Find out in minutes. We check your GPU tier, network quality, NVMe storage, uptime commitment, and container stack against RunPod’s provider requirements.

Option 1

Browser Wizard

Fill in your specs step by step. We’ll auto-detect what we can from your browser. Best for desktop users.

Option 2

Command Line / API

One command auto-detects everything directly on your server. Best for headless Linux machines with no GUI.

What we evaluate
🎮 GPU Tier & VRAM
📡 Bandwidth & Static IP
💾 NVMe Storage
🕑 24/7 Uptime
🐳 Docker + NVIDIA Toolkit
💻 CPU & RAM
← Back to options

Command Line Check

Auto-Detect (Recommended)

Run this on your Linux server. The script auto-detects your GPU, NVMe storage, CPU/RAM, OS, NVIDIA Container Toolkit, and more, then shows your RunPod compatibility score.

curl -sSL https://www.righost.com/check-runpod.sh | bash

⚠ Always review scripts before piping to bash. View source: righost.com/check-runpod.sh

Rate limited to 30 checks per hour per IP address. The script calls /api/runpod-check.php once per run.
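The check script's internals aren't reproduced here, but the kind of local detection it performs can be sketched in a few lines of standard-library Python. The field names (cpu_cores, nvme_gb, docker_nvidia, gpu_model) mirror the API parameters below; the detection logic itself is illustrative, not the script's actual implementation.

```python
import os
import shutil
import subprocess

def detect_specs(storage_path="/"):
    """Illustrative sketch: gather a few of the signals the
    check script looks at. The real script detects more."""
    specs = {
        "cpu_cores": os.cpu_count(),  # logical cores
        # free space in GB on the given mount (does not verify NVMe vs SATA)
        "nvme_gb": shutil.disk_usage(storage_path).free / 1e9,
        # presence of the docker binary only; the real toolkit
        # check (nvidia-container-toolkit) is more involved
        "docker_nvidia": shutil.which("docker") is not None,
    }
    # GPU model via nvidia-smi, if the NVIDIA driver is installed
    if shutil.which("nvidia-smi"):
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total",
             "--format=csv,noheader"],
            capture_output=True, text=True,
        )
        if out.returncode == 0 and out.stdout.strip():
            specs["gpu_model"] = out.stdout.strip().splitlines()[0]
    return specs
```

On a machine without an NVIDIA driver, `gpu_model` is simply absent, which is why the manual API call below exists as a fallback.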
Manual API Call

If you prefer to skip the auto-detect script, POST JSON directly to the endpoint. The response is also JSON.

curl -X POST https://www.righost.com/api/runpod-check.php \
  -H "Content-Type: application/json" \
  -d '{"gpu_model":"RTX 4090","gpu_count":1,"vram_gb":24,"bandwidth_mbps":1000,"static_ip":true,"nvme_gb":2000,"uptime_tier":"24_7","docker_nvidia":true,"cpu_cores":16,"ram_gb":64}'
API parameter reference
Field           Type    Example   Notes
gpu_model       string  RTX 4090  Data-center > high-end consumer
gpu_count       int     1         Number of GPUs
vram_gb         int     24        VRAM per GPU
bandwidth_mbps  float   1000      Symmetric Mbps
static_ip       bool    true      Reachable IPv4, open ports
nvme_gb         float   2000      NVMe free space in GB
uptime_tier     string  24_7      24_7 / occasional
docker_nvidia   bool    true      Docker + NVIDIA Container Toolkit
cpu_cores       int     16        Logical CPU cores
ram_gb          float   64        Total system RAM
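Before POSTing, it can help to sanity-check the payload against the field types in the table above. The sketch below does that in plain Python; the field names and types come from the table, while the validator itself is this example's own construction, not part of the API.

```python
# Expected types per the API parameter table; "float" fields
# also accept ints (e.g. bandwidth_mbps of 1000).
EXPECTED_TYPES = {
    "gpu_model": str, "gpu_count": int, "vram_gb": int,
    "bandwidth_mbps": (int, float), "static_ip": bool,
    "nvme_gb": (int, float), "uptime_tier": str,
    "docker_nvidia": bool, "cpu_cores": int, "ram_gb": (int, float),
}

def validate_payload(payload: dict) -> dict:
    """Raise if a field is missing or has the wrong type."""
    missing = set(EXPECTED_TYPES) - set(payload)
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    for field, expected in EXPECTED_TYPES.items():
        value = payload[field]
        # bool is a subclass of int in Python, so reject it for int fields
        if expected is int and isinstance(value, bool):
            raise TypeError(f"{field}: expected int, got bool")
        if not isinstance(value, expected):
            raise TypeError(f"{field}: expected {expected}, "
                            f"got {type(value).__name__}")
    if payload["uptime_tier"] not in ("24_7", "occasional"):
        raise ValueError("uptime_tier must be '24_7' or 'occasional'")
    return payload
```

Pass the validated dict through `json.dumps` and POST it with any HTTP client, mirroring the curl command above.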
← Back to options
Step 1: Scan → Step 2: Details → Step 3: Results

Scan Your Browser

We’ll read hardware signals your browser exposes. No data is sent anywhere.

Click below to scan your browser’s hardware signals. Takes less than a second.

Tell Us the Rest

We’ve pre-filled what we could. Complete the remaining fields for an accurate RunPod score.

RunPod prefers data-center GPUs (A100, H100, L40S, A6000) and high-end consumer (RTX 4090).
▶ Run a speed test →
RunPod providers must expose reachable endpoints.
NVMe preferred; SATA SSD scores lower.
RunPod expects dedicated, always-on providers.
nvidia-container-toolkit is required.

Your Results

Here’s how your system stacks up against RunPod’s provider requirements.

out of 100 points

Category Breakdown

Ready to apply as a RunPod provider?

RunPod is an application-based program (unlike Vast.ai’s open marketplace). Providers are reviewed before being accepted.

Apply to Host on RunPod →

Link goes to RunPod account settings; the official provider application URL may differ, so verify before submitting.


Why RunPod requirements differ from Vast.ai

RunPod runs a more curated provider program than Vast.ai’s open marketplace. Workloads skew toward enterprise AI/ML inference and training, so RunPod weights modern data-center GPUs (A100, H100, L40S, A6000) and high-end consumer GPUs (RTX 4090) more heavily, and requires reliable network reachability (static IPv4, open ports, symmetric bandwidth) plus NVMe storage and 24/7 uptime commitments. Providers apply and are reviewed — acceptance isn’t automatic. If you’re an occasional home-rig operator, Vast.ai is usually the better fit; if you have a dedicated rig with enterprise-grade networking, RunPod may offer better rates for suitable hardware. Start the application process on the RunPod console.