Can Your GPU Rig Earn on RunPod?
Find out in minutes. We check your GPU tier, network quality, NVMe storage, uptime commitment, and container stack against RunPod’s provider requirements.
Browser Wizard
Fill in your specs step by step. We’ll auto-detect what we can from your browser. Best for desktop users.
Command Line / API
One command auto-detects everything directly on your server. Best for headless Linux machines with no GUI.
Command Line Check
Run this on your Linux server. The script auto-detects your GPU, NVMe storage, CPU/RAM, OS, NVIDIA Container Toolkit, and more, then shows your RunPod compatibility score.
curl -sSL https://www.righost.com/check-runpod.sh | bash
⚠ Always review scripts before piping to bash. View source: righost.com/check-runpod.sh
Prefer to skip the auto-detect script? POST JSON directly to /api/runpod-check.php, the same endpoint the script calls once per run. The response is JSON.
curl -X POST https://www.righost.com/api/runpod-check.php \
-H "Content-Type: application/json" \
-d '{"gpu_model":"RTX 4090","gpu_count":1,"vram_gb":24,"bandwidth_mbps":1000,"static_ip":true,"nvme_gb":2000,"uptime_tier":"24_7","docker_nvidia":true,"cpu_cores":16,"ram_gb":64}'
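The same request can be made from Python with only the standard library. A minimal sketch, assuming the endpoint and field names documented on this page; `build_payload` and `check_runpod` are illustrative names, not part of any official client:

```python
import json
from urllib import request

API_URL = "https://www.righost.com/api/runpod-check.php"  # endpoint documented above

def build_payload(**specs) -> str:
    """Serialize rig specs into the JSON body the endpoint expects."""
    return json.dumps(specs)

def check_runpod(payload: str) -> dict:
    """POST the payload and return the parsed JSON response."""
    req = request.Request(
        API_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

payload = build_payload(
    gpu_model="RTX 4090", gpu_count=1, vram_gb=24, bandwidth_mbps=1000,
    static_ip=True, nvme_gb=2000, uptime_tier="24_7", docker_nvidia=True,
    cpu_cores=16, ram_gb=64,
)
print(payload)
# result = check_runpod(payload)  # uncomment to actually POST; requires network access
```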
API parameter reference
| Field | Type | Example | Notes |
|---|---|---|---|
| gpu_model | string | RTX 4090 | Data-center models score above high-end consumer cards |
| gpu_count | int | 1 | Number of GPUs |
| vram_gb | int | 24 | VRAM per GPU |
| bandwidth_mbps | float | 1000 | Symmetric Mbps |
| static_ip | bool | true | Reachable IPv4, open ports |
| nvme_gb | float | 2000 | NVMe free space in GB |
| uptime_tier | string | 24_7 | 24_7 / occasional |
| docker_nvidia | bool | true | Docker + NVIDIA Container Toolkit |
| cpu_cores | int | 16 | Logical CPU cores |
| ram_gb | float | 64 | Total system RAM |
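The table above can also be mirrored client-side to catch a malformed payload before sending it. A hypothetical validator, assuming the types and allowed values listed in the table (the `EXPECTED` map and `validate` function are illustrative, not provided by the API):

```python
# Expected type per field, mirroring the API parameter reference above.
EXPECTED = {
    "gpu_model": str,
    "gpu_count": int,
    "vram_gb": int,
    "bandwidth_mbps": (int, float),
    "static_ip": bool,
    "nvme_gb": (int, float),
    "uptime_tier": str,
    "docker_nvidia": bool,
    "cpu_cores": int,
    "ram_gb": (int, float),
}

def validate(payload: dict) -> list:
    """Return a list of problems; an empty list means the payload matches the table."""
    problems = []
    for field, expected_type in EXPECTED.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            problems.append(f"bad type for {field}")
    if payload.get("uptime_tier") not in ("24_7", "occasional"):
        problems.append("uptime_tier must be '24_7' or 'occasional'")
    return problems

good = {"gpu_model": "RTX 4090", "gpu_count": 1, "vram_gb": 24,
        "bandwidth_mbps": 1000, "static_ip": True, "nvme_gb": 2000,
        "uptime_tier": "24_7", "docker_nvidia": True,
        "cpu_cores": 16, "ram_gb": 64}
print(validate(good))  # prints [] for a well-formed payload
```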
Scan Your Browser
We’ll read hardware signals your browser exposes. No data is sent anywhere.
Click below to scan your browser’s hardware signals. Takes less than a second.
Tell Us the Rest
We’ve pre-filled what we could. Complete the remaining fields for an accurate RunPod score.
Your Results
Here’s how your system stacks up against RunPod’s provider requirements.
Category Breakdown
Ready to apply as a RunPod provider?
RunPod is an application-based program (unlike Vast.ai’s open marketplace). Providers are reviewed before being accepted.
Apply to Host on RunPod →
Note: this link goes to RunPod account settings; the official provider application URL may differ, so verify before submitting.
Why RunPod requirements differ from Vast.ai
RunPod runs a more curated provider program than Vast.ai’s open marketplace. Workloads skew toward enterprise AI/ML inference and training, so RunPod weights modern data-center GPUs (A100, H100, L40S, A6000) and high-end consumer GPUs (RTX 4090) more heavily, and requires reliable network reachability (static IPv4, open ports, symmetric bandwidth) plus NVMe storage and 24/7 uptime commitments. Providers apply and are reviewed — acceptance isn’t automatic. If you’re an occasional home-rig operator, Vast.ai is usually the better fit; if you have a dedicated rig with enterprise-grade networking, RunPod may offer better rates for suitable hardware. Start the application process on the RunPod console.