How Vast.ai Hosting Works
Vast.ai is a two-sided marketplace for GPU compute. On one side are renters: machine-learning researchers, inference services, 3D render farms, crypto-adjacent compute jobs, and hobbyists training models on a weekend budget. On the other side are hosts — people with one or more GPU-equipped Linux boxes who are willing to lease that hardware by the hour.
If you own a gaming PC with a reasonably modern NVIDIA card, a stable internet connection, and the patience to learn some Docker, you are close to being the kind of machine Vast.ai's platform is designed around. Whether that rig will actually earn meaningful income is a separate question, and one we cover in the profitability article. This piece is strictly about mechanics: what you sign up for, what runs on your box, and how payouts are structured.
Who rents your GPU
A renter on Vast.ai lands on a search page, filters for a GPU type and specs, and picks an instance from the listings. Your machine shows up in that search if your host software is online, the GPU is idle, and your asking price is competitive for its tier.
Most renter workloads fall into three buckets:
- Training and fine-tuning. Typically jobs that run for several hours to several days. These renters care about VRAM and interconnect speed. A 24 GB consumer card like the RTX 3090 or 4090 is the sweet spot for smaller LLMs and diffusion models.
- Inference services. Longer-lived containers that serve model predictions over HTTP. Uptime matters more than raw speed.
- Short-burst jobs. Render queues, batch scoring, academic experiments. These renters are price-sensitive and will hop between hosts for small savings.
The short version: the more predictable and cheap your machine looks, the more often it gets rented.
What actually runs on your rig
Vast.ai instances are Docker containers. When a renter starts a job, the platform pulls their chosen Docker image to your machine and launches a container, mapping the GPU through with the NVIDIA Container Toolkit. You don't control what renters run, but their code is confined to the container, so it stays isolated from your host OS.
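The rough shape of that launch, sketched as plain Docker commands; the image name and command here are illustrative stand-ins, not Vast.ai's exact invocation:

```shell
# A renter's chosen image lands on your disk, then runs with the GPU
# mapped in via the NVIDIA Container Toolkit (--gpus all).
docker pull pytorch/pytorch:latest
docker run -d --gpus all --name renter-job \
  pytorch/pytorch:latest sleep infinity    # placeholder for the renter's workload
```

The container boundary is why you never see or manage the renter's software directly: it arrives, runs, and is deleted with the container.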
The standard install path, in order: NVIDIA drivers installed on the host, Docker and the NVIDIA Container Toolkit set up correctly, and Vast.ai's host agent running on a Linux host (Ubuntu 22.04 is the usual recommendation). See our Docker + NVIDIA setup guide if any of that is unfamiliar.
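A quick two-step sanity check before installing the host agent confirms the driver works on the host and that Docker can reach the GPU through the toolkit. The CUDA image tag below is just an example; any recent nvidia/cuda base tag works:

```shell
# Step 1: the host driver is alive and sees your card(s).
nvidia-smi
# Step 2: a container can see the same card(s) through the toolkit.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If step 2 fails while step 1 works, the NVIDIA Container Toolkit is the thing to fix before anything else.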
How payouts work
Vast.ai takes a cut of each rental hour; the rest is credited to your host account. Exact percentages and payout minimums change over time, so rather than quote a number here that will drift out of date, we recommend checking the current terms on the Vast.ai host dashboard and FAQ before you commit to any earnings projections.
What is worth understanding structurally:
- You set your own hourly price. Vast.ai shows you a suggested rate based on comparable machines, but the price is yours to set. Price too high and you sit idle; price too low and you under-earn on the hours you do get rented.
- Earnings accrue per second of active rental. There is no minimum rental length from a host earnings standpoint; a five-minute rental earns five minutes of credit.
- Payouts are manual withdrawals. Earnings sit in your host balance until you request a payout, subject to whatever minimums the platform sets.
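The per-second accrual is simple arithmetic. A sketch with made-up numbers; the hourly rate is a placeholder, not a current platform figure, and this is gross credit before the platform's cut:

```shell
# Hypothetical: a $0.30/hr ask, rented for five minutes (300 seconds).
awk 'BEGIN { rate = 0.30; secs = 300; printf "gross credit: $%.4f\n", rate * secs / 3600 }'
# prints: gross credit: $0.0250
```

Small rentals really do earn small amounts; utilization across many hours is what adds up.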
What a "good host" looks like
Vast.ai's internal reliability metric is one of the most important numbers attached to your listing. It reflects how often your machine is available when a renter wants it, and whether your prior rentals completed without errors. Hosts with poor reliability scores get filtered out of the default search, which kills utilization.
Three things move the reliability needle more than any others:
- Uptime. A rig that's online only evenings and weekends earns far fewer rented hours than a 24/7 rig, and the gaps also signal "unreliable" to the matching algorithm.
- Storage speed. NVMe SSDs shave minutes off every rental because the Docker image has to be pulled and unpacked before the renter can start work. A slow mechanical disk delays every job and invites complaints.
- Upload bandwidth. Renters frequently download their data into the container, do work, then upload results. A residential connection with 20 Mbps upload will visibly underperform a fiber line with 500 Mbps symmetrical.
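You can put a rough number on the storage point yourself. This is a crude write-throughput check, not a proper benchmark; run it in a directory on the disk Docker actually uses (often under /var/lib/docker):

```shell
# Writes then removes a 1 GiB test file; NVMe typically reports hundreds
# of MB/s to several GB/s, spinning disks far less.
dd if=/dev/zero of=./ddtest bs=1M count=1024 conv=fdatasync status=progress
rm ./ddtest
```

For bandwidth, any speed-test tool works; the number that matters for hosting is upload, not download.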
Friction you should expect
A few honest caveats that don't appear in marketing copy:
Docker pulls take time. Popular ML images run 10 GB and up. On a gigabit link, that's a minute or two. On a slower connection, it can be five or ten. Renters sometimes cancel while waiting.
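That timing claim is back-of-envelope bandwidth math, ignoring decompression and registry overhead:

```shell
# A 10 GB image over a 1 Gbps link: transfer time only.
awk 'BEGIN { printf "%.0f seconds\n", (10 * 8e9) / 1e9 }'
# prints: 80 seconds
```

Unpacking the image adds more on top, which is where the storage-speed point above bites.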
Unexpected GPU types show up in listings. The platform lists exactly what nvidia-smi reports on your machine, which occasionally includes OEM variants or mobile parts you didn't realize your rig was exposing.
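You can preview exactly what your listing will report before you sign up; these are standard nvidia-smi query flags:

```shell
# One line per GPU: the exact device name and VRAM that renters will see.
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
```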
Idle time is real and ongoing. Especially in slower hours (weekday daytime in certain regions) your rig may sit unrented. Your electricity meter keeps running whether or not a renter is paying. Idle-vs-active power is the gap that decides whether your rig is profitable or a slow leak.
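You can put a number on that slow leak. The figures below are placeholders; measure your own rig's idle draw (a wall-plug meter, or nvidia-smi's power.draw query for the card alone) and use your local tariff:

```shell
# Hypothetical: 60 W idle draw, 720 hours in a month, $0.15/kWh.
awk 'BEGIN { watts = 60; hours = 720; tariff = 0.15;
             printf "idle cost: $%.2f/month\n", watts * hours / 1000 * tariff }'
# prints: idle cost: $6.48/month
```

That idle cost is the floor your rented hours have to clear before the rig earns anything at all.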
Price drift. The suggested hourly rate for a given GPU changes with supply. Check your listings weekly and retune if you've fallen behind competing hosts.
Is it worth doing?
If you already own the hardware and have low electricity costs, adding Vast.ai as a side revenue stream is usually straightforward and worth trying. If you're considering buying GPUs specifically to host them, the math gets tighter — run the numbers before you spend. Our profitability piece walks through the formulas, and choosing a GPU for rental covers the hardware-tier decision.
Check your rig first
Before you sign up as a host, run your machine through the RigHost compatibility checker. It's free, no sign-up, and tells you whether your rig clears the Vast.ai minimums.
Run the Compatibility Checker →