GPU Hosting Profitability in 2026
If you have spent any time reading about Vast.ai or RunPod hosting, you have probably seen claims in the neighborhood of "make $300 a month per GPU" or "pay off your RTX 4090 in six months." These numbers are not pulled from thin air — they describe what some hosts earn, some of the time, in some locations. They are also wildly misleading if you treat them as a forecast.
This article is structured around the variables that actually determine profit, with a template you can plug your own assumptions into. We deliberately avoid quoting current hourly rates because they shift with marketplace supply and demand; the live numbers on the Vast.ai marketplace are the authoritative source.
The four levers
Every profitability calculation boils down to four numbers:
- Hourly rental rate — what the marketplace will pay for your specific GPU tier.
- Utilization rate — the fraction of hours your card is actually rented, not just available.
- Electricity cost — watts pulled, multiplied by hours, multiplied by your kWh rate.
- Platform fee — the cut Vast.ai or RunPod takes before money hits your account.
Plus depreciation on the hardware itself, and bandwidth overage charges if you are on a capped internet plan. But those four are the primary drivers.
GPU tier sets the ceiling
Hourly rates vary dramatically by GPU class. In general terms, from lowest to highest earnings potential:
| Tier | Example GPUs | What they attract |
|---|---|---|
| Entry consumer | RTX 3060, 3060 Ti | Small hobbyist jobs, limited ML usefulness |
| Mid consumer | RTX 3070 Ti, 3080, 4070 Ti | Mid-size models, inference |
| High-end consumer | RTX 3090, 4080, 4090 | LLM fine-tuning, image gen, long jobs |
| Data-center | A100, H100, L40S | Enterprise training, enterprise budgets |
VRAM is the single most important spec. A card with 24 GB of VRAM accepts workloads that a 12 GB card cannot run at all, which means the higher-VRAM card has access to a much larger pool of potential renters. This is why the RTX 3090 (24 GB) has remained commercially viable as a rental card long after its retail obsolescence, while the 3080 Ti (12 GB) has become borderline. The GPU selection article digs into this in more detail.
Utilization is the silent killer
Assume a renter pays your listed hourly rate. Your actual revenue is:

monthly_revenue = hourly_rate × hours_rented × (1 − platform_fee)

Note what that doesn't include: the hours your card sits idle. A rig that gets rented 70% of the month earns twice what one with 35% utilization earns. And utilization is shaped by things you can only partially control:
- Geography. Machines hosted in the US and EU generally see more demand than those in regions renters associate with latency or reliability concerns.
- Reliability score. Vast.ai's internal metric filters low-reliability hosts out of default search results. One bad week can hurt utilization for weeks after.
- Pricing. If your hourly rate drifts above comparable machines, you get passed over.
- GPU age and VRAM. Older cards with less VRAM simply have fewer compatible jobs.
Plan for utilization in the 30–70% range for most hobbyist rigs. Rates above that exist but require a combination of good geography, competitive pricing, solid uptime, and a GPU tier currently in high demand. Rates below that range can happen during marketplace slumps.
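To make the utilization effect concrete, here is a minimal sketch of the revenue formula above. The hourly rate and platform fee shown are placeholder assumptions, not current marketplace figures:

```python
HOURS_PER_MONTH = 730  # 8,760 hours per year / 12 months

def monthly_revenue(hourly_rate, utilization, platform_fee):
    """Revenue after the platform's cut, for a given utilization rate."""
    return hourly_rate * HOURS_PER_MONTH * utilization * (1 - platform_fee)

# Hypothetical $0.30/hr listing with a 25% platform fee:
low = monthly_revenue(0.30, 0.35, 0.25)   # rented 35% of the month
high = monthly_revenue(0.30, 0.70, 0.25)  # rented 70% of the month
print(f"35% utilization: ${low:.2f}/mo, 70% utilization: ${high:.2f}/mo")
```

Doubling utilization doubles revenue at a fixed rate, which is why reliability score and pricing matter as much as the card itself.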
The electricity math
This is the most overlooked expense, especially in regions with high residential rates. The formula:

monthly_electricity_cost = (watts_at_wall ÷ 1000) × hours_rented × kWh_rate

plus a smaller number for idle draw during the hours your card is not rented. A couple of things to notice:
- Watts under load are not the same as the TDP stamped on the box. Many modern GPUs pull above their stated TDP under sustained rental workloads. Measure with a wall-plug power meter if you want an accurate number.
- Your whole rig draws power, not just the GPU — the CPU, fans, drives, and PSU inefficiency all add up. A 350 W GPU in a rig typically pulls more like 450–550 W at the wall under load.
- Electricity rates vary enormously. A one-cent difference in your kWh rate translates to real money over thousands of hours. Check your actual utility bill for your marginal rate, not the average.
As a sanity check: a 500 W rig running at load for 500 hours a month (roughly 70% utilization) consumes 250 kWh. At US$0.15/kWh that's US$37.50/month. At US$0.30/kWh (common in parts of California and the Northeast, or in Europe) it's US$75.
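That sanity check can be reproduced directly from the formula above; a small sketch using the article's example figures:

```python
def monthly_electricity_cost(watts_at_wall, hours_under_load, kwh_rate):
    """kWh consumed at the wall, times your marginal utility rate."""
    kwh = (watts_at_wall / 1000) * hours_under_load
    return kwh * kwh_rate

# 500 W rig, rented roughly 500 hours/month (~70% utilization):
print(monthly_electricity_cost(500, 500, 0.15))  # 37.5
print(monthly_electricity_cost(500, 500, 0.30))  # 75.0
```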
Break-even template
Here is a worksheet for your own rig: plug in your own numbers, then compare the revenue line to the cost line.
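One way to lay the worksheet out as a runnable sketch — every value shown is a placeholder, not a quote of current rates; substitute your own:

```python
# --- Fill in your own numbers (placeholders shown) ---
hourly_rate   = 0.30   # $/hr, from the live marketplace for your GPU tier
utilization   = 0.40   # fraction of hours actually rented (conservative)
platform_fee  = 0.25   # the marketplace's cut (check the current fee schedule)
watts_at_wall = 500    # whole-rig draw under load, measured at the wall
kwh_rate      = 0.15   # your marginal $/kWh, from your utility bill

HOURS_PER_MONTH = 730  # 8,760 hours per year / 12 months
hours_rented = HOURS_PER_MONTH * utilization

monthly_revenue = hourly_rate * hours_rented * (1 - platform_fee)
# Idle-hour draw omitted for simplicity; it adds a small extra cost.
monthly_electricity = (watts_at_wall / 1000) * hours_rented * kwh_rate
monthly_profit = monthly_revenue - monthly_electricity

print(f"revenue ${monthly_revenue:.2f}  electricity ${monthly_electricity:.2f}  "
      f"profit ${monthly_profit:.2f}")
```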
If monthly_profit is below your hardware depreciation (the card's purchase price divided by its useful life in months), you are net-negative on a total-cost basis even if you're cash-positive on a month-to-month view.
The honest conclusion
GPU hosting can be profitable under the right conditions: modern high-VRAM NVIDIA hardware, low electricity rates, high-uptime residential or small-business connectivity, and decent marketplace luck. It is not a passive goldmine. Treat any hosting income as variable — it is more like running a small equipment-rental business than collecting dividend income.
Two decision heuristics we would stand behind:
- If you already own the card: it's almost always worth trying. Your downside is limited to your electricity bill and some setup time.
- If you're buying hardware to host: run the break-even math with conservative assumptions (40% utilization, a rate at the low end of the current range) before spending. If the numbers only work at optimistic assumptions, pass.
Check before you commit
Whether you already own a GPU or are shopping for one, the RigHost compatibility checker will tell you if your specs meet the Vast.ai host minimums before you invest time in setup.
Run the Compatibility Checker →