Choosing a GPU for Rental
If you are buying a GPU specifically to rent out, you are making a two- or three-year bet on a hardware category. Get the tier right and the card pays for itself and then some. Get it wrong and you own a brick that earns pennies per hour. This guide won't tell you exactly which model to buy — prices and availability shift too fast — but it will give you the framework for deciding.
The tier map
| Tier | Representative cards | VRAM | Primary demand |
|---|---|---|---|
| Data-center | A100, H100, H200, L40S | 40–141 GB | Enterprise training, large LLM workloads |
| High-end consumer | RTX 4090, RTX 3090, RTX 4080, RTX 5090 | 16–32 GB | LLM fine-tune, image gen, long-running jobs |
| Mid-range consumer | RTX 4070 Ti, RTX 3080, RTX 3080 Ti | 10–16 GB | Inference, small-model training |
| Entry consumer | RTX 3060, RTX 3060 Ti, RTX 3070 | 8–12 GB | Hobbyist jobs, batch inference (limited) |
What moves between tiers is not just raw compute; it's which workloads can run at all. A 12 GB card cannot fine-tune a 13-billion-parameter model because the weights alone don't fit. A 24 GB card can, at least with memory-efficient methods such as QLoRA. This is the VRAM cliff, and it's the dominant factor in rental demand.
VRAM is the gating factor
If you take one thing from this article: for modern ML rental workloads, prioritize VRAM over raw speed. A slower 24 GB card will out-earn a faster 12 GB card most months, because the 24 GB card has access to the entire high-VRAM demand pool.
Practical thresholds as of 2026:
- 8 GB and below: hobbyist-only. Many popular jobs won't fit.
- 10–12 GB: viable for inference and small training jobs, but increasingly borderline for LLM work.
- 16 GB: usable for most mid-size workloads, a reasonable floor for a new rental purchase.
- 24 GB: the sweet spot for consumer cards — handles most open-source LLM fine-tunes and diffusion workloads.
- 48 GB and up (pro or data-center): commands premium rates but with higher up-front cost and often stricter hosting requirements.
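The thresholds above come down to simple arithmetic: parameters times bytes per parameter, plus working overhead. A minimal sketch, where the 20% overhead margin and the byte counts per precision are illustrative assumptions rather than measured values:

```python
# Rough VRAM-fit estimator. The 20% activation/overhead margin and the
# bytes-per-parameter figures are assumptions; real jobs vary widely.

def weights_gib(params_billions: float, bytes_per_param: float) -> float:
    """Memory footprint of the model weights alone, in GiB."""
    return params_billions * 1e9 * bytes_per_param / 2**30

def fits(params_billions: float, bytes_per_param: float,
         vram_gib: float, overhead: float = 1.2) -> bool:
    """True if weights plus an assumed 20% overhead fit in VRAM."""
    return weights_gib(params_billions, bytes_per_param) * overhead <= vram_gib

# A 13B model in fp16 (2 bytes/param) needs ~24.2 GiB for weights alone,
# out of reach for a 12 GB card; quantized to 4 bits (0.5 bytes/param)
# it is ~6 GiB and sits comfortably inside 24 GB.
print(round(weights_gib(13, 2.0), 1))   # → 24.2
print(fits(13, 0.5, 24))                # → True
print(fits(13, 0.5, 12))                # → True for weights, but training
                                        #   state pushes real jobs past 12 GB
```

The same arithmetic explains why 24 GB is the consumer sweet spot: it clears the fp16-inference and 4-bit-fine-tune bar for the most popular open-weight model sizes.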
Used vs. new
Used high-end consumer cards, especially 3090s, have been a popular entry point because the 24 GB VRAM stays useful long after the card has been eclipsed in gaming benchmarks. But there are real risks:
- Prior mining use. Cards that ran crypto mining for years often have degraded thermal pads, stressed VRAM chips, and worn-out fans. A used 3090 at a tempting price might be a mined card with a month of useful life left.
- No warranty. A new card typically has 2–3 years of manufacturer warranty. A used card has whatever the seller honors, which is usually nothing.
- Thermal history. GPUs that lived in hot environments (cramped cases, poor airflow, ambient temperatures above 30°C year-round) degrade faster.
If you go used: buy from sellers with a return policy, stress-test the card immediately on arrival (run gpu-burn or an equivalent sustained load for several hours), and budget for the possibility that it dies inside your planned depreciation window.
Multi-GPU rigs
Putting multiple GPUs in one machine is a common optimization, but it comes with real complications:
- PCIe lanes. Consumer motherboards rarely provide x16 lanes to every slot once you have more than one card. Some renter workloads are bandwidth-sensitive and will run slower on a card in an x4 slot, which hurts your reliability score.
- Cooling. Three triple-fan cards in a consumer case will produce dead-air zones. Open-air mining frames are the common workaround.
- PSU headroom. A 4x RTX 3090 rig can pull 1800+ W under load. Oversize your PSU by at least 20% over peak draw, and plan the house circuit accordingly — many residential 15-amp circuits can't sustain it.
- Electrical. If you are drawing 2 kW continuously on one circuit, you need to think about breaker capacity and wiring. This is a point where you should consult a licensed electrician rather than guess.
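The PSU and circuit numbers above are worth working through explicitly. A minimal sizing sketch, where the per-card draw, system overhead, and circuit figures are illustrative assumptions (check your actual card's TDP and local electrical code):

```python
# PSU sizing sketch for a multi-GPU rig. Per-card and system draw
# figures are illustrative assumptions, not measured values.

def psu_watts(num_gpus: int, gpu_tdp_w: float,
              system_w: float = 400, headroom: float = 0.20) -> float:
    """Minimum PSU rating: peak draw plus a headroom margin."""
    peak_w = num_gpus * gpu_tdp_w + system_w
    return peak_w * (1 + headroom)

# 4x RTX 3090 at ~350 W each, plus ~400 W for CPU, drives, and fans:
# peak draw is 4*350 + 400 = 1800 W, so with 20% headroom you'd want
# roughly a 2200 W supply (or two smaller PSUs).
print(psu_watts(4, 350))   # → 2160.0

# For context: a 120 V / 15 A residential circuit is rated 1800 W,
# and continuous loads are conventionally derated to 80% (~1440 W),
# so a rig like this needs a dedicated higher-capacity circuit.
```

This is also why the guidance to consult an electrician is not boilerplate: the rig's continuous draw can exceed what a standard residential circuit is rated to carry.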
A purchase flow
A rough decision process if you are buying hardware today specifically to host:
- Pin down your electricity rate. If your marginal kWh rate is above roughly US$0.25, the math is much harder. Consider whether a lower-wattage card makes more sense than a flagship.
- Set a budget ceiling and a payback target. For example: "I want this card to break even within 18 months at 50% utilization." That ratio sets the maximum you should pay.
- Check live marketplace rates. The Vast.ai search page shows what each card class currently earns per hour. Divide your purchase price by (hourly rate × target utilization × hours per month) to get your payback period in months.
- Prefer high-VRAM over faster-low-VRAM. When picking between two cards in the same budget, the one with more VRAM almost always wins.
- Run the break-even math. Our profitability article has the template. If the math only works at optimistic utilization, revisit your assumptions.
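The flow above reduces to one formula. A minimal sketch, with hypothetical numbers throughout (the price, hourly rate, utilization, power draw, and electricity rate are placeholders to be replaced with your own figures):

```python
# Payback-period sketch. All numeric inputs below are hypothetical;
# substitute live marketplace rates and your own electricity cost.

def payback_months(price_usd: float, hourly_rate: float,
                   utilization: float, power_kw: float,
                   kwh_rate: float, hours_per_month: float = 730) -> float:
    """Months to recover the purchase price from net rental income.

    Assumes power is only drawn during rented hours (idle draw ignored).
    """
    rented_hours = utilization * hours_per_month
    net_per_month = rented_hours * (hourly_rate - power_kw * kwh_rate)
    if net_per_month <= 0:
        raise ValueError("electricity cost exceeds rental income")
    return price_usd / net_per_month

# Hypothetical: $800 used 24 GB card, $0.20/hr, 50% utilization,
# 0.35 kW under load, $0.15/kWh → roughly 15 months to break even.
print(round(payback_months(800, 0.20, 0.5, 0.35, 0.15), 1))
```

Note how sensitive the result is to utilization: halving it from 50% to 25% doubles the payback period, which is why the "only works at optimistic utilization" warning matters.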
What not to buy
A few categories we would steer clear of for rental purposes:
- Laptop GPUs. Thermal throttling under sustained load makes them uncompetitive. See the FAQ for the fuller answer.
- AMD or Intel GPUs. Vast.ai and most equivalent marketplaces require NVIDIA cards because the ML software ecosystem (CUDA, cuDNN) is NVIDIA-specific.
- Sub-8 GB VRAM cards. Even at bargain prices, there is too little demand to justify the space and power.
- Newly released halo cards at launch prices. Paying above MSRP rarely pays off before the market reprices the card downward.
Already have a card? Check it first
Run your rig through the RigHost compatibility checker to see whether your current GPU meets Vast.ai's hosting minimums before deciding to upgrade.
Run the Compatibility Checker →