Hardware

Choosing a GPU for Rental

Picking the right card is the single biggest lever on rental revenue. This guide covers VRAM thresholds, consumer vs data-center tiers, used-card risks, and how to match a card to your budget and electricity cost.

If you are buying a GPU specifically to rent out, you are making a two- or three-year bet on a hardware category. Get the tier right and the card pays for itself and then some. Get it wrong and you own a brick that earns pennies per hour. This guide won't tell you exactly which model to buy — prices and availability shift too fast — but it will give you the framework for deciding.

The tier map

| Tier | Representative cards | VRAM | Primary demand |
| --- | --- | --- | --- |
| Data-center | A100, H100, H200, L40S | 40–141 GB | Enterprise training, large LLM workloads |
| High-end consumer | RTX 4090, RTX 3090, RTX 4080, RTX 5090 | 16–32 GB | LLM fine-tuning, image generation, long-running jobs |
| Mid-range consumer | RTX 4070 Ti, RTX 3080, RTX 3080 Ti | 10–16 GB | Inference, small-model training |
| Entry consumer | RTX 3060, 3060 Ti, 3070 | 8–12 GB | Hobbyist jobs, batch inference (limited) |

What moves between tiers is not just raw compute — it's which workloads can run at all. A 12 GB card cannot fine-tune a 13-billion-parameter model because the weights don't fit. A 24 GB card can. This is the VRAM cliff, and it's the dominant factor in rental demand.
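The cliff is easy to sanity-check with back-of-envelope arithmetic. The sketch below is a hypothetical helper, assuming fp16 weights at 2 bytes per parameter; activations, gradients, KV cache, and optimizer state all add on top of this, so real headroom requirements are higher:

```python
def weight_footprint_gb(params_billion: float, bytes_per_param: float = 2.0) -> float:
    """GB needed for the model weights alone (fp16 = 2 bytes/param).

    Rough estimate only: fine-tuning and inference need extra VRAM
    beyond the weights themselves.
    """
    return params_billion * bytes_per_param

# A 13B model in fp16 needs 26 GB just for weights -- over the cliff
# for a 12 GB card. Fitting it on a 24 GB card in practice relies on
# quantization (e.g. 4-bit, ~0.5 bytes/param) plus adapter methods.
print(weight_footprint_gb(13))       # 26.0
print(weight_footprint_gb(13, 0.5))  # 6.5
```

The point is not the exact numbers but the step function: a card either clears the footprint for a given workload class or it is invisible to that demand entirely.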

VRAM is the gating factor

If you take one thing from this article: for modern ML rental workloads, prioritize VRAM over raw speed. A slower 24 GB card will out-earn a faster 12 GB card most months, because the 24 GB card has access to the entire high-VRAM demand pool.
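To see why, compare expected monthly earnings under illustrative numbers (the rates and utilization figures below are made up for the example; check live marketplace data). Even at a higher hourly rate, the smaller card loses on utilization because fewer jobs can run on it:

```python
def monthly_earnings(hourly_rate: float, utilization: float,
                     hours_per_month: int = 730) -> float:
    """Expected gross earnings per month at a given utilization fraction."""
    return hourly_rate * utilization * hours_per_month

# Hypothetical figures: the fast 12 GB card rents higher when busy,
# but the 24 GB card is busy far more often.
fast_12gb = monthly_earnings(0.25, 0.25)  # thin demand pool
slow_24gb = monthly_earnings(0.20, 0.60)  # broad demand pool
print(fast_12gb, slow_24gb)
```

Under these assumptions the slower 24 GB card roughly doubles the monthly take, which is the pattern the high-VRAM demand pool tends to produce.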

Practical thresholds as of 2026:

Context matters. The VRAM threshold that counts as "borderline" creeps upward every year as model sizes grow. A card that is mid-pack today may be entry-tier in two years. Factor this into your depreciation math.

Used vs. new

Used high-end consumer cards, especially 3090s, have been a popular entry point because the 24 GB VRAM stays useful long after the card has been eclipsed in gaming benchmarks. But there are real risks:

If you go used: buy from sellers with a return policy, stress-test immediately on arrival (run GPU-burn or equivalent for several hours), and budget for the possibility that the card will die inside your planned depreciation window.

Multi-GPU rigs

Putting multiple GPUs in one machine is a common optimization, but it comes with real complications:

A purchase flow

A rough decision process if you are buying hardware today specifically to host:

  1. Pin down your electricity rate. If your marginal kWh rate is above roughly US$0.25, the math is much harder. Consider whether a lower-wattage card makes more sense than a flagship.
  2. Set a budget ceiling and a payback target. For example: "I want this card to break even within 18 months at 50% utilization." That target sets the maximum you should pay.
  3. Check live marketplace rates. The Vast.ai search page shows what each card class currently earns per hour. Divide the card's price by (hourly rate × target utilization × hours per month) to get the payback period in months.
  4. Prefer high-VRAM over faster-low-VRAM. When picking between two cards in the same budget, the one with more VRAM almost always wins.
  5. Run the break-even math. Our profitability article has the template. If the math only works at optimistic utilization, revisit your assumptions.

Skip the impulse buy. GPU prices move: a card at MSRP today might be discounted in six weeks or selling at double MSRP in six weeks; both happen. Patience and an eye on retail price trends pay off.
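The break-even arithmetic from steps 1–3 and 5 can be sketched as follows. All of the inputs here are hypothetical placeholders; substitute your own price, live marketplace rates, and metered electricity cost:

```python
def payback_months(price: float, hourly_rate: float, utilization: float,
                   power_watts: float = 0.0, kwh_rate: float = 0.0,
                   hours_per_month: int = 730) -> float:
    """Months to recoup the card's price from net rental income.

    Net income = gross rental earnings minus electricity consumed
    during busy hours. Idle power draw is ignored for simplicity.
    """
    busy_hours = utilization * hours_per_month
    gross = hourly_rate * busy_hours
    power_cost = (power_watts / 1000) * kwh_rate * busy_hours
    net = gross - power_cost
    if net <= 0:
        return float("inf")  # never pays back at these rates
    return price / net

# Hypothetical: $1,600 card, $0.30/hr, 50% utilization, 350 W, $0.15/kWh
print(payback_months(1600, 0.30, 0.50, power_watts=350, kwh_rate=0.15))
```

At these made-up numbers the card pays back in roughly 18 months; rerunning with `kwh_rate=0.25` pushes payback past 20 months, which is why pinning down your electricity rate is step 1.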

What not to buy

A few categories we would steer clear of for rental purposes:

Already have a card? Check it first

Run your rig through the RigHost compatibility checker to see whether your current GPU meets Vast.ai's hosting minimums before deciding to upgrade.

Run the Compatibility Checker →