If you’ve worked with CUDA, PyTorch, TensorRT, or AI tools like ComfyUI, you’ve probably seen labels like sm_80, sm_86, sm_90, or sm_120.
For the longest time, these numbers popped up in ComfyUI nodes, and I honestly didn’t know what they meant—until recently. After digging into it, I realized the explanation is actually pretty simple.
What “SM” Means
SM stands for Streaming Multiprocessor, the core building block inside an NVIDIA GPU.
When software refers to sm_80 or sm_120, it’s talking about the GPU’s compute capability—basically the GPU’s feature level.
Think of compute capability like a version number on a game console.
Higher number → newer features, more instructions, better performance.
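If you have PyTorch installed, you can see this version number directly: `torch.cuda.get_device_capability()` returns a `(major, minor)` tuple. A minimal sketch of how that tuple maps to the familiar label (the example capability values are illustrative):

```python
def sm_tag(major: int, minor: int) -> str:
    """Format a compute-capability (major, minor) pair as the familiar sm_XX label."""
    return f"sm_{major}{minor}"

# torch.cuda.get_device_capability() returns a (major, minor) tuple,
# e.g. (8, 6) on an RTX 3080 or (12, 0) on an RTX 5090:
print(sm_tag(8, 6))    # sm_86
print(sm_tag(12, 0))   # sm_120
```

So "sm_86" is literally compute capability 8.6, and "sm_120" is 12.0 — the dot is just dropped.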
Breaking Down the SM Versions (sm_80 → sm_120)
sm_80 – Ampere (A100, data-center variants)
Released in 2020. Introduced third-generation Tensor Cores along with TF32 and BF16 support, a big step up for mixed-precision training.
sm_86 – Ampere Consumer (RTX 30 Series)
3080, 3090, 3070, 3060, etc.
Slightly different feature set from sm_80—this is why some prebuilt binaries don’t work across both.
sm_89 – Ada Lovelace (RTX 40 Series)
Used by the desktop and laptop RTX 4090/4080/4070, as well as Ada workstation cards like the RTX 6000 Ada.
sm_90 – Hopper (H100)
A massive jump for AI workloads:
- FP8 support
- Transformer Engine
- Huge improvements for LLMs
sm_100 – Blackwell (Data Center)
Next-generation AI architecture.
Designed for extremely large-scale training.
sm_120 – Blackwell Consumer (RTX 50 Series)
This is the one that confused many people (including me).
sm_120 is used by the GeForce RTX 5080/5090 and other upcoming 50-series GPUs.
It’s Blackwell architecture tuned for consumer cards.
Why These Numbers Matter
Most users only notice SM versions when something breaks.
For example:
- A PyTorch wheel doesn’t support your GPU
- CUDA tools throw “unsupported architecture sm_XX” errors
- A ComfyUI node fails to compile kernels
- A model requires a higher compute capability than your card supports
A few common situations:
- RTX 30 Series (sm_86) → won’t run binaries built only for sm_90
- RTX 50 Series (sm_120) → may need updated CUDA versions for compatibility
- Older builds often don’t include sm_120 yet, causing early driver/tool issues
Knowing your GPU’s SM version makes troubleshooting much easier.
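A quick sanity check is comparing your GPU’s sm tag against the architectures your PyTorch build was compiled for — `torch.cuda.get_arch_list()` returns strings like `'sm_80'`. A minimal sketch of that comparison, using a hard-coded arch list as a stand-in for a real build:

```python
def build_supports(gpu_sm: str, arch_list: list[str]) -> bool:
    """True if the build ships native kernels for this GPU's sm version."""
    return gpu_sm in arch_list

# Stand-in for torch.cuda.get_arch_list() from an older PyTorch build
# that predates Blackwell (assumed list, for illustration only):
old_build = ["sm_80", "sm_86", "sm_89", "sm_90"]

print(build_supports("sm_86", old_build))   # True  -> RTX 30 Series is covered
print(build_supports("sm_120", old_build))  # False -> RTX 50 Series needs a newer build
```

Note that builds can also embed PTX, which the driver may JIT-compile for newer GPUs, so a missing entry is a warning sign rather than a guaranteed failure.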
Quick Reference Table
| SM Version | Architecture | GPUs |
|---|---|---|
| sm_80 | Ampere | A100 |
| sm_86 | Ampere (consumer) | RTX 3080 / 3090 / 3070 |
| sm_89 | Ada Lovelace | RTX 4090 / 4080 / 4070, RTX 6000 Ada |
| sm_90 | Hopper | H100 |
| sm_100 | Blackwell Data Center | B100 / GB200 |
| sm_120 | Blackwell Consumer | RTX 5080 / 5090 |
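The table above can be captured as a small lookup, which is handy in scripts that want to print a friendly architecture name for a given sm tag (the mapping simply mirrors the table):

```python
# Architecture names keyed by sm version, mirroring the quick-reference table.
SM_ARCH = {
    "sm_80": "Ampere",
    "sm_86": "Ampere (consumer)",
    "sm_89": "Ada Lovelace",
    "sm_90": "Hopper",
    "sm_100": "Blackwell (data center)",
    "sm_120": "Blackwell (consumer)",
}

def arch_name(sm: str) -> str:
    """Look up the architecture name, with a fallback for unknown tags."""
    return SM_ARCH.get(sm, "unknown / newer than this table")

print(arch_name("sm_90"))   # Hopper
print(arch_name("sm_86"))   # Ampere (consumer)
```

A dict lookup with a fallback keeps the script working even when a future architecture appears that the table doesn’t know about.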
The Bottom Line
I used to think sm_80, sm_90, and so on were just internal GPU codes.
But after looking into it recently, I finally understood:
- They’re version numbers for GPU features
- They decide which software builds your GPU can run
- They determine CUDA compatibility
- They explain why some models run on one GPU but fail on another
Once you understand SM versions, troubleshooting AI tools becomes much easier—especially with newer architectures like sm_120 entering the scene.
NVIDIA Official Docs
- Compute Capability Overview
  https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#compute-capabilities
  (Explains SM versions and architecture mapping.)
- List of All CUDA GPUs and Their SM Versions
  https://developer.nvidia.com/cuda-gpus
  (A clean chart showing all GPU models and their compute capability.)
- CUDA Release Notes
  https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html
  (Useful when new GPUs like sm_120 arrive and older CUDA versions don’t support them.)
Developer and Open-Source References
- PyTorch Get Started (GPU Compatibility)
  https://pytorch.org/get-started/locally/
  (Explains which CUDA builds support which GPUs.)
- TensorRT Developer Guide
  https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html
  (Good for readers doing model optimization on new architectures.)
- Hugging Face Hardware Compatibility Notes
  https://huggingface.co/docs
  (Helps readers understand why some models fail to load on older GPUs.)