
Which GPU to Buy for AI? How I Learned to Love/Hate the CUDA Monopoly


Nightmares! I’ve been having nightmares that are far from normal or healthy, all revolving around AI and GPU benchmarking. These nightmares don’t involve benchmarking the speed of an AI model itself, or a GPU’s raw compute in isolation. Instead, they center on benchmarking the AI performance of standard consumer-grade GPUs. Unfortunately, only one benchmark out there offers a reasonable level of accuracy and modernity: the Tom’s Hardware benchmark linked below.

Most other articles and benchmarks conclude with a hodgepodge of older GPUs, such as a 2080 Ti, a 3080 Max-Q, and occasionally, if we’ve been good little gamers or college data scientists, a 3090. However, these cards are now outdated, their warranties have expired, and AI generation is constantly improving on newer hardware. So, where can we find more benchmarks and opinions on today’s hardware? They’re right…

Here. This chart comes from the Python/conda-based benchmark suite by AIBenchmark.com, which cycles through a variety of AI tests; the link is down below. If these results appear unusual, it’s because this is a synthetic benchmark, one crafted around controlled, artificial conditions. Think of it as the classic Time Spy test, but for AI instead of standard gaming workloads. And this artificial test reveals an intriguing twist in AI benchmarking: surprisingly, it strongly favors AMD, significantly so.
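If you want to reproduce numbers like these yourself, the suite is distributed as a pip package. A minimal run looks roughly like this (a sketch, assuming `pip install ai-benchmark` on top of a working TensorFlow install):

```python
# Minimal sketch of running the AI Benchmark suite (assumes `pip install
# ai-benchmark` and a working TensorFlow environment underneath it).
from ai_benchmark import AIBenchmark

benchmark = AIBenchmark()
results = benchmark.run()  # cycles through the suite's inference and training tests
```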

Why is this the case? Perhaps this test was designed with AMD GPUs in mind. Synthetic benchmarks have the flexibility to focus on specific GPUs and their unique architectures. Maybe this older test isn’t optimized for 4000 series cards. However, the actual reason behind the benchmark’s bias isn’t crucial. The key point to remember is that AI strength cannot be accurately determined through synthetic tests alone. While it’s intriguing that AMD outperformed NVIDIA in this particular test, it doesn’t hold much significance in the broader context.

AI is a finicky field, prone to bugs and often requiring a range of esoteric steps to function correctly. It’s like the wild west, and the sheriff of this small, unpredictable town is NVIDIA. AI simply works with NVIDIA, flat out. On the other hand, with AMD, it doesn’t, at least not on the consumer level. It’s akin to comparing gaming on a modern Windows PC to a Mac or Linux machine from 2008.

Not only does real-world AI performance suffer on AMD or Intel hardware, but the support isn’t there either. Why spend three days troubleshooting an AMD-specific issue when NVIDIA products work right from the start? Let’s say you want to use Shark AI, an AMD-compatible offshoot of Stable Diffusion. Oops! There seems to be an issue with this particular PC configuration, making it seemingly impossible to use. Do you want to spend a week trying to fix it? Even if you do, the program might launch and still not function properly.

So, why is this the case? It’s quite simple. NVIDIA made a deliberate bet on AI and has maintained a generation-long lead over its competitors, almost like a monopoly. CUDA serves as the bridge between productivity/AI software and NVIDIA GPUs: it’s NVIDIA’s general-purpose GPU computing and programming model. With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs. When CUDA initially launched, it had its share of issues, being buggy and unreliable. But that launch was back in 2007, and more than fifteen years on, it’s polished to perfection.
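To make “programming model” concrete, here’s a hedged sketch of a tiny CUDA kernel written through Numba’s Python bindings. This is my own illustration, not anything from the benchmarks above; it assumes an NVIDIA GPU, the CUDA toolkit, and `pip install numba`:

```python
# A toy CUDA kernel via Numba: each GPU thread adds one pair of elements.
# Assumes an NVIDIA GPU with the CUDA toolkit plus `pip install numba numpy`.
import numpy as np
from numba import cuda

@cuda.jit
def add_arrays(a, b, out):
    i = cuda.grid(1)      # this thread's global index
    if i < out.size:      # guard threads past the end of the array
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_arrays[blocks, threads_per_block](a, b, out)  # Numba handles the GPU copies

assert np.allclose(out, a + b)
```

Every AI framework that “just works” on NVIDIA is ultimately built on kernels like this one, compiled against CUDA.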

AMD has its equivalent of CUDA, called ROCm. ROCm launched on Linux nearly a decade after CUDA, and it didn’t get Windows support until the summer of 2023. Which means it’s buggy. Terribly so. Well, big deal, you might say, it’ll soon catch up; AMD simply needs a little time. Wrong. AMD is going to need a LOT of time. NVIDIA entered the AI market as the sole option. They settled the land and made it work for them and them alone. Programs are designed around CUDA. It works and runs great. So why would companies and developers spend the dev time getting ROCm working when CUDA is a generation ahead with regular support?
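One telling detail about how entrenched CUDA is: PyTorch’s ROCm builds don’t even get their own namespace, they ride through the CUDA one as a HIP shim. A quick check, assuming a ROCm build of PyTorch on a supported AMD card:

```python
# On a ROCm build of PyTorch, AMD GPUs answer to the CUDA API (assumes a
# supported AMD card and the official ROCm PyTorch wheels).
import torch

print(torch.cuda.is_available())     # True on a supported AMD card under ROCm
print(torch.version.hip)             # HIP version string on ROCm builds, None on CUDA builds
print(torch.cuda.get_device_name(0)) # reports the AMD device by name
```

Even AMD’s own stack has to impersonate CUDA to run the existing ecosystem. That’s what a generation-long head start looks like.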

Here’s a real-world example of CUDA working great with Stable Diffusion. Below are the testing specs, borrowed from the Tom’s Hardware AI benchmarks:

Resolution:  2048×1152

Positive Prompt: postapocalyptic steampunk city, exploration, cinematic, realistic, hyper detailed, photorealistic maximum detail, volumetric light, (((focus))), wide-angle, (((brightly lit))), (((vegetation))), lightning, vines, destruction, devastation, wartorn, ruins

Negative Prompt: (((blurry))), ((foggy)), (((dark))), ((monochrome)), sun, (((depth of field)))

Steps: 100

Classifier Free Guidance: 15.0

Sampling Algorithm: Euler on NVIDIA, Shark Euler Discrete on AMD
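On NVIDIA cards, roughly the same settings can be reproduced with Hugging Face’s diffusers library. A hedged sketch only: the Tom’s Hardware runs used web-UI frontends rather than this exact script, and the SD 1.5 checkpoint here is my assumption:

```python
# Sketch of the benchmark settings above via Hugging Face `diffusers`
# (assumes `pip install diffusers transformers accelerate`, an NVIDIA GPU,
# and the SD 1.5 checkpoint; the actual tests used web-UI frontends).
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)  # Euler sampler
pipe = pipe.to("cuda")

image = pipe(
    prompt="postapocalyptic steampunk city, exploration, cinematic, ...",  # full prompt listed above
    negative_prompt="(((blurry))), ((foggy)), (((dark))), ...",            # full prompt listed above
    width=2048,
    height=1152,
    num_inference_steps=100,  # Steps: 100
    guidance_scale=15.0,      # Classifier Free Guidance: 15.0
).images[0]
image.save("steampunk_city.png")
```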

I, at Cutting Edge Gamer, don’t have access to many older-gen cards, but it’s clear that NVIDIA has improved its AI performance generation over generation; the 4080 is 20% faster than the 3080 Ti. Other than a lengthy install process, Stable Diffusion worked wonderfully on NVIDIA GPUs. AMD’s results? I have no idea. That Shark AI example from earlier wasn’t a random hypothetical; it was half a month of August wasted. I did my damnedest to wrangle together some AMD tests, and after two weeks of troubleshooting, I gave up.

If you, the reader, are interested in AI on a consumer level…let me end with some final teachings. AI nowadays wants three things: VRAM, Tensor Cores, and a general computation/programming model.

AIs are hungry for GPU memory, a.k.a. VRAM. Why? Because AIs need training; they need to be taught to perform tasks. This teaching comes in the form of datasets, fed through in batches. The larger the batch, the faster an AI learns and processes. More VRAM allows larger batch sizes, which speeds up AI learning. If a classroom only has 6 textbooks, it’ll take some time to teach 24 kids. If a GPU only has 6 GB of VRAM, AIs won’t have much learning room.
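Here’s a hedged back-of-the-envelope sketch of why batch size eats VRAM. Every number below is an illustrative assumption, not a measurement:

```python
# Back-of-the-envelope VRAM math (illustrative numbers only, not a profiler):
# a batch of 512x512 RGB float32 images plus a fixed chunk for model weights.
bytes_per_image = 512 * 512 * 3 * 4   # ~3.1 MB per image
model_weights_gb = 4.0                # assume a ~1B-parameter fp32 model
activation_blowup = 20                # rough guess: activations dwarf raw inputs

for batch_size in (8, 32, 128):
    activations_gb = batch_size * bytes_per_image * activation_blowup / 1e9
    total_gb = model_weights_gb + activations_gb
    print(f"batch {batch_size:>3}: ~{total_gb:.1f} GB of VRAM")
```

Under these assumptions, a batch of 8 fits in about 4.5 GB while a batch of 128 needs around 12 GB, which is why the 6 GB cards run out of learning room so fast.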

Tensor Cores are AI-specific cores, separate from CUDA cores. Instead of doing graphical work, Tensor Cores specialize in the multi-dimensional matrix math that AI computation is built on. AIs function by churning through enormous mathematical operations extremely fast, so doing faster, bigger math allows for faster, bigger AIs. This doesn’t mean Tensor Cores will replace CUDA cores. Tensor Cores are faster, but they work at lower precision, so they aren’t as accurate or reliable for general work. It’s the difference between specialized and generalized cores. Here’s a visual to help with understanding the difference between CUDA and Tensor.

[Chart: CUDA-core vs. Tensor-core math throughput by GPU generation]

The first set of boxes, Pascal, represents the 1000-series GPUs. They don’t have Tensor Cores, only CUDA cores. Since CUDA cores aren’t geared toward processing heavy sets of large numbers instantly, their AI performance is limited. When Tensor Cores arrived with the 2000 series, the GPUs’ math throughput roughly octupled. Then it doubled, and doubled again. Newer is faster. Newer is better.
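You almost never program Tensor Cores directly; libraries route eligible work to them based on data type. A hedged PyTorch sketch of the speed-for-precision trade, assuming an RTX-class NVIDIA GPU:

```python
# Libraries dispatch eligible matrix math to Tensor Cores by data type.
# Sketch assumes an RTX-class (2000-series or newer) NVIDIA GPU.
import torch

a32 = torch.randn(4096, 4096, device="cuda")
b32 = torch.randn(4096, 4096, device="cuda")
a16, b16 = a32.half(), b32.half()

c32 = a32 @ b32   # fp32 path: CUDA cores (or TF32 Tensor Cores on Ampere and newer)
c16 = a16 @ b16   # fp16 path: eligible for Tensor Cores

# The precision trade-off mentioned above, made concrete:
print((c32 - c16.float()).abs().max())  # the fp16 result drifts from the fp32 one
```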

I’ve already spoken about how general computation/programming models work: CUDA and ROCm. So what GPUs would I recommend nowadays? If you want to try out any sort of AI work, you’ll need an NVIDIA GPU with Tensor Cores. That means 2000 series and above, though first-gen Tensor Cores are much slower than those in the 3000- and 4000-series GPUs. 8 GB will function but will need workarounds for models to run properly. 12 GB is fine, 16 GB is better, and 24 GB is the beginning of professional AI work.

Try-it-out AI use (trying Stable Diffusion once or twice for fun)

2060 (Super), 2070 (Super), 2080 (Super), 3050, 3060 8 GB, 3070, 4060 8 GB

Actual AI use (Utilizing AI in a workflow or chatting with a model)

2080 Ti, 3080 (10/12 GB), 4060 Ti, 4060 16 GB, 4070, 4070 Ti

Professional AI use (AI college students/professionals, or those unable to access multi-thousand-dollar server GPUs)

3080 Ti, 3090, 4080, 4090

AMD vs NVIDIA: Which one is right for your new build?

Introduction:

For years the two giants of the PC graphics card world, AMD and NVIDIA, have competed through continuous innovation and cutting-edge design. Compare these companies and the GPUs they produce, though, and any observer will reach the same conclusion: NVIDIA is on another level. Their top-tier GPUs, the Titan RTX, the 2080 Ti, and the 2080, are simply better. NVIDIA’s GPUs produce higher frame rates, draw less power, produce less heat, and come with better-optimized software. However, this doesn’t mean AMD should be dismissed; they produce some excellent GPUs at more affordable prices and are killing it in the mid and low tiers. Priced at $449.99, AMD’s new 5700 series (pictured below) is a great deal and quite capable of rendering some quality frames.

AMD Radeon RX 5700 XT 50th Anniversary
Price vs Performance:

Diving deeper into this comparison, we begin to consider just how much bang for your buck the respective GPUs provide. NVIDIA’s best-performing GPU, the Titan RTX, blows the competition out of the water. It boasts 4,608 CUDA cores running at a 1,770 MHz boost clock on NVIDIA’s Turing architecture, featuring 72 RT cores for ray tracing, 576 Tensor Cores for AI acceleration, and 24 GB of GDDR6 memory running at 14 Gbps for up to 672 GB/s of memory bandwidth. All at the insane price tag of $2,499.99. For comparison, the best consumer AMD GPU you can buy is the RX 5700 XT, which costs a whopping $449.99 if you opt for the AMD Radeon™ RX 5700 XT 50th Anniversary; the standard RX 5700 can be had for as low as $349.99. This card still boasts a boost frequency of up to 1,980 MHz, 2,560 stream processors, and 8 GB of GDDR6 memory on a 256-bit bus, and it works great for 2K gaming.
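If you’re wondering where the 672 GB/s figure comes from, it falls straight out of the memory specs. One caveat: the Titan RTX’s 384-bit bus width isn’t stated above, so treat it as an assumption here:

```python
# Memory bandwidth = per-pin speed x bus width / 8 bits per byte.
gbps_per_pin = 14          # GDDR6 at 14 Gb/s per pin (stated above)
bus_width_bits = 384       # Titan RTX memory bus (assumed spec, not stated above)
bandwidth_gb_s = gbps_per_pin * bus_width_bits / 8
print(bandwidth_gb_s)      # -> 672.0 GB/s
```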

Software:

While AMD’s software is catching up with NVIDIA’s, with competing versions of most of the software NVIDIA offers, it’s still a little behind the ball. NVIDIA’s software just works better. For example, let’s compare their products for adaptive sync and video streaming.

G-Sync vs. FreeSync

Both products are versions of adaptive sync, designed to reduce screen tearing and improve video quality. G-Sync is NVIDIA’s stellar take on it, and it is the better product: better optimized and designed with better quality control. It does have one drawback, though. Same as always with NVIDIA, it’s just plain more expensive, because it only works with monitors that carry NVIDIA’s specialized G-Sync hardware. Those monitors are, not surprisingly, more expensive than most, but they deliver some of the best visuals we can currently render. AMD’s FreeSync has the distinct advantage of working with a much wider range of monitors; it’s just not as well optimized and doesn’t seem to work as well. Once again, it comes down to how much of your hard-earned paycheck you’re willing to drop on your new rig.

ShadowPlay vs. ReLive

Not too much to say about these two. Once again NVIDIA is on top, as its streaming software takes the cake. Both can capture your gaming sessions using your GPU without a capture card, but ShadowPlay has better video quality and a higher bitrate, ranging from 1 to 18 Mbps, versus AMD’s ReLive, which runs at 1 to 10 Mbps. Both are capped at 60 FPS and 1080p, though, so you may want a capture card for serious 4K streaming.
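To put those bitrate caps in perspective, here’s a rough sketch of what they mean on disk (ignoring container overhead):

```python
# Megabits per second -> megabytes per minute of recorded footage (rough math).
for name, mbps in (("ShadowPlay @ 18 Mbps", 18), ("ReLive @ 10 Mbps", 10)):
    mb_per_minute = mbps / 8 * 60   # bits -> bytes, seconds -> minutes
    print(f"{name}: ~{mb_per_minute:.0f} MB per minute of 1080p60 video")
```

That’s roughly 135 MB per minute at ShadowPlay’s ceiling versus 75 MB per minute at ReLive’s, which is where the quality gap shows up.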

Power vs Optimization:

By now I’m sure you’ve heard about AMD GPUs and their problem with high temperatures. While this is true, the simple explanation is a difference in the architecture of the cards. NVIDIA’s Turing architecture is better designed and uses less power more efficiently to push out more frames. AMD’s GPUs, meanwhile, are juiced up with extra power to make up for their less efficient architecture. It’s as simple as that. It doesn’t necessarily make one better than the other, but when it comes to computing, lower temperatures are generally preferred, as heat and computers do not mix well. Funnily enough, when your PC overheats, it freezes, and that’s something we all hope to avoid.

MSI RTX 2080 Ti LIGHTNING Z
Conclusion:

In the end, it comes down to your personal preference and price range. Whether you choose the RTX 2080 Ti, which provides the best performance money can buy, or an AMD 5700, which gives you exceptional performance without breaking the bank, you’re sure to enjoy endless hours of high-performance PC gaming.

Troubleshooting Tips

General Tips

Graphics card won’t POST? Artifacting issues? Overheating? Computer crashes? Loud fan noise? Driver crashes? Sound familiar? We’ve seen it all during our last nine years in business. Graphics cards develop issues, break, or plain stop working; that’s why we’re here to help. Check out our guide below, and hopefully some of these tips can resolve your card’s issues without the need for an RMA!

Here are some troubleshooting tips to help isolate the issue to the graphics card:

1) First, try re-seating the graphics card and ensure the power and video cables are installed properly.
2) If a second PCI-E slot is available, try installing the graphics card in another PCI-E slot and re-test.
3) Check your video cables to make sure they are not faulty and that they are the same video standard as the graphics card (DisplayPort 1.4 / 1.4a versus 1.2, HDMI 2.0 versus 1.4, etc.).
4) Check your monitor’s video input standards to make sure they are the same as the graphics card.
5) Install a known good graphics card in your system to ensure that nothing else is wrong (or install the potentially bad graphics card in another known good system to see if the issues replicate).
6) Sweep all old drivers and install new ones.
7) If that does not help, re-image the entire system.
8) Other items that can cause artifacting: a bad PSU, bad memory, or a bad video cable and/or video adapter.
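Before working through the checklist above, a quick way to confirm the card and driver are even visible to the OS is a short script like this (a hedged sketch for NVIDIA cards; `nvidia-smi` ships with the driver):

```python
# Quick sanity check: can the OS and driver see the card at all?
# Assumes an NVIDIA GPU; the `nvidia-smi` utility installs with the driver.
import shutil
import subprocess

if shutil.which("nvidia-smi") is None:
    print("nvidia-smi not found - the driver is likely missing or broken")
else:
    result = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
    print(result.stdout or result.stderr)  # lists detected GPUs, driver version, temps
```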

No Display - Updating the BIOS on Your NVIDIA GPU

Here are some steps for flashing the BIOS of your NVIDIA GPU:

Tools that you will need:

- GPU-Z, found here: http://www.techpowerup.com/downloads/1709/TechPowerUp_GPU-Z_v0.3.8.html

- NiBiTor, found here: https://www.guru3d.com/files-details/nvidia-bios-editor-download-nibitor.html

- nvflash, found here: https://www.techpowerup.com/download/nvidia-nvflash/

Once you have these tools downloaded, follow this guide for flashing the BIOS: http://www.techpowerup.com/forums/threads/guide-for-flashing-bios-of-nvidia-gpu.119955/

It’s very important to save a copy of your current BIOS (Step 1) in case anything happens during the flashing process.
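That backup step can even be scripted. A hedged sketch only: flag spellings vary between nvflash releases, so confirm against `nvflash --help` for your version before running anything:

```python
# The guide's Step 1, scripted: back up the existing BIOS before flashing.
# Hedged sketch - verify the flag spelling against `nvflash --help` for your
# nvflash release before trusting it.
import subprocess

subprocess.run(["nvflash", "--save", "backup.rom"], check=True)
print("Current BIOS saved to backup.rom - keep this file safe before flashing")
```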

No Video - Run Display Driver Uninstaller W10 (AMD)

1) Please run Display Driver Uninstaller: http://www.guru3d.com/files-details/display-driver-uninstaller-download.html

2) Run it in Safe Mode. To put Windows 10 into Safe Mode: http://www.digitalcitizen.life/4-ways-boot-safe-mode-windows-10

3) After clearing drivers, re-install the AMD graphics card into the system and download the AMD driver for Windows 10 64-bit: https://www.amd.com/en/support

4) Reboot system.

No Video - Run Display Driver Uninstaller W10 (NVIDIA)

1) Please run Display Driver Uninstaller:
http://www.guru3d.com/files-details/display-driver-uninstaller-download.html

2) Run it in Safe Mode. To put Windows 10 into Safe Mode:
http://www.digitalcitizen.life/4-ways-boot-safe-mode-windows-10

3) After clearing drivers, re-install the NVIDIA graphics card into the system and download the NVIDIA driver for Windows 10:
http://www.geforce.com/drivers

4) Reboot system.

 

Hopefully one of those tips was able to solve your issue. If not, you can always reach out to us via our support tickets and we can help you with our 3-5 day RMA service.
