Howestreet.com - the source for market opinions

ALWAYS CONSULT YOUR INVESTMENT PROFESSIONAL BEFORE MAKING ANY INVESTMENT DECISION

November 26, 2025 | Training AI vs. Inference AI

Martin Straith

Trend News Inc. was founded in 2002 by Martin Straith. Martin had been a successful investor in the markets for over 20 years, and after the dot-com stock market crash he felt there needed to be an investment newsletter that helped educate investors on how to protect their wealth and become better, more successful investors.

Artificial intelligence (AI) isn’t one big process – it’s like a two-part job: training (teaching the AI) and inference (using what it learned). Each part needs different computer hardware, called chips, which is why companies like Nvidia and AMD lead in some areas, while others compete elsewhere. For investors, understanding this split helps spot opportunities in the booming AI market.

Training: The Intense Learning Phase

Imagine training an AI like cramming for a massive exam with endless textbooks. The AI ‘studies’ huge piles of data (think billions of photos or words) to spot patterns, tweaking its internal ‘brain’ (millions or billions of settings called parameters). This is super demanding – it requires tons of raw computing muscle, vast memory, and chips that juggle complex math across hundreds or thousands of processors at once. Training can take weeks or months on giant server farms. Nvidia rules here with its H100 chip (and the newer GB200), plus its user-friendly CUDA software that lets developers build AI easily. AMD’s MI300 chips are a strong challenger, delivering solid speed at lower costs. Nvidia’s edge? Its hardware and software sync perfectly, making it the go-to for big cloud services and AI firms.
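To make the idea concrete, here is a toy sketch in plain Python. It is not a real neural network – real models have billions of parameters and run on thousands of GPUs – but it shows what "training" literally means: repeatedly nudging a parameter until the model's guesses match the data. The function names and the one-parameter "model" are invented purely for illustration.

```python
def train(data, steps=1000, lr=0.01):
    """Toy 'training': fit a one-parameter model y = w * x to example data."""
    w = 0.0  # the model's single parameter (real AIs have billions of these)
    for _ in range(steps):
        for x, y in data:
            pred = w * x                # the model's current guess
            grad = 2 * (pred - y) * x   # how wrong the guess is, and in which direction
            w -= lr * grad              # tweak the parameter a tiny bit toward 'less wrong'
    return w

# 'Study' three examples of the pattern y = 3x
w = train([(1, 3), (2, 6), (3, 9)])
print(round(w, 2))  # prints 3.0 -- the model has 'learned' the pattern
```

The expensive part in real AI is exactly that inner loop, repeated trillions of times over billions of parameters – which is why training demands the raw parallel muscle of chips like the H100.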

Image of Nvidia GB200 Blackwell chip

Inference: Quick, Everyday Use

Once trained, the AI goes to work – like your phone recognizing your face or ChatGPT answering queries. This ‘inference’ phase must be lightning-fast, cheap to run, and energy-saving, handling millions of daily requests without overheating or draining power. It doesn’t need monster setups. Smaller GPUs, everyday computer chips (CPUs), or custom ‘ASIC’ chips work fine. Competition is fierce: Nvidia and AMD play here too, but Intel, Qualcomm, Google’s TPU chips, and Apple’s Neural Engine shine for efficiency.
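Continuing the toy sketch from the training section (again, purely illustrative – the names and the one-parameter "model" are invented): inference is just running the already-learned parameter against new inputs. There is no more learning, no gradient math – each answer is a single cheap calculation, which is why modest chips can serve millions of requests.

```python
def infer(w, x):
    """Toy 'inference': apply the already-trained parameter to a new input."""
    return w * x  # one multiply per request -- no learning, no heavy math

# Suppose training already produced w = 3.0; answering queries is now trivial
w = 3.0
for query in [5, 10, 42]:
    print(infer(w, query))  # prints 15.0, 30.0, 126.0
```

The contrast with the training loop is the whole point: training burns weeks of datacenter time to find `w`; inference just reuses it, so the winning chips are the cheapest and most power-efficient ones, not the most powerful.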

Image of Google Ironwood TPU

What It Means for Investors

Training chips fetch high prices for their power, giving Nvidia (with AMD ‘chipping away’) fat profits fueling the AI boom. But inference will explode as AI apps – like smart assistants or self-driving cars – go mainstream, potentially dwarfing training in size. Watch for diversified plays beyond just Nvidia.

Stay tuned!

Martin



Posted In: The Trend Letter
