Here's an important principle with some interesting investing implications:
Standardization reduces fixed costs.
This principle is a pattern you can use to identify investment opportunities before others see them. It's worth understanding well, and since history offers many examples of it, allow me to indulge in a bit of history:
In ancient Rome, a wagon going down the road would leave a small divot, which, over time, would become a well-worn rut, as each wagon fell more easily into the tracks of the one before it.
This led to the standardization of the spacing between wheels on a Roman wagon, because if you used any other spacing, your wagon wheel and axle would break in the ruts. The cost of upkeep and repairs goes down if you have a standard wagon.
While there are a million ways to build a slightly different wagon, you incur fixed costs every time you do it differently.
The same principle applies to platforms in the Graphics Processing Unit space. The chips themselves do not have a moat, but the compute platform does.
The CUDA platform is a software layer that gives direct access to the GPU's virtual instruction set. It's proprietary to Nvidia, meaning CUDA will only run on supported Nvidia GPUs.
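To make "standardized language" concrete, here is a minimal sketch of what CUDA code looks like: a toy kernel that adds two vectors on the GPU, launched with Nvidia's `<<<...>>>` syntax. This is an illustrative example of the platform's programming model, not production code:

```cuda
#include <cstdio>

// A CUDA kernel: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory: accessible from both CPU and GPU.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch 4 blocks of 256 threads; the <<<...>>> launch
    // syntax is specific to CUDA and Nvidia's nvcc compiler.
    vecAdd<<<4, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %.1f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Every engineer who learns this model, and every codebase written in it, is another wagon wheel settling into the same rut.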
A standardized language to build GPU-accelerated applications becomes the divot for many other wagon wheels to follow. Universities are teaching CUDA, for example.
The ecosystem of developers those graduates form then creates the moat, as third-party developers don't want to learn a new language.
Although we humans can be remarkably inventive, we are also often resistant to change and can be persistently stubborn (or perhaps practical) in trying to reuse old tools for current problems. If a current tool works well enough to solve the problem at hand, it will win out over a newer and better tool that isn't standardized.
This is what seems to be happening with CUDA, which is very beneficial for Nvidia.
There is a rich ecosystem of functionality you get easily with a CUDA-based project, and $NVDA provides some very high-quality libraries tuned for its hardware, such as cuBLAS for linear algebra and cuDNN for deep learning.
As development teams form to create more AI-powered applications over the next decade, I think CUDA is the divot that turns into a trench for Nvidia.
Or dare I say, a moat.