An Under-Appreciated Moat for Nvidia: CUDA
Here's an important principle with some interesting investing implications:

Standardization reduces fixed costs.

This principle is a pattern you can use to identify investment opportunities before others see them. It's worth understanding well, and since there are many examples of it in the past, allow me to indulge in a bit of history:

In ancient Rome, a wagon going down the road would leave a small divot, which, over time, would become a well-worn rut as later wagons fell more easily into the tracks of the wagons before them.

This led to the standardization of the spacing between wheels on a Roman chariot, because if you used any other spacing, your wagon wheel and axle would break. The cost of upkeep and repairs for your wagon goes down if you have a standard wagon.


While there are a million different ways to make a slightly different wagon, there are fixed costs involved every time you do it differently.

The same principle applies to platforms in the graphics processing unit (GPU) space. The chips themselves do not have a moat, but the compute platform does.

The CUDA platform is a software layer that gives direct access to the GPU's virtual instruction set. It's proprietary to Nvidia, meaning CUDA will only run on supported Nvidia GPUs.
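
To make that concrete, here is a minimal sketch of the classic "vector add" program, roughly what a first CUDA exercise looks like. This is an illustrative, textbook-style example using the standard CUDA runtime API, not code pulled from Nvidia's materials:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements.
__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // ~1M elements
    const size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) arrays.
    float *ha = (float*)malloc(bytes), *hb = (float*)malloc(bytes), *hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device (GPU) memory and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check it.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);          // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

That <<<blocks, threads>>> launch syntax and the host-versus-device mental model behind it are exactly the kind of thing a developer learns once and is then reluctant to relearn on a different platform.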

A standardized language for building GPU-accelerated applications becomes the divot for many other wagon wheels to follow. Universities are teaching CUDA, for example, and the developers who graduate knowing it form the ecosystem that creates the moat, as third-party developers don't want to have to learn a new language.

Although we humans can be remarkably inventive, we are also often resistant to change and can be persistently stubborn (or perhaps practical) in trying to reuse old tools for current problems. If a current tool works well enough to solve the problem at hand, it will win out over a newer and better tool that isn't standardized.

This is what seems to be happening with CUDA, which is very beneficial for Nvidia.

There is a nice ecosystem of functionality that you can tap into easily for a CUDA-based project, and $NVDA provides some very high-quality libraries tuned for its hardware.
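
As a rough illustration of what that ecosystem buys you (a sketch of my own, not code from any Nvidia library documentation beyond its public API): Thrust, a library bundled with the CUDA Toolkit, turns a GPU-side sort and reduction into a few lines, with the hardware tuning handled by Nvidia.

```cuda
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>
#include <cstdio>

int main() {
    // Generate some scrambled test data on the CPU.
    thrust::host_vector<int> h(1 << 20);
    for (size_t i = 0; i < h.size(); ++i) h[i] = (i * 2654435761u) % 1000;

    // One assignment copies everything to GPU memory.
    thrust::device_vector<int> d = h;

    // Sort and sum on the GPU using Nvidia's tuned primitives.
    thrust::sort(d.begin(), d.end());
    long long total = thrust::reduce(d.begin(), d.end(), 0LL);

    printf("smallest = %d, sum = %lld\n", (int)d[0], total);
    return 0;
}
```

Rewriting code like this against another vendor's stack is precisely the kind of fixed cost the wagon analogy is about.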

As development teams form to create more AI powered applications over the next decade, I think CUDA is the divot that turns into a trench for Nvidia.

Or dare I say, a moat.
Ambrose:
Standardization is a false sense of security. Standardization is a double-edged sword. When standardization happens, innovation stops. Companies want to standardize things to cut costs and increase profits. We have seen this throughout history, like with Kodak's cameras. Cathie Wood can give you many examples, like ICE vehicles and such.

I recall my professor once said, "If it's in the textbook, it is probably old technology."

But this does not mean standardization is bad. Standardization provides easier access to the public. It bridges the gap between new technology/knowledge and the old. This is why schools exist. Learning old stuff makes it easier to learn the new stuff.

I'm not too sure about this, but let's say the new technology is real-time ray tracing. Students have to learn the CUDA language first, and then they can explore real-time ray tracing.

In conclusion, standardization is no moat. It's just a part of the process in the lifespan of a technology. $NVDA is just doing what it's supposed to do: trying to increase the usage of its GPUs. We should be focusing on innovation rather than standardization, as the only thing constant in this world is change.
Tyler Lastovich:
"When standardization happens, innovation stops." This quote has me shaking my head. I am sure there are many cases where this is true, but I would be wary to broadly apply it. I agree that standardization isn't a bad thing. Standards and protocols help turn potential into reality. They allow people to work together on ambitious projects across the world. Blockchains, TCP/IP, and programming languages are all examples of standards. Are these things holding back innovation?

If CUDA weren't around, you wouldn't have accurate weather prediction, nuclear power safety, genomics advancements; the list goes on and on. Having most of the top scientists and engineers in the world know and use your development standard is tough for me to see as a bad thing.

I fully concede that standards are often replaced over time and that the moat they provide gets shallower without significant, ongoing R&D improvements.
Ambrose:
Creating a programming language like CUDA is innovation. Teaching CUDA as a language is not innovation, hence not a moat.

As I said, standardization is not a bad thing, but it shouldn't be considered a moat.
And I do agree, I may have been a bit extreme with my words. It should be more like: standardization does not equate to innovation.
Ambrose:
Hmm. This leads me to realise that AMD uses JavaScript instead. My fault for assuming AMD had its own coding language.
I guess $NVDA's CUDA language can be considered a moat if it has more functionality than JavaScript.

@tyler You seem to be an expert in this area. What's your opinion on CUDA vs. JavaScript?
Tyler Lastovich:
"...standardization is not a bad thing, but it shouldn't be considered a moat." I am still not quite tracking this line of logic. CUDA is a standard that $NVDA has developed and holds proprietary, which locks people into buying and re-buying only their products. To me that is the textbook definition of a moat. I agree that standards more often enable innovation, rather that define it. For CUDA though, I think it is both. ;). Now it is up to Nvidia to keep updating it and innovating on the next thing. Which I believe they understand, since they heavily invest in AI software research (such as creating StyleGAN).
Tyler Lastovich:
"What's your opinion between CUDA vs JavaScript". Well, they couldn't be further apart really. One is a niche infrastructure, enabling hyper parallelism using GPUs and the other is a common coding language that massive tangles, err enables, the web and millions of other devices.
Ambrose:
Forget about the standardization part; I just realised how unaware I was of CUDA. The speed part has already got me hooked. https://preview.tinyurl.com/y2j4n4gk

I believe there must be some limitation to this; otherwise, why is AMD GPU performance putting up a fight?
Nathan Worden:
The Medium link isn't working for me, but I would love to read it. Let me know if there is another way to access it.
Tyler Lastovich:
@zebo Large computers that make use of CUDA are some of the most complicated machines on the planet, often outstripping most NASA-type systems. That is to say, they are full of nuance, standards, and optimization problems. Every single one is purpose-built for the owner's application load that will run on it. The new AMD chips are awesome (long $AMD too), but that doesn't actually make a material impact if your workload is already optimized for CUDA, as so many are today. CUDA itself is just a tailored adaptation of OpenCL, which is open-source and can run just about anywhere. Nvidia tightly controlling both the HW and SW leads to optimization gains that are hard for others to replicate (in the same fashion as Apple vs. Android).

Source: My work with Nvidia, Intel, and ARM directly to help build $1B+ worth of supercomputers.
Nathan Worden:
Yessss, good job Ambrose @zebo. I appreciate that you keep researching, and I'm glad we're starting to convert you regarding the significance of CUDA for Nvidia. I respect your take on Nvidia possibly being overhyped, but I'm very glad that you keep learning. I want to be more like that myself.

And yes, @tyler is quite an expert when it comes to practical applications of AI. He has a ton of experience in that area.
