Nvidia CEO Jensen Huang talks about partners, growing competition

Monica Chen, Taipei; Jessie Shen, DIGITIMES Asia

Credit: DIGITIMES

Nvidia CEO Jensen Huang addressed the company's partnerships with Arm, high-bandwidth memory (HBM) manufacturers, cloud service providers (CSPs), and Taiwan's high-tech supply chains, led by TSMC, during a press Q&A session at Computex 2024 in Taipei. He also shared his vision for AI and commented on the competition. The following is a summary of the Q&A session.

Q: Currently, Nvidia's HBM partner is SK Hynix. When will Samsung join as a partner? Rumor has it that Samsung's HBM does not quite meet Nvidia's requirements.

A: For Nvidia, HBM is absolutely essential. Right now, we're expanding rapidly. We provide the Hopper H200 and H100. Blackwell B100 and B200 are available. Grace Blackwell GB200 is here. The amount of HBM we are ramping is quite significant, and the speed at which we need it is also significant. We work with three partners, and all three are excellent: SK Hynix, of course, Micron, and Samsung. All three will supply us with HBM. We are working diligently to qualify them and integrate them into our manufacturing processes as quickly as possible.

Q: Could you elaborate on the delay in Samsung and Micron memory becoming HBM-certified? There was a rumor that Samsung's memory failed your energy-consumption and heat verification tests. Can you finally disclose whether you have certified any Samsung HBM?

A: No, it did not fail for any of those reasons, and the reports you have read bear no relationship to our business. Our work with Samsung and Micron is proceeding smoothly. The only task that remains is the engineering, and it is not yet complete. Yes, I wanted it finished by yesterday, but it isn't done. It requires patience. There is no story here.

Q: Can you provide an update on your collaboration with Arm?

A: Of course. Our collaboration with Arm is outstanding. As you know, we are working on Grace, our CPU designed specifically for AI and high-performance computing. We are thrilled about the Arm architecture, since it is the foundation of Grace. It's going to be fantastic for our data centers and an incredible product for the industry. Our relationship with Arm is very strong; we're working closely together to bring new technologies to market, and we're very excited about the future of Arm-based CPUs in our product lineup.

Q: CSPs like Google and Microsoft are developing their own AI chips, which could have an impact on Nvidia. And second, would Nvidia consider entering the ASIC chip development business itself?

A: Nvidia is quite different. As you know, Nvidia is not an accelerator company. Nvidia accelerates computing. Do you know the difference? I explain it every year, but no one understands. A deep learning accelerator cannot handle SQL data. We can. A deep learning accelerator cannot handle photos.

An accelerator cannot be used for fluid dynamics simulations. We can. Nvidia does accelerated computing. It's quite versatile, and it is also quite effective for deep learning. Does that make sense?

Thus, Nvidia offers more versatile accelerated computing: the more you use it, the more useful it is, and the lower its effective cost. Let me explain with an example. Many believe a smartphone is far more expensive than a basic phone, and in the past it was. But it replaced the US$100 music player. It replaced the camera. It sometimes even replaced your laptop. Right? Because of that adaptability, the smartphone turned out to be relatively affordable. It is, in fact, our most valuable instrument.

It is the same with Nvidia accelerated computing. Second, Nvidia's architecture is incredibly versatile and practical. It is a component of every cloud: GCP, OCI, Azure, and AWS. It's in local clouds and in sovereign clouds. It exists everywhere: on-premises, in private clouds, and everywhere else. Because we have such a large audience, developers come to us first. It makes sense: if you program for CUDA, your software runs everywhere, whereas if you program for one particular accelerator, it runs only on that accelerator. That brings us to the second reason our value to the customer is so high: in addition to reducing the CSPs' workload, Nvidia brings them customers, because those customers are CUDA customers.
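
To make the portability point concrete, here is a minimal, hypothetical CUDA sketch (not something Huang showed): the same source compiles with nvcc and runs unchanged on any CUDA-capable GPU, whichever cloud or on-premises machine hosts it.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Minimal CUDA kernel: each thread adds one pair of elements.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                  // one million elements
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified (managed) memory keeps the sketch short: one allocation
    // visible from both CPU and GPU.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover n; the same launch runs
    // on any CUDA-capable GPU, in any cloud.
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);            // expect 3.000000
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Nothing in the source names a particular cloud provider or GPU model, which is the portability argument in miniature.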

Our objective is to bring customers to the cloud. As cloud companies expand their Nvidia capacity, their revenue increases. When they expand the capacity of their proprietary ASICs, their expenses rise, and the revenue may not rise with them.

We bring customers to their cloud services, and we are really pleased with that. Nvidia is positioned in a distinct way. First, our versatility stems from the abundance of exceptional software we possess. Second, we exist in every cloud and are present everywhere. We are an attractive platform for developers and highly valuable to CSPs, because we attract and generate customers for them.

Q: Could you tell us more about your thoughts on the future of AI and how Nvidia is positioned in this space?

A: AI is the most potent technological force of our era, and Nvidia is at the forefront of a transformation that will revolutionize every industry. We offer the platforms, systems, and tools necessary to drive AI innovation. We are enabling the next wave of AI advances through software libraries such as CUDA and TensorRT, and through our AI platforms, including Clara for healthcare, Isaac for robotics, and Drive for autonomous vehicles. Our collaboration with partners across sectors ensures that AI can be deployed efficiently and that we keep pushing the limits of what is feasible. Nvidia will play a critical role as AI comes to affect every aspect of our lives.

Q: A question about interconnect technology. Last week, Intel and AMD unveiled the UALink Consortium. They say that your NVLink is proprietary and that being open is much better. What are your thoughts on that? Proprietary technology has been used throughout the history of this industry: Intel has x86, Arm has the Arm architecture, and now Nvidia has NVLink. So, what do you think about open versus proprietary?

A: What matters to end users is performance and cost performance. End users accept proprietary technology as long as it offers strong performance and cost-effectiveness.

Proprietary and open standards have always coexisted, right? The market has always had both. Intel has x86, AMD has x86, Arm has the Arm architecture, Nvidia has NVLink, and so on. The best way to look at it, in my opinion, is in terms of a platform's openness, its capacity for innovation, and the value it adds to the ecosystem.

The most crucial factors are whether it stimulates innovation, adds value to the ecosystem, and creates opportunities for everyone, regardless of whether it is proprietary or open. I believe Nvidia has accomplished that. Not only is NVLink an amazing piece of technology, but our industry partnership is even better. Our networking devices, NVSwitch and Quantum-2, are compatible with PCI Express.

We collaborate with the industry to develop a great deal of open technology, but we also innovate and produce proprietary technologies. It's really not either/or; it's all about advancing innovation and adding value to the ecosystem.

Q: How does Nvidia deal with increased competition in the specialized AI chip market?

A: Nvidia is a market maker, not a share taker. Does that make sense? We are always inventing the future.

Remember, GeForce was the first graphics card designed for gaming, and we played a significant role in the early development of PC gaming. All of our work in accelerated computing was pioneering: we began working on autonomous vehicles and robotics over a decade ago and are still at it, and of course there is generative AI.

Nobody could dispute that we were there on day one, inventing the entire category. Some people claim it is their top priority right now, but it has been our top priority for 15 years. As a result, the company's culture and its personality revolve around inventing the future.

We're dreamers. We are inventors. We are pioneers. We do not mind failing. We do not mind wasting time. We simply want to develop something new. That is our company's personality, and I believe our approach is really different.

As you are aware, we are not simply building GPUs; the chips are only half of the systems shown on stage. We designed all of these systems ourselves and then opened them up to the ecosystem so that everyone could build on them. But someone had to build the first one. We built all of the first ones, and someone had to write all of the software that makes it all function. We made everything work. So Nvidia is more than simply a chip company. We are actually an AI supercomputer, AI infrastructure, and AI factory company. We're also quite good at developing AI.

How do you determine what computer to build if you don't understand AI? Nobody is going to teach it to you. So, 15 years ago, we had to start learning how to develop AI so that we could build these computers to run it. As a result, there are numerous unique aspects to our business, and it is difficult to compare us to anyone else.

Some argue that the CSPs are competitors because they build chips. Remember, all of the CSPs are Nvidia customers, and Nvidia is the only company that provides an accelerated computing platform available in every cloud, one versatile enough to handle everything from deep learning and generative AI to database processing and weather simulation. We are a rather unique company, extremely different.

Q: Regarding Hopper and Blackwell, it appears there has been a shift in messaging since GTC, with a greater emphasis on value, cost per token, and cost performance. The word value comes up frequently. I'm wondering whether that is a response to customers. Are they concerned about pricing, and how do you approach pricing for these new kinds of chips?

A: Pricing is always value-based. If a product is priced correctly, demand is phenomenal. There is no such thing as great demand at the wrong price; it does not exist. If you have the appropriate price and provide the correct value, demand will be incredible, and our demand is great. If there is no demand, you could reduce your price to nothing and it would not help; it's not that the price didn't get low enough. So pricing is really determined by market demand, and I believe our price is appropriate.

Setting the price is not an easy exercise. We have to build the entire system, develop all the software, and then break it up into a whole bunch of parts, and we sell it to you as a chip. But in the end, we are really selling AI infrastructure; all the software that goes along with it integrates into your software. So what Nvidia builds is AI factories.

We deliver them as chips, and a chip looks a little like this. Microsoft used to deliver its operating system and Office with Excel; they wrote the software, but they delivered it to you on a floppy disk. So the question is: does Microsoft sell floppy disks? No, the floppy disk is only a delivery vehicle. In many ways, our chips serve as the delivery vehicle for the software that constitutes the AI factory infrastructure. That's the simplest way to think about it.

But, in the end, what we deliver is AI factories. In terms of cost, we have reduced energy use by 45,000 times over the past eight years; Moore's Law could not have come close to that. Over the same period, we have reduced training costs by roughly 350 times, and at the current rate of improvement, call it 1,000 or 1,200 times over the full decade. We're only eight years into that decade, so there are two years remaining at the current rate, and there are still many multiples left. Moore's Law cannot accomplish this, not even close, even on its finest days. So we're lowering energy consumption.
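
As a rough sanity check on that extrapolation (our arithmetic, not Huang's): a 350-times reduction over eight years works out to roughly a halving of cost every year, and holding that rate for the remaining two years of the decade lands in the same order of magnitude as the 1,000 to 1,200 times he cites:

$$350^{1/8} \approx 2.08, \qquad 350 \times 2.08^{2} \approx 1{,}500.$$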

We're driving down the cost of training. Why are we doing that? So that we can enable the next level of breakthroughs. The reason the world can train these giant large language models without thinking twice is that the marginal cost of training has dropped by about 1,000 times in ten years. Imagine something you do dropping a thousandfold in cost over a decade. Imagine the trip from Australia to here costing US$3 rather than US$3,000 or more, and taking about two minutes instead of 24 hours. I bet you would visit Taiwan frequently; you would come merely to visit the night market and then return home. Do you understand what I am saying?

So by driving down the marginal cost, the energy consumed, and the time required, we are enabling generative AI. If we didn't do that, generative AI would still be a decade away. That's why we're doing it: to enable the next breakthrough in AI, because we believe in it.

Q: Are you concerned about the geopolitical risks associated with your investments in Taiwan?

A: We invest in Taiwan because TSMC is fantastic. I mean, not typical. TSMC has incredibly advanced technology, an exceptional working culture, and excellent flexibility. Our two businesses have been working together for about a quarter of a century, and we genuinely understand each other's rhythm. It's like working with friends: it's almost as if nothing needs to be said. You say nothing at all, and we simply understand each other.

As a result, we can build extremely complex things in large quantities and at great speed. That is not a common capability; that is TSMC. You can't just leave it to someone else to do.

The industry ecosystem here is incredible. The ecosystem surrounding TSMC, both upstream and downstream, is remarkably rich, and we've been working with it for a quarter of a century.

So there is TSMC and the ecosystem around it: Wistron, Quanta, Foxconn, Wiwynn, Inventec, Pegatron, and how many more? Asus, MSI, and Gigabyte are all fantastic firms that are sometimes overlooked and undervalued, and that is truly the case. So, if you're from Taiwan, I think you should be really proud of the ecosystem here, and if you are a Taiwanese company, you should be very proud of your accomplishments. This is a fantastic place. I am very proud of all of my partners here and incredibly grateful for everything they have done for us and the assistance they have provided over the years. And I'm delighted that this is a new beginning, building on all of the incredible expertise these wonderful companies have accumulated over the last two and a half to three decades.