NVIDIA built its reputation on gaming GPUs, but it is now an undisputed leader in A.I. as well, serving as one of the chief suppliers of the hardware behind trending advancements such as ChatGPT. Speaking to CNBC for a story about how his company’s big bet on A.I. is paying off, NVIDIA CEO Jensen Huang suggested he had seen the revolution coming from far away, telling the publication that the green team went all-in on artificial intelligence over a decade ago. Huang showed some modesty, allowing that lucky timing played a part, but NVIDIA is certainly enjoying the business it’s getting these days, which includes supplying thousands of pricey A100 Tensor Core GPUs to other tech giants, Microsoft among them.
“We had the good wisdom to go put the whole company behind it,” Huang said. “We saw early on, about a decade or so ago, that this way of doing software could change everything. And we changed the company from the bottom all the way to the top and sideways. Every chip that we made was focused on artificial intelligence.”
“We just believed that someday something new would happen, and the rest of it requires some serendipity,” Huang added when asked whether his company’s fortunes are the result of luck or prescience. “It wasn’t foresight. The foresight was accelerated computing.”
From a CNBC report:
[…] tech companies scrambling to compete with ChatGPT are publicly boasting about how many of Nvidia’s roughly $10,000 A100s they have. Microsoft said the supercomputer developed for OpenAI used 10,000 of them.
“It’s very easy to use their products and add more computing capacity,” said Vivek Arya, semiconductor analyst for Bank of America Securities. “Computing capacity is basically the currency of the valley right now.”
Huang showed us the company’s next-generation system, the H100, which has already started to ship. The H stands for Hopper.
“What makes Hopper really amazing is this new type of processing called transformer engine,” Huang said, while holding a 50-pound server board. “The transformer engine is the T of GPT, generative pre-trained transformer. This is the world’s first computer designed to process transformers at enormous scale. So large language models are going to be much, much faster and much more cost effective.”
Huang said he “hand-delivered” to ChatGPT maker OpenAI “the world’s very first AI supercomputer.”