Recently, NVIDIA released a new driver update (Release 532.03) for the latest GeForce RTX GPUs on Windows. This update, combined with Olive-optimized models, delivers a significant boost in AI performance. For example, running an Olive-optimized version of the Stable Diffusion text-to-image generator with the popular Automatic1111 distribution nearly doubles performance.
When optimized for GeForce RTX and NVIDIA RTX GPUs, generative AI models can run up to 5x faster than on competing devices.
Leading Windows developers, including Adobe, DxO, ON1 and Topaz, have already built on NVIDIA AI technology. NVIDIA claims that 400 Windows applications and games are optimized for RTX Tensor Cores.
To support more efficient AI inferencing, RTX GPUs will add Max-Q low-power inferencing for AI workloads. The GPU will operate at a fraction of its power during lighter inferencing tasks, then dynamically scale up to maximum AI performance when you run generative AI workloads.
“AI will be the single largest driver of innovation for Windows customers in the coming years,” said Pavan Davuluri, corporate vice president of Windows silicon and system integration at Microsoft. “By working in concert with NVIDIA on hardware and software optimizations, we’re equipping developers with a transformative, high-performance, easy-to-deploy experience.”