NVIDIA launched TensorRT 8, the eighth generation of the company’s AI software, which slashes inference time in half for language queries — enabling developers to build the world’s best-performing search engines, ad recommendations and chatbots and offer them from the cloud to the edge.
TensorRT 8’s optimizations deliver record-setting speed for language applications, running BERT-Large, one of the world’s most widely used transformer-based models, in 1.2 milliseconds.
In the past, companies had to reduce the size of their models, which resulted in significantly less accurate results. Now, with TensorRT 8, companies can double or triple their model size to achieve dramatic improvements in accuracy.
NVIDIA Vice President of Developer Programs Greg Estes said AI models are growing exponentially more complex, and worldwide demand is surging for real-time applications that use AI.
“That makes it imperative for enterprises to deploy state-of-the-art inferencing solutions. The latest version of TensorRT introduces new capabilities that enable companies to deliver conversational AI applications to their customers with a level of quality and responsiveness that was never before possible,” said Estes.
Over the past five years, more than 350,000 developers across 27,500 companies in wide-ranging areas, including healthcare, automotive, finance and retail, have downloaded TensorRT nearly 2.5 million times. TensorRT applications can be deployed in hyperscale data centers and on embedded or automotive product platforms.
Latest Inference Innovations
In addition to transformer optimizations, TensorRT 8’s breakthroughs in AI inference are made possible through two other key features.
Sparsity is a new performance technique in NVIDIA Ampere architecture GPUs that increases efficiency, allowing developers to accelerate their neural networks by skipping the computations associated with zero-valued weights.
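Ampere’s sparsity acceleration relies on a 2:4 structured pattern, in which at most two of every four consecutive weights are nonzero, so the hardware can skip the zeroed entries. Below is a minimal NumPy sketch of that pruning pattern, for illustration only — it is not TensorRT’s implementation, and the `prune_2_4` helper name is an assumption:

```python
import numpy as np

def prune_2_4(weights):
    """Apply 2:4 structured sparsity: in every group of 4 consecutive
    weights, zero out the 2 with the smallest magnitude (illustrative
    sketch, not TensorRT's pruning tool)."""
    w = weights.reshape(-1, 4).copy()
    # Indices of the two smallest-magnitude entries in each group of 4.
    idx = np.argsort(np.abs(w), axis=1)[:, :2]
    np.put_along_axis(w, idx, 0.0, axis=1)
    return w.reshape(weights.shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 16)).astype(np.float32)
sparse_w = prune_2_4(w)
# Exactly two entries per group of four are zeroed, i.e. half the weights.
assert np.count_nonzero(sparse_w) == w.size // 2
```

In practice the pruned model is typically fine-tuned afterward to recover accuracy, and the sparse weights are then handed to the inference runtime.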
Quantization-aware training enables developers to use trained models to run inference in INT8 precision without losing accuracy. This significantly reduces compute and storage overhead for efficient inference on Tensor Cores.
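The INT8 path rests on mapping floating-point values to 8-bit integers via a calibrated scale factor. Here is a minimal NumPy sketch of symmetric per-tensor quantization and dequantization — an illustration of the underlying arithmetic, not TensorRT’s API; the function names and the simple max-based calibration are assumptions:

```python
import numpy as np

def quantize_int8(x, scale):
    """Symmetric per-tensor INT8 quantization: real value ~= scale * int8."""
    q = np.clip(np.round(x / scale), -127, 127)
    return q.astype(np.int8)

def dequantize(q, scale):
    """Map INT8 values back to approximate floating-point values."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
x = rng.normal(scale=0.5, size=1024).astype(np.float32)

# Derive the scale from the tensor's observed dynamic range (a naive max
# calibrator; real toolchains support more sophisticated calibration).
scale = np.abs(x).max() / 127.0

q = quantize_int8(x, scale)
x_hat = dequantize(q, scale)
max_err = np.abs(x - x_hat).max()
# Rounding error is bounded by half a quantization step.
assert max_err <= scale / 2 + 1e-6
```

Quantization-aware training inserts this quantize/dequantize rounding into the training loop itself, so the model learns weights that remain accurate under INT8 rounding at inference time.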
Broad Industry Support
Industry leaders have embraced TensorRT for their deep learning inference applications in conversational AI and across a range of other fields.
Hugging Face is an open-source AI leader relied on by the world’s largest AI service providers across multiple industries. The company is working closely with NVIDIA to introduce groundbreaking AI services that enable text analysis, neural search and conversational applications at scale.
Hugging Face product director Jeff Boudier said the company is closely collaborating with NVIDIA to deliver the best possible performance for state-of-the-art models on NVIDIA GPUs.
“The Hugging Face Accelerated Inference API already delivers up to 100x speedup for transformer models powered by NVIDIA GPUs. With TensorRT 8, Hugging Face achieved 1ms inference latency on BERT, and we’re excited to offer this performance to our customers later this year,” said Boudier.
GE Healthcare, a leading global medical technology, diagnostics and digital solutions innovator, is using TensorRT to help accelerate computer vision applications for ultrasound, a critical tool for the early detection of diseases. This helps clinicians deliver the highest quality of care through the company’s intelligent healthcare solutions.
GE Healthcare chief engineer of Cardiovascular Ultrasound Erik Steen said when it comes to ultrasound, clinicians spend valuable time selecting and measuring images.
“During the R&D project leading up to the Vivid Patient Care Elevated Release, we wanted to make the process more efficient by implementing automated cardiac view detection on our Vivid E95 scanner. The cardiac view recognition algorithm selects appropriate images for analysis of cardiac wall motion. TensorRT, with its real-time inference capabilities, improves the performance of the view detection algorithm and also shortened our time to market during the R&D project,” said Steen.
TensorRT 8 is now generally available and free of charge to members of the NVIDIA Developer Program. The latest versions of plug-ins, parsers and samples are also available as open source from the TensorRT GitHub repository.