Friday, December 20, 2024

TensorFlow 2.9 now has Intel oneDNN AI Optimizations Enabled by Default 

oneDNN enables TensorFlow optimizations that speed up performance-critical AI operations such as convolution, matrix multiplication, and batch normalization.

TensorFlow has unveiled version 2.9, just three months after the release of version 2.8. The key highlights of this version are performance improvements via oneDNN and the introduction of DTensor, a new model distribution API that allows for seamless data and model parallelism.

oneDNN is an open-source, cross-platform performance library of deep learning building blocks aimed at developers of deep learning applications and frameworks such as TensorFlow. The oneDNN library first became available as an opt-in preview feature in TensorFlow 2.5, which launched in May 2021. After a year of testing and positive feedback from the community, the oneDNN optimizations were switched on by default in the TensorFlow 2.9 upgrade, with reported performance gains of up to 4x.

The optimizations are now enabled by default in all Linux x86 packages, as well as on CPUs with neural-network-focused features, including AVX512 VNNI, AVX512 BF16, AMX, and others, found on Intel Cascade Lake and newer CPUs.
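For readers who want to compare results with and without the oneDNN code path, TensorFlow honors the `TF_ENABLE_ONEDNN_OPTS` environment variable. A minimal sketch follows; note that the variable must be set before `import tensorflow`, since the runtime reads it once at import time:

```python
import os

# Toggle Intel oneDNN optimizations via the documented environment
# variable. "0" disables the oneDNN kernels, "1" enables them.
# This must be set BEFORE TensorFlow is imported.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"

# import tensorflow as tf   # would now run without the oneDNN kernels
# On 2.9+ builds where oneDNN is on by default, TensorFlow prints a
# one-line notice at import time when the optimizations are active.

print(os.environ["TF_ENABLE_ONEDNN_OPTS"])
```

Running the same training or inference benchmark twice, once with the variable set to "0" and once to "1", is a straightforward way to measure the speedup on a given workload.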

The promise of oneDNN for organizations and data scientists, according to Intel, is a considerable acceleration of up to 3x for AI operations using TensorFlow, helping data scientists shorten model execution time. Intel refers to this performance enhancement as "software AI acceleration" and says it has a meaningful impact across several areas, including natural language processing, image and object recognition, autonomous vehicles, fraud detection, and medical diagnosis and treatment.

On the latest 2nd and 3rd Generation Intel Xeon Scalable processors, oneDNN additionally supports the int8 and bfloat16 data types to improve the performance of compute-intensive training and inference. These improvements can decrease the time it takes to run a model by up to 4x for int8 and 2x for bfloat16. The lower-precision data types also let developers extract more performance from AI acceleration features such as Intel Deep Learning Boost, the company adds. The oneDNN optimizations are also available in other TensorFlow-based applications, such as TensorFlow Extended, TensorFlow Hub, and TensorFlow Serving.
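The headline speedups track the narrower data widths: an int8 value occupies a quarter of the memory of a float32, and a bfloat16 half, so each vector instruction can move and process correspondingly more elements. A quick stdlib-only illustration of the byte sizes (bfloat16 is not a built-in Python type, so its 2-byte width is stated directly):

```python
import struct

# Byte widths of the data types involved. float32 and int8 sizes come
# from the struct module; bfloat16 (not native to Python) is 2 bytes:
# it keeps float32's 8 exponent bits but truncates the mantissa.
float32_bytes = struct.calcsize("f")  # 4
int8_bytes = struct.calcsize("b")     # 1
bfloat16_bytes = 2

# The "up to 4x (int8) and 2x (bfloat16)" figures mirror these ratios:
print(float32_bytes // int8_bytes)      # 4 int8 values fit in one float32 slot
print(float32_bytes // bfloat16_bytes)  # 2 bfloat16 values fit in one float32 slot
```

Real-world gains are workload-dependent and usually land below these theoretical ratios, since not every operation is bound by data movement.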


With over 100 million downloads, TensorFlow is one of the most popular AI application development platforms in the world. TensorFlow with Intel optimizations is available as a standalone component and as part of the Intel® oneAPI AI Analytics Toolkit, and it’s already being used in a variety of industries, including the Google Health project, animation filmmaking at Laika Studios, language translation at Lilt, natural language processing at IBM Watson, and more. Several other prominent open-source deep learning frameworks, including PyTorch and Apache MXNet, as well as machine learning frameworks like Scikit-learn and XGBoost, already benefit from Intel software upgrades via oneDNN and other oneAPI libraries.


Preetipadma K
Preeti is an Artificial Intelligence aficionado and a geek at heart. When she is not busy reading about the latest tech stories, she will be binge-watching Netflix or F1 races!
