
Top Artificial Intelligence Books


Artificial intelligence is a powerful technology that performs many human tasks faster and more efficiently. Backed by ever-growing processing power, AI offers benefits that make it an excellent tool for modern organizations. AI-based innovations are applied across healthcare, manufacturing, banking and financial services, life sciences, and more. Given this efficiency, both newcomers and established companies are shifting their focus to incorporating AI into their operations. To keep up with the trend, professionals turn to artificial intelligence books to learn more about the technology. This blog provides a list of worthy AI books. But first, let's learn what artificial intelligence is.

What is AI?

Artificial intelligence is the simulation of human intelligence with computers. It is a technology that enables machines, especially computers, to perform human tasks. AI requires a strong foundation of hardware and software for building and training algorithms, using languages such as R and Python. AI systems ingest vast amounts of data for training, analyze this data to find patterns, and, ultimately, use those patterns to predict results and future states. Learning about artificial intelligence can be challenging without the right reference or guidance. Several books cover AI, but which one is right for you? Have a look at this list of the best AI books.
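To make the "find patterns, then predict" loop above concrete, here is a minimal, hypothetical sketch in plain Python. The data and function names are invented for illustration: it fits a straight line to a handful of data points with least squares and then predicts an unseen value.

```python
# Minimal "learn a pattern, then predict" sketch using least-squares line fitting.
def fit_line(xs, ys):
    # Compute the slope and intercept that minimize squared error.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict(model, x):
    slope, intercept = model
    return slope * x + intercept

# "Training data": hours studied vs. exam score (made-up numbers).
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 70]
model = fit_line(hours, scores)
print(round(predict(model, 6)))  # predicted score for 6 hours: 74
```

Real AI systems do the same thing at vastly larger scale, with millions of parameters instead of two, but the loop is identical: ingest data, fit a model, predict.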

List of top Artificial Intelligence Books

We have divided the list into two parts: one enlists the best artificial intelligence books for beginners, and the other is for people with some experience and ideas about AI.

Best artificial intelligence books for beginners

  1. Architects of Intelligence: The truth about AI from the people building it

Written by Martin Ford, this AI book collects in-depth, one-on-one interviews with some of the brightest minds in the AI community. Ford is a New York Times bestselling author and futurist who speaks at conferences and interviews experts to learn more about the future of AI and automation. In this book, he recounts his conversations with 23 researchers and entrepreneurs working on robotics and artificial intelligence. Yann LeCun (Facebook), Andrew Ng (AI Fund), and Stuart Russell (UC Berkeley) are a few of those who spoke the 'truth about AI' in this book. It is a refreshing departure from the usual AI books about projects and technical details.

Link for the book – Architects of Intelligence: The truth about AI from the people building it

  2. Artificial Intelligence Basics: A Non-technical Introduction

The most efficient way to understand artificial intelligence is to start with the basics. You will find several artificial intelligence books that discuss AI for beginners, but this book, 'Artificial Intelligence Basics: A Non-Technical Introduction' by Tom Taulli, teaches you how to implement AI in your organization. It also teaches you how to strategize, set realistic goals, and deal with inherent risks like bias, employee resistance, and poor data quality. The book introduces concepts such as machine learning, natural language processing, and deep learning. It is indispensable for giving you a head start in artificial intelligence.

Link to the book – Artificial Intelligence Basics: A non-technical introduction

  3. Artificial Intelligence: A Modern Approach by Peter Norvig and Stuart Russell

Once you have a basic understanding of the subfields of AI, it is logical to proceed and learn the ‘common framework’ of how it works. AI is a big field, and so is this AI textbook. The new edition focuses on a modern approach that cultivates an idea of an ‘intelligent agent,’ and AI is the study of such agents that perceive the environment and perform accordingly. Intended for undergraduates, it is one of the best books for AI that will enable you to understand the logic, probability, and continuous mathematics that form the foundation of AI systems. 

Link to the book – Artificial Intelligence

  4. Python Machine Learning: Machine Learning and Deep Learning with Python, Scikit-Learn, and TensorFlow – 3rd Edition

Artificial intelligence works by training specific algorithms to perform human tasks. If you wish to learn Python for artificial intelligence, this is one of the best books on the subject. Written by Sebastian Raschka, it explains machine learning in a conversational style. The 3rd edition is a step-by-step guide to building your own ML systems and has been updated to include TensorFlow 2.0, new Keras API features, and the latest additions to Scikit-Learn. Compared to the previous edition, it has been expanded to cover reinforcement learning and sentiment analysis, a sub-field of natural language processing.

Link to the book – Python Machine Learning

  5. Artificial Intelligence Engines: a Tutorial Introduction to the Mathematics of Deep Learning

Deep learning is a part of artificial intelligence based on artificial neural networks, a domain that enables computers to perform classification tasks directly on text, images, or sound. Studying deep learning can be challenging because of the mathematics involved, but Artificial Intelligence Engines: a Tutorial Introduction to the Mathematics of Deep Learning is an ideal AI book for beginners. The book first explains neural networks informally and then follows up with extensive mathematical analyses, so readers do not have a hard time comprehending the concepts.

Link to the book – Artificial Intelligence Engines: a tutorial introduction to the mathematics of deep learning

Read More: Top Machine Learning Books

Some other books on Artificial Intelligence

  1. Life 3.0: Being a Human in the Age of Artificial Intelligence 

Written by Max Tegmark, it is one of those books about AI that posit a hypothetical scenario wherein artificial intelligence has exceeded human intelligence and has overtaken society. While other AI textbooks talk about the benefits one can reap with artificial intelligence, ‘Life 3.0’ talks about how the technology might be able to redesign itself someday. The first few chapters talk about the origin of intelligence millions of years ago and project future developments. It discusses the issues and describes several potential outcomes that could be achieved by integrating humans and machines. Both positive and negative scenarios have been portrayed in terms of Friendly AI or an AI Apocalypse. This book will give you a different perspective on how AI impacts your life. 

Link for the book – Life 3.0: Being a human in the age of artificial intelligence

  2. Applied Artificial Intelligence: A Handbook for Business Leaders

Many enterprises are leveraging artificial intelligence in their operations. If you are a business owner looking for a practical guide to applying machine learning techniques in your organization, Applied Artificial Intelligence is one of the best books on artificial intelligence. It balances technical details with general content about the impact of artificial intelligence and machine learning. The first part provides educational background on the state of artificial intelligence today. The second part walks you through strategic and methodological steps to implement AI projects. The last section discusses real-world examples of AI applications commonly used for standard business functions.

Link for the book – Applied Artificial intelligence: A handbook for Business Leaders

  3. TensorFlow in 1 Day: Make your own Neural Network

If you are a machine learning enthusiast, TensorFlow in 1 Day: Make your own Neural Network is a great machine learning book for beginners. This book guides you through building your own neural network, a framework that works like the human brain. Written by Krishna Rungta, it aims to educate readers about TensorFlow, an open-source library for deep learning and traditional machine learning applications. The initial chapters discuss deep learning, TensorFlow, and other requisite frameworks. The later chapters cover all the essential packages you will need to build a recurrent neural network (RNN).

Link for the book – TensorFlow in 1 Day: Make your own Neural Network

  4. The Emotion Machine: Commonsense Thinking, AI, and the Future of the Human Mind

The Emotion Machine is a brilliant AI book that explains thinking and the human mind by relating them to insightful technologies. Marvin Minsky, a computer science and AI pioneer, built the book around the idea that intuitions and emotions are themselves forms of 'thinking.' The book explains how human minds think and process complex information. Once you understand the thinking process, you can build artificial intelligence systems and machines that follow the same patterns to assist you in thinking and making better decisions.

Link to the book – The Emotion Machine: Commonsense Thinking, AI, and the Future of the Human Mind

  5. Human + Machine: Reimagining Work in the Age of AI

This AI book by Paul Daugherty and H. James Wilson highlights that artificial intelligence is no longer a futuristic concept and is gaining momentum. In Human + Machine, you will read about the paradigm shift of revolutionizing businesses with AI and making them more fluid and adaptive. If you are looking to incorporate such technologies in your workplace, this book will convince you that AI transforms businesses into hybrid organizations that leverage technology for real-time service provision and management.

Link to the book – Human + Machine: Reimagining Work in the Age of AI


Top NVIDIA GTC Announcements 2022


NVIDIA is a leading company in artificial intelligence and computing technologies. Founded by Jensen Huang in 1993, the company has pioneered accelerated computing to overcome challenges that many firms face. NVIDIA has hosted the GTC (GPU Technology Conference) since 2009. The conference brings together researchers, pioneers, developers, and IT professionals to discuss the latest technologies.

The NVIDIA GTC 2022 event ran from September 19 to September 22 and hosted several AI experts and technology companies. CEO Jensen Huang addressed important topics and made announcements about AI, Omniverse, GPU chips, NVIDIA's partnerships, and more.

DLSS 3

DLSS (Deep Learning Super Sampling) is a neural graphics technology that scales performance by generating new frames and displaying higher resolution. The technology reconstructs frames from temporal data, natively rendering only a fraction of the pixels to maximize frame rate. At GTC, NVIDIA announced the third generation, DLSS 3. The framework combines three innovations, DLSS Super Resolution, DLSS Frame Generation, and NVIDIA Reflex, into one. The updated technology utilizes the new Optical Flow Accelerator (OFA) in the Ada Lovelace architecture. It is a substantial improvement, as the technology can now predict entire frames, not just pixels. Another significant improvement is its ability to boost frame rates in CPU-limited games, for example from 64 FPS to 135 FPS. Try the new Portal with RTX game to experience the improvement credited to DLSS 3.
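DLSS 3's Frame Generation relies on an optical flow field plus a neural network, neither of which is shown here; as a toy stand-in, the following sketch only illustrates the basic idea of doubling frame rate by inserting synthesized frames between rendered ones, using naive pixel averaging instead of learned interpolation.

```python
# Toy stand-in for frame generation: synthesize an intermediate frame by
# averaging two consecutive frames pixel-by-pixel. Real DLSS 3 instead uses
# an optical flow field plus a neural network; this only demonstrates how
# inserting generated frames raises the displayed frame rate.
def interpolate_frame(frame_a, frame_b):
    return [[(a + b) // 2 for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

def double_frame_rate(frames):
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)                      # the rendered frame
        out.append(interpolate_frame(a, b))  # the generated in-between frame
    out.append(frames[-1])
    return out

frames = [[[0, 0]], [[10, 20]], [[20, 40]]]  # three tiny 1x2 "frames"
print(len(double_frame_rate(frames)))         # 3 frames become 5
```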

GeForce RTX 40 Series GPUs

Two years after the RTX 30 series, NVIDIA announced the brand-new RTX 40 GPU series at the GTC event. The RTX 40 series is built on the company's new Ada Lovelace architecture and delivers a significant performance improvement. The GPUs will enhance AI-driven graphics and content creation. With up to 24GB of GDDR6X memory, NVIDIA claims the 40 series GPUs are two to four times faster than the previous generation and can deliver up to 100 FPS in 4K gaming. The GPUs are also compatible with DLSS 3, NVIDIA's new deep learning super sampling framework.

NVIDIA Drive Thor

Thor is a next-generation system-on-a-chip that centralizes all autonomous vehicle functions on a single computer for better security. NVIDIA unveiled the DRIVE Thor superchip, built on the company's latest GPU and CPU innovations to deliver world-class performance. Thor is a significant upgrade over DRIVE Orin, the company's previous platform for autonomous vehicle technology. The chip unifies several previously distributed functions, such as the digital instrument cluster, infotainment, assisted driving, and parking, enabling faster software iteration and more efficient development.

LLMs and H100 GPUs in volume production

Attendees of the GTC 2022 event also heard about the new large language models (LLMs) based on deep learning. Huang explained that LLMs are among the most vital AI models running the digital economy: they are the engines of e-commerce, digital advertising, and search. He then announced the new NeMo LLM service, which makes AI more accessible by adding a conversational layer to the models. The service takes pre-trained models like GPT-3 or Megatron and constructs a framework around them, saving model training time.

These enormous models require significant computational power, for which Huang announced the NVIDIA H100 Tensor Core GPU with a next-generation Transformer Engine. The chips are in full production and will begin shipping soon.

NVIDIA DRIVE Concierge and DRIVE Chauffeur

The GTC keynote also covered in detail DRIVE Concierge and DRIVE Chauffeur, AI platforms aimed at making driving hassle-free by transforming the digital experience inside the car. With DRIVE Concierge, people inside the vehicle have continuous access to real-time conversational AI thanks to DRIVE IX and Omniverse Avatar. Omniverse Avatar enables Concierge to function like a digital assistant, performing tasks like making calls, booking reservations, and sending alerts. The company also posted a demo video of Concierge, showcasing the assistant on the center dashboard screen helping the driver reach a destination.

Concierge is integrated with Chauffeur, an AI-assisted driving framework that provides high-quality 4D visualization with low latency and a 360-degree view, so the driver can relax while Chauffeur drives. Chauffeur is based on the NVIDIA DRIVE AV SDK and can tackle urban and highway traffic while following safety rules.

Jetson Orin Nano for Robotics

Several GTC announcements cater to robotics development. As per Huang, robotics computers are "the newest type of computers," enabling machines to perceive and move through the world. He announced the new Jetson Orin Nano, a tiny robotics computer that brings the DRIVE Orin technology to the market. It is NVIDIA's second-generation processor for robotics and is 80x faster than the previous Jetson Nano. Jetson Orin Nano runs the Isaac robotics stack, based on the GPU-accelerated ROS 2 framework, and also supports NVIDIA Isaac Sim, a robotics simulation platform. Huang also announced that containers for the NVIDIA Isaac platform are accessible on the AWS marketplace for robotics developers who use AWS RoboMaker.

Omniverse Cloud

The GTC 2022 event also unveiled NVIDIA Omniverse Cloud, the company's first SaaS (software-as-a-service) and IaaS (infrastructure-as-a-service) offering. Omniverse is a complete suite of cloud-native services spanning metaverse applications, robotics, and autonomous vehicle simulation. Omniverse Cloud is powered by the NVIDIA Graphics Delivery Network (GDN), a distributed data-center network responsible for delivering high-performance, low-latency graphics in the company's cloud gaming services. Omniverse Cloud allows people to experience collaborative 3D workflows without local computing infrastructure.

Some other GTC announcements:

  • NVIDIA announced the second-gen NVIDIA OVX powered by the L40 GPU and designed for building complex industrial applications. The new OVX-fitted systems will be shipped to companies including Lenovo, Supermicro, and Inspur by the first quarter of 2023.
  • NVIDIA also announced a partnership with The Broad Institute of MIT and Harvard to bring GPU-accelerated Clara Parabricks software to the Terra biomedical data platform. The company says the partnership will contribute its deep learning models for identifying disease-associated genetic variants.
  • NVIDIA announced another collaboration, with Booz Allen, to accelerate Booz Allen's AI-based cybersecurity platform, built on NVIDIA's Morpheus framework, with GPU computing.

You can refer to the GTC 2022 Keynote for more information.


Crypto mining data center Compute North filed for bankruptcy


Compute North, one of the largest operators of crypto-mining data centers, filed for bankruptcy and revealed that its CEO stepped down as the rout in cryptocurrency prices weighs on the industry.

According to a filing, the company filed for Chapter 11 in the U.S. Bankruptcy Court for the Southern District of Texas.

Compute North in February announced a capital raise of $385 million, consisting of an $85 million Series C equity round and $300 million in debt financing. But it fell into bankruptcy as miners struggled to survive amid slumping bitcoin (BTC) prices, rising power costs, and record difficulty in mining bitcoin. The filing is likely to have negative implications for the industry. Compute North is one of the largest data center providers for miners and has multiple deals with other larger mining companies.

Read More: Russia To Set Rules For Crypto Cross-Border Payments By December

“The Company has initiated voluntary Chapter 11 proceedings to provide the opportunity to stabilize its business and implement a comprehensive restructuring process that will enable us to continue servicing our customers and partners and make the necessary investments to achieve our strategic objectives,” a spokesperson told CoinDesk.

The spokesperson added that CEO Dave Perrill stepped down earlier this month but will continue to serve on the board. Drake Harvey, chief operating officer for the last year, has taken the role of president at Compute North, the spokesperson said.


Samsung launches Samsung Innovation Campus in India to upskill youth in AI, IoT, Big Data, Coding and Programming


Samsung has launched its global CSR program, Samsung Innovation Campus, in India to create industry-relevant skills and job opportunities for youth in technology domains such as Artificial Intelligence, the Internet of Things, Big Data, and Coding & Programming. With this, Samsung is partnering in India's growth story and strengthening its commitment to the Government's Skill India initiative.

The program aims to upskill over 3,000 unemployed youth from 18-25 years of age in future technologies and enhance their employability in the pilot phase. These are critical technology skills for the Fourth Industrial Revolution. Samsung Innovation Campus will also take Samsung a step ahead in its vision of ‘Powering Digital India’ as the country’s most vital partner.

To execute the program, Samsung, India’s largest smartphone and consumer electronics brand, has partnered with India’s premier skill development organization, the Electronics Sector Skills Council of India (ESSCI), a National Skill Development Corporation (NSDC) approved entity. ESSCI will execute the program through its nationwide network of approved training and education partners.

Read More: Samsung Reveals A Second Data Breach This Year: Are You One Of The Affected?

During the program, participants will receive instructor-led blended classroom and online training through ESSCI's approved training and education partners across the country. They will complete hands-on capstone project work in their selected technology areas from among the Internet of Things, Big Data, Artificial Intelligence, and Coding & Programming.

They will also be given soft skills training to enhance their employability in relevant organizations. The participants will be mobilized through ESSCI’s education and training partners across India.

Those opting for the AI course will undergo 270 hours of theory training and 80 hours of project work. Those doing the IoT or the Big Data course will receive 160 hours of training and complete 80 hours of project work. Participants taking the Coding & Programming course will do 80 hours of training and be part of a 4-day Hackathon.


Meta sued for bypassing Apple privacy features


Meta has been sued for allegedly building a secret workaround that enables the company to bypass privacy features Apple launched last year to keep iPhone users from having their internet activity tracked.

Two Meta users filed the lawsuit in San Francisco, where the same claim was made last week, accusing the company of skirting Apple’s 2021 privacy rules and violating state and federal laws limiting the unauthorized collection of personal data.

The accusations are based on a study published by Felix Krause last August. Krause, a former Google employee, argued that Meta exploits the “in-app browser” — a feature that allows Facebook and Instagram users to visit a third-party website without leaving the platform — to “inject” a tracking code that allows the monitoring of all user interactions.

Read More: Apple’s Privacy Changes Break The Facebook-Google Advertising Monopoly In The Online Search Market

The practice, called JavaScript injection, which in most cases is considered a type of malicious attack, enables the tech giant to follow users around the web after they click links in the Facebook and Instagram apps.

“This allows Meta to intercept, monitor and record its users’ interactions and communications with third parties, providing data to Meta that it aggregates, analyzes and uses to boost its advertising revenue,” the complaint reads.

In response to the allegation, Meta admitted that the Facebook app tracks browser activity but refuted claims that user data was being unlawfully collected.

The lawsuit contends that Meta’s collection of user information via the Facebook and Instagram apps enables the company to get around Apple’s privacy regulations, which require all third-party applications to acquire user consent before tracking users’ online and offline activity.

Starting with iOS 14.5, Apple introduced App Tracking Transparency, which lets users choose whether to allow app tracking when they first open an app. The feature, according to Meta, has cost the company more than $10 billion in revenue so far.


Tesla recalls nearly 1.1 million cars in the US 


Tesla is recalling nearly 1.1 million cars in the US because the windows might close too fast and pinch people's fingers. Documents produced by American regulators explain that the windows may not react correctly after detecting an obstruction.

The National Highway Traffic Safety Administration said that it is a safety-standards violation. Tesla says a software update will fix the problem. The world's largest electric-vehicle manufacturer has had repeated run-ins with federal safety regulators.

Previous recalls have been due to rear-view cameras, bonnet latches, seat-belt reminders, and sound-system software. Tesla chief executive Elon Musk criticized the term “recall,” tweeting: “The terminology is outdated & inaccurate. This is a small over-the-air software update. To the best of our knowledge, there have been no injuries.”

Read More: Germany’s KBA Finds Abnormalities In Tesla’s Autopilot Function 

The latest recall includes all four Tesla models, specifically 2017-22 Model 3 sedans and some 2020-21 Model Y SUVs, Model X SUVs, and Model S sedans. Tesla detected the problem with the automatic windows during production testing in August.

Owners will be notified by letter from 15th November. Company documents indicate vehicles made after 13th September already have the updated software to remedy the issue. Tesla said it was unaware of any warranty claims, deaths, crashes, or injuries related to the recall.


Whisper: OpenAI’s Latest Bet On Multilingual Automatic Speech Recognition


OpenAI has released Whisper, an open-source automatic speech recognition system that the company says allows for “robust” transcription in various languages as well as translation from those languages into English. 

With the increasing use of smartphones and voice assistant devices, multilingual speech recognition is the need of the hour. The demand for multilingual automatic speech recognition that can handle linguistic and dialectal differences grows as globalization progresses. While most speech recognition tools cater to English-speaking users, English is not the most spoken language in the world, which means the lack of language coverage can create a barrier to adoption.

In addition, mixing more than one language in conversation is common where individuals are bilingual or trilingual, which makes the development of multilingual models a reasonable goal. It is also quite possible that many languages in a multilingual setting share cultural heritage, resulting in similar phonetic and semantic characteristics. Moreover, the absence of a well-known multilingual voice recognition system draws attention to a fascinating field of speech recognition research that monolingual systems have long dominated.

The researchers trained Whisper using 680,000 hours of multilingual and multitask supervised data acquired from the web. According to OpenAI’s blog post, using such a large and diverse dataset improves the system’s ability to adapt to accents, background noise, and technical language.   

While variance in audio quality can aid the robust training of a model, variability in transcript quality is not as advantageous. Initial examination of the raw data revealed a large number of substandard transcripts, so OpenAI created several automatic filtering techniques to enhance transcript quality. The company also noted that many online transcripts were produced by other automatic speech recognition systems rather than by actual humans, and a recent study has demonstrated that training on datasets mixing human- and machine-generated data can considerably harm translation system performance. Therefore, OpenAI developed numerous heuristics to find and exclude machine-generated transcripts from the training dataset to prevent the system from picking up "transcript-ese."
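OpenAI's exact heuristics are not spelled out here, but machine-generated transcripts often betray themselves through missing punctuation or all-uppercase/all-lowercase text. The sketch below is a hypothetical filter in that spirit, not OpenAI's actual pipeline:

```python
# Hypothetical heuristics for spotting machine-generated transcripts:
# ASR output frequently lacks punctuation or normal mixed casing.
import string

def looks_machine_generated(transcript):
    has_punct = any(ch in string.punctuation for ch in transcript)
    letters = [ch for ch in transcript if ch.isalpha()]
    all_upper = bool(letters) and all(ch.isupper() for ch in letters)
    all_lower = bool(letters) and all(ch.islower() for ch in letters)
    # No punctuation, or uniform casing, suggests ASR output.
    return (not has_punct) or all_upper or all_lower

samples = [
    "HELLO EVERYONE WELCOME BACK",    # all caps, no punctuation: likely ASR
    "hello everyone welcome back",    # no punctuation or casing either
    "Hello everyone, welcome back.",  # human-looking transcript
]
kept = [s for s in samples if not looks_machine_generated(s)]
print(kept)  # only the human-looking transcript survives
```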

To validate that the spoken language matches the language of the transcript (as detected by CLD2), OpenAI additionally employed an audio language detector, created by fine-tuning a prototype model on VoxLingua107 using a prototype version of the dataset. If the two languages did not match, the (audio, transcript) pair was excluded from the speech recognition training examples.
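The filtering step just described boils down to keeping only pairs where the two detectors agree. Here is a hypothetical sketch of that logic; the detector functions are stand-ins for the audio language detector and CLD2, and the data layout is invented:

```python
# Keep an (audio, transcript) pair for speech recognition training only when
# the language detected from the audio matches the language detected from the
# text. Both detectors are injected so real models could be swapped in.
def filter_pairs(pairs, detect_audio_lang, detect_text_lang):
    kept = []
    for audio, transcript in pairs:
        if detect_audio_lang(audio) == detect_text_lang(transcript):
            kept.append((audio, transcript))
    return kept

# Toy data: pretend each item carries its true language as a field.
pairs = [({"lang": "en"}, {"lang": "en", "text": "hello"}),
         ({"lang": "de"}, {"lang": "en", "text": "hello"})]
kept = filter_pairs(pairs,
                    detect_audio_lang=lambda a: a["lang"],
                    detect_text_lang=lambda t: t["lang"])
print(len(kept))  # 1: the German-audio/English-text pair is dropped
```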

Read More: OpenAI’s DALL-E now offers Outpainting Feature to Extend Existing Images and Artworks

OpenAI selected an encoder-decoder Transformer for the Whisper model's architecture because it has been proven to scale efficiently. All input audio is split into 30-second chunks and re-sampled to 16,000 Hz, and an 80-channel log-magnitude Mel spectrogram representation is computed over 25-millisecond windows with a stride of 10 milliseconds. The encoder processes this input representation with a 'small stem' composed of two convolution layers with a filter width of 3 and the GELU activation function. Sinusoidal position embeddings are added to the stem's output before the encoder Transformer blocks are applied.
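The chunking and framing arithmetic above can be sketched in plain Python. This illustrates only the stated numbers (16,000 Hz sample rate, 30-second chunks, 25 ms windows, 10 ms stride), not OpenAI's actual preprocessing code, and omits the Mel spectrogram computation itself:

```python
# Front-end arithmetic for Whisper-style audio preprocessing (sketch).
SAMPLE_RATE = 16_000
CHUNK_SECONDS = 30
WINDOW = int(0.025 * SAMPLE_RATE)  # 400 samples per 25 ms window
STRIDE = int(0.010 * SAMPLE_RATE)  # 160 samples per 10 ms stride

def chunk_audio(samples):
    # Split audio into 30-second chunks (the last one may be shorter).
    size = CHUNK_SECONDS * SAMPLE_RATE
    return [samples[i:i + size] for i in range(0, len(samples), size)]

def frame_count(chunk_len):
    # Number of full 25 ms windows that fit when sliding by the 10 ms stride.
    return 1 + (chunk_len - WINDOW) // STRIDE

audio = [0.0] * (45 * SAMPLE_RATE)  # 45 seconds of silence as dummy input
chunks = chunk_audio(audio)
print(len(chunks), frame_count(len(chunks[0])))  # 2 chunks; 2998 frames in the first
```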

[Figure: Whisper model architecture. Source: OpenAI]

The Transformer uses pre-activation residual blocks, and a final layer normalization is applied to the encoder output. The decoder predicts the corresponding text caption using learned position embeddings, tied input-output token representations, and special tokens that instruct the single model to carry out various tasks. These tasks include language identification, phrase-level timestamping, multilingual speech transcription, and to-English speech translation. The encoder and decoder have the same width and number of Transformer blocks.

Whisper aims to provide an integrated, resilient speech processing system that operates consistently without requiring dataset-specific fine-tuning to achieve high-quality results on particular distributions. OpenAI examined Whisper's capacity to generalize across domains, tasks, and languages using an extensive collection of existing speech processing datasets. Rather than using the conventional evaluation protocol for these datasets, which includes both a train and a test split, the researchers tested Whisper's zero-shot performance and found it far more resilient, making on average 55% fewer errors (relative) when evaluated on other speech recognition datasets. It outperforms the supervised SOTA on CoVoST2-to-English translation zero-shot and can transcribe speech with 50% fewer mistakes than prior models. However, it does not outperform models that specialize in LibriSpeech, a competitive benchmark in speech recognition, because it was trained on a broad and diverse dataset rather than being tailored to any particular one.
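Claims like "55% fewer errors" are stated in terms of word error rate (WER), the standard speech recognition metric: word-level edit distance between the reference and the hypothesis, divided by the reference length. A minimal sketch:

```python
# Word error rate: Levenshtein distance over words / reference word count.
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution ("the" -> "a") out of six reference words: WER = 1/6.
print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))
```

A "55% relative error reduction" means the new system's WER is 45% of the baseline's on the same test set.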

OpenAI claims that about a third of Whisper’s dataset is non-English. Even though this is an impressive feat, there is a strong possibility of a data imbalance if the training data have different quantities of transcribed data available for each language because of the disproportion in the distribution of speakers in different languages. As a result, languages over-represented in the training dataset could have a greater effect on a multilingual automated speech recognition system. Given that the majority of languages have less than 1000 hours of training data, OpenAI hopes to make an intensive effort to increase the amount of data for these more uncommon languages to bring significant improvement in the average speech recognition performance with only a modest increase in the size of the training dataset.

The OpenAI team speculates that optimizing Whisper models for decoding performance more directly via reinforcement learning and fine-tuning them on a high-quality supervised dataset could also minimize long-form transcribing errors. The researchers only examined Whisper’s zero-shot transfer performance in this research since they were more interested in the resilience characteristics of speech processing systems. Although this environment is essential for research since it reflects overall dependability, OpenAI believes it is possible that findings can be tweaked further for many areas where high-quality supervised speech data are provided.

For now, Whisper’s multilingual capabilities would be a huge asset in the fields of international trade, healthcare, education, and diplomacy. The company decided to release Whisper and the inference code as open-source software so that they could be used to develop practical applications and to do more research on effective speech processing.


NVIDIA announces next-generation automotive-grade chip Drive Thor
Nvidia is gearing up to launch Drive Thor, its next-generation automotive-grade chip that the company says will be able to bring together a wide range of in-car technologies, from driver monitoring systems and automated driving features to streaming Netflix in the back for kids.

Thor, which goes into production in 2025, is notable not just because it is a step up from the Drive Orin chip; it is also taking Drive Atlan’s spot in the lineup.

Founder and CEO Jensen Huang said on Tuesday at the company’s GTC event that Nvidia is scrapping the Drive Atlan system on chip ahead of schedule for Thor. In a race to develop bigger and better chips, Nvidia is going with Thor, which, according to the company, will deliver twice the compute and throughput at 2,000 teraflops of performance.

Read More: NVIDIA Unveils New GeForce Series Of Graphics Chip That Uses Enhanced AI

Nvidia’s vice president of automotive, Danny Shapiro, said that if you look at a car today, advanced driver assistance systems, driver monitoring, camera mirrors, the digital instrument cluster, parking, and infotainment are all different computers distributed throughout the vehicle. He added that in 2025, these functions will no longer be separate computers; rather, Drive Thor will enable manufacturers to efficiently consolidate them into a single system, reducing overall system cost.

Nvidia already has several automotive customers building software-defined fleets using Drive chips. For example, Volvo announced in January at the annual CES tech conference that Drive Orin would power its new automated driving features. 

The automaker said it would power its infotainment system with Qualcomm’s Snapdragon chip. It’s precisely this space-sharing with competitors that likely drove Nvidia to create a more powerful chip.


NVIDIA launches GeForce RTX 4080 and 4090 desktop GPUs
Nvidia has revealed its brand-new RTX 40 series GPUs at GTC 2022, two years after the RTX 30 series. The Nvidia GeForce RTX 40 series features RTX 4080 and RTX 4090 desktop GPUs based on Nvidia’s new Ada Lovelace architecture. 

The new graphics cards will deliver a massive leap in performance, a more immersive gaming experience, AI-powered graphics, and faster content creation workflows. The Nvidia GeForce RTX 4080 and 4090 GPUs will significantly improve the gameplay experience, thanks to improved DLSS support and powerful hardware.

The Nvidia GeForce RTX 4090 GPU comes with 24GB of GDDR6X memory, and the company claims it is two to four times faster than its previous-gen flagship GPU, RTX 3090 Ti. Nvidia also says that RTX 4090 can deliver up to 100 FPS in 4K games while consuming 450W power, just like the RTX 3090 Ti. The GPU will support Nvidia’s new deep-learning super-sampling technique DLSS 3, which will improve performance even more.

Read More: Nvidia Announces Omniverse Cloud For Metaverse At GTC 2022

The Nvidia GeForce RTX 4080 will be launched in two configurations. The first one features 16GB GDDR6X memory, 9,728 CUDA cores, and 76 RT cores. The second comes with 12GB GDDR6X memory, 7,680 CUDA cores, and 60 RT cores. 

The 12GB variant also has slower memory, with 21 Gbps throughput over a 192-bit bus, compared to 22 Gbps throughput over a 256-bit bus on the 16GB variant. Nvidia says the 16GB RTX 4080 variant is up to three times faster than the RTX 3080 Ti while using DLSS 3. Meanwhile, the 12GB RTX 4080 variant is faster than the RTX 3090 Ti and consumes less power when using DLSS 3.

The Nvidia GeForce RTX 4090 is set to launch on October 12th, and the RTX 4080 will launch in November. After some confusion over Indian pricing, Nvidia has finally shared the official prices of these GPUs for the country: GeForce RTX 4090 – Rs 1,55,000, GeForce RTX 4080 (16GB) – Rs 1,16,000, and GeForce RTX 4080 (12GB) – Rs 87,000.


NVIDIA unveils new GeForce series of graphics chip that uses enhanced AI
Nvidia, the leading semiconductor maker in the US, unveiled a new type of graphics chip at the GPU Technology Conference (GTC) 2022 that uses enhanced artificial intelligence to create more realistic images in games. Earlier this year, NVIDIA announced the launch of its new range of RTX professional GPUs during its Spring 2022 GTC.

Codenamed Ada Lovelace, the new architecture underpins Nvidia’s GeForce RTX 40 series of graphics cards, which was unveiled by Chief Executive Officer Jensen Huang at an event on Tuesday. The top-of-the-line RTX 4090 will cost $1,599 and go on sale around October 12. Other versions launching in November will retail for US $899 and $1,199.

The high-end version of the chip consists of 76 billion transistors and is accompanied by 24GB of onboard memory on the RTX 4090, making it one of the most advanced in the industry. Nvidia relies on Taiwan Semiconductor Manufacturing Co. to produce the processor on its 4N process, while Micron Technology supplies the memory. Nvidia had used Samsung Electronics to make Ada’s predecessor.

Read More: Nvidia Announces Omniverse Cloud For Metaverse At GTC 2022

The new technology promises to speed up the rate at which cards produce images: the traditional procedure calculates where some pixels sit on the screen, while AI simulates the others simultaneously. It continues a shift that Nvidia is pioneering toward making images appear more natural by generating them from calculations of the paths of individual light rays.

The approach could give customers a new reason to upgrade their technology — which Nvidia could use now. The chipmaker is facing a steep slowdown in demand for PC components. Last month, Nvidia reported much lower quarterly sales than it originally predicted and gave a disappointing forecast.

Nvidia has been forced to deliberately slow down shipments to make sure its customers — primarily makers of graphics cards sold as add-ins for high-end computers — work through their stockpiles of unused inventory. Huang has said that process should be completed by the end of the year.
