According to NVIDIA, inverse rendering is a technique that uses artificial intelligence to mimic how light behaves in the real world, allowing researchers to reconstruct a 3D scene from a collection of 2D photographs taken from various angles.
NVIDIA researchers have developed an ultra-fast neural network training and rendering system that can perform inverse rendering in a matter of seconds. NVIDIA applied this method to neural radiance fields (NeRF), a popular new technique, to create Instant NeRF.
Vice President of Graphics Research at NVIDIA, David Luebke, said, “If traditional 3D representations like polygonal meshes are akin to vector images, NeRFs are like bitmap images: they densely capture the way light radiates from an object or within a scene.”
He further added that Instant NeRF could be as essential to 3D as digital cameras and JPEG compression were to 2D photography, vastly enhancing the speed, convenience, and reach of 3D capture and sharing.
The company claims that this new artificial intelligence-powered solution is the quickest NeRF technology to date, with speedups of up to 1,000x in some circumstances. The model can render the final 3D scene in just a few seconds after a short duration of training on a few dozen still photographs.
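To make the concept concrete, below is a minimal, self-contained sketch of the core NeRF idea: a small network maps a 3D position to color and density, and a pixel is rendered by integrating samples along a camera ray. This is an illustration of the general technique only, not NVIDIA's Instant NeRF implementation; the tiny untrained MLP, layer sizes, and sampling settings here are all invented for the example.

```python
# Minimal sketch of the NeRF idea: position -> (color, density), then
# volume rendering along a ray. Illustrative only; the MLP is untrained.
import numpy as np

def positional_encoding(x, num_freqs=6):
    """Map coordinates to sin/cos features so an MLP can fit fine detail."""
    feats = [x]
    for i in range(num_freqs):
        feats += [np.sin(2.0**i * x), np.cos(2.0**i * x)]
    return np.concatenate(feats, axis=-1)

def tiny_mlp(feats, rng):
    """Stand-in for the trained radiance field: features -> (RGB, density)."""
    w1 = rng.normal(size=(feats.shape[-1], 64))
    w2 = rng.normal(size=(64, 4))
    h = np.maximum(feats @ w1, 0.0)           # ReLU hidden layer
    out = h @ w2
    rgb = 1 / (1 + np.exp(-out[..., :3]))     # colors squashed into [0, 1]
    sigma = np.maximum(out[..., 3], 0.0)      # non-negative density
    return rgb, sigma

def render_ray(origin, direction, rng, num_samples=64, near=0.0, far=4.0):
    """Classic volume rendering: alpha-composite samples along the ray."""
    t = np.linspace(near, far, num_samples)
    pts = origin + t[:, None] * direction                 # points on the ray
    rgb, sigma = tiny_mlp(positional_encoding(pts), rng)
    delta = (far - near) / num_samples
    alpha = 1.0 - np.exp(-sigma * delta)                  # opacity per sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)           # final pixel color

rng = np.random.default_rng(0)
print(render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), rng))
```

In NVIDIA's published approach, much of the speedup comes from replacing the heavy frequency encoding and large MLP sketched here with a compact learned multiresolution hash encoding and a much smaller network; the rendering logic is conceptually the same.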
Apart from the model, NVIDIA unveiled multiple other technologies at the GTC event, including new GPUs, a CPU, and autonomous driving tech.
The government of Telangana has announced the launch of its new Forest AI Grand Challenge program under the Telangana AI Mission (T-AIM) initiative.
The Telangana government has collaborated with Capgemini, a consulting, technology, and digital transformation services company, to offer the new initiative, which aims to modernize the forest department through innovation.
According to the plan, this program will allow the forest department to collect detailed, large-scale information regarding the estimated number, location, and range-specific behavior of animals in the forest.
Participants will need to develop novel artificial intelligence-powered solutions to help the forest department with various tasks, most importantly its wildlife conservation efforts.
“There is tremendous potential for governments and innovators to collaborate through open innovation. I am quite confident that this initiative of T-AIM will help the Telangana Forest Department implement novel AI-based solutions for wildlife conservation,” said Jayesh Ranjan, Principal Secretary, Government of Telangana, during the launch of this one-of-a-kind program.
Interested participants from all across India need to submit their approach notes before April 2022. The authorities will then evaluate the submissions and declare the winners in June 2022.
The Telangana government is making numerous efforts to boost artificial intelligence adoption in the region, ranging from AI in education and agriculture to initiatives like the Revv accelerator program, which helps artificial intelligence startups scale their businesses.
The government says that Telangana will take AI to the next level, aiming to become the top center for AI innovation and implementation; to capitalize on the AI opportunity, it has collaborated with NASSCOM and multiple other organizations.
Vice President and Head of CSR at Capgemini, Anurag Pratap, said, “Sustainability is core to the commitment of Capgemini to make the world a better place for our people. In the last few years, we have developed and supported some of the most innovative technology-based projects that focus on ensuring Sustainability.”
He further mentioned that this collaboration with T-AIM is part of a larger initiative to foster and leverage an “Open Innovation Ecosystem” to produce, identify, and promote revolutionary ideas in quality education, sustainability, sustainable cities and communities, and good health and wellbeing.
Microsoft has announced an improvement to its translation services that promises significantly enhanced translations across a range of language pairs, thanks to new machine learning models based on Microsoft’s Project Z-code. These improvements will raise the quality of machine translations and allow Translator, a Microsoft Azure Cognitive Service that is part of Azure AI, to support languages beyond the most common ones, including those with limited training data. Z-code is part of Microsoft’s XYZ-code initiative, which aims to combine text, vision, and audio models across various languages to produce more capable and practical AI systems. The update employs a “sparse Mixture of Experts” (MoE) technique, which produced new models scoring between 3% and 15% higher than the company’s earlier models in blind evaluations.
Last October, Microsoft updated Translator with Z-code upgrades, adding support for 12 more languages, including Georgian, Tibetan, and Uyghur. Now a new version of Z-code, called Z-code Mixture of Experts, can better grasp “low-resource” linguistic subtleties. According to Microsoft, Z-code uses transfer learning, an AI approach that applies knowledge from one task to another, similar task, to enhance translation for the estimated 1,500 “low-resource” languages worldwide.
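As a rough illustration of what transfer learning means in practice, the hypothetical sketch below starts a low-resource model from weights learned on a high-resource task and fine-tunes only part of it. The architecture, sizes, and freezing strategy are invented for the example and are not Microsoft's training setup.

```python
# Toy transfer-learning sketch: reuse high-resource weights as the starting
# point for a low-resource task. Purely illustrative; sizes are made up.
import copy
import torch.nn as nn

# Pretend this stack was trained on millions of high-resource sentence
# pairs (e.g., English-French).
high_resource_model = nn.Sequential(
    nn.Embedding(num_embeddings=32000, embedding_dim=256),  # shared vocab
    nn.Linear(256, 256),                                    # "encoder"
    nn.Linear(256, 32000),                                  # "decoder"
)

# Transfer: copy the learned weights, freeze the shared lower layers, and
# fine-tune only the top layer on the small low-resource corpus.
low_resource_model = copy.deepcopy(high_resource_model)
for layer in low_resource_model[:2]:
    for p in layer.parameters():
        p.requires_grad = False   # reuse, rather than relearn, shared knowledge

trainable = sum(p.numel() for p in low_resource_model.parameters()
                if p.requires_grad)
print(f"fine-tuning {trainable:,} parameters")  # far fewer than the full model
```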
Microsoft’s model, like all others, learned from examples in enormous datasets gathered from both public and private archives (e.g., ebooks, websites such as Wikipedia, and hand-translated documents). Low-resource languages are those with fewer than one million example sentences, which makes building models for them more difficult, as AI models perform better when given more examples.
The new models learn to translate between various languages simultaneously using the Mixture of Experts. Its fundamental function is to split tasks into several subtasks, which are then delegated to smaller, more specialized models known as “experts.” In this scenario, the model uses more parameters while dynamically picking which parameters to employ for a particular input and then deciding which task to delegate to which expert based on its own predictions. This allows the model to specialize a subset of the parameters (experts) during training. The model then employs the appropriate experts for the job during runtime, which is more computationally efficient than using all of the model’s parameters. To put it another way, it’s a model that contains multiple more specialized models.
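To make the routing idea concrete, here is a minimal sketch of a sparse MoE layer with top-1 gating, where each token is sent to a single expert. It is purely illustrative: the layer sizes, expert architecture, and routing rule are assumptions for this example, not Microsoft's Z-code implementation.

```python
# Minimal sparse Mixture-of-Experts layer with top-1 gating (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, num_experts=8):
        super().__init__()
        # Each "expert" is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.ReLU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(num_experts)
        )
        # The gate scores each token against every expert.
        self.gate = nn.Linear(d_model, num_experts)

    def forward(self, x):                          # x: (num_tokens, d_model)
        scores = F.softmax(self.gate(x), dim=-1)
        top_score, top_idx = scores.max(dim=-1)    # pick one expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():
                # Only the chosen expert's parameters run for these tokens,
                # which is what makes the layer "sparse" at runtime.
                out[mask] = top_score[mask].unsqueeze(1) * expert(x[mask])
        return out

tokens = torch.randn(16, 512)
print(SparseMoE()(tokens).shape)  # torch.Size([16, 512])
```

The key property the sketch shows is that adding experts grows the model's total parameter count without growing the compute per token, since each token still passes through only one expert.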
The new system can translate directly between ten languages, eliminating the need for multiple systems. Translator formerly required 20 separate models to cover those ten languages: one for English to French, one for French to English, one for English to Macedonian, one for Macedonian to English, and so on (ten languages in two directions each). Besides, Microsoft has lately begun to use Z-code models to boost additional AI functions, such as entity recognition, text summarization, custom text classification, and key phrase extraction.
MoEs were initially proposed in the 1990s, and subsequent research publications from organizations such as Google detail experiments with MoE language models containing trillions of parameters. Microsoft, however, asserts that Z-code MoE is the first MoE language model to be used in production.
For the first time, Microsoft researchers worked with NVIDIA to deploy Z-code Mixture of Experts models in production on NVIDIA GPUs. NVIDIA created custom CUDA kernels to run MoE layers efficiently on a single V100 GPU, drawing on the CUTLASS and FasterTransformer libraries. The models were deployed with the NVIDIA Triton Inference Server using a more efficient runtime optimized for these new model types. Compared to standard GPU (PyTorch) runtimes, the new runtime achieved up to a 27x speedup.
The Z-code team also collaborated with Microsoft DeepSpeed researchers to work out how to train large Mixture of Experts models such as Z-code, as well as smaller Z-code models for production settings.
Figure: Quality gains of Z-code MoE models over existing models, with languages ordered by training data size. Source: Microsoft
Microsoft used human assessment to compare the new MoE’s quality enhancements to the existing production system, and found that Z-code-MoE systems beat individual bilingual systems, with average increases of 4%. For example, the models upgraded English to French translations by 3.2%, English to Turkish translations by 5.8%, Japanese to English translations by 7.6%, English to Arabic translations by 9.3%, and English to Slovenian translations by 15%.
The Microsoft Z-code-based MoE translation model is now available by invitation to Translator customers for document translation. Document Translation is a feature that translates whole documents, or large batches of documents, across a variety of file formats while preserving their original formatting. It is also worth mentioning that Microsoft Translator is the first machine translation provider to ship live Z-code Mixture of Experts models to customers.
Microsoft confirms that its systems were compromised during a cyberattack carried out by LAPSUS$, a hacker group.
According to Microsoft, LAPSUS$ had hacked into one of its accounts, giving it “limited access” to corporate networks but not customer data.
Microsoft provided clarity on the matter in a recently published blog post. LAPSUS$, which Microsoft tracks as DEV-0537, is known for employing a pure extortion-and-destruction strategy without deploying ransomware payloads.
Microsoft had already been conducting an internal investigation into the compromise, but the group’s public disclosure of the episode escalated the situation and prompted the team to accelerate its response.
“Our cybersecurity response teams quickly engaged to remediate the compromised account and prevent further activity. Microsoft does not rely on the secrecy of code as a security measure, and viewing source code does not lead to elevation of risk,” said Microsoft.
The notorious hacking group LAPSUS$, believed to operate out of South America, has previously targeted multiple companies, including giants like NVIDIA, Samsung, Okta, and Ubisoft.
NVIDIA confirmed that its network was compromised in a cyberattack that leaked proprietary data and employee login information. LAPSUS$ claimed to possess a GPU driver capable of bypassing the Ethereum mining limiter on NVIDIA’s RTX 3000-series graphics cards.
Okta mentioned that an attempt was made to compromise the account of a third-party customer support engineer working for one of its subprocessors, and that the incident was investigated and contained.
LAPSUS$ initially attacked targeted bitcoin accounts, compromising wallets and stealing funds. Recently, however, the group has diversified its targets and now attacks telecommunication, higher education, and government organizations.
Microsoft says that LAPSUS$ understands the interrelated nature of identities and trust relationships in modern technology ecosystems and exploits its access to one organization to reach that organization’s partners or suppliers.
Moreover, the group does not operate covertly. Instead, it openly publicizes its cyberattacks on companies.
A team from Collaborations Pharmaceuticals, Inc. repurposed a drug discovery AI in recent work published in the journal Nature Machine Intelligence. In just six hours, it generated 40,000 potential chemical weapons, some of them very similar to the most deadly nerve agents ever developed. The publication, “Dual use of artificial-intelligence-powered drug discovery,” is a wake-up call for organizations working on AI in drug development.
The researchers at Collaborations Pharmaceuticals were using AI to search for molecules that could be used to treat diseases, and part of that process included screening out molecules predicted to be toxic. The report explains that they had previously built MegaSyn, a commercial de novo molecule generator guided by machine learning predictions of bioactivity, with the objective of discovering novel therapeutic inhibitors for human disease targets.
As part of a request from the Convergence conference, a symposium examining the potentially negative ramifications of new technologies, the team set out to investigate how quickly and easily an artificial intelligence system could be misused if assigned a detrimental rather than a beneficial objective. Their AI, MegaSyn, normally rewards bioactivity (how effectively a drug interacts with a target) while penalizing toxicity; the researchers simply flipped the toxicity term while keeping the bioactivity reward, so that more toxic molecules received higher scores.
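To illustrate the kind of change the researchers describe, here is a hypothetical sketch of a generative model's scoring function with the toxicity term flipped. This is not MegaSyn's actual code; the predictor functions, the molecule representation, and the weighting are all invented for illustration.

```python
# Hypothetical sketch of an inverted scoring function for a molecule
# generator. NOT MegaSyn's code; everything here is made up to illustrate
# how small the change described in the article is.
def score_molecule(mol, predict_bioactivity, predict_toxicity,
                   flipped=False, w_tox=1.0):
    """Score a candidate molecule as a reward for a generative model."""
    bioactivity = predict_bioactivity(mol)  # higher = binds target better
    toxicity = predict_toxicity(mol)        # higher = more toxic

    if not flipped:
        # Normal drug discovery: reward bioactivity, penalize toxicity.
        return bioactivity - w_tox * toxicity
    # Inverted setting from the experiment: toxicity becomes a reward,
    # steering the generator toward highly toxic, bioactive compounds.
    return bioactivity + w_tox * toxicity

# Toy stand-ins for trained ML predictors, operating on a fake molecule
# string; real systems would use learned models over chemical structures.
fake_bioactivity = lambda mol: len(mol) * 0.1
fake_toxicity = lambda mol: mol.count("X") * 0.5

mol = "CCXOXCC"  # a made-up molecule encoding
print(score_molecule(mol, fake_bioactivity, fake_toxicity))                # -0.3
print(score_molecule(mol, fake_bioactivity, fake_toxicity, flipped=True))  #  1.7
```

The point of the sketch is how small the change is: the pipeline, the predictors, and the generator all stay the same, and a single sign flip repurposes the system.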
According to the researchers, “The thought had never previously struck us. We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting. Our work is rooted in building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery.”
Even their studies on Ebola and neurotoxins, which had triggered questions about the possible harmful outcomes of their machine learning models, had not raised their concerns; they were largely unaware of the harm AI could cause until they steered MegaSyn toward nerve-agent-like compounds. Among other chemical warfare agents, the model generated molecules similar to VX, the banned nerve agent used to assassinate Kim Jong-nam, the half-brother of North Korean leader Kim Jong Un. VX is a tasteless and odorless nerve agent that can make a person sweat and twitch after exposure to even a tiny amount. It works by inhibiting acetylcholinesterase, an enzyme that supports muscle action; VX is fatal because it paralyzes the diaphragm and other breathing muscles. A higher dose can produce convulsions and cause a person to stop breathing altogether. Over the six hours it was operational, MegaSyn made several startling findings, one of which was suggesting VX itself. It also generated new compounds that, based on estimated LD50 values (the dose lethal to half of an exposed population), were predicted to be more toxic than publicly known chemical warfare agents.
While no actual molecules were synthesized as part of the experiment, the authors noted that numerous firms offer chemical synthesis services, and that this space remains largely unregulated. They expressed concern that, with a few adjustments, it would be simple to have new, highly dangerous substances created that could be deployed as chemical weapons.
Collaborations Pharmaceuticals has since deleted its “death library” and now plans to limit the use of its technologies. The authors also propose a hotline for reporting potential misuse to authorities, as well as a code of conduct for everyone working in AI-focused drug discovery, similar to The Hague Ethical Guidelines, which promote ethical behavior in the chemical sciences.
The researchers are now urging a fresh look at how artificial intelligence systems could be exploited for malevolent ends. They believe that greater awareness, stronger guidelines, and stricter controls in the research community could help avert the dangers that AI capabilities might otherwise create.
SAS, a provider of advanced business analytics and intelligence software, has partnered with Clemson University to deploy artificial intelligence (AI) and machine learning (ML) software for education and research.
The new partnership will provide SAS’ advanced data science and analytics tools to Clemson students and faculty as a part of a new $3.3 million commitment to assist teaching and academic research.
Students and faculty at Clemson, including those at the Watt Family Innovation Center, will have access to SAS data science and analytics tools to support educational training and research.
CEO of SAS Jim Goodnight said, “Clemson’s students, faculty and researchers will be able to use the latest industry-leading SAS AI and advanced analytics software to learn new skills and generate new breakthroughs.”
The company mentioned that its partnership with Clemson gives students who know SAS Viya and other analytics skills a competitive advantage. SAS Viya, the company’s flagship artificial intelligence and data analytics platform, allows users to turn raw data into actionable insights.
Clemson researchers will be able to use SAS Viya to analyze massive data sets while investigating a range of essential topics, including racial inequities in education, addiction, wildlife disease, agriculture, and the human genome. SAS is used in Clemson’s courses to better prepare students for employment in sectors such as healthcare, finance, and government.
Additionally, SAS will offer faculty and staff teaching materials to help them incorporate SAS into their curricula and research.
Founding dean of the College of Science, Cynthia Young, said, “By integrating SAS into coursework, we’re helping strengthen literacy in data science.”
She further added that their students become more competitive, their alumni become more successful, and the state and nation gain a workforce that understands SAS and is better prepared to advance various industries through data.
Nvidia GTC 2022 began on March 21 and ended on March 24. Highlights of the event included CEO Jensen Huang’s keynote address covering AI advancements, Omniverse, GPUs for data centers, and more. Huang’s digital avatar, dressed like him, made several appearances throughout the speech.
Nvidia is the world’s largest manufacturer of graphics processing units (GPUs), chips first used to accelerate graphics in gaming. Since then, Nvidia has steadily found new use cases for the GPU, including artificial intelligence (AI), autonomous vehicles, 3D video rendering, genomics, and digital twins.
Following are the major announcements:
RTX Professional GPUs
Nvidia launched seven new GPUs for laptops and desktops based on the Nvidia Ampere architecture, featuring third-generation Tensor Cores, second-generation RT Cores, and CUDA cores. The announced GPUs include the RTX A500, RTX A1000, RTX A2000 8GB, RTX A3000 12GB, RTX A4500, and RTX A5500. The Nvidia RTX A5500 desktop GPU has been available through select channel partners since March 23, with global availability expected by the second quarter of the year.
Grace CPU Superchip
Grace CPU Superchip is the name of Nvidia’s new Arm Neoverse-based discrete data center CPU, designed specifically for AI infrastructure that requires high-performance computing. The company claims the new Superchip delivers twice the performance of comparable options on the market. Nvidia’s computing software stacks, such as Nvidia HPC, Nvidia AI, Nvidia RTX, and Omniverse, will run on the Grace CPU Superchip. The chip follows the Grace Hopper Superchip, the CPU-GPU module Nvidia unveiled last year. The Grace CPU Superchip will offer record performance, energy efficiency, memory bandwidth, and configurability for the most demanding computing applications, such as data analytics, scientific computing, hyperscale computing, and artificial intelligence.
DRIVE Sim
One of the new launches at the Nvidia Spring 2022 GPU Technology Conference was a new AI tool for its DRIVE Sim. The technology can accurately reconstruct and modify real driving scenarios, made possible by Nvidia research breakthroughs such as the Omniverse platform and DRIVE Map. According to the demonstration by Jensen Huang, Nvidia’s founder and CEO, the tool will enable developers to test driving scenarios with rapid iteration. The system uses lidar, camera, and vehicle data from real-world driving to reproduce incidents, improving the training of autonomous vehicle systems.
DGX H100 System
Nvidia’s DGX H100 system is powered by Nvidia H100 Tensor Core GPUs. It contains eight H100 GPUs, delivering 32 petaflops of AI performance at the new FP8 precision, 6x faster than the previous generation. The next-generation Nvidia DGX SuperPOD and Nvidia DGX POD AI infrastructure platforms will be built on DGX H100 systems. Eos, a powerful new supercomputer announced at the same Nvidia GTC 2022 event, demonstrates these advancements.
Eos
The Eos supercomputer is based on the DGX H100, the fourth-generation DGX system (also launched at the event), with each system powered by eight NVLink-connected H100 GPUs. Eos will comprise 18 pods of 32 DGX H100 systems each, for a total of 576 DGX H100 systems, 4,608 H100 GPUs, 500 Quantum-2 InfiniBand switches, and 360 NVLink switches. At 32 petaflops per system, the 576 systems add up to roughly 18 exaflops of AI performance, and Nvidia expects Eos to be the world’s fastest AI supercomputer when it is deployed.
Isaac Nova Orin
The newly launched Isaac Nova Orin architecture and Jetson AGX Orin developer kit will accelerate the development of autonomous mobile robots (AMRs). Isaac Nova Orin, built on the Nvidia Jetson AGX Orin edge AI system, is a cutting-edge compute and sensor reference platform designed to speed the creation of AMRs. It pairs high-performance artificial intelligence processing with state-of-the-art sensor technology.
Accelerated digital twins platform
The accelerated digital twins platform combines the Modulus AI framework, used to develop physics-informed machine learning models, with the Nvidia Omniverse 3D virtual world simulation platform, enabling interactive AI simulations in real time. The platform can accurately reflect the real world and accelerate simulations such as computational fluid dynamics by up to 10,000x over traditional methods. The goal is to make it easier for researchers to model complex systems with greater accuracy and speed than previous approaches allowed.
DRIVE Hyperion 9
DRIVE Hyperion 9, launched at Nvidia GTC 2022, is the next-generation open platform for automated and autonomous vehicles. It is a programmable architecture built on multiple DRIVE Atlan computers to handle both in-cabin functionality and intelligent driving. Hyperion 9 is compatible across versions, so users can migrate to DRIVE Atlan and beyond while leveraging their current investments in the DRIVE Orin platform.
Leading electric vehicle manufacturer Tesla announced the opening of its first European Gigafactory, located near Berlin, Germany.
This milestone marks the commencement of Tesla’s first European hub, which was first announced two years ago. Tesla CEO Elon Musk and German Chancellor Olaf Scholz attended the inauguration event of Tesla’s newly established Gruenheide plant.
Musk has announced Master Plan Part 3 for Tesla, which he claims will dramatically accelerate the company’s scaling. In addition to the opening, Musk also delivered the first batch of Model Y cars to customers.
Thirty customers and their families cheered as Musk danced and joked with fans as they caught the first view of their cars via a glittering, neon-lit Tesla-branded tunnel.
Musk had wanted the plant to begin operations eight months earlier, but local authorities say the factory was still completed in remarkable time considering the size of the project.
According to the plans, the Gigafactory will hire more than 12,000 employees, making it the largest employer in the region. Before long, Tesla’s new plant is expected to reach a peak manufacturing capacity of 500,000 cars annually and to produce 50 gigawatt-hours of battery capacity through its battery plant, the highest in Germany.
Moreover, Musk stated that a test version of Tesla’s new “Full Self-Driving” software would be released in Europe next year, pending regulatory approval from the relevant authorities.
Hubertus Bardt from the German Economic Institute said to DW, “After one year, Tesla decided to also build a battery factory there, meaning that the approval process had to start from the beginning — so the whole process didn’t actually take too long considering the difficult issues involved.”
However, the project has also drawn criticism over the plant’s excessive water consumption, particularly given the considerably short period of time in which it was built.
Multinational camera technology and social media company Snap acquired next-generation brain-computer interface startup NextMind to accelerate the development of future augmented reality (AR) glasses.
The companies did not provide any information regarding the valuation of this acquisition deal. Snap Lab has added NextMind to help drive long-term augmented reality research activities.
Snap Lab is the company’s hardware development team, tasked with developing products for Snap’s AR platform.
According to the company, Snap Lab’s products, such as Spectacles, explore the possibilities for the future of the Snap Camera. The hardware team is now working on the future of artificial intelligence in the Snap Camera and Spectacles.
Snap AR is aimed at content creators worldwide, with the goal of revolutionizing how digital content is created and explored. This acquisition of NextMind will drastically boost Snap Lab’s offerings by utilizing NextMind’s expertise in the domain.
As a part of the acquisition deal, NextMind’s first product, a $400 headband developer kit released two years ago, will be discontinued. However, the startup’s twenty employees will continue their work for Snap Lab from France.
Snap mentioned in a blog, “Spectacles are an evolving, iterative research and development project, and the latest generation is designed to support developers as they explore the technical bounds of augmented reality.”
In the past, Snap has acquired multiple companies working in the augmented reality domain, including WaveOptics, which it bought last year. Additionally, Snap is hiring extensively for various roles across multiple locations; interested candidates can apply via Snap’s official website.
Paris-based brain-computer interface company NextMind was founded by Gwendal Kerdavid and Sid Kouider in 2017. NextMind is best known for developing a non-invasive brain-computer interface (BCI) technology to make it simpler to interact with electronic devices without using hands. To date, the startup has raised $4.6 million from investors like Bpifrance, Nordic Makers, and others in its seed funding round.
NVIDIA showcased a new artificial intelligence (AI) tool for its DRIVE Sim platform to advance autonomous vehicle development during the recently held Spring 2022 GPU Technology Conference (GTC).
The unique technology was able to accurately reconstruct and modify actual driving scenarios. NVIDIA mentioned in a blog that these capabilities are made possible by NVIDIA Research breakthroughs, including the NVIDIA Omniverse platform and NVIDIA DRIVE Map.
The tool was demonstrated by the founder and CEO of NVIDIA, Jensen Huang. According to the demonstration, the tool will enable developers to quickly test multiple road scenarios with rapid iteration.
Any situation can be reproduced in simulation and used as the basis for variations, such as modifying the trajectory of an oncoming car or adding an obstacle to the driving path, allowing developers to improve the AI driver.
Developers can use a camera, lidar, and vehicle data from real-world drives to reproduce incidents, helping them better train autonomous vehicle systems.
Moreover, virtual reconstruction allows engineers to identify potentially tricky conditions in which to train and evaluate the AV system, significantly strengthening the continuous AV training, testing, and validation pipeline.
Nevertheless, NVIDIA also noted that this is a complicated process: recreating real-world driving scenarios and generating realistic data from them in simulation traditionally demands significant time and effort from expert engineers and artists.
“NVIDIA has implemented two AI-based methods to seamlessly perform this process: virtual reconstruction and neural reconstruction. The first replicates the real-world scenario as a fully synthetic 3D scene, while the second uses neural simulation to augment real-world sensor data,” mentioned the blog.