Nvidia has announced Nvidia Omniverse Cloud, its first software and infrastructure-as-a-service offering, at Nvidia GTC 2022. It is a suite of cloud services for artists, developers, and enterprise teams to design, publish, operate, and experience metaverse applications anywhere.
The technology taps the heavy-duty power of cloud data centers to make Omniverse tools available wherever users happen to be. More than 700 companies and 200,000 people use Omniverse today.
Using Omniverse Cloud, individuals and teams can design and collaborate on 3D workflows with one click, without any local computing power. Omniverse Cloud will leverage Nvidia’s cloud gaming service, GeForce Now, which has a global graphics delivery network.
“The next evolution of the internet, called the metaverse, will be extended with 3D,” said Richard Kerris, vice president of Omniverse at Nvidia. “To understand what the impact of that will be, the traditional internet that we know today connects websites described in HTML and viewed through a browser. The metaverse will be the evolution of that internet connecting virtual 3D worlds using USD, or Universal Scene Description.”
Omniverse Cloud is based on the open Universal Scene Description (USD) standard for interoperable 3D assets.
“The metaverse, the 3D internet, connects virtual 3D worlds described in USD and viewed through a simulation engine,” said Jensen Huang, Nvidia CEO, in a statement. “With Omniverse in the cloud, we can connect teams worldwide to design, build, and operate virtual worlds and digital twins.”
Nvidia has announced NeMo LLM, its first cloud service to make AI less complicated. NeMo LLM will focus on making large language models more accessible for experimentation and deployment across multiple domains.
Ian Buck, GM and VP of Accelerated Computing at Nvidia, said that many AI models need to be turned into more accessible applications before enterprises can apply them in real-world settings. NeMo LLM adds a layer of intelligence and interactivity so that users can interact with complex AI models like DALL-E 2. Such models are trained with billions of parameters, which makes tuning them a challenging task.
Nvidia’s NeMo LLM service will add a conversational element to many such models across domains like finance, medicine, or technology. Buck said, “This service will help bring large language models to all sorts of different use cases – to generate profit summaries, for product reviews, to build technical Q&A, for medical use cases.”
NeMo LLM takes pre-trained models such as NeMo Megatron (530 billion parameters), GPT-3 (175 billion parameters), or T5 (11 billion parameters) and constructs a domain-specific framework around them. This removes the need to train a model from scratch.
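Services of this kind typically adapt a frozen pre-trained model to a new domain with prompt learning: a handful of trainable "soft prompt" vectors are prepended to the input while the billions of base-model weights stay untouched. The following toy NumPy sketch illustrates only the general idea; it is not Nvidia's API, and all names and sizes here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
d_model, vocab = 16, 10

# Frozen pre-trained weights: the embedding table and output head never change
embed = rng.normal(size=(vocab, d_model))
head = rng.normal(size=(d_model, vocab))

# The only trainable parameters: a few soft-prompt vectors per domain
soft_prompt = rng.normal(size=(3, d_model)) * 0.1

def forward(token_ids):
    """Prepend trainable soft prompts to frozen token embeddings,
    then score next-token logits with the frozen output head."""
    x = np.concatenate([soft_prompt, embed[token_ids]], axis=0)
    pooled = x.mean(axis=0)   # stand-in for the frozen transformer body
    return pooled @ head      # next-token logits

logits = forward(np.array([1, 4, 2]))
print(logits.shape)  # (10,)
```

Only 48 numbers (3 × 16) would be tuned per domain here, versus the full weight matrices, which is what makes adapting a 530-billion-parameter model per customer tractable.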
Alongside NeMo LLM, Nvidia is also launching the BioNeMo service to give researchers access to pre-trained biology and chemistry language models. It aims to help researchers interact with and manipulate protein data for drug discovery. The initial two BioNeMo protein models, ESM-1 and ESM-2, encode essential biological information from large protein databases and predict 3D protein structures from amino acid sequences, respectively.
The NeMo LLM cloud service is the latest addition to Nvidia’s stable of software offerings, alongside Riva and Merlin.
The United Nations Educational, Scientific and Cultural Organization, or UNESCO, has launched the State of the Education Report for India: Artificial Intelligence in Education. This is the fourth edition of the annual flagship report of UNESCO’s New Delhi office.
Based on extensive research, the report provides insight into the state of artificial intelligence and its market in the country. It covers AI both as a subject and in its application to the education sector. According to the report, the Indian AI market will reach US $7.8 billion by 2025, a compound annual growth rate of 20.2 percent.
The press release lists 10 recommendations by UNESCO for promoting AI in education. These include making AI ethics a priority, establishing a regulatory framework, building effective public-private partnerships, expanding AI literacy, and correcting algorithmic biases, among others.
Eric Falt, Director at UNESCO, New Delhi, said, “India has made significant strides in its education system, and strong indicators point to the country’s notable efforts to enhance learning outcomes, including by using Artificial Intelligence-powered education technology.”
He also mentioned that artificial intelligence is one of the areas where the Indian government has advanced and made tremendous strides in the last few years.
Researchers at Stanford University and Harvard Medical School have developed an artificial intelligence model that detects abnormalities and diseases by learning from the natural-language text of clinical reports. The AI model does not rely on standard human annotations of X-rays to learn to predict diseases.
Using AI in medical imaging technologies is not a new advancement. However, many challenges still limit its use to only a handful of clinical applications. A massive amount of data and human annotation must go into training the standard disease prediction models.
However, the Harvard-Stanford model, called CheXzero, has produced accurate results by relying on radiologists’ free-text reports, processed with natural language processing, rather than on human annotations. The model is self-supervised: it learns directly from the data without explicit labels, which sidesteps the over-dependence on labeled data.
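CheXzero builds on CLIP-style contrastive learning: the image embedding of an X-ray and the text embedding of its paired report are pulled together, while mismatched pairs in the batch are pushed apart. The following NumPy sketch of that loss is a minimal illustration under those assumptions; the real model uses learned vision and text encoders rather than random vectors.

```python
import numpy as np

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive (InfoNCE) loss over a batch of image/report pairs.

    Row i of img_emb and row i of txt_emb come from the same X-ray/report pair;
    every other row in the batch acts as a negative example.
    """
    # L2-normalize so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (batch, batch) similarity matrix
    n = len(logits)

    def cross_entropy(l):
        p = np.exp(l - l.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        # the diagonal holds the true pairs
        return -np.log(p[np.arange(n), np.arange(n)]).mean()

    # average the image->text and text->image directions
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2

rng = np.random.default_rng(0)
pairs = rng.normal(size=(4, 8))
mismatched = rng.normal(size=(4, 8))
# perfectly matched embeddings score a lower loss than random mismatches
print(clip_style_loss(pairs, pairs), clip_style_loss(pairs, mismatched))
```

Because the supervision signal is just "which report goes with which image," no radiologist has to label individual findings, which is exactly the labeling bottleneck the paper targets.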
Pranav Rajpurkar, assistant professor at HMS, said, “Up until now, most AI models have relied on manual annotation of huge amounts of data—to the tune of 100,000 images—to achieve a high performance. Our method needs no such disease-specific annotations.”
The researchers used chest X-rays to demonstrate CheXzero’s capabilities, but it can be generalized to a vast array of other medical imaging settings that deal with unstructured data. The model helps bypass the large-scale labeling bottleneck that has been a long-standing challenge in medical machine learning.
Adept, an AI and ML product company, has announced ACT-1, a large-scale transformer that can browse, search, and use the web like a human. Given instructions, the model behaves like a personal assistant inside software, navigating the web and scrolling, clicking, and typing as required. The company has released a demo video of ACT-1 in action.
ACT-1 has been developed to work with digital tools and has recently learned to use a web browser. It connects to a Chrome extension that allows it to observe the browser and perform actions such as clicking, typing, and scrolling. Its action space is the set of UI elements on the page, and its observations render universally across websites. According to Adept, ACT-1 can:
- Process high-level user requests from a single text command; completing one task often requires chaining many actions and observations.
- Work with spreadsheets, drawing on real-world knowledge to infer context and assist with calculations.
- Combine multiple tools to finish a complex task.
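Agents like ACT-1 run an observe-act loop: the model receives the page's UI elements as an observation and emits one browser action at a time until the request is satisfied. The sketch below is purely hypothetical; the interface names and the hard-coded policy are invented for illustration and bear no relation to Adept's actual API.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str       # "click", "type", or "scroll"
    target: str     # UI element the action applies to
    text: str = ""  # payload for "type" actions

def choose_action(request, ui_elements, history):
    """Stand-in for the transformer policy: map the user's request and the
    observed UI elements to the next browser action."""
    if not history:
        return Action("click", "search-box")
    if history[-1].kind == "click":
        return Action("type", "search-box", request)
    return Action("scroll", "results")

def run(request, ui_elements, max_steps=3):
    """Observe-act loop: repeatedly pick an action given the history so far."""
    history = []
    for _ in range(max_steps):
        history.append(choose_action(request, ui_elements, history))
    return [a.kind for a in history]

print(run("best hiking boots", ["search-box", "results"]))
# ['click', 'type', 'scroll']
```

In the real system the `choose_action` step is the large transformer itself, and the observation is a rendering of the live page rather than a static list.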
ACT-1 is still in its infancy and will become more useful as it receives further training and refinement. It is highly coachable and can correct its mistakes from a single piece of human feedback. However, there is a risk of ACT-1 being misused through harmful input commands.
Adept plans to work on preventing any possible misuse by utilizing machine learning techniques and carefully staging deployment.
A country’s ability to develop its own technology is one of its greatest assets, and adding AI applications to supercomputers is just the beginning. Supercomputers are high-performance systems whose computational power is measured in floating-point operations per second (FLOPS). Supercomputing dates back to 1964, and India’s supercomputing era began in the 1980s. In November 1987, the Indian government decided to create C-DAC, the Centre for Development of Advanced Computing. C-DAC started the PARAM supercomputer series, led by Vijay P. Bhatkar, the architect of India’s national supercomputing initiative since the 1990s. Today, Indian supercomputers rank among the fastest 500 in the world, and several of them belong to the PARAM series. PARAM means ‘supreme’ in Sanskrit, denoting the idea of a supreme computing system. Here is the list of PARAM supercomputers, organized by year of launch.
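The theoretical peak figures (Rpeak) quoted throughout this list are a simple product of hardware parameters. A minimal Python sketch, using illustrative numbers rather than any specific PARAM machine's specification:

```python
def peak_gflops(nodes, sockets_per_node, cores_per_socket, clock_ghz, flops_per_cycle):
    """Theoretical peak in GFLOPS: total cores x clock (GHz) x FLOPs per cycle."""
    total_cores = nodes * sockets_per_node * cores_per_socket
    return total_cores * clock_ghz * flops_per_cycle

# Illustrative example: 100 nodes, 2 sockets each, 12 cores per socket,
# 2.5 GHz, 16 double-precision FLOPs per cycle (e.g. an AVX-512 FMA unit)
print(peak_gflops(100, 2, 12, 2.5, 16))  # 96000.0 GFLOPS, i.e. 96 TFLOPS
```

The sustained Rmax numbers reported on the Top500 list are always lower than this peak, since they come from running the LINPACK benchmark rather than from multiplying datasheet figures.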
1. PARAM 8000
PARAM 8000 is the first machine in the PARAM series, built from scratch in 1991. A prototype came second at the 1990 Zurich supercomputing show, where it was introduced and benchmarked. PARAM 8000 was launched in August 1991 as a 64-node machine, making it India’s first supercomputer. It was a collaboration between C-DAC and the Institute of Computer-Aided Design (ICAD), Moscow. PARAM 8000 owed its success to its Inmos T800/T805 transputers, distributed-memory MIMD (Multiple Instruction, Multiple Data) architecture, and a reconfigurable interconnection network. First installed at ICAD, Moscow, it quickly won over the home market, attracted 14 other buyers, and was later exported to Germany, the UK, and Russia.
2. PARAM 8600
PARAM 8600 was designed in 1992, when C-DAC sought to make India’s supercomputer more powerful by integrating the Intel i860 processor. It is an upgrade of PARAM 8000 in which the node structure changed from Inmos T800/T805 transputers to one i860 and four Inmos T800 transputers per node. Each PARAM 8600 cluster was roughly four times as powerful as a PARAM 8000 cluster.
3. PARAM 9000
PARAM 9000, developed in 1994, merged cluster computing and massively parallel processing workloads. The standard PARAM 9000/SS used the SuperSPARC II processor, the PARAM 9000/US used the UltraSPARC, and the PARAM 9000/AA used the DEC Alpha. To accommodate newer processors, the PARAM design became modular with this version, typically shipping with 32–40 processors, scalable to 200 CPUs, and using a Clos network topology.
4. PARAM 10000
PARAM 10000 was launched in 1998, built from clusters of symmetric multiprocessors (SMPs), each running its own copy of the UNIX OS. It consists of independent nodes, each based on a Sun Enterprise 250 server with two 400 MHz UltraSPARC II processors. PARAM 10000 was exported to Russia and Singapore, with the base system’s best speed recorded at 6.4 GFLOPS (giga floating-point operations per second). The base configuration had three compute nodes and a server node; the full system has 160 CPUs capable of 100 GFLOPS and is easily scalable to the TFLOPS range.
5. PARAM Padma
PARAM Padma is a 1 TFLOPS supercomputer and the first Indian supercomputer to earn a place on the Top500 list, ranking 171st in June 2003. This PARAM supercomputer was launched in 2002 with a storage capacity of 1 TB, 248 IBM Power4 1 GHz processors, the IBM AIX 5.1L Unix OS, and PARAMNet as the primary interconnect.
6. PARAM Yuva
PARAM Yuva came out in November 2008 with a sustained speed (Rmax) of 38.1 TFLOPS and a theoretical peak (Rpeak) of 54 TFLOPS. It has a storage capacity of 25 TB, expandable to 200 TB, along with 4,608 cores, Intel 73XX 2.9 GHz processors, and PARAMNet-3 as the primary interconnect. PARAM Yuva ranked 69th on the Top500 list of the world’s supercomputers, the next Indian entry after PARAM Padma.
7. PARAM Yuva II
PARAM Yuva II was unveiled in February 2013; the project took three months and cost ₹160 million. The investment paid off, as PARAM Yuva II became India’s first supercomputer to reach 500 TFLOPS. It is a high-performance computing cluster that uses 35 percent less energy than earlier PARAM supercomputers and performs ten times faster, at 524 TFLOPS. It has a hybrid cluster with multiple interconnects, a high storage capacity of 200 TB, and support for parallel computing software. Yuva II is an upgrade of PARAM Yuva, which was created to provide a research-oriented computational environment. PARAM Yuva II was a milestone for C-DAC, ranking 1st in India, 9th in the Asia-Pacific region, and 44th worldwide among the most powerful computer systems. It also earned a position on the Green500 list in November 2013 and again in June 2015, and ranked 172nd on the Top500 list in June 2015.
8. PARAM ISHAN
PARAM ISHAN was developed and launched in September 2016 at the Indian Institute of Technology Guwahati. It is a hybrid high-performance computing system with a peak computing performance of 250 TFLOPS. PARAM ISHAN has 162 compute nodes: 126 nodes each with two Intel Xeon E5-2680 v3 processors (12 cores, 2.5 GHz) and 64 GB RAM, four high-memory compute nodes, 16 nodes each with two NVIDIA Tesla K40 (GPGPU) cards, and the remaining 16 nodes each with two Intel Xeon Phi 7120 (MIC) coprocessors. PARAM ISHAN was India’s first supercomputer with a 300 TB storage capacity based on the Lustre parallel file system, with a software stack comprising CentOS 6.6, Intel Parallel Studio 2016, GNU compilers, Intel MPSS, CUDA, Mellanox OFED, Lustre, the SLURM resource manager and scheduler, and Bright Cluster Manager.
9. PARAM Brahma
PARAM Brahma was built in India by C-DAC and IISc under the National Supercomputing Mission (NSM), co-funded by the Ministry of Electronics and Information Technology and the Department of Science and Technology. It has a computational power of 797 TFLOPS (Rpeak) and 526.5 TFLOPS (Rmax) with a storage capacity of 1 PB. A distinctive feature of PARAM Brahma is its direct-contact liquid cooling system, which uses the thermal conductivity of liquids, mainly water, to maintain the system’s temperature during operation. PARAM Brahma was launched in 2019 and, as of 2020, is hosted at IISER Pune; each node has two Intel Xeon Cascade Lake 8268 processors (24 cores, 2.9 GHz).
10. PARAM Siddhi-AI
PARAM Siddhi-AI is the fastest of the PARAM supercomputers, with an Rpeak of 5.267 PFLOPS and a sustained Rmax of 4.6 PFLOPS. It is a high-performance computing and artificial intelligence (HPC-AI) system built in India. The integration of AI into supercomputers supports research in advanced materials, computational chemistry and astrophysics, healthcare, flood forecasting, faster COVID-19 simulations, medical imaging, and genome sequencing. PARAM Siddhi-AI was released in 2020 and comprises NVIDIA DGX SuperPOD-based networking architecture, HPC-AI engine software and frameworks, and a cloud platform. It ranked 63rd on the Top500 list in November 2020 and is one of India’s top supercomputers, sharing that distinction with the Pratyush supercomputer.
11. PARAM Pravega
PARAM Pravega was released in January 2022 under the National Supercomputing Mission at the Indian Institute of Science, Bengaluru. It hosts an array of program development tools, utilities, and libraries for building and running high-performance computing applications. PARAM Pravega runs CentOS 7.x, combines heterogeneous nodes, including Intel Xeon Cascade Lake processors for CPU nodes and NVIDIA Tesla V100 cards for GPU nodes, and has a storage capacity of 4 PB. Its peak computing power is 3.3 PFLOPS.
12. PARAM Ganga
Under the National Supercomputing Mission, PARAM Ganga is installed at the Indian Institute of Technology Roorkee. Like PARAM Pravega, it is based on a heterogeneous, hybrid configuration of nodes. It has 312 nodes, combining CPU, GPU, and high-memory nodes, with a peak computing power of 1.67 PFLOPS. The cluster’s compute nodes are connected by a Mellanox HDR InfiniBand interconnect. The PARAM Ganga supercomputer uses the Lustre parallel file system and runs CentOS 7.x.
13. PARAM Shakti
PARAM Shakti is a petascale supercomputer in the PARAM series, built at the Indian Institute of Technology Kharagpur under the NSM and launched in March 2022. The facility aims to amplify research and development in Indian academia and industry, focusing on large-scale problems across science and engineering. PARAM Shakti has 17,680 CPU cores, 44 GPUs, and an RDHX (rear-door heat exchanger) cooling system, with a computing power of 1.6 PFLOPS.
A team of researchers at Google AI has developed an artificial intelligence model that can predict odor the way human beings do. The model maps a molecule’s structure to the smell of a substance and can even predict odors for molecules it has never seen.
Mapping molecules to recognize or predict odor is challenging because smells are sensed when molecules bind to sensory receptors in the nose. The nose hosts over 300 such receptors, making it extremely difficult to draw conclusions about an odor with certainty.
There have been a few past attempts to explore the interplay between molecules and their corresponding smells. In 2019, a graph neural network (GNN) was trained on molecule-smell pairs to predict descriptive labels such as “floral,” “beefy,” or “minty.”
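A graph neural network of this kind treats each molecule as a graph of atoms, passes messages along bonds, and predicts a set of odor labels per molecule (multi-label, since one molecule can be both "floral" and "sweet"). The NumPy sketch below is an untrained, illustrative toy, not the published model; the features, weights, and graph are random stand-ins.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_odors(atom_feats, adjacency, w_msg, w_out, labels, rounds=2):
    """One message-passing pass over a molecular graph.

    atom_feats: (n_atoms, d) initial per-atom features
    adjacency:  (n_atoms, n_atoms) bond matrix
    Returns the odor labels whose predicted probability exceeds 0.5.
    """
    h = atom_feats
    for _ in range(rounds):
        # each atom aggregates its neighbours' features through the bond graph
        h = np.tanh((adjacency @ h) @ w_msg)
    mol = h.mean(axis=0)          # pool atoms into one molecule embedding
    probs = sigmoid(mol @ w_out)  # independent probability per odor label
    return [l for l, p in zip(labels, probs) if p > 0.5]

rng = np.random.default_rng(1)
labels = ["floral", "beefy", "minty", "sweet"]
atoms = rng.normal(size=(5, 8))               # toy 5-atom molecule
adj = (rng.random((5, 5)) > 0.5).astype(float)
adj = np.maximum(adj, adj.T)                  # make the bond graph undirected
w_msg = rng.normal(size=(8, 8))
w_out = rng.normal(size=(8, 4))
preds = predict_odors(atoms, adj, w_msg, w_out, labels)
print(preds)
```

The multi-label sigmoid head, rather than a single softmax, is what lets one molecule carry several odor descriptors at once.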
The model developed by the Google researchers successfully generated a “Principal Odour Map” (POM) with the characteristics of a sensory map. The scientists tested the model on many parameters to check whether it had learned to predict odors that humans fail to recognize. It passed several tests and delivered exceptional odor-prediction results. The researchers also found the map could predict the activity of sensory receptors and the behavior of many animals.
Scientists hope to use the POM to predict animal olfaction, including the response of odor-attracted insects that transmit deadly diseases. The aim is to develop cheaper, longer-lasting, and safer repellents to reduce the incidence of disease and save lives.
Rephrase.ai, an AI startup dealing with synthetic media, has raised US$10.6m in a series A funding round led by Red Ventures, a global investor with a diverse portfolio. The investment will improve the company’s capabilities in product experiences and scale hiring within engineering, marketing, and sales teams.
Rephrase.ai was founded four years ago with the aim of building an engine that makes creating professional-level videos “as easy as writing text.” It brings high-grade synthetic video-making capabilities to companies of all sizes. Synthetic video creation has become an integral part of communications and content teams’ efforts to humanize virtual communication; it increases conversion rates, reduces costs, and improves content quality. The company uses deep learning to create human avatars that, driven by text input, appear in synthetic videos.
Its video capabilities can be seen in Cadbury’s ‘Not Just a Cadbury Ad’ starring Shah Rukh Khan. The ad used Rephrase.ai’s video technology with location targeting to name local sweet stores.
Carlos Angrisano, President of Red Ventures, emphasized the investor’s aim of reinventing video production as process technology. He said, “We are impressed by Rephrase.ai’s leadership and talent bench, which is a tremendous competitive advantage in such a nascent field.”
Over the years, Rephrase.ai has created digital versions of many C-suite executives (CXOs), celebrities, and influencers through its lip-syncing, facial-mapping, and expression capabilities, helping enterprises grow effectively around the globe.
With 35 people, Rephrase.ai has built a top-tier team, including researchers and developers with experience at companies like Amazon and Google. Ashray Malhotra, co-founder and CEO of Rephrase.ai, said, “In the last year, we’ve developed hundreds of digital human clones, creating millions of videos during the process. I’m thrilled to welcome Red Ventures, Silver Lake, and 8VC as partners on this journey to help expedite the world’s adoption of generative AI videos.”
The European Union is in the process of formulating new regulatory frameworks for metaverse activities. The EU has announced a union-wide metaverse initiative that will go live sometime in 2023.
The metaverse initiative is part of the letter of intent that envisions a “Europe fit for the digital age.” The letter states that the EU will “continue looking at new digital opportunities and trends, such as the metaverse.”
Thierry Breton, EU’s internal market commissioner, explained that the organization would put forward many structures to address issues in existing digital systems and develop infrastructure to enhance interoperability amongst metaverse worlds.
As the metaverse is based on a system of multiple services, including software, 5G connectivity, and cloud platforms, the organization has launched the Virtual and Augmented Reality Industrial Coalition, an institution to bring all stakeholders together.
Breton said, “We will launch a comprehensive reflection and consultation on the vision and business model of the infrastructure that we need to carry the volumes of data and the instant and continuous interactions which will happen in the metaverses.” This is not the first time the European Union has implemented tech-related frameworks and initiatives; the union recently presented an anti-counterfeiting project for NFTs using blockchain technology.
Meta’s Facebook and Alphabet’s YouTube have faced heat over hate speech and violent rhetoric surfacing on their platforms. President Joe Biden addressed the issue during the White House summit on September 15 and called on Americans to combat online extremism. In response, Meta and YouTube agreed to expand their policies and research new techniques to fight online extremism.
YouTube, the video streaming site, currently has policies that prohibit hateful incitement. However, it has not been able to enforce them extensively against militia groups streaming their agendas. Jack Malon, a YouTube spokesperson, said the updated policies would go further in enforcement.
Facebook, owned by Meta, has also come under scrutiny for violent comments, posts, and videos surfacing on the platform. Facebook announced a plan to collaborate with the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism to research more extensive policies against violent rhetoric and hate speech.
Other tech giants, such as Microsoft, also attended the summit; Microsoft announced a basic, affordable version of its machine learning and AI tools to track and help prevent violent situations in schools.