The NVIDIA Graduate Fellowship Program is accepting applications from Ph.D. students until September 9th. This is the 22nd consecutive year that NVIDIA has invited Ph.D. students to showcase their academic achievements and research. Since its launch in 2002, the program has gained momentum, awarding as many as 185 grants worth over US$5.9m.
The program seeks doctoral students in their first year and offers an award of US$50,000 for exceptional research projects. All recipients will have access to NVIDIA products and mentorship opportunities throughout the program, and a mandatory internship follows once the fellowship year ends.
Experts will evaluate research projects on several criteria, including recommendations, academic achievements, and relevance to NVIDIA’s primary research domains.
The program is a great way to support academia and research in the pursuit of technological innovation, and to strengthen NVIDIA’s standing as an employer of choice for future experts. Per NVIDIA’s press release, the program “allows us to demonstrate our commitment to academia in supporting research that spans all areas of computing innovation.”
This year’s program will emphasize artificial intelligence, deep neural networks, natural language processing, robotics, and related technologies. Bill Dally, chief scientist and senior vice president of research at NVIDIA, said the fellowship program would help participants develop relationships with industry experts and pave their way forward.
Meta AI is releasing Implicitron, a modular framework within the PyTorch3D library, to advance 3D neural representations. Implicitron will provide implementations and abstractions to render 3D components.
With rapid advances in neural representations, many new possibilities are opening up, but no clear method of choice has emerged. The new research builds on computer vision techniques that seamlessly integrate natural and virtual objects in augmented reality. Since NeRF came into the picture, over 50 variants of the method have been released for synthesizing views of complex scenes, and the field is still in its infancy, with new variants surfacing frequently.
Implicitron makes it possible to evaluate combinations, variations, and modifications of these methods in a standard codebase without any 3D graphics expertise. The modular architecture lets researchers use it as a user-friendly, state-of-the-art pipeline while extending NeRF with trainable components. Meta has successfully created composable versions of several generic neural reconstruction components to produce real-time photorealistic renderings.
Meta has also curated additional components for experimentation and extensibility, including a plug-in system that allows user-specified implementations and flexible configuration, as well as a training class that uses PyTorch Lightning to launch new experiments.
Implicitron aspires to function as a cornerstone for research in the area of neural implicit representation and rendering, just like Meta’s Detectron2 has become the go-to framework for constructing and assessing object detection methods on a range of data sets.
Meta aims to provide users of the framework with a way to quickly install and import components from Implicitron into their projects without recompiling or copying the code.
Hyundai Motor Group has taken a controlling stake in the Boston Dynamics AI Institute, investing over US$400m for an 80% shareholding. The new AI facility will be a research-driven organization focused on “solving the most important and difficult challenges facing the creation of advanced robots.”
Both companies aim to advance the fundamentals of artificial intelligence at the new institute, which is led by Marc Raibert, the founder of Boston Dynamics. Hyundai will invest its resources and staff in key technical areas while partnering with many corporate research labs.
The AI institute will be based in the Kendall Square research community in Cambridge, Massachusetts, and will host many software and hardware experts. AI, robotics, ML, and engineering experts will work together to develop robotic technologies and enhance their capabilities for several new use cases.
Raibert said, “Our mission is to create future generations of advanced robots and intelligent machines that are smarter, more agile, perceptive, and safer than anything that exists today.”
This is not the only investment Hyundai Group is making; it has also announced plans to establish a Global Software Center to enhance its software technologies and software-defined vehicles (SDVs). The center will focus on 42dot, autonomous driving technology, and mobility software.
Python is one of the most popular interpreted programming languages, and much of its appeal lies in its nearly limitless usability. It can be applied to almost any programming task, from web development to machine learning to complex data analysis. Because it is easy to learn and understand, Python sits at the top of many coders’ arsenals, and its growing popularity draws newcomers to the language every day. There are many resources for learning Python, including books, video tutorials, and online courses. Here is a list of top Python programming books you can pick up to start coding.
Python Crash Course – Eric Matthes
‘Python Crash Course’ by Eric Matthes is a Python programming book for beginners offering a comprehensive, project-based introduction to the language. It covers the fundamentals of Python, including its core elements and data structures, and concentrates on the basics and the application of everything you learn. The book has two sections; the first covers the different data types and how to work with each, conditional (if) statements and while loops, then dictionaries, functions, classes, file handling, and code testing and debugging.
The second section contains three major projects and some fun and clever applications of the knowledge gained in the first. The first project is an Alien Invasion game, modeled on the Space Invaders arcade game, which you develop using the pygame package. The second project centers on data visualization: working with the matplotlib and pygal packages, random walks, rolling dice, graphs and charts, and a bit of statistical analysis; it also has you interact with web APIs, handle various data formats, and retrieve and visualize data from GitHub. The third project is a simple web application built with Django. Step by step, it guides you through installing Django, designing models, creating an admin interface, setting up user accounts, managing access controls, styling the entire application, and deploying it to Heroku. It may sound like a lot, but the book makes it manageable with well-written, well-organized content and exercises.
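The random walks in the second project can be sketched in a few lines of plain Python (an illustrative sketch, not code from the book; the book itself goes on to plot the walk with matplotlib):

```python
import random

def random_walk(steps, seed=None):
    """Simulate a 1-D random walk and return the list of positions visited."""
    rng = random.Random(seed)  # seeded generator for reproducible walks
    position = 0
    path = [position]
    for _ in range(steps):
        position += rng.choice([-1, 1])  # step left or right with equal probability
        path.append(position)
    return path

walk = random_walk(1000, seed=42)
print("final position after 1000 steps:", walk[-1])
```

The list of positions returned here is exactly what the book feeds to matplotlib to visualize the wandering path.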
Head First Python: A Brain-Friendly Guide – Paul Barry
‘Head First Python: A Brain-Friendly Guide’ by Paul Barry uses an easy-to-learn, visually rich format to engage the mind, based on the latest research in cognitive science and learning theory. This multi-sensory approach plays to how your brain works, making the book engaging and easy to read. It starts with the basics of lists and how to use and manage them, then moves on to modules, errors, and file handling, with a unifying project of building a dynamic website for a school athletics coach through the Common Gateway Interface (CGI). It covers a range of Python tasks using data structures and built-in functions, making the learning process accessible, painless, and effective, and teaches you to build your own web application along with database management, exception handling, and other fundamentals.
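The athletics-coach project revolves around exactly this kind of list wrangling. A toy sketch (illustrative only, not the book’s code) of cleaning and ranking recorded race times:

```python
def to_seconds(time_string):
    """Convert an 'm:ss' race time into seconds for numeric comparison."""
    minutes, seconds = time_string.split(":")
    return int(minutes) * 60 + int(seconds)

# Duplicated, unsorted times as a coach might jot them down.
times = ["2:34", "2:45", "3:01", "2:34", "2:45"]

# De-duplicate with a set, then sort numerically via the key function.
fastest_three = sorted(set(times), key=to_seconds)[:3]
print(fastest_three)
```

Sorting with a key function instead of comparing the raw strings is the kind of idiom the book builds the project around.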
Python Programming: Using Problem Solving Approach – Reema Thareja
‘Python Programming: Using Problem Solving Approach’ by Reema Thareja is a detailed Python textbook designed for a first-level course in Python programming. The book contains 12 in-depth chapters on Python and related topics. The first two chapters introduce computers, problem-solving strategies, and object-oriented programming. The highlight of the book is that concepts are illustrated for easy understanding, and numerous examples are provided with their outputs to help students grasp the art of writing code. Subsequent chapters review the basics of Python programming, decision control statements, functions, Python strings, data structures, and classes and objects. A unique feature is the notes and programming-tip markups that flag important concepts and help readers avoid common coding errors. The last three chapters cover further topics in Python scripting, including inheritance and polymorphism, operator overloading, and error and exception handling.
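Operator overloading, one of the book’s closing topics, can be demonstrated in a few lines (an illustrative sketch, not taken from the book):

```python
class Vector:
    """A minimal 2-D vector demonstrating operator overloading."""

    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        # Called when two Vectors are combined with the + operator.
        return Vector(self.x + other.x, self.y + other.y)

    def __eq__(self, other):
        # Called when two Vectors are compared with ==.
        return (self.x, self.y) == (other.x, other.y)

    def __repr__(self):
        return f"Vector({self.x}, {self.y})"

print(Vector(1, 2) + Vector(3, 4))
```

Defining `__add__` and `__eq__` is what lets plain operators like `+` and `==` work on user-defined classes.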
Automate the Boring Stuff with Python – Al Sweigart
‘Automate the Boring Stuff with Python’ teaches Python 3 to all types of learners, from beginners to experienced programmers. The book requires no prior knowledge of Python or programming; anyone can learn from it. One of the best-selling Python programming books, it helps you master the fundamentals and explores library modules for tasks such as data scraping, reading PDF and Word documents, and automating clicking and typing. The first eleven chapters cover the fundamentals and core concepts of Python before diving into web scraping and working with Excel spreadsheets, Google Sheets, PDF and Word documents, and more. The tasks are staged so that once you master the basics you can write Python programs, then move on to automation exercises: searching for text across files; creating, updating, moving, or renaming files; searching the web and downloading online content; and so on. The book focuses on jobs that take hours when done manually, so that with Python you can automate them and finish in minutes.
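A taste of the kind of automation the book teaches: a small sketch (the function name is illustrative, not the book’s code) that searches for a text pattern across every text file in a folder:

```python
import re
from pathlib import Path

def find_in_files(directory, pattern):
    """Return (filename, line number, line) for every matching line in *.txt files."""
    regex = re.compile(pattern)
    hits = []
    for path in sorted(Path(directory).glob("*.txt")):
        # enumerate from 1 so reported line numbers match an editor's view
        for number, line in enumerate(path.read_text().splitlines(), start=1):
            if regex.search(line):
                hits.append((path.name, number, line))
    return hits
```

Run against a folder of notes, `find_in_files(folder, r"TODO")` returns every to-do line with its file and line number, the kind of minutes-instead-of-hours job the book is about.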
Python Programming for Beginners: 2 Books in 1 – The Ultimate Step-by-Step Guide To Learn Python Programming Quickly with Practical Exercises (Computer Programming) – Mark Reed
‘Python Programming for Beginners: 2 Books in 1 – The Ultimate Step-by-Step Guide To Learn Python Programming Quickly with Practical Exercises’ by Mark Reed is one of the Python programming books for beginners that delivers what its title promises. It is a short, concise book built around a step-by-step guide. In this fast-paced world, learning can feel lengthy and boring, but this book aims to make learning Python simpler and faster. The book is cumulative: earlier chapters lay the groundwork for later ones, following an overall step-by-step approach. Its chapters cover algorithms and information processing, working with Python strings, math functions in Python, file processing, and more. Along with theory and explanations, the book includes a dynamic, interactive guide to running the code you learn.
Think Python: How to Think Like a Computer Scientist – Allen B. Downey
‘Think Python: How to Think Like a Computer Scientist’ is a free Python programming book written by Allen B. Downey, an American computer scientist. It introduces Python programming, defining all core terms and then developing each new concept in a logical progression. The book aims to teach how expert programmers think about coding. This hands-on guide walks you step by step through beginning a programming journey, the basic framework and terminology of programming, and then advanced topics such as functions, recursion, data structures, and object-oriented design. It covers content that other Python books often skip, including operator overloading, polymorphism, analysis of algorithms, and mutability versus immutability. The latest (second) edition packs four major projects that provide practical case studies for understanding and implementing code. The book offers hands-on programming experience as you learn and is best suited to beginners, self-learners, and working professionals with a zeal for Python programming.
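Recursion, one of the book’s core topics, pairs naturally with memoization, which Downey also covers. A small illustrative sketch (not the book’s code):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci(n):
    """Recursive Fibonacci; the cache turns exponential time into linear."""
    if n < 2:
        return n  # base cases: fibonacci(0) == 0, fibonacci(1) == 1
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(30))  # fast, thanks to the cache
```

Without the cache, the same recursive definition recomputes the same subproblems exponentially many times; this contrast is exactly the kind of “analysis of algorithms” thinking the book encourages.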
Fluent Python: Clear, Concise, and Effective Programming – Luciano Ramalho
The name says it all: ‘Fluent Python: Clear, Concise, and Effective Programming’ will make you fluent in the Python programming language. The author, Luciano Ramalho, is a web developer who runs his own Python training company. Where other Python programming books focus on the details and workings of a code script, this one is grounded in Python’s core features and libraries, showing how to make your code shorter, faster, and more readable at the same time. The book targets intermediate Python programmers with a solid foundation who want to move to the next level, as well as experienced programmers coming from other languages who want to accomplish the same tasks in Python. Its standout feature among Python programming books is the organization of topics: each section is independent, so you don’t need to have covered prior chapters before diving in. The chapters fall into six sections: prologue, data structures, functions as objects, object-oriented idioms, control flow, and metaprogramming. The book is exceptionally approachable, with code examples in every chapter and numbered call-outs linking lines of code to helpful descriptions. It is a hands-on guide to writing Python code that uses the best features of the language.
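The book famously opens by showing how special (“dunder”) methods plug a class into Python’s data model. A minimal sketch in the spirit of that opening card-deck example (simplified; not the book’s exact code):

```python
class FrenchDeck:
    """A deck of cards that behaves like a built-in Python sequence."""

    ranks = [str(n) for n in range(2, 11)] + list("JQKA")
    suits = ["spades", "diamonds", "clubs", "hearts"]

    def __init__(self):
        self._cards = [(rank, suit) for suit in self.suits for rank in self.ranks]

    def __len__(self):
        return len(self._cards)  # enables len(deck)

    def __getitem__(self, position):
        return self._cards[position]  # enables deck[0], slicing, and iteration

deck = FrenchDeck()
print(len(deck), deck[0])
```

With just `__len__` and `__getitem__`, the deck works with `len()`, indexing, `for` loops, and `random.choice`, which is the book’s point about leaning on the data model instead of inventing ad-hoc methods.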
All the above-mentioned Python programming books provide solid knowledge of Python and practical coding experience. Beyond these, here is a bonus book for Python programmers to practice with and prepare for future opportunities.
Bonus book – Elements of Programming Interviews in Python: The Insiders’ Guide – Adnan Aziz, Amit Prakash & Tsung Hsien Lee
The book ‘Elements of Programming Interviews in Python: The Insiders’ Guide’ tests what you know about Python. It contains 250 challenging problems of the kind often asked in interviews at top software companies, illustrated with 200 figures, 300 tested programs, and 150 additional variants, all followed by detailed solutions. The book also offers theory-based interview tips and practice questions. It is one of the Python programming books that helps you brush up on your Python concepts, including data structures and algorithms.
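A flavor of the interview-style problems such books drill, sketched here with a classic warm-up (this particular example is illustrative, not taken from the book):

```python
def two_sum(numbers, target):
    """Return indices of the two entries that sum to target, or None."""
    seen = {}  # value -> index of its first occurrence
    for index, value in enumerate(numbers):
        complement = target - value
        if complement in seen:
            return (seen[complement], index)
        seen[value] = index
    return None

print(two_sum([2, 7, 11, 15], 9))
```

The dictionary trades memory for speed, giving a single-pass O(n) solution instead of the naive O(n²) double loop, precisely the kind of trade-off interviewers probe.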
BrainChip Holdings Ltd., a commercial supplier of neuromorphic AI, is leveraging its neuromorphic technology via the BrainChip University AI Accelerator Program. The program aims to share technical knowledge, promote cutting-edge discoveries, and guide students to become the next generation of tech innovators.
The program is a package of training, guidance, and hardware provision for students at higher education institutions with AI and engineering programs. BrainChip’s products will be available for students who enroll in the program to use in their projects for multiple use cases. The students will also have access to event-based technologies in real-world applications.
The AI accelerator program finished a pilot session at Carnegie Mellon University in the previous Spring semester. Five more universities are expected to join the sessions in the coming academic year.
Each session will walk students through a working environment for BrainChip’s AKD1000 on a Linux system, pairing lecture-based tutoring with hands-on experiential learning. BrainChip’s Akida processor mimics the brain, analyzing only the essential sensory inputs at the point of acquisition to improve data processing and energy economy.
Prof. John Paul Shen of the Electrical and Computer Engineering Department at Carnegie Mellon said, “Our students had a great experience in using the Akida development environment and analyzing results from the Akida hardware. We look forward to running and expanding this program in 2023.”
Meta and Stanford University researchers have developed a new metric for pruning AI datasets. The metric aims to improve training scalability: under the prevailing power-law relationship, huge numbers of additional data samples are required to increase performance by even a few percentage points.
The pruning techniques used at present are either inefficient or severely compute-intensive. The new pruning metric requires far less computation and is self-supervised, needing no labels.
Using statistical mechanics, the researchers showed that proper dataset pruning can move the error curve from a power law to an exponential-decay relationship, under which far fewer additional samples are needed to reach the same performance.
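A toy numerical illustration of the difference (the curves and constants here are made up for illustration and are not from the paper): under a power law, halving the error can require doubling the dataset, while under exponential decay a small fixed number of extra samples suffices.

```python
import math

def power_law_error(n):
    return 100.0 / n              # test error falls as a power law, ~ 1/n

def exp_decay_error(n):
    return math.exp(-n / 50)      # test error falls exponentially in n

def extra_samples_to_halve_error(n, error_fn):
    """Count how many additional samples are needed to halve the current error."""
    target = error_fn(n) / 2
    m = n
    while error_fn(m) > target:
        m += 1
    return m - n

print(extra_samples_to_halve_error(100, power_law_error))  # power law: dataset must double
print(extra_samples_to_halve_error(100, exp_decay_error))  # exponential decay: far fewer samples
```

Each successive halving is worse for the power law (the required increment keeps doubling), while the exponential curve needs the same fixed increment every time.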
Meta’s researchers began by developing a theoretical model of data pruning and defining a ‘margin’ for each training example, where a large margin indicated an “easy” example and a small margin a “hard” one.
They applied K-means clustering in an embedding space, taking the distance between each example and its nearest cluster centroid as the pruning metric. The researchers observed that the best pruning strategy depended on the initial dataset size, concluding that the larger the initial dataset, the larger the fraction of data that can be pruned while still achieving exponential-decay scaling.
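A minimal sketch of this pruning metric, assuming the embeddings and K-means centroids are already computed (all names and data here are illustrative; the paper’s actual pipeline derives embeddings from a pretrained model):

```python
import math

def prune_by_centroid_distance(embeddings, centroids, keep_fraction):
    """Rank examples by distance to their nearest centroid and keep the 'hardest' ones."""
    def hardness(point):
        # An example far from every cluster prototype is less typical, hence "harder".
        return min(math.dist(point, c) for c in centroids)

    ranked = sorted(embeddings, key=hardness, reverse=True)  # farthest first
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

centroids = [(0.0, 0.0), (10.0, 10.0)]                       # cluster centers from K-means
points = [(0.1, 0.0), (9.9, 10.0), (5.0, 5.0), (0.0, 0.2)]   # toy 2-D embeddings
print(prune_by_centroid_distance(points, centroids, 0.25))
```

Keeping the far-from-centroid examples corresponds to retaining hard examples, the regime the paper recommends when the initial dataset is large.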
This is not the first time that model performance has become the focus of a research project. In 2020, OpenAI also published research based on accuracy trends of NLP models. The research also prioritized dataset sizes as a factor affecting the model performance.
Lenovo announced its new AI Innovators Program for the convenience of artificial intelligence deployment. The program will offer various independent software vendors (ISVs) and customer support for scaling in AI.
Businesses are now focusing on strategic AI solutions and technologies to unlock greater potential and efficiencies. Through the program, Lenovo’s innovator partners will deliver cognitive solutions across financial services, manufacturing, retail, and smart city applications, enabling vertical use cases such as predictive maintenance, clinical diagnostics, and autonomous shopping.
Scott Tease, VP and General Manager of HPC and AI at Lenovo, said that the company focuses on the “intelligent transformation to smarter technology” with more innovative infrastructure and business models. He said, “Lenovo’s partner ecosystem provides the full range of support to help businesses simplify implementation across a variety of infrastructures and deploy today’s most innovative AI software.”
Lenovo AI Innovators ecosystem has welcomed more than 30 AI ISV partners aboard, representing over US$1b in capital investment. Sunlight Technologies is one of the first global partners with Lenovo AI Innovators. They are creating industry-first AI solutions at the edge to turn data into decisions across several platforms.
Julian Chesterfield, CEO and founder of Sunlight, said, “This partnership allows Sunlight early access to Lenovo’s latest edge innovations for certification and proofs of concepts that mutually benefit our networks of partners, ISVs and enterprises.”
Lenovo’s ecosystem will be a unique one-stop destination for all enterprises leveraging industry-focused services and solutions.
Motional, a joint venture between Hyundai and Aptiv, and Lyft have launched an all-electric robotaxi service in Las Vegas. The robotaxi service is a prelude to a fully driverless service slated to launch in 2023.
Motional is an autonomous vehicle company operating autonomously driven vehicles in Las Vegas through a partnership with Lyft. The venture began as a weeklong pilot between Aptiv and Lyft during the 2018 Consumer Electronics Show; since then, they have completed over 100,000 passenger trips.
Lyft and Motional are now proceeding with the public launch of their service, making it the first time that customers can hail autonomous rides in Hyundai Ioniq 5s. However, the rides would accompany a safety driver behind the wheel for emergencies.
The robotaxis do not require potential riders to sign up for a waiting list or sign a non-disclosure agreement for beta testing. Rides are free for now, though the companies plan to start charging in the coming operational years.
Akshay Jaising, VP of Commercialization at Motional, said, “Any Lyft rider in Las Vegas can request a Motional AV. No NDAs. No sign-ups. That’s how Motional and Lyft have operated for the past four years. We believe the best feedback is from real riders, not employees or limited participants.”
Motional also says it holds a permit to test its fully driverless vehicles in Nevada ahead of their 2023 launch. The companies anticipate securing the appropriate licenses and permits to begin commercial rides by 2023.
From September 19 to September 22, NVIDIA will virtually host its upcoming GTC conference, featuring a keynote address from CEO Jensen Huang and more than 200 tech sessions. Even before its most anticipated event, however, the company is already making headlines with its latest announcements, especially at SIGGRAPH 2022, the world’s largest gathering of computer graphics professionals.
SIGGRAPH 2022
At SIGGRAPH 2022, NVIDIA described Universal Scene Description (USD), created by Pixar, a Disney company, as the metaverse’s equivalent of HTML in their opening presentation. NVIDIA also discussed the metaverse’s future, in which avatars would resemble browsers and provide a new level of human-machine interaction. Additionally, the company introduced the idea of neural graphics, which mainly relies on AI to produce more realistic metaverse graphics with much less effort.
NVIDIA believes that neural graphics will play an integral role in metaverse by using AI to learn from data to create powerful graphics. Integrating AI improves outcomes, automates design decisions, and offers previously unimagined potential for makers and artists. Neural graphics will revolutionize virtual environment creation, simulation, and user experience.
NVIDIA acknowledges that when creating 3D objects for video game scenes, virtual worlds (including the metaverse), product design, or visual effects, a professional artist must balance photorealism and detail against time and financial constraints.
Based on these concerns, the company published new research and a comprehensive set of tools that use the power of neural graphics to create and animate 3D objects and environments in order to streamline and accelerate this process. In this article, we will discuss some of the groundbreaking product announcements from NVIDIA.
Holographic Glasses
A recent study by Stanford University and NVIDIA researchers addressed one of the major problems with virtual reality (VR) experiences: bulky headsets. The researchers demonstrated how these headsets could be thinned down to the thickness of a pair of ordinary-looking spectacles. The design is based on pancake lenses, which NVIDIA helped adapt for use with three-dimensional (3D) images. In a joint effort with the Stanford team, NVIDIA also succeeded in reducing the distance between the lens and the display, using a phase-only spatial light modulator (SLM) that creates a small image behind the device and is illuminated by a coherent light source.
The result is a holographic near-eye display, known as Holographic Glasses, that uses a pupil-replicating waveguide, a spatial light modulator, and a geometric phase lens to produce holographic images in a thin, lightweight design. The proposed design uses a 2.5 mm thick optical stack to deliver full-color 3D holographic images. The researchers introduced a new pupil-aware high-order gradient descent approach to compute the correct phase as the user’s pupil size varies. The wearable binocular prototype supports 3D focus cues and offers a diagonal field of view of 22.8° with a 2.3 mm static eye box, plus the option of a dynamic eye box with beam steering, all while weighing only 60 g without the driving board.
NeuralVDB
At SIGGRAPH 2022, NVIDIA launched NeuralVDB in an effort to condense neural representations and significantly reduce memory footprint to enable higher-resolution 3D data.
By leveraging recent developments in machine learning, NeuralVDB improves on VDB, an established industry standard for efficient storage of sparse volumetric data. This hybrid data structure drastically reduces the memory footprint of VDB volumes while preserving flexibility and incurring only a small, user-controlled compression error.
According to NVIDIA, NeuralVDB will introduce AI capability to OpenVDB, the industry-standard framework for modeling and displaying sparse volumetric data such as water, fire, smoke, and clouds. Building on this foundation, NVIDIA released GPU-accelerated processing with NanoVDB last year for much-increased performance. With the addition of machine learning, NeuralVDB expands on NanoVDB’s GPU acceleration by introducing compact neural representations that significantly lower its memory footprint. This makes it possible to express 3D data at a considerably greater resolution and scale than OpenVDB. Users are now able to manage large volumetric information with ease on devices like laptops and individual workstations.
In a nutshell, NeuralVDB inherits all of its predecessors’ features and optimizations while introducing a structure of compact neural representations that decreases memory footprints by up to 100x.
In addition, NeuralVDB allows the weights of one frame to be used for the following one, accelerating training by up to 2x. By warm-starting from the network results of the preceding frame, NeuralVDB also lets users achieve temporal coherency, or smooth encoding.
For experts working in fields like scientific computing and visualization, medical imaging, rocket science, and visual effects, the launch of NeuralVDB at SIGGRAPH is a game-changer. NeuralVDB can open up new possibilities for scientific and industrial use cases by achieving the perfect mix of significantly lowering memory requirements, speeding up training, and enabling temporal coherency, including large-scale digital twin simulations and massive, complex volume datasets for AI-enabled medical imaging.
Kaolin Wisp
NVIDIA Kaolin Wisp is a PyTorch library that uses NVIDIA Kaolin Core to interact with neural fields (including NeRFs, NGLOD, instant-ngp and VQAD). By cutting the time needed to test and deploy new approaches from weeks to days, it enables faster 3D deep learning research.
NVIDIA says Kaolin Wisp seeks to provide a set of common utility functions for neural field research, comprising utilities for rays, mesh processing, datasets, and image I/O. Wisp also includes building blocks for creating sophisticated neural fields, such as differentiable renderers and differentiable data structures (octrees, hash grids, and triplanar features). Additionally, it offers interactive rendering and training, logging, trainer modules, and debugging visualization tools.
NVIDIA founder and CEO Jensen Huang stated at SIGGRAPH 2022 that the metaverse, the next chapter of the internet, will be propelled by the fusion of artificial intelligence and computer graphics. The metaverse will also be populated by one of the most popular forms of robots: digital human avatars. These avatars will work in virtual workplaces, participate in online games, and offer customer support to online retailers.
Such digital avatars need to be developed with millisecond precision in natural language processing, computer vision, complicated face and body motions, and other areas. With the Omniverse Avatar Cloud Engine, NVIDIA hopes to streamline and expedite this process.
The Omniverse Avatar Cloud Engine (ACE) is a brand-new set of cloud APIs, microservices, and tools for building, personalizing, and delivering digital human avatar applications. Because ACE is built on NVIDIA’s Unified Compute Framework, developers can easily incorporate key NVIDIA AI capabilities into their avatar applications. Omniverse ACE also lets you create and flexibly deploy your avatar to meet your demands, whether your requirements are real-time or offline.
Developers can use Avatar Cloud Engine to bring their avatars to life with powerful software tools and APIs such as NVIDIA Riva for speech AI applications, NVIDIA Merlin for high-performance recommender systems, NVIDIA Metropolis for computer vision and advanced video analytics, and NVIDIA NeMo Megatron for natural language understanding. They can also use the Omniverse ACE platform to create intelligent AI service agents with Project Tokkio, an application built on Omniverse ACE that brings AI-powered customer service to retail stores, quick-service restaurants, and the web.
GauGAN360
NVIDIA has introduced GauGAN360, a new experimental online art tool for creating 8K, 360-degree settings that can be imported into Omniverse scenes. The software, which uses the same technology as NVIDIA’s original GauGAN AI painting tool, lets users paint a rough overall landscape, from which GauGAN360 produces a cube map or an equirectangular image corresponding to the sketch.
Omniverse Audio2Face
Omniverse Audio2Face is AI software that creates expressive facial animation from a single audio source. Audio2Face can be used for standard facial animation as well as interactive real-time applications, and it can retarget to any 3D human or human-esque face, whether realistic or stylized.
According to NVIDIA, getting started is easy since Audio2Face comes prepackaged with “Digital Mark,” a 3D character model that can be animated with your audio file. All you have to do is choose your audio and upload it. The output of the pretrained deep neural network then drives the 3D vertices of your character’s mesh to control facial movement in real time. You can also adjust several post-processing parameters to alter how your character behaves.
TAO Toolkit
NVIDIA also released the TAO Toolkit at SIGGRAPH, a framework that enables developers to build accurate, high-performance pose-estimation models. Such a model can infer what a person might be doing in an image considerably faster than existing approaches. By abstracting away the complexity of the AI/deep-learning framework, this low-code variant of the NVIDIA TAO framework speeds up model training with Jupyter notebooks. You can use the TAO Toolkit to optimize inference and fine-tune NVIDIA pre-trained models with your own data, without prior AI knowledge or access to a large training dataset.
Developers can use TAO Toolkit to deploy optimized models using NVIDIA DeepStream for vision AI, Riva for speech AI, and Triton Inference Server. They can also deploy it in a modern cloud-native infrastructure on Kubernetes and integrate it in their application of choice with REST APIs.
Instant Neural Graphics Primitives: NVIDIA Instant Neural Graphics Primitives is a revolutionary technique for capturing the shape of real-world objects, and it serves as the basis for NVIDIA Instant NeRF, an inverse-rendering model that converts a collection of still photos into a digital 3D scene. The research behind Instant NeRF was honored at SIGGRAPH as a best paper for its significance to the future of computer graphics research.
Other Key Announcements
NVIDIA Jetson AGX Orin
On August 3rd, 2022, NVIDIA announced the NVIDIA Jetson AGX Orin 32GB production modules. The Jetson AGX Orin 32GB combines an 8-core Arm-based CPU with an Ampere-based GPU to allow AI acceleration in embedded systems, edge AI deployments, and robotics. The device, which has 64GB of flash storage and 32GB of RAM, is just marginally bigger than a Raspberry Pi.
The Jetson AGX Orin 32GB unit, unveiled earlier this year, can perform 200 trillion operations per second (TOPS). Its production module will have a 1792-core GPU with 45 Tensor Cores and will include an array of connectivity options such as 10Gb Ethernet, 8K display output, and PCIe 4.0 lanes. According to NVIDIA, the 64GB version will be available in November, and a pair of smaller Jetson Orin NX production modules are coming later this year.
The Jetson AGX Orin developer kit enables several concurrent AI application pipelines with its NVIDIA Ampere architecture GPU, next-generation deep learning and vision accelerators, high-speed I/O, and quick memory bandwidth. Customers can create solutions employing the biggest and most intricate AI models with Jetson AGX Orin to address issues like natural language comprehension, 3D vision, and multi-sensor fusion.
In addition to JetPack SDK, which incorporates the NVIDIA CUDA-X accelerated stack, Jetson Orin supports a variety of NVIDIA platforms and frameworks: Riva for natural language understanding, Isaac for robotics, TAO Toolkit for speeding up model development with pre-trained models, DeepStream for computer vision, and Metropolis, an application framework, collection of developer tools, and partner ecosystem that combines visual data and AI to increase operational efficiency and safety.
By modeling any Jetson Orin-based production module on the Jetson AGX Orin developer kit first, customers can market their next-generation cutting-edge AI and robotics solutions considerably more quickly.
Omniverse Extensions
Antonio Serrano-Muñoz, a Ph.D. student in applied robotics at Mondragon University in northern Spain, developed an Omniverse Extension that connects the Robot Operating System (ROS) with NVIDIA Isaac Sim. Omniverse Extensions are fundamental building blocks that anybody may use to develop and expand the functionality of Omniverse Apps to suit their unique workflow requirements with just a few lines of Python code.
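Those "few lines of Python" follow a small lifecycle interface. The sketch below shows the shape of a minimal Omniverse Kit extension using the public `omni.ext.IExt` base class; the class name and printouts are illustrative, not Serrano-Muñoz's actual code, and outside an Omniverse runtime (where `omni.ext` is unavailable) it falls back to a plain base class so the sketch stays self-contained.

```python
# Hedged sketch of a minimal Omniverse Extension. The omni.ext module only
# exists inside an Omniverse runtime; fall back to `object` elsewhere so
# the example remains runnable.
try:
    import omni.ext
    ExtensionBase = omni.ext.IExt
except ImportError:
    ExtensionBase = object

class HelloIsaacExtension(ExtensionBase):
    """Lifecycle hooks that Kit calls when the extension is toggled on/off."""

    def on_startup(self, ext_id):
        # Register UI, subscribe to events, open connections, etc.
        self.started = True
        print(f"[{ext_id}] extension started")

    def on_shutdown(self):
        # Tear everything down so the extension can be cleanly disabled.
        self.started = False
        print("extension shut down")
```

Within Omniverse, the app discovers such a class from the extension's configuration and drives these hooks automatically.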
One of the six open-source, GitHub-accessible Omniverse Extensions developed by Serrano-Muñoz expands the capabilities of NVIDIA Isaac Sim, an application framework powered by Omniverse that allows users to build photorealistic, physically correct virtual worlds to design, test, and train AI robots.
Antonio also built a digital twin of the robotics prototype lab at Mondragon University and robot simulations for industrial use cases using NVIDIA Omniverse.
After scrutiny from various technology giants and privacy advocates, the Indian Government has withdrawn the Personal Data Protection Bill, which would have constrained how firms handle sensitive information and granted the government access to it.
The bill was unveiled in 2019 by the Ministry of Electronics and Information Technology to protect the personal data of individuals and enterprises. It sought to empower individuals with data rights, but did not specify how much control they would have over their data. The bill also mandated that individuals and enterprises store only certain information deemed “critical.”
Lawmakers indicated that the bill could now see the “light of day” soon. However, the bill received numerous recommendations and amendments from a panel of experts, whose scrutiny identified several issues relevant to the bill but “beyond the scope of a digital privacy law.”
Ashwini Vaishnaw, IT Minister, said in a statement, “The Personal Data Protection Bill, 2019 was deliberated in great detail by the Joint Committee of Parliament. 81 amendments were proposed and 12 recommendations were made toward comprehensive legal framework on digital ecosystem.”
Many industry stakeholders, such as the Internet Freedom Foundation, an advocacy group, said that the bill exempts government departments and prioritizes big corporations while disrespecting the fundamental right to privacy. Tech giants like Google, Amazon, and Meta also expressed concerns about some of the recommendations on the proposed bill.