Jidu Automotive, the smart electric vehicle subsidiary of Chinese technology company Baidu, plans to mass-produce and deploy Level 4 autonomous electric vehicles by the end of 2023.
The announcement was made by Baidu co-founder and CEO Robin Li during the company’s annual developers’ conference, hosted on Xirang, China’s first metaverse platform. The robocars will initially be deployed as taxis and trucks in designated regions of the country and then gradually expand to other customer-facing services.
Experts believe that large-scale deployment of L4 autonomous vehicles for commercial use might arrive by 2030. Jidu Automotive claims that its autonomous car will not require any human intervention to operate and will come with self-learning and self-improving capabilities, making it a highly capable and sophisticated self-driving vehicle.
Experts suggest that the intelligent automobile industry will see a revolution in the coming 10 to 40 years, and that this development will keep expanding over time. Zhang Xiang, a researcher at the Automobile Industry Research Center China, said, “Baidu has accelerated its steps in the vehicle-to-everything infrastructure construction across the nation, in a bid to promote the commercialization of autonomous driving technologies.”
He further added that it would take quite some time for fully autonomous vehicles to reach mass-production and commercial deployment stages as the current technology standards are insufficient to build reliable autonomous vehicles.
Baidu recently reached a remarkable milestone in its robotaxi venture, completing more than 110,000 rides during the third quarter of 2021. Building on this success, Baidu plans to expand its robotaxi service to over 65 cities by 2025 and to 100 cities by 2030. Last year, Baidu also unveiled its new robocar with L5 driving autonomy at its annual flagship technology conference, Baidu World 2021.
Swap, one of the largest online consignment and thrift shops, has partnered with technology firm Find.Fashion to offer visual search features built on artificial intelligence and emotion recognition technologies.
The two companies plan to integrate Find.Fashion’s technology into Swap’s shopping platform to expand its capabilities and provide customers with a better shopping experience.
The AI-powered visual search feature will soon be made available on Swap’s mobile shopping website. Find.Fashion’s solution prioritizes the emotions behind user actions instead of relying solely on text-based input, improving the quality and relevance of search results for customers.
Chief Commercial Officer and Board Member of Swap, Antonio Gallizio, said, “Adding FIND.Fashion visual search makes it easier to find what they want at that moment, and it makes the shopping experience easier, faster, and more engaging. We know this because buyers that use the search walk away happier with their purchases and find more products on our site that they inevitably buy.”
He further added that the feature would considerably help shoppers who are unsure about what they want to buy. It eliminates the need to type a query, replacing it with an intuitive visual search that increases the chances of finding the products best suited to each customer’s needs.
Find.Fashion, an Estonia-based firm that describes its product as the world’s first visual search for eCommerce, was founded by Diana Saarva, Heikki Haldre, and Paul Pällin in 2018. The company uses artificial intelligence to help consumers search for fashion-related products when they are unsure what to purchase.
“We’re thrilled Swap.com customers are enjoying FIND.Fashion visual product search with emotion recognition and that we’ve been able to help the company exceed its ecommerce goals in such a short amount of time,” said CEO and Co-founder of Find.Fashion, Heikki Haldre.
Artificial Intelligence company Veritone announced that its MARVEL.ai synthetic voice solution will now support NVIDIA Omniverse Audio2Face. The compatibility will allow developers to build better video and animation projects.
NVIDIA Omniverse Audio2Face is an artificial intelligence-powered platform that allows users to effortlessly generate facial expression animations from audio sources. Veritone MARVEL.ai offers over 200 synthetic stock voices and 150 languages, which now can be used with Audio2Face to create impeccable animation videos.
MARVEL.ai is a Voice as a Service platform that lets users securely create and monetize synthetic voices in the digital world. Developers will now be able to create the best possible VR, AR, 3D graphics, and virtual communities by coupling the two artificial intelligence platforms.
Co-founder and President of Veritone, Ryan Steelberg, said, “Given NVIDIA’s recent announcement at CES 2022 that its real-time 3D design collaboration and virtual world simulation platform is now freely available to millions of NVIDIA Studio creators using GeForce RTX and NVIDIA RTX graphics processing units (GPUs), the interconnected 3D virtual worlds that both Veritone and NVIDIA envision for commerce, entertainment, creativity and industry will be possible.”
He further added that they are incredibly thrilled to announce the compatibility of the two platforms, which many believe to be key to creating the metaverse. Apart from NVIDIA Audio2Face, Veritone’s platform is also compatible with other audio animation software, allowing users to seamlessly add synthetic voices to metaverse characters.
Additionally, users can enjoy text-to-speech features in the Veritone MARVEL.ai platform. United States-based artificial intelligence solutions company Veritone was founded by Chad Steelberg, Patrick Lennon, Ryan Steelberg, and Zeus Peleuses in 2014. The tech firm is known for developing aiWARE, which can be used to transform and analyze both structured and unstructured data.
Chief Marketing Officer of Veritone, Scott Leatherman, said, “As the world pivots to understand, explore, and drive commerce in the metaverse, we are proud to support NVIDIA’s Omniverse with hyper-realistic synthetic voice from MARVEL.ai.”
AI-powered mobility technology startup Hello Llama makes its debut at the CES 2022 event with the launch of its new Llama Vision solution. The CES event invites technology companies from various parts of the world to showcase their cutting-edge technologies to a large global audience.
Hello Llama is led by David Touwsma, who has expertise in micro-mobility technologies and has previously deployed multiple LEVs, bikes, and scooters for public use. The newly launched technology is an Advanced Driver-Assistance System (ADAS) for light electric vehicles.
It leverages leading artificial intelligence and computer vision technologies to provide one-of-a-kind features. The competent system collects real-time information and sends it to operators to make roads safer for riders and pedestrians. Llama Vision also sends the collected data to municipalities to aid them in managing road conditions effectively.
David Touwsma said, “By collaborating with multiple partners in the shared mobility and delivery space, Llama’s goal of providing customized AI hardware and software solutions will allow the operators and cities to improve on last mile technology and create a safer environment.”
He further added that the company’s vision is to create a safe coexistence of light electric vehicles within communities. Hello Llama was founded by Arun Gunasekaran and Bryan Ovalle, who have worked at some of the world’s leading technology companies, including Tesla and SpaceX.
Llama Vision can be effortlessly integrated with vehicles during design and production processes, making the technology highly flexible and versatile. Currently, the company plans to deploy its technology for shared mobility, municipality transit, delivery fleets, university transits, and OEM directs.
“Technology has changed rapidly over the past 12 months allowing our engineering team to optimize faster than our competition and bring to market a far superior product at a price that will allow our customers to run profitable businesses,” said CTO and Co-founder of Hello Llama, Arun Gunasekaran. He also mentioned that the company is excited to announce new mobility features in the coming weeks to revolutionize the industry.
Indian artificial intelligence startup Fractal has become a unicorn after receiving $360 million in funding from TPG. It is the second Indian startup to achieve unicorn status in 2022, after Mamaearth.
The company has raised a total of $685 million to date, and the fresh funding has pushed Fractal’s valuation past $1 billion, making it a unicorn. The investment will come through TPG Capital Asia, the firm’s Asia-focused private equity platform.
According to the company, the transaction of the funds will close by the first quarter of 2022. Fractal plans to use the funds to invest heavily in the research and development of its products.
Co-founder and Group CEO of Fractal, Srikanth Velamakanni, said, “Our AI solutions and products, along with our globally recognized team of experts, empower these organizations to realize and maximize their full potential. As we continue to build upon this foundation, the investment from TPG will accelerate our ability to scale and meet this rising demand globally.”
He also mentioned that TPG’s capabilities complement Fractal’s goals well, allowing the two to work together much as Fractal has with its major shareholder, Apax. Last year, Fractal also planned an initial public offering (IPO) to accelerate its growth. The new funding will allow the company to pursue more mergers and acquisitions soon.
Mumbai-based artificial intelligence startup Fractal was founded by Pranay Agrawal and Srikanth Velamakanni in 2000. The firm specializes in developing several AI-powered products like Qure.ai and Crux Intelligence that help companies make better strategic decisions. Currently, Fractal has a workforce of over 35,000 employees and has offices in 16 locations across the world, including the United States, Singapore, and Australia.
Global digital transformation solutions company UST has launched UST AiSense, which provides customers with personalized shopping recommendations for food items and wines. The technology will considerably help retail shops offer an enhanced shopping experience.
The newly developed artificial intelligence system, powered by Tastry, helps consumers find the food items most compatible with their choice of wine or beer. Apart from assisting consumers in selecting products, the tool also offers analytics features to help retailers better understand what their customers want.
Senior Director, Retail Platform and Solutions at UST, Mahesh Athalye, said, “UST AiSense harnesses the power of Tastry to connect consumer wants with retailers’ digital platforms to drive deeper customer engagement and larger basket sizes. The technology serves brands and retailers by providing science-based suggestions for product development, inventory purchase, and direct-to-consumer recommendation.”
He further added that the new technology would allow retail shop owners to optimize their product mix and increase sales, margins, and customer loyalty by providing an unmatched shopping experience. According to the company, AiSense is a palate-based recommendation tool that helps consumers choose the products they will enjoy the most.
The AI solution can be deployed within popup stores or can be accessed through a website and smartphone application. The solution can also display critical metrics like total sales, available inventory, and customer preferences to help shop owners run, manage, and increase their business.
“The UST AiSense solution helps you find the wine you love by harnessing artificial intelligence combined with our sensory science. Personalized purchase recommendations provided by the AiSense app guides you through the process of selecting your perfect wine or beer and associated food every time,” said General Manager of UST, Keith Pickens. He also mentioned that the app would allow users to select the best option every time without having to worry about their taste preferences. Interested users can download the UST AiSense mobile application from the Google Play Store and the Apple App Store.
Himachal Futuristic Communications Limited (HFCL) collaborates with leading artificial intelligence-powered WiFi technology provider Aprecomm to integrate its network offerings with artificial intelligence capabilities.
HFCL had earlier deployed AI-powered technologies in its PM-WANI solution and aims to expand the adoption of artificial intelligence across its other network offerings to further enhance their capabilities.
The integration will enable HFCL to add analytics features to its services, helping both network providers and consumers access better service. The company currently has ten products in its portfolio that will be integrated with AI solutions.
Managing director of HFCL, Mahendra Nahata, said, “I am elated with the partnership with Aprecomm. Integration of Aprecomm’s AI-powered solutions to our platform enables HFCL to offer enhanced user experience with added reliability and security to our customers.”
He further added that the new partnership with Aprecomm will help build resilient networks for consumers across the world, and that the company plans to extend the integration to various other products in the future. The integrated technology will allow HFCL to monitor and analyze wireless network performance in real time so that consumers receive the best possible service.
The company will use Aprecomm’s VWE AI engine to achieve real-time monitoring of wireless networks. Network providers will receive a well-optimized dashboard capable of performing multiple complex tasks, including event analysis, deployment assistance, user index measurements, and correlated user experience measurements, while providing impactful insights into performance.
Bengaluru-based AI-enabled WiFi technology provider Aprecomm was founded by Guharajan Sivakumar and Pramod Babu Gummaraj in 2016. The firm specializes in integrating wireless access points with artificial intelligence algorithms to provide a better user experience.
Co-founder and CEO of Aprecomm, Pramod Gummaraj, said, “Having reliable Internet connectivity has become extremely essential in today’s world. Programs like PM-WANI are driving us closer to this dream. Network Intelligence and Self-Management will play vital roles with this Increased Connectivity.”
MicroAI, a developer of edge-native artificial intelligence and machine learning products, will demonstrate its new Launchpad platform at the CES 2022 event, to be held in Las Vegas from January 5 to 7.
The CES event invites technology companies from around the world to showcase their products to a large global audience. Launchpad is a new quick-start deployment tool from MicroAI for seamlessly managing MicroAI software running on embedded devices.
Additionally, the company will also demonstrate how manufacturers can use their platform to achieve the best possible results in a separate event. MicroAI has collaborated with renowned communication service provider iBasis to showcase its product at the CES 2022 event.
The company will utilize iBasis’ communication technology to demonstrate the working of Launchpad and how it can be used to easily manage MicroAI applications on various devices.
CEO of MicroAI, Yasser Khan, said, “Edge-native AI enables embedded AI software to run on microcontrollers and microprocessors in endpoint devices, transforming how AI can be made available right where data is captured.”
He further added that their newly developed Launchpad would help companies operating in numerous industries to effectively manage data. Apart from managing MicroAI software, users can also use Launchpad to analyze data from multiple sensors, such as information from temperature sensors, and display them on one screen for better scrutiny.
Dallas-based edge AI enablement firm MicroAI specializes in developing intelligent asset management solutions for multiple industries, including semiconductors, automobiles, telecom, manufacturing, and more.
The company has multiple offshore offices in countries such as India and Japan. MicroAI is also acclaimed for providing endpoint artificial intelligence and machine learning solutions for asset optimization, predictive maintenance, and cyber security.
NVIDIA’s real-time 3D design collaboration and virtual world simulation platform, NVIDIA Omniverse, is now accessible in a free version for millions of individual NVIDIA Studio developers employing GeForce RTX and NVIDIA RTX GPUs. In other words, Omniverse is shifting from beta to 1.0 for creators and artists. The announcement was made at the CES 2022 tech trade event in Las Vegas by Richard Kerris, Vice President of the Omniverse development platform.
The company also revealed new platform improvements for Omniverse Machinima and Omniverse Audio2Face, as well as ecosystem upgrades and new platform capabilities, including Nucleus Cloud and more. Omniverse Nucleus Cloud is a ‘one-click-to-collaborate’ function that allows individual creators in different regions to share and collaborate on developing, altering, and updating large Omniverse 3D scenes without requiring significant data transfers. Meanwhile, Omniverse Audio2Face is an AI-powered tool that analyzes audio samples and applies a “blendshape” capability to match facial motion to audio expressions. This, according to Richard, will be crucial in the construction of meta-humans for metaverse applications.
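Audio2Face’s internals are proprietary, so the following is purely an illustrative sketch, not NVIDIA’s API: it maps per-frame audio energy to a single hypothetical “jaw_open” blendshape weight, the kind of normalized control signal a facial rig consumes. The window size and the energy-to-weight mapping are assumptions for illustration only.

```python
import numpy as np

# Toy stand-in for an audio-driven blendshape pipeline. The real
# Audio2Face uses a deep network; here, RMS energy per short audio
# window is normalized into a [0, 1] "jaw_open" weight per frame.
def audio_to_jaw_open(samples: np.ndarray, rate: int = 16000,
                      window_ms: int = 20) -> np.ndarray:
    window = int(rate * window_ms / 1000)        # samples per frame
    n = len(samples) // window
    frames = samples[: n * window].reshape(n, window)
    energy = np.sqrt((frames ** 2).mean(axis=1))  # RMS per frame
    peak = energy.max() or 1.0                    # avoid divide-by-zero
    return energy / peak                          # normalized weights

# One second of a 220 Hz tone yields 50 frames of jaw weights.
t = np.linspace(0, 1, 16000, endpoint=False)
weights = audio_to_jaw_open(np.sin(2 * np.pi * 220 * t))
print(weights.shape)  # (50,)
```

In a real pipeline, weights like these would drive one blendshape channel among dozens (lips, brows, eyelids) predicted jointly from richer audio features.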
New support for the Omniverse ecosystem has also been added from 3D markets and digital asset libraries such as TurboSquid by Shutterstock, CGTrader, Sketchfab by Epic, and Twinbru. Omniverse is adding new characters and items from games like Shadow Warrior 3 and Mount & Blade II: Bannerlord to its Omniverse Machinima assets for free. Users and producers may remix the game material to create their 3D movies by dragging and dropping assets into their scenes.
Omniverse can be considered NVIDIA’s collaborative solution for creating digital twins of real-world objects and is sometimes referred to as a “metaverse for engineers.” It combines several of NVIDIA’s fundamental technologies, including GPU-accelerated rendering, AI modeling and simulation, 3D scene building, and augmented reality visualization. NVIDIA Omniverse pairs these technologies with the universal scene description (USD) exchange framework, established by Pixar in 2012, for modeling physics, materials, and real-time path tracing.
Richard described the platform as offering the common “connective tissue” for developing and administering virtual worlds, much as HTML provided the connective tissue for content development years ago. Although, as Richard pointed out, the platform has been downloaded by about 100,000 people since the testing program began approximately a year ago, the switch from beta to public access this week will put that idea to the test.
Richard also stated that Omniverse currently has “connectors” to 14 external apps such as Autodesk 3ds Max, Autodesk Maya, and Epic Games’ Unreal Engine, and that the platform would soon have a connector to Adobe Substance 3D Material Extension. NVIDIA Omniverse now also connects to e-on software’s VUE, PlantFactory, and PlantCatalog, among other new connectors and asset libraries. VUE is an all-in-one application that lets users create digital 3D nature based on nature’s rules, from skies and volumetric clouds to terrains, large-scale ecosystems, wind-swept vegetation, open water bodies, roads, and rocks. It also includes a native Omniverse live-link connector that syncs all scene modifications directly to Omniverse stages. PlantFactory lets users build vegetation ranging from small twigs to vast redwood trees, complete with wind animation effects, while PlantCatalog includes around 120 ready-to-use plant items.
The announcement at CES 2022 comes less than two months after Omniverse Enterprise was released for enterprise creation of digital twins and other metaverse applications for industrial use cases. Omniverse Enterprise, a premium subscription platform for professional teams, was launched in November at the NVIDIA GTC Fall 2021 conference and is marketed through NVIDIA’s global partner network. It is a multi-element system that serves end users while also supporting back-end infrastructure. The package includes the Omniverse Nucleus server software, which costs $1,000 per named user per year. Omniverse Nucleus is a database engine that connects multiple people and allows 3D assets and scene descriptions to be shared among them. Once connected, designers working on shading, animation, lighting, modeling, layout, and special effects can collaborate to build a scene.
NVIDIA Omniverse Enterprise enables 3D production teams to collaborate smoothly on large-scale projects. BMW, for example, is employing Omniverse to create a virtual factory that incorporates a variety of planning data and apps and allows unrestricted real-time collaboration. At the GTC November 2021 conference, members of BMW’s Digital Solutions for Production Planning and Data Management for Virtual Factories teams gave a detailed tour of the Regensburg factory’s fully functional, real-time digital twin, which can simulate at-scale production and constraint-based finite scheduling, down to work-order instructions and robotics programming on the shop floor.
Omniverse Avatar, which integrates NVIDIA’s voice processing, recognition, and grammar, computer vision, recommendation systems, and 3D simulation technologies into a system for creating interactive, conversational avatars, was also unveiled at the same event.
Agricultural and forestry machinery manufacturing company John Deere unveils its new fully autonomous tractor. John Deere revealed its new tractor at the CES 2022 event held in Las Vegas. John Deere’s autonomous tractor will be displayed in Central Plaza at the Las Vegas Convention Center.
The event invites companies from all over the world to showcase their innovations to a massive audience. According to John Deere, the fully autonomous tractor is ready to enter large-scale production and will soon be made available for purchase.
The company has integrated several of its existing technologies and machines, including Deere’s 8R tractor, a TruSet-enabled chisel plow, and a GPS guidance system, to create the new fully autonomous tractor.
Deanna Kovar, vice president of production and precision agriculture production systems at Deere, said, “This machine is not for demonstration but for real work in real farm fields and with real customers. We’re pretty confident this approach can be more productive. This tractor never stops to sleep or calls in sick.”
John Deere has equipped the tractor with twelve cameras that allow it to scan its surroundings and decide how to handle any obstacle. Each captured image is run through a deep neural network that classifies every pixel in about 100 milliseconds, making movement decisions highly accurate and reliable.
The tractor has been equipped with NVIDIA GPUs to achieve the maximum possible performance for better autonomy. In order to use the tractor, users just need to position the tractor in fields and use the John Deere Operations Center Mobile platform to operate the vehicle effortlessly.
The platform effectively displays live video, images, data, and metrics to help users manage the operations of the autonomous tractor. Users also get to adjust the tractor’s speed according to their requirements, making it a highly flexible and manageable machine.