Key announcements at Google AI@ Event 2022 

Google announced new projects, software, and upgrades to its existing products to boost artificial intelligence adoption at its first AI@ event.

Google held its inaugural AI@ event (Google AI@ Event 2022) last Wednesday at the company’s Pier 57 offices in New York City to highlight its latest work in AI technology. The event covered new advances as well as early research from Google’s AI and research teams in climate change, generative AI, language translation, healthcare, disaster management, responsible AI, accessibility, and creativity. Here is a rundown of all the announcements made during the event.

FloodHub

Google announced the launch of a new and improved platform called FloodHub, which analyzes large meteorological data sets to show the potential for flooding in different countries. Google has been using AI to anticipate floods since 2018, warning users through Google Search and Maps. It began using AI to forecast flood patterns in India’s Patna region in 2018; three years later, an expanded version of the technology helped reach an estimated 23 million people in India and Bangladesh with 115 million flood alerts through Google Search and Maps. As part of the latest update, the feature is now available in 18 additional countries: Brazil, Colombia, Sri Lanka, Burkina Faso, Cameroon, Chad, the Democratic Republic of Congo, Ivory Coast, Ghana, Guinea, Malawi, Nigeria, Sierra Leone, Angola, South Sudan, Namibia, Liberia, and South Africa. Google said that if FloodHub performs as planned, it may be able to anticipate floods in affected regions up to seven days in advance.

Wildfire Detection System

Google began providing a map feature that displays wildfire boundaries in near real time to users in the US in 2020, and it added a wildfire layer to Maps in 2021, with the US retaining more granular tracking of specific incidents. Since July, the company has tracked more than 30 significant wildfire incidents in the United States and Canada.

During the Google AI@ Event 2022, Google said it is rolling out an improved AI-powered wildfire monitoring system in the United States, Canada, Mexico, and parts of Australia. The system uses machine learning models built on satellite data from NASA and the National Oceanic and Atmospheric Administration to track and forecast the development of wildfires in real time. The feature’s initial focus is helping first responders decide how to fight a fire effectively. Since the largest change is on the back end, users may notice few differences in how they use the product.

AI Test Kitchen for Imagen

In May, Google released its AI text-to-image model, Imagen, which uses diffusion models to create high-resolution images by learning to map noise back to data. Google revealed at the AI@ event that Imagen would be added to the AI Test Kitchen app for season 2, albeit only as a restricted demo. The tool can generate images only through two demos: City Dreamer and Wobble. City Dreamer lets users create pictures of themed cities, while Wobble lets users build AI-generated monsters from similar language prompts. Google uses the AI Test Kitchen app as a platform to test some of its AI models and solicit user input. It aims to minimize inadvertent harm by letting consumers see its technological capabilities and offer feedback through these tightly constrained use cases. A condensed version of the company’s controversial LaMDA chatbot can also be found in the app.
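
To make “mapping noise back to data” a little more concrete, here is a minimal, illustrative sketch of a DDPM-style denoising sampling loop in Python. It is not Imagen’s actual code: the denoiser callable, the linear noise schedule, and the tiny image size are all simplifying assumptions.

# Illustrative DDPM-style sampling loop, not Google's Imagen code. The `denoiser`
# callable, the linear noise schedule, and the tiny image size are assumptions.
import numpy as np

def sample_image(denoiser, text_embedding, shape=(64, 64, 3), steps=50, seed=0):
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)        # assumed linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal(shape)                # start from pure Gaussian noise
    for t in reversed(range(steps)):
        eps_hat = denoiser(x, t, text_embedding)  # network predicts the noise at step t
        # Standard DDPM update: strip the predicted noise, rescale, re-inject noise.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:
            x = x + np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x                                      # the sample "mapped back" from noise

if __name__ == "__main__":
    dummy_denoiser = lambda x, t, emb: np.zeros_like(x)  # stand-in for a trained network
    print(sample_image(dummy_denoiser, text_embedding=None).shape)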

AI Test Kitchen is now accessible in English to Android and iOS users in Australia, Canada, Kenya, New Zealand, the United Kingdom, and the United States.


Generative Video Content

For the first time, Google used Imagen Video and Phenaki together to produce a long, coherent video from text prompts for the Google AI@ Event 2022. Imagen Video is a text-to-video generative AI model that can create high-definition videos from text input; put simply, it is an extension of Imagen. The text-conditioned video diffusion model can produce videos at resolutions of up to 1280×768 at 24 frames per second. Phenaki, meanwhile, is a model capable of realistic video synthesis when given a sequence of textual prompts.
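
Google’s published description of Imagen Video presents it as a cascade: a base text-to-video diffusion model followed by super-resolution stages in time and space. The sketch below is a purely hypothetical stub pipeline illustrating that staged design; none of these functions correspond to a real Google API.

# Hypothetical sketch of a cascaded text-to-video pipeline in the spirit of Imagen Video.
# The stage functions are simplified stubs, not Google's models or API; they only
# illustrate the staged base-model + super-resolution design described publicly.
import numpy as np

def base_text_to_video(prompt, frames=8, height=24, width=40):
    # Stub: a real base model would run text-conditioned video diffusion here.
    return np.zeros((frames, height, width, 3), dtype=np.float32)

def temporal_super_resolution(video, factor=3):
    # Stub: duplicate frames to mimic raising the frame rate (toward 24 fps in the real system).
    return np.repeat(video, factor, axis=0)

def spatial_super_resolution(video, factor=4):
    # Stub: nearest-neighbour upscale; the real cascade heads toward 1280x768 per frame.
    return video.repeat(factor, axis=1).repeat(factor, axis=2)

def generate_video(prompt):
    video = base_text_to_video(prompt)        # low-resolution, low-frame-rate draft
    video = temporal_super_resolution(video)  # smoother motion
    return spatial_super_resolution(video)    # higher per-frame resolution

print(generate_video("a blue helium balloon drifting through a park").shape)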

The demo featured a single blue helium balloon as the subject of Google’s AI-generated, super-resolution video; viewers could watch it drift across a park before bumping into a stray giraffe. The video was interspersed with the relevant written prompts, which appeared on an adjacent screen every few seconds.

Speaking at the AI@ event, Google Brain principal scientist Douglas Eck said it is quite challenging to produce videos with both high visual quality and coherence over time. Movies, or any other medium that uses pictures and video to tell a cohesive story, rely heavily on that mix of visual quality and continuity over time.

Wordcraft

Alongside the explosive growth of visual synthetic media, Google is also paying attention to the generative side of conversational AI. The software giant said it has begun early-stage experimentation with Wordcraft, a text generator built on its LaMDA dialog system. In contrast to text-editing tools such as WordTune or Grammarly, Wordcraft aims to aid in the creation of fiction rather than merely improving spelling and grammar. On the Wordcraft Writers Workshop website, a group of 13 authors has been using Wordcraft to create brand-new stories that you can read.

1000 Languages Initiative

At the Google AI@ Event 2022, Google announced its ambitious 1,000 Languages Initiative, an effort to build a single AI language model that supports the 1,000 most widely spoken languages, including Luganda, Swahili, and Lingala. For this project, the company has created a Universal Speech Model (USM) trained on more than 400 languages. To gather audio samples of regional languages, Google is also collaborating with local governments, NGOs, and academic institutions in South Asia.

Google already has a vast language portfolio, but it wants to keep expanding it. More than 7,000 languages are spoken worldwide, yet only a fraction of them are represented online today. The initiative will concentrate on improving that representation as AI models are trained.

Robots That Write Code

Google debuted an internally developed software tool at AI@ Event 2022 that can reduce the amount of work required to train a robot to carry out new tasks. The tool, Code as Policies (CaP), shows that large language models trained to write code can be repurposed to produce robot policy code in response to natural-language instructions. The objective is to let robotic systems write their own code, sparing human developers the trouble of reprogramming them whenever new information comes in.

It’s available on GitHub under an open-source license.

CaP relies on Google’s PaLM-SayCan model, which lets robots interpret open-ended human instructions and respond appropriately and safely in a physical setting. It expands on the PaLM-SayCan research by allowing language models to complete complex robotic tasks using the full expressiveness of general-purpose Python code. With CaP, Google advocates prompting language models to write robot code directly. Apart from PaLM-SayCan, CaP also builds on previous work in automated code completion, such as GitHub’s Copilot and OpenAI’s GPT-3.

In addition to writing new code, the tool can use software libraries, which are pre-packaged collections of code that perform common tasks. CaP also draws on third-party libraries and APIs to produce the most appropriate code for a given situation, and it supports instructions in non-English languages and even emojis.

Google researchers evaluated CaP’s capabilities in a series of internal tests. In one experiment, they examined whether CaP could instruct a robot to move toy blocks around a table. When given the directive to “arrange the blocks in a square around the middle,” CaP produced code that enabled the robot to do just that.
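
To give a sense of what such generated policy code might look like, here is a hypothetical, simplified example for that same instruction. The perception and control helpers (get_obj_names, get_table_center, put_first_on_second) are assumed stand-ins for a robot API, not Google’s published interface.

# Hypothetical example of the robot policy code a CaP-style language model might emit for
# "arrange the blocks in a square around the middle". The perception and control helpers
# below are simplified stand-ins for a robot API, not Google's published interface.

SCENE = {"red block": (0.10, 0.90), "blue block": (0.80, 0.20),
         "green block": (0.50, 0.10), "yellow block": (0.30, 0.60)}

def get_obj_names():
    return list(SCENE)

def get_table_center():
    return (0.50, 0.50)

def put_first_on_second(obj, target_xy):
    SCENE[obj] = target_xy              # a real robot would run a pick-and-place here

# The kind of policy code the language model could generate for the instruction:
def arrange_blocks_in_square(side=0.20):
    blocks = [name for name in get_obj_names() if "block" in name]
    cx, cy = get_table_center()
    corners = [(cx - side / 2, cy - side / 2), (cx - side / 2, cy + side / 2),
               (cx + side / 2, cy + side / 2), (cx + side / 2, cy - side / 2)]
    for block, corner in zip(blocks, corners):
        put_first_on_second(block, corner)

arrange_blocks_in_square()
print(SCENE)                            # each block now sits at a corner of the square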

Healthcare AI 

With a mission to offer accessible healthcare solutions, Google announced it is creating a low-cost ultrasound tool in collaboration with Northwestern Medicine to help nurses and midwives in underserved areas without access to sonographers. Using an Android app and a portable ultrasound monitor, nurses and midwives in the U.S. and Zambia are trialing a system that assesses a fetus’ gestational age and position in the womb. By detecting problems early in pregnancy, the AI application will assist medical professionals in collecting and interpreting ultrasound images and providing timely care.

At the AI@ event, Google also said it would expand its collaboration with caregivers and public health authorities to provide access to diabetic retinopathy screening through its Automated Retinal Disease Assessment (ARDA) technology. More than 150,000 people have already been screened by having a photo of their eyes taken with a smartphone.

Responsible AI

In an effort to reaffirm the company’s commitment to responsible AI, Google Vice President of Engineering Research Marian Croak highlighted some potential risks posed by the technology on show at the Google AI@ Event 2022. These include concerns about prejudice and toxicity being amplified by algorithms, deepfakes further eroding trust in news, and misinformation that makes it hard to tell what is true from what is false. According to Croak, part of addressing this involves research that gives people more influence over AI systems, so that they can collaborate with the systems rather than letting the machine take complete charge.

Croak said she believes Google’s AI Principles prioritize people, the avoidance of harm, and safety over the company’s usual commercial objectives. She said Google conducts adversarial testing on a continuous basis, and that its researchers establish quantitative benchmarks across every aspect of its AI technologies that can be measured and verified. These initiatives are carried out by a broad group of researchers, including social scientists, ethicists, and engineers.

Preetipadma K
Preeti is an Artificial Intelligence aficionado and a geek at heart. When she is not busy reading about the latest tech stories, she will be binge-watching Netflix or F1 races!
