
NITI Aayog’s Notion of Responsible AI


With the rising applicability of artificial intelligence, the government has decided to focus its efforts on several sectors that can embrace and benefit from the technology. NITI Aayog, a government body developing itself into a state-of-the-art center for resources and skills, promotes research and policy development for the government while handling contingency issues. It has now embarked on making AI responsible in the country.

In June 2018, NITI Aayog released a discussion paper named National Strategy for Artificial Intelligence (NSAI) as a part of its mandate entrusted in the Budget Speech of 2018-2019. The NSAI discussion paper highlighted the potential of artificial intelligence (AI) and its large-scale adoption in the country and made recommendations to ensure responsible utilization and management. The paper also included a roadmap for implementing AI in five public sectors and described “AI for All” as the guiding philosophy for upcoming AI design, development, and implementation in India.

Besides research, NITI Aayog has also collaborated with companies like Amazon Web Services and Intel to establish innovation centers. It set up the Experience Studio at its Delhi headquarters to facilitate innovation among industry experts, government stakeholders, and startups.

More recently, NITI Aayog has published a series of papers discussing “Responsible AI,” the practice of integrating good intent while leveraging artificial intelligence. The series was propagated using the hashtag #AIForAll. 

NITI Aayog published the first edition, “Principles for Responsible AI,” a two-part approach paper defining the ethical design, development, and use of artificial intelligence in India, along with enforcement methods for putting these principles into practice. This edition presents case studies and considerations in the context of ‘Narrow AI’ solutions, categorized as ‘systems considerations’ and ‘societal considerations.’ The former arise from system design and deployment methods, while the latter stem from the ethical challenges posed by AI applications.

This edition also builds on a Capgemini report highlighting that approximately 85% of organizations had ethical concerns about using AI. It discusses other relevant concerns such as job loss due to automation, malicious uses of the technology, and targeted propaganda.

Read More: Google is Developing an AI App that Creates Images from Text

The second part of the Responsible AI series, “Operationalizing Principles for Responsible AI,” identifies a series of actions that organizations should adopt while embracing AI responsibly. Written in collaboration with the World Economic Forum Centre for the Fourth Industrial Revolution, the paper divides the necessary actions between the government and the private sector. Particular focus is placed on the government’s role in ensuring responsible AI adoption and overseeing the actions of the private sector.

NITI Aayog recently released the third edition of the series, “Responsible AI for All: Adopting the Framework – A use case approach on Facial Recognition Technology.” To prepare this paper, NITI Aayog collaborated with the Vidhi Centre for Legal Policy and tested the principles and actions from the previous two releases on a first use case, Facial Recognition Technology (FRT). The paper describes FRT as a technology with several common uses across the country.

However, the technology is a debated topic on both domestic and international scales because of its hazards to human rights, such as privacy. Therefore, as part of the organization’s effort to make AI more responsible, it will work closely with the Ministry of Civil Aviation to launch the Digi Yatra Program. This program will incorporate facial recognition (FR) and facial verification (FV) technologies to enhance the travel experience. Facial verification will also be used at various airports for passenger identification, ticket validation, and other checks as necessary from time to time, depending on operational requirements.

Working in such a technical field and realizing its potential is a significant step forward for the government. The public sector is embracing AI not only to ensure that such technologies are used ethically and responsibly in the private sector but also to enhance its own services. There are many use cases where governments can apply artificial intelligence, including emergency services, public interaction, virtual assistants, and many others.


Key announcements at Google AI@ Event 2022 


Google held its inaugural AI@ Event (Google AI@ Event 2022) last Wednesday at the company’s Pier 57 offices in New York City to highlight its latest work in AI technology. The event focused on new advances as well as early research from Google AI and research teams in climate change, generative AI, language translation, health AI, disaster management, responsible AI, accessibility, and creativity. Here is a rundown of all the announcements made during the event.

Flood Hub

The company announced a new, improved platform called Flood Hub, which analyzes enormous meteorological data sets to show the potential for flooding in various countries. Google has been using AI to anticipate floods since 2018, warning users through Google Search and Maps. It began utilizing AI to anticipate flood patterns in India’s Patna region in 2018. Three years later, an expanded version of the technology helped reach an estimated 23 million individuals in India and Bangladesh with 115 million flood alerts through Google Search and Maps. As part of the latest update, the feature is now available in 18 additional countries, viz., Brazil, Colombia, Sri Lanka, Burkina Faso, Cameroon, Chad, the Democratic Republic of Congo, Ivory Coast, Ghana, Guinea, Malawi, Nigeria, Sierra Leone, Angola, South Sudan, Namibia, Liberia, and South Africa. Google said that if Flood Hub performs as planned, it may be able to anticipate floods in affected regions up to seven days in advance.

Wildfire Detection System

Google began providing a map feature that instantly displays wildfire boundaries to users in the US in 2020. In 2021, it added a wildfire layer to Maps, with more granular tracking of specific incidents still limited to the US. Since July, the company has kept tabs on more than 30 significant wildfire incidents in the United States and Canada.

During the Google AI@ Event 2022, Google said it is introducing an improved AI-powered wildfire monitoring system to the United States, Canada, Mexico, and some regions of Australia. The system uses machine learning algorithms built on satellite data from NASA and the National Oceanic and Atmospheric Administration to track and forecast the development of wildfires in real-time. The feature’s initial focus is assisting first responders in deciding how to put out the fire effectively. Since the largest change is on the back end, users might notice few differences in how they use the product.

AI Test Kitchen for Imagen

In May, Google released its AI text-to-image model, Imagen, which uses diffusion models to create high-resolution images by mapping noise back to data. Google revealed at the AI@ Event that Imagen would be added to the AI Test Kitchen app for season 2, albeit only as a restricted demo. The tool offers just two demos for generating AI content: City Dreamer and Wobble. City Dreamer lets users create photos of themed cities, while Wobble lets users build AI-created monsters using similar language prompts. Google uses the AI Test Kitchen app as a platform to test some of its AI models and solicit user input. It aims to minimize substantial inadvertent harm by letting consumers see its technological prowess and offer feedback through these tightly constrained use cases. A condensed version of the company’s controversial LaMDA chatbot can also be found on the app.

AI Test Kitchen is now accessible in English for Android and iOS users in Australia, Canada, Kenya, New Zealand, the United Kingdom, and the United States.

Read More: How AI Image Generators are Compounding existing Gender and Cultural Bias woes?

Generative Video Content

For the first time, Google used Imagen Video and Phenaki to produce a long coherent video from text prompts for the Google AI@ Event 2022. Imagen Video is a text-to-video generative AI model that can create high-definition videos from text input. It is, to put it simply, an extension of Imagen. The text-conditioned video diffusion model is capable of producing movies with a maximum resolution of 1280×768 at a frame rate of 24 fps. Like Imagen Video, Phenaki is a language model capable of realistic video synthesis when given a series of textual prompts.

The video demo showed an AI-generated, super-resolution video of a single blue helium balloon moving across a park before hitting a stray giraffe. The video was interspersed with a series of relevant text prompts shown on an adjacent screen every few seconds.

Speaking at the AI@ Event, Google Brain Principal Scientist Douglas Eck claimed it’s quite challenging to produce videos with high quality and coherence in time. Movies, or really any other media aiming to employ pictures and videos to create a cohesive tale, rely heavily on that mix of visual quality and continuity over time.

Wordcraft

Despite the explosive expansion of the market for visual synthetic media, Google is also paying attention to the generative side of conversational AI. The software titan said that it had initiated early-stage experimentation with Wordcraft, a text generator built on its LaMDA dialog system. In contrast to text-editing programs like WordTune or Grammarly, Wordcraft aims to aid the creation of fiction rather than merely enhancing spelling and grammar. On the Wordcraft Writers Workshop website, a group of 13 authors has been using Wordcraft to create brand-new stories that you can read.

1000 Languages Initiative

At the Google AI@ Event 2022, Google announced its ambitious 1,000 Languages Initiative, an effort to create a single AI language model that supports the top 1,000 languages spoken worldwide, including Luganda, Swahili, Lingala, and others. The company created a Universal Speech Model (USM) for this project that has been trained on more than 400 languages. To gather audio samples of various regional languages, Google is also collaborating with South Asian local governments, NGOs, and academic institutions.

Google already has a vast language portfolio, but it wants to keep expanding it. More than 7,000 languages are spoken worldwide, yet only a few are represented online today. The initiative will concentrate on improving that representation as AI models are trained.

Robots That Write Code

Google debuted an internally developed software tool at AI@ Event 2022 that can decrease the amount of work required to train a robot to carry out new tasks. The tool, Code as Policies, or CaP, is built on the finding that large language models trained for code generation can be repurposed to produce robot policy code in response to natural language directions. The objective is to enable robotic systems to develop their own code, sparing human developers the trouble of having to go in and reprogram things when new information comes in.

It’s available on GitHub under an open-source license.

CaP relies on Google’s PaLM-SayCan model, which lets robots interpret open-ended human prompts and respond appropriately and safely in a physical setting. It expands on the PaLM-SayCan research by allowing language models to carry out complex robotic tasks using the full expressiveness of general-purpose Python code. With CaP, Google advocates leveraging language models to have robots write code directly. Apart from PaLM-SayCan, CaP also builds on previous work in automated code completion, like GitHub’s Copilot feature and OpenAI’s GPT-3.

In addition to writing new code, the tool may use software libraries, which are pre-packaged collections of code that perform common activities. CaP also uses third-party libraries and APIs to create the most appropriate code for a given situation, including supporting instructions in non-English languages and even emojis.

CaP’s capabilities were evaluated by Google researchers in a series of internal tests. In one experiment, the researchers examined whether CaP could instruct a robot to move toy blocks around a table. When given the directive to “arrange the blocks in a square around the middle,” CaP was able to produce code that enabled the robot to do just that.
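The block-arrangement experiment gives a feel for the kind of policy code such a system might produce. Below is a minimal, hypothetical sketch: the `get_block_names`, `get_table_center`, and `pick_and_place` primitives are illustrative stand-ins for the perception and control APIs a CaP-style language model would be prompted with, not Google’s actual interface.

```python
# Hypothetical robot primitives: illustrative stand-ins for the perception
# and control APIs a CaP-style system prompts the language model with.
def get_block_names():
    return ["red block", "blue block", "green block", "yellow block"]

def get_table_center():
    return (0.0, 0.0)

placements = {}  # records where each block ends up

def pick_and_place(block, position):
    # A real system would command the robot arm here.
    placements[block] = position

# The kind of policy code a language model might generate for the
# instruction "arrange the blocks in a square around the middle":
def arrange_blocks_in_square(side=0.2):
    blocks = get_block_names()
    cx, cy = get_table_center()
    h = side / 2
    corners = [(cx - h, cy - h), (cx - h, cy + h),
               (cx + h, cy + h), (cx + h, cy - h)]
    for block, corner in zip(blocks, corners):
        pick_and_place(block, corner)

arrange_blocks_in_square()
```

In a real CaP pipeline, the function body itself would be written by the language model from the natural-language instruction; the point is that ordinary Python, with its loops, arithmetic, and library calls, becomes the policy representation.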

Healthcare AI 

With a mission to offer accessible healthcare solutions, Google announced it is creating a low-cost ultrasonic tool in collaboration with Northwestern Medicine to help nurses and midwives in underdeveloped areas without access to sonographers. Using an Android app and a portable ultrasound monitor, nurses and midwives in the U.S. and Zambia are trialing a system that assesses a fetus’ gestational age and position in the womb. By detecting problems early in pregnancy, the AI application will assist medical professionals in collecting and interpreting ultrasound pictures and providing timely healthcare.

At the AI@ Event, Google also said it would expand its collaboration with caregivers and public health authorities to provide access to diabetic retinopathy screening using its Automated Retinal Disease Assessment (ARDA) technology. More than 150,000 people have already undergone screening by taking a photo of their eyes with a smartphone.

Responsible AI

In an effort to reaffirm the company’s commitment to responsible AI, Google Vice President of Engineering Research Marian Croak highlighted certain possible drawbacks posed by the technology on show at the Google AI@ Event 2022. These include concerns about prejudice and toxicity being amplified by algorithms, deepfakes further eroding faith in news, and false information that can make it difficult to tell truth from falsehood. According to Croak, part of that process includes research to give people more influence over AI systems, so that they can collaborate with the systems rather than having the machine take complete charge of situations.

Croak asserted that Google’s AI Principles prioritize people, the avoidance of harm, and safety over its standard economic objectives. She claims that Google conducts adversarial testing continuously, after which its researchers establish quantitative standards across all the aspects of their AI technologies that can be evaluated and confirmed. These initiatives are being carried out by a diverse group of researchers, including social scientists, ethicists, and engineers.


Bengaluru Metro launched a QR ticketing service on WhatsApp 

WhatsApp and Bengaluru Metro Rail Corporation Limited (BMRCL) have recently launched Namma Metro’s first WhatsApp-based chatbot QR ticketing service.

The WhatsApp-based chatbot is integrated with Unified Payments Interface (UPI)-enabled payments on WhatsApp, allowing Namma Metro passengers to purchase tickets and recharge their travel passes within WhatsApp. As per BMRCL, the chatbot is the first transit service to enable end-to-end QR ticketing on WhatsApp. The chatbot is available to passengers in both English and Kannada.

The official Namma Metro app is readily available on the Google Play Store. It offers services like purchasing QR tickets, recharging metro cards, providing feedback, using the travel planner to find the nearest metro stations, fare information, and train departure times, and purchasing single-journey tickets with WhatsApp payments through a UPI PIN. Easy cancellation and refund services are available on the QR tickets, which are sold at a 5% discount off the token fare.

Read more: Password attacks rise to 921 every second globally: Microsoft 

The managing director of BMRCL, Anjum Parwez, said that the WhatsApp-based QR ticketing service is meant to make traveling more comfortable. Passengers have already been using the app to plan their trips with QR tickets since 1st November. BMRCL has collaborated with WhatsApp to offer the QR ticket service, and no additional charge applies for purchasing these tickets on either platform.


Password attacks rise to 921 every second globally: Microsoft 

As per Microsoft’s Digital Defense Report 2022, the volume of password attacks has risen to an estimated 921 attacks every second globally, a 74% increase in just one year.

According to the same report, attacks against remote management devices have also increased, with more than 100 million attacks detected in May 2022, a fivefold increase over the previous year.

In last year’s defense report, Microsoft stated that it synthesized over 24 trillion security signals daily; that figure rose to 43 trillion signals in 2022. Microsoft uses sophisticated data analytics and AI algorithms to understand and protect against digital threats, attacks, and criminal activity.

Read more: Meta’s new AI solves International Math Olympiad problems

In 2021, Microsoft blocked 9 billion endpoint threats, 31 billion identity threats, and 32 billion email threats. To date, Microsoft has removed more than 10,000 domains used by cybercriminals and 600 used by nation-state actors. As per the report, nation-state actors have targeted the IT sector (22% of attacks), NGOs (17%), the education sector (14%), and the government sector (10%).

The report claimed that 93% of Microsoft’s ransomware incident response engagements revealed insufficient privilege access and lateral movement controls. The most common factors leading to weak protection against ransomware are weak identity controls, limited data protection, ineffective security operations, and a lack of multi-factor authentication.

Microsoft employs more than 8,500 security and threat intelligence experts, including engineers, researchers, data scientists, geopolitical analysts, investigators, frontline responders, cybersecurity experts, and threat hunters, across 77 nations.

Read more: NVIDIA collaborates with Mozilla Common Voice for Speech AI


Mayo Clinic Researchers Introduce a Novel ML-based Diffusion Model for Medical Imaging of the Brain


Mayo Clinic researchers have been looking beyond the standard generative ML models for realistic medical imaging. They have introduced denoising diffusion probabilistic models (DDPMs), an ML-based diffusion technique, to generate medical images. DDPMs are a relatively new class of generative ML models that enable the generation of labeled synthetic images.

Traditionally, generative ML models are used to learn from medical imaging data and generate realistic images that are not patient-specific. Researchers use these synthetic images to study medical conditions and abnormalities without compromising patient privacy. The only drawback is that such models generate unlabeled imaging data, which is of limited help in real-world applications.

Moreover, the volume of medical imaging data with specific abnormalities is considerably smaller than that with common pathologies. This results in insufficiently large and imbalanced imaging datasets for training the models, making them less accurate.

Read More: Researchers Propose a Novel AR Localization and Mapping Technique, ‘LaMAR’

Diffusion models are based on Markov chain theory and generate synthetic output by the ‘gradual denoising’ of an image of Gaussian noise. The denoising is repeated over many steps, making these models run significantly slower than others. Nevertheless, diffusion models outperform other generative models, as they can extract more representative features from the input medical imaging data.
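That gradual-denoising loop can be sketched in a few lines. In this toy example, the trained noise-prediction network is replaced by an exact predictor that "knows" the clean image, so the reverse process converges by construction; the schedule values and image size are illustrative assumptions, not the Mayo Clinic model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained noise-prediction network: here we "know" the
# clean image, so the predictor is exact; a real DDPM learns this from data.
clean = np.zeros((8, 8))  # hypothetical target image

def predict_noise(x, alpha_bar):
    # Returns eps such that x = sqrt(alpha_bar)*clean + sqrt(1-alpha_bar)*eps
    return (x - np.sqrt(alpha_bar) * clean) / np.sqrt(1.0 - alpha_bar)

T = 50
betas = np.linspace(1e-4, 0.2, T)  # illustrative noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

x = rng.standard_normal((8, 8))    # start from pure Gaussian noise
for t in reversed(range(T)):
    eps = predict_noise(x, alpha_bars[t])
    # DDPM reverse-step mean (sampling noise omitted for a deterministic sketch)
    x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
# x has now been denoised step by step toward `clean`
```

A real DDPM adds a sampled noise term at each reverse step and uses a learned network in place of `predict_noise`; the structure of the loop, running the noising process backward one small step at a time, is the same.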

As part of this research, the researchers created a tool that can retrieve 2D axial image slices from the FLAIR sequence of a brain MRI and inpaint a pre-defined area of the slice with a realistic image. The inpainted area can represent components including surrounding edema, tumors, or tumor-less brain tissue.


The diffusion model, presented in ‘Multitask Brain Tumor Inpainting with Diffusion Models: A Methodological Report,’ will enable medical practitioners to synthetically add or remove tumoral and tumor-less tissue in brain MRI slices even with limited data. The researchers have also made the code available online for others to use.


Twitter loses over one million users since Musk’s takeover


One week after Elon Musk bought Twitter, the social media platform has reportedly lost more than one million users.

“We have seen an uptick in people deactivating their accounts and Twitter suspending accounts,” said the founder of the anti-disinformation platform Bot Sentinel, Christopher Bouzy.

MIT Technology Review reported that Bot Sentinel has analyzed 3.1 million accounts on Twitter. It believes that around 877,000 accounts were deactivated, and 497,000 were suspended between October 27 and November 1. That’s more than double the usual number.

Read More: Compose And License Royalty-Free Music With Beatoven.Ai 

Bot Sentinel’s analysis looked at the percentage of users with suspended or deactivated accounts before applying that data to Twitter’s overall 237 million daily active users. Compared with the 5,958 accounts deleted or suspended in the week before Musk’s takeover, this indicates a 208% increase in lost accounts.

Bouzy told MIT Technology Review that the increase in suspensions stems from Twitter taking action against accounts that purposely violate its rules to see how far they can push the limits of free speech. He believes some users are testing what can and cannot stay posted, such as posts containing hate speech.


The Infinite Conversation: Can AI chatbots converse for hours like humans?


A website called The Infinite Conversation, built using artificial intelligence and featuring a perpetual conversation between virtual avatars of Slovenian philosopher Slavoj Žižek and German director Werner Herzog, was launched last week by Italian artist and programmer Giacomo Miceli.

Miceli shares, through a warning delivered by the virtual Žižek, that the initiative seeks to increase awareness about how simple it has become to synthesize a human voice. He believes any motivated user without in-depth technical knowledge can accomplish this feat today from their bedroom with a laptop. Miceli warns that this can alter how we interact with the online content we consume while posing issues around the value of reliable sources, betrayal of trust, and gullibility.

Miceli points out that by the end of 2022, it has become cheap and simple to create AI-generated content that is appealing on the surface and remarkably similar to the “real thing.” This holds true for recordings that look like famous people (often referred to as deepfakes) as well as for speech, as in the case of The Infinite Conversation.

The artist is said to have built the website using “open source tools available to anybody,” declining to provide technical details but hinting that he may publish an explanatory piece this week. In the site’s FAQ, he explains that the script was generated using a popular language model fine-tuned on interviews and content created by each speaker.

Read More: Digital Immortality or Zombie AI: Concerns of Using AI to Bring Back the Dead

Visitors to the website see AI-generated charcoal portraits of the two men in profile. Between them, a transcript of AI-generated text, highlighted in yellow, is read aloud by voices that imitate Herzog and Žižek. You can jump between segments by clicking the arrows underneath the portraits as the conversation, delivered in their individual accents, moves back and forth.


UK bank Santander to block crypto exchanges next year


UK bank Santander will block real-time payments for crypto exchanges next year. According to an email to customers, the move is intended to protect customers from scams. Santander has not disclosed when the change will take effect in 2023. The bank will enforce a more limited set of restrictions in the short term.

Payments for cryptocurrency exchanges using online banking will be limited to £1,000 per transaction from November 15 onwards, with a limit of £3,000 in total in any rolling 30-day period. The new rules will not affect the ability of consumers to make withdrawals. 
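As a minimal sketch of how such limits behave, the check below combines a per-transaction cap with a rolling 30-day total. The function, its data shapes, and the window semantics are illustrative assumptions, not Santander’s actual rules engine.

```python
from datetime import date, timedelta

PER_TX_LIMIT = 1_000   # GBP 1,000 per transaction (from the announced rules)
ROLLING_LIMIT = 3_000  # GBP 3,000 total in any rolling 30-day period
WINDOW = timedelta(days=30)

def allowed(history, when, amount):
    """history: list of (date, amount) crypto-exchange payments already made."""
    if amount > PER_TX_LIMIT:
        return False  # exceeds the per-transaction cap
    spent = sum(a for d, a in history if when - d < WINDOW)
    return spent + amount <= ROLLING_LIMIT

history = [(date(2022, 11, 16), 1_000), (date(2022, 11, 20), 1_000)]
```

Under this sketch, a third £1,000 payment within the window would still be allowed (it reaches exactly £3,000), while a £1,500 payment would be rejected outright by the per-transaction cap.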

“In recent months, we have seen a rise in UK customers becoming victims of crypto fraud,” said a Santander spokesperson. “We will do everything we can to protect our consumers, and we feel that limiting payments to crypto exchanges is the best way to ensure your money stays safe.” 

Read More: Compose And License Royalty-Free Music With Beatoven.Ai 

Santander will continue to block payments sent to Binance in light of the UK Financial Conduct Authority’s (FCA) harsh stance on the exchange, which the watchdog banned from operating in the UK in 2021.

The FCA claimed the firm is “incapable of being effectively supervised” and its “complex and high-risk financial products” pose a significant risk to consumers. But not all UK banks are pulling back from crypto. Neobank Revolut, operating in the UK since 2015, recently launched a card that allows users to pay in crypto for their goods and services. 


Researchers Propose a Novel AR Localization and Mapping Technique, ‘LaMAR’


Researchers from Microsoft and ETH Zurich, one of the leading universities in research and innovation, have proposed a novel AR localization and mapping technique called ‘LaMAR: Benchmarking Localization and Mapping for Augmented Reality.’ The proposed technique appears to overcome the general challenges researchers encounter while mapping in AR.

AR technology has been around for a while and, at its core, is about placing a virtual object in the real world and tracking its location and shape over time. However, large-scale benchmarking of localization and mapping for AR has yet to be done.

There are a few challenges that researchers face while mapping for AR. Firstly, AR-capable devices are mostly smartphones equipped with multiple cameras and sensors, making it difficult to apply mapping techniques designed for a single-camera environment. Additionally, AR devices offer spatially-posed sensor streams, as they can track locally in real time. However, objects in an AR scene may change over time, requiring more than just local tracking.

Read More: Writing in the Era of Artificial Intelligence

Large-scale AR mapping requires robust algorithms and devices that can keep up with quickly changing data in AR scenarios. With LaMAR, researchers developed a robust system and benchmark for AR localization and mapping. 

LaMAR introduces a large-scale dataset of captured AR images, including indoor and outdoor scenes under varying illumination. Secondly, LaMAR provides a pipeline to produce accurate AR trajectories and handle crowd-sourced data from multiple devices, overcoming many standard challenges in AR mapping and localization.


LaMAR is highly scalable and precise in mapping augmented reality scenarios, as seen from the performance metrics mentioned in the paper. Researchers plan to continue developing the technique further.


Meta’s new AI solves International Math Olympiad problems

Meta introduced a new AI system that solves complex International Math Olympiad (IMO) problems. The new model achieves 67 percent accuracy on the miniF2F validation set, 5x more than the previous AI system.

In December 2019, Facebook built its first AI system for solving advanced mathematical equations with the help of symbolic reasoning. That system solved integration problems as well as first-order and second-order differential equations. It demonstrated 99.7 percent accuracy on integration problems and 94 and 81.2 percent on first- and second-order differential equations, respectively.

Since solving complex equations requires precision rather than approximation, the launch of Meta’s new AI in 2022 is a milestone. Meta used the HyperTree Proof Search (HTPS) method in its new AI system, which is trained on a dataset of successful mathematical proofs and generalizes to new or different problems. The method can find correct proofs for IMO problems that involve some arithmetic reduction to a finite number of cases.
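To give a rough intuition for proof search in general, the toy sketch below runs a best-first search over goals. This is a drastically simplified, hypothetical illustration, not Meta’s HTPS: real HTPS searches a hypergraph in which a tactic can spawn several subgoals, guided by learned policy and value networks rather than the trivial heuristic used here.

```python
import heapq

# Toy best-first proof search. States are goals (integers to reduce to 0) and
# "tactics" are moves; the goal's size stands in for a learned value model.
def tactics(goal):
    # Hypothetical tactic set: each tactic leaves one smaller subgoal.
    return [goal - 1, goal // 2] if goal > 0 else []

def prove(goal, max_expansions=100):
    frontier = [(goal, goal, [])]        # (heuristic, goal, tactic trace)
    for _ in range(max_expansions):
        if not frontier:
            return None
        _, g, trace = heapq.heappop(frontier)
        if g == 0:
            return trace                 # goal closed: proof found
        for i, sub in enumerate(tactics(g)):
            heapq.heappush(frontier, (sub, sub, trace + [(g, i)]))
    return None                          # search budget exhausted
```

The returned trace records which tactic was applied to which goal, which is the skeleton of a proof; the hard part that HTPS addresses is learning which tactic to try first in an astronomically larger search space.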

Meta detailed the new AI system in a research paper on HTPS that will be presented at NeurIPS 2022. The system is available with a Lean Visual Studio Code plugin, allowing other researchers to explore it. Lean is a functional programming language for writing correct and maintainable code.
