Global semiconductor manufacturing giant Intel, the Ministry of Electronics and Information Technology (MeitY), the National e-Governance Division (NeGD) of the Government of India, and the United Nations Development Programme (UNDP) recently held a session on deep learning.
It was the seventh session in the series, titled Demystifying Deep Learning. More than 95 government officials and ten ministers attended the online session, which was meant for government staff and policymakers.
Intel specialists participated in the webinar, aimed at policymakers, as part of the company’s Digital Readiness portfolio. The webinar drew on relevant industry experience and used case studies to help policymakers.
Abhishek Singh, President and CEO of NeGD, Ministry of Electronics and Information Technology, said, “These sessions are not only leading to capacity building but also more and more adoption of technology-based solutions in collaboration with industry experts and ecosystem partners, walking the participants through relevant use-cases and discussing scalable solutions.”
He also talked about several artificial intelligence technologies, such as Netra.ai, an AI-based tool for diabetic retinopathy, as well as E-Paarvai, a tool used by the Tamil Nadu government to diagnose cataracts. During the event, participants also got the opportunity to take part in an interactive, hands-on artificial intelligence exercise via a gaming interface.
According to officials, the series will include several further sessions on emerging technologies such as blockchain. The prime aim of the sessions will be to give attendees industry-specific experience through international and Indian use cases.
“In the coming times, Deep Learning Networks will help us understand computer memory better. DL will be democratized further to become a standard part of a developer’s toolkit,” said Omesh Tickoo, Principal Engineer and Research Manager at Intel. He also talked about the rising demand for artificial intelligence and machine learning technologies for modernizing robots and other systems.
With its newest innovation, DeepMind has again pushed the boundaries of artificial intelligence capabilities. This British AI subsidiary of Alphabet has created an AI-backed system called AlphaCode. DeepMind claims that the system can generate “computer programs at a competitive level.” DeepMind discovered that when it tested its system against coding tasks used in human contests, it received an “estimated rank” that placed it among the top 54 percent of human coders.
Image Credits: DeepMind
AlphaCode isn’t the first AI tool to produce computer code. Microsoft unveiled a similar tool, Copilot, in June to help programmers, built with the support of GitHub and OpenAI. GitHub Copilot analyzes existing code and generates new snippets or autocompletes lines of code, rather than acting as a standalone problem-solving system.
However, these models still fall short when tested against more difficult, unfamiliar problems that demand problem-solving skills beyond translating instructions into code. One investigation found that roughly 40% of Copilot’s output contained security flaws. And according to Armin Ronacher, creator of the Flask web framework for Python, Copilot can be prompted to reproduce copyrighted code from the 1999 computer game Quake III Arena, accompanied by comments from the original programmer.
“I’m leaving GitHub because copilot uses my OpenSource code for training” is such an odd move. Anyone can fork it to there and GitHub can feed OpenSource code from anywhere to it and US copyright law permits this. I’m also pretty certain we should not strengthen copyright laws …
— Armin “vax ffs” Ronacher (@mitsuhiko) July 3, 2021
At the time of Copilot’s debut, GitHub revealed that roughly 0.1% of its code suggestions might contain verbatim fragments of source code from the training set. Copilot could even potentially generate real personal data such as phone numbers, email addresses, or names, as well as code that is biased or racist in nature. As a result, the company recommends that the code be thoroughly inspected and verified before being used. The problem of generating meaningless code is also common to GPT-3.
However, DeepMind claims that AlphaCode, unlike most large NLP models, is a large-scale transformer-based code generation model that can provide unique solutions to these deeper-thinking challenges. While designing AlphaCode, DeepMind focused on the following three objectives:
Finding a clean dataset to work with; since coding competitions are plentiful, the data was easily acquired.
Developing an efficient algorithm, similar to the transformer-based architectures used in natural language processing and image recognition.
Making numerous example solutions and then filtering them to locate work that is relevant to the problem at hand.
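The generate-and-filter objective above can be sketched in a few lines of Python. The candidate programs, example tests, and function names below are illustrative placeholders, not AlphaCode’s actual pipeline:

```python
# Toy sketch of AlphaCode-style filtering: generate many candidate
# programs, keep only those that pass the problem's example tests.

def passes_examples(program_src, examples):
    """Run a candidate program on each example input and check its output."""
    namespace = {}
    exec(program_src, namespace)  # defines the candidate's solve() (toy code only)
    solve = namespace["solve"]
    return all(solve(inp) == out for inp, out in examples)

def filter_candidates(candidates, examples):
    """Keep only candidates whose output matches every example."""
    return [src for src in candidates if passes_examples(src, examples)]

# Toy problem: return the sum of a list of integers.
examples = [([1, 2, 3], 6), ([10], 10)]
candidates = [
    "def solve(xs): return sum(xs)",  # correct
    "def solve(xs): return max(xs)",  # wrong on the first example
    "def solve(xs): return len(xs)",  # wrong on both examples
]

surviving = filter_candidates(candidates, examples)
print(len(surviving))  # only the correct candidate survives
```

In the real system, DeepMind generates orders of magnitude more candidates and clusters the survivors before picking submissions; this sketch shows only the example-test filtering idea.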
The emphasis was placed on a transformer-based neural architecture because such models can learn in a semi-supervised setting, with unsupervised pretraining followed by supervised fine-tuning. The transformers are first exposed to “unknown” data for which no previously specified labels exist. They are then trained on labeled datasets during the fine-tuning phase to learn specific tasks, such as answering queries, assessing sentiment, and paraphrasing documents.
They do agree, though, that AlphaCode’s abilities aren’t precisely reflective of the kinds of issues a typical programmer encounters: it was not designed to address the same types of problems an average programmer faces. It’s also worth noting that AlphaCode was not intended to replace software engineers; its major goal is to assist those who wish to code.
According to Oriol Vinyals, principal research scientist at DeepMind, the research is still in its early stages, but the results bring the company closer to creating a flexible problem-solving AI.
DeepMind produced AlphaCode by training a neural network on a large number of coding samples gathered from GitHub’s software repositories in the programming languages C++, C#, Go, Java, JavaScript, Lua, PHP, Python, Ruby, Rust, Scala, and TypeScript. In addition, DeepMind fine-tuned and tested the AlphaCode system using CodeContests, a new dataset the lab constructed that combines public programming datasets with challenges, answers, and test cases collected from Codeforces. With 41.4 billion parameters, AlphaCode generates multiple candidate solutions in C++ and Python when given a new problem to solve. The DeepMind team then ran automated testing and filtering over these candidates to select ten solutions worth evaluating and potentially submitting.
AlphaCode was evaluated against ten challenges curated by Codeforces, a competitive coding site that offers weekly tasks and assigns coders ranks akin to the Elo rating system used in chess. These tasks are not the same as those a coder would encounter while, say, working on a commercial app. They are more self-contained and require a broader understanding of both algorithms and theoretical computer science concepts. In short, solving these advanced puzzles requires a blend of logical reasoning, coding, critical thinking, and natural language understanding. Further, each contest had more than 5,000 participants on the Codeforces site. Averaging within the top 54.3% of responses, DeepMind estimates that this gives AlphaCode a Codeforces Elo of 1238, which places it within the top 28% of users who have competed on the site in the last six months. Meanwhile, on CodeContests, given up to a million samples per problem, AlphaCode solved 34.2% of problems.
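For context on the Elo numbers above: in the Elo system, a rating gap translates directly into an expected score. A minimal sketch of the standard formula, with illustrative ratings rather than DeepMind’s evaluation code:

```python
# Standard Elo expected-score formula: the expected score of a player
# rated r_a against a player rated r_b (1 = certain win, 0 = certain loss).
def elo_expected(r_a: float, r_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

# Example: a 1238-rated entrant facing a 1438-rated one (a 200-point gap)
# has roughly a 24% expected score.
p = elo_expected(1238, 1438)
print(round(p, 2))
```

Equal ratings give an expected score of exactly 0.5, and each 400-point gap multiplies the odds against the lower-rated player by ten.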
An example interface of AlphaCode tackling a coding challenge: the input, as given to human competitors, is shown on the left, and the generated output on the right. Image Credit: DeepMind
Mike Mirzayanov, the founder of Codeforces, says the AlphaCode results exceeded his expectations. He admitted he was originally skeptical, because even in simple competitive problems an algorithm has to be not only implemented but also, more critically, invented in the first place.
At the same time, DeepMind believes it has to address several critical issues before AlphaCode is SaaS-ready, including interoperability, bias, generalization, and security concerns. Further, as is common with large-scale models, training this transformer-based code generator requires a significant amount of compute. On the plus side, once AlphaCode has generated a program, that program can usually be run inexpensively on any computer, unlike neural network models, which normally require accelerators. This also implies that it might be more conveniently scaled to cater to various applications.
DeepMind, which Google acquired in 2014, has made headlines for projects like AlphaGo, which defeated the world champion in the game of Go in a five-game match, and AlphaFold, which solved a 50-year-old grand challenge in biology. With AlphaCode, the company is set to bring another revolutionary milestone in problem-solving AI technologies.
Business Insider reported that Meta, formerly known as Facebook, blames Apple for its historically weak revenue generation in the last quarter.
According to the company, it will lose $10 billion in a year because of a software change made by Apple. The software update allows iPhone users to choose which apps may track their behavior across other applications, which led to this massive loss for Facebook.
This feature allows users to deny Facebook permission to track them, considerably affecting Facebook’s advertising revenue. Meta CFO David Wehner said, “The impact of iOS overall as a headwind on our business in 2022 is on the order of $10 billion.”
Advertisements are one of Meta’s prime sources of revenue, and when iPhone users got the chance to disallow Facebook from tracking them, Meta lost its primary revenue stream from Apple users.
The new feature was released with Apple’s iOS 14.5 software update in April last year. A recent report claims that around 95% of iOS users with the feature chose to disallow Facebook from tracking them, a significant factor in Meta’s loss.
“We can’t be precise on this. It’s an estimate. We’re working hard to mitigate those impacts and continue to make ads relevant and effective for users,” said David Wehner.
Because of the functionality, app publishers were bound to include a pop-up asking for permission to track user activity for ad sales. When users choose not to share their information with a particular application, Apple disables that app’s access to a range of data that advertisers utilize.
Researchers from the National Eye Institute (NEI) have developed a new artificial intelligence-powered system that can accurately evaluate vision loss in individuals. The system can monitor the progression of Stargardt disease, an eye disease that causes vision loss in children.
The artificial intelligence system identifies the retina cells affected by the disease to generate data, helping in patient monitoring. Apart from generating information related to their health condition, the system also helps identify genetic causes of the illness and develop proper treatment plans.
Michael F. Chiang, MD, director of the NEI, said, “These results provide a framework to evaluate Stargardt disease progression, which will help control for the significant variability from patient to patient and facilitate therapeutic trials.”
The researchers focused on the health of photoreceptors in the ellipsoid zone, a characteristic of the inner/outer segment border of photoreceptors that is decreased or eliminated because of the disease.
The most common form of Stargardt is known as ABCA4-associated retinopathy, which develops in nearly one out of every 9,000 individuals. The disease develops when an individual inherits two mutated copies of the ABCA4 gene, one from each parent. People who inherit only one mutated copy are genetic carriers of the disease but do not develop it. Researchers used a deep-learning algorithm to quantify and compare photoreceptor loss across different layers of the retina based on the patient’s phenotype and ABCA4 variant.
Researchers studied 66 such patients over a period of five years to develop the system. Images of the patients’ retinas were taken and fed to a deep learning algorithm to generate results.
Brian P. Brooks, MD, Ph.D., chief of the NEI Ophthalmic Genetics & Visual Function Branch, said, “Different variants of the ABCA4 gene are likely driving the different disease characteristics, or phenotypes. However, conventional approaches to analyzing structural changes in the retina have not allowed us to correlate genetic variants with phenotype.”
Turkey’s Ministry of Agriculture and Forestry is now planning to use artificial intelligence solutions to tackle the massive challenge of controlling wildfires.
The ministry said it intends to use AI to detect early signs of smoke, which would help authorities take adequate precautionary measures against the spread of wildfires. The artificial intelligence-powered system, which will use a range of cameras for smoke detection, will be developed by the ministry.
This development comes after the massive destruction Turkey suffered last year due to the sudden spread of wildfires in the region. According to an interview published by the Yeni Şafak newspaper, the cameras will be installed at the top of watch towers located in forests to increase the system’s effectiveness and accuracy.
According to Forestry Minister Bekir Pakdemirli, “smoke perception” allows the cameras to detect smoke from a distance of up to 20 kilometres. Additionally, the artificial intelligence-powered system would drastically reduce fire detection time to a matter of minutes, a process that is time-consuming with current practices.
Currently, the technology is deployed in two provinces, Antalya and Muğla, which suffered heavy losses during the fire outbreak of 2021. Last year, Turkey was ravaged by catastrophic wildfires along Anatolia’s southern coast, which affected 53 provinces and caused over 270 forest fires.
This new AI system will therefore be used as an effective precautionary tool, helping officials take quicker action. “AI enables us to keep track of the smoke and deploy our teams as soon as possible,” said Bekir Pakdemirli.
When the cameras detect smoke, they automatically send alert signals to authorities via text or video message using artificial intelligence and machine learning. Turkey currently has more than 100 watch towers with cameras; each tower will be able to scan an area of 50,000 hectares and send the required notifications to authorities in under two minutes. Recently, the President of Turkey, Recep Tayyip Erdoğan, mentioned that the government plans to accelerate the development and deployment of safety infrastructure in the country to battle wildfires. President Erdoğan said, “We will increase the number of domestically manufactured unmanned aerial vehicles (UAVs) to eight, the number of firefighting planes to 20 and helicopters to 55.”
Bloomberg reported that two of Google’s ethical AI team members recently left to join Timnit Gebru’s new nonprofit research institute. This development is not good news for the technology giant, as the two researchers belonged to a unit studying an area that is vitally important to Google’s future plans.
The two members who left Google are Alex Hanna, a research scientist, and Dylan Baker, a software engineer. Timnit Gebru’s new nonprofit research institute named DAIR, or Distributed AI Research, was founded in December with the purpose of analyzing various points of view and preventing harm in artificial intelligence.
Hanna, who will soon serve as Director of Research at DAIR, said she left because of the toxic work culture at Google, and also pointed out the underrepresentation of Black women at the company.
A Google spokesperson said, “We appreciate Alex and Dylan’s contributions — our research on responsible AI is incredibly important, and we’re continuing to expand our work in this area in keeping with our AI Principles.”
The spokesperson added that they are also dedicated to building an environment where people with varied viewpoints, backgrounds, and experiences can do their best job and support one another.
Google’s ethical AI team had already been embroiled in controversy in 2020, when the team’s co-lead spoke out about the company’s work culture regarding women and Black employees.
The severity of the matter became apparent when Sundar Pichai, CEO of Google’s parent company Alphabet, had to take the issue into his own hands and launch an investigation. Google later fired two employees in connection with the allegations.
“A high-level executive remarked that there had been such low numbers of Black women in the Google Research organization that they couldn’t even present a point estimate of these employees’ dissatisfaction with the organization, lest management risk deanonymizing the results,” said Hanna.
OpenAI recently announced three new families of embedding models in its API – text similarity, text search, and code search – each geared to excel at a different task. These models take either text or code as input and return an embedding vector as a result. They make natural language and code tasks such as clustering, semantic search, and classification far easier to perform.
Embeddings are numerical representations of concepts, transformed into sequences of numbers. They are beneficial for working with natural language and code since they can be easily consumed and compared by other machine learning models and algorithms, such as clustering and search.
The new endpoint maps text and code to a vector representation – “embedding” them in a high-dimensional space using neural network models. These are the descendants of GPT-3 where each dimension captures some aspects of the input.
Text similarity models provide embeddings that represent the semantic similarity of texts and also help in tasks such as clustering, data visualization, and classification.
Text search models provide embeddings that allow for large-scale search tasks, such as discovering a relevant short search query document among a collection of documents based on a text query.
Code search models provide code and text embeddings aiming to discover the relevant code block for a natural language query from a collection of code blocks.
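Across all three model families, similarity between two returned embedding vectors is typically measured with cosine similarity. A minimal sketch, using made-up low-dimensional vectors rather than real API output (actual OpenAI embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings"; real ones come back from the API
# with far higher dimensionality.
cat = [0.9, 0.1, 0.0, 0.2]
kitten = [0.85, 0.15, 0.05, 0.25]
invoice = [0.0, 0.8, 0.6, 0.1]

# Semantically related texts should score closer to 1.0.
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, invoice))  # True
```

In a search task, the same comparison is run between one query embedding and every document embedding, and results are ranked by score.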
Researchers at the Massachusetts Institute of Technology (MIT) have developed a unique artificial intelligence and machine learning model named EquiDock that can rapidly and accurately predict the complex that will form when two proteins bind together.
It is a timely model, as scientists all over the world are trying to understand and battle the COVID-19 pandemic, and developing a successful synthetic antibody requires understanding how protein attachments take place.
A significant challenge the researchers faced during EquiDock’s development was the unavailability of training data. The newly developed AI model will considerably help researchers identify protein attachments.
According to the researchers, this AI model is between 80 and 500 times faster than traditional software used for this purpose. MIT postdoc Octavian-Eugen Ganea said, “Deep learning is very good at capturing interactions between different proteins that are otherwise difficult for chemists or biologists to write experimentally. Some of these interactions are very complicated, and people haven’t found good ways to express them.”
Ganea added that this AI-powered deep learning model could learn these types of interactions from data. EquiDock has a high accuracy rate, making it very reliable. Apart from analyzing antibodies, the model can also benefit other biological processes involving protein interactions, such as DNA replication and repair, which could help speed up drug development. “If we can understand from the proteins which individual parts are likely to be these binding pocket points, then that will capture all the information we need to place the two proteins together,” said Ganea. Once scientists discover these two sets of points, they only have to figure out how to rotate and translate the proteins so that one set corresponds to the other.
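That final rotate-and-translate step is a classic rigid point-set alignment problem. Below is a sketch of the standard Kabsch algorithm in NumPy; this is not EquiDock’s actual SE(3)-equivariant network, only an illustration of aligning two sets of corresponding points:

```python
import numpy as np

def kabsch_align(P, Q):
    """Find rotation R and translation t that best map points P onto Q
    (least-squares rigid alignment via the Kabsch algorithm)."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # correct sign to avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Toy "binding pocket" points: Q is P rotated 90 degrees about z and shifted.
P = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]])
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
Q = P @ Rz.T + np.array([2.0, -1.0, 0.5])

R, t = kabsch_align(P, Q)
aligned = P @ R.T + t
print(np.allclose(aligned, Q))  # True: the rigid transform is recovered exactly
```

Given two matched sets of binding-pocket points, this kind of alignment fixes the relative pose of the two proteins, which is exactly the information Ganea describes.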
Amazon has released a new testing tool for Alexa developers who want to increase the number of people who use their voice apps. The Alexa Skill A/B testing tool allows developers to set up A/B tests to learn how to get more customers to spend more time and money with their app, and to increase the number of distinct sessions in which users return to the skill.
In general, A/B testing is a type of experiment in which users are randomly shown two or more versions of an experience, and statistical analysis is performed to see which one works better for a certain conversion objective. A/B testing is used extensively across sectors, including retail, marketing, and SaaS, and can be set up on a number of channels and mediums: Facebook advertisements, search results, email newsletter flows, email subject lines, marketing campaigns, sales scripts, and so on. You design variants, choose a metric, and see which one has the highest conversion rate. Commonly, two alternatives are presented to an audience, with each member receiving one of them, and the audience’s reaction helps determine which one should be used universally.
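The “statistical analysis” step usually reduces to comparing the variants’ conversion rates. A minimal two-proportion z-test in plain Python, with illustrative numbers rather than data from any real Alexa skill:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic comparing the conversion rates of variants A and B.
    |z| > 1.96 indicates significance at the 5% level (two-sided)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: variant A converts 200 of 5,000 users,
# variant B converts 260 of 5,000 users.
z = two_proportion_z(200, 5000, 260, 5000)
print(z > 1.96)  # True: B's lift is statistically significant at 5%
```

Tools like the Alexa A/B testing console run this kind of comparison for the developer, but the underlying decision rule is the same.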
Voice apps employ this idea to evaluate the best key phrases to react to and the best responses to encourage repeat usage by decreasing friction in navigating the app. Setting up these tests may be time-consuming and inconvenient, which is why Alexa’s new A/B testing aims to make it easier. The setup takes only a few hours and can be evaluated over several weeks.
Amazon highlighted UK-based voice design studio Vocala as an example of a customer that used A/B testing to discover that a certain prompt was 15% more effective at driving paid conversions. The test had taken the company less than two hours to set up.
“We analyzed the results of our experiment through a panel. After a few weeks, we could clearly see that the longer prompt was 15% more effective at generating paid conversions,” said James Holland, lead voice developer at Vocala.
According to a post on the Alexa developer blog, developers may use Alexa A/B testing to run experiments that measure customer engagement, retention, churn, and monetization, and to examine a variety of metrics. These include customer perceived friction rate, in-skill purchasing (ISP) offers, ISP sales, ISP accepts, ISP offer accept rate, skill next-day retention, skill dialogs, and skill active days. The new features follow up on Amazon’s announcement of Alexa Skill A/B Testing last summer, which allowed users to try out new or updated Alexa skills, alongside the Alexa Skill Design Guide, announced at the same event.
Last year, during its third annual Alexa Live event, Amazon introduced Alexa Skill Components to help developers build skills quicker by inserting basic Skill code into existing speech models and code libraries. It also announced that the Alexa Skill Design Guide, which codifies lessons gained from Amazon’s developers and the larger skill-building community, has been improved.
Image Source: Amazon Developer
To get started with the Alexa A/B testing, go to the Alexa Developer Console and choose the skill for which you wish to perform an experiment, then click “Certification.” Look for the “A/B Testing” section and select “Create” from the drop-down menu. In the Experiment Analytics area, you can see experiment-related statistics for both the live skill version (control) and the certified skill version (treatment).
Max Hospital, Saket, has launched its new AI-powered cancer treatment technology, Radixact X9 TomoTherapy. It is a highly capable artificial intelligence-enabled treatment system integrated with a second-generation synchronized respiratory motion management system.
The Department of Radiation Oncology recently launched the technology at Max Institute of Cancer Care (MICC), Saket.
The Radixact X9 uses artificial intelligence to track the tumor and deliver treatment in real time, ensuring that the tumor is not missed due to breathing movement of the chest or abdomen during radiation and guaranteeing highly accurate targeting.
Dr. Charu Garg, Director of Radiation Oncology, MICC at Max Hospital Saket, said, “New radiation technologies have made it possible to precisely target and destroy the cancer cells while preserving normal cells of the body. With this new technique, the Department of Radiation Oncology at Max Institute of Cancer Care (MICC) Saket has added another milestone in its journey towards providing best-in-class healthcare services in the field of oncology.”
She further added that the launch of the new AI-based technology underlines the hospital’s commitment to cancer treatment and the welfare of cancer patients. The newly introduced technology eliminates or shrinks tumors by combining the precision of intensity-modulated radiation therapy with image-guided scanning.
It uses a slice-by-slice approach to target the tumor (helical IMRT), allowing improved management of the radiation dose delivered to surrounding healthy tissue. Experts claim that TomoTherapy is a next-generation advanced radiation therapy system that can deliver extremely precise radiation to even the most severe cancer patients.
Dr. Dodul Mondal from Max Hospital Saket said, “This technology not only allows us to perform high definition of IGRT but also enables monitoring of the changes in a tumor on a day-to-day basis.”
He also mentioned that this form of treatment is particularly beneficial for patients who have previously received radiation but require re-irradiation.