Tuesday, May 24, 2022

AlphaCode: What’s exciting about DeepMind’s New Transformer-based Code Generating System?

In programming competitions hosted on Codeforces, AlphaCode achieved an average ranking within the top 54.3% across 10 recent contests with more than 5,000 participants each.

With its newest innovation, DeepMind has again pushed the boundaries of artificial intelligence capabilities. This British AI subsidiary of Alphabet has created an AI-backed system called AlphaCode. DeepMind claims that the system can generate “computer programs at a competitive level.” DeepMind discovered that when it tested its system against coding tasks used in human contests, it received an “estimated rank” that placed it among the top 54 percent of human coders.

Image Credits: DeepMind

AlphaCode isn’t the first AI tool to produce computer code. Microsoft unveiled a similar tool, Copilot, in June, built with the support of GitHub and OpenAI. GitHub Copilot analyzes existing code and generates new snippets or autocompletes lines of code, rather than acting as a standalone problem-solving system.

However, these models still fall short when tested against more difficult, unfamiliar problems that demand problem-solving skills beyond translating instructions into code. In one investigation, researchers discovered that roughly 40% of Copilot’s output contained security flaws. According to Armin Ronacher, creator of the Flask web framework for Python, Copilot can be prompted to reproduce copyrighted code from the 1999 computer game Quake III Arena, accompanied by comments from the original programmer.

At the time of Copilot’s debut, GitHub revealed that roughly 0.1% of its code suggestions might contain fragments of verbatim source code from the training set. Copilot could even potentially generate real personal data such as phone numbers, email addresses, or names, as well as code that is biased or racist in nature. As a result, the company recommends that the code be thoroughly inspected and verified before being used. The problem of generating meaningless code is also common to GPT-3.

However, DeepMind claims that AlphaCode, unlike most large NLP models, is a large-scale transformer-based code generation model that can provide novel solutions to these deeper-thinking challenges. While designing AlphaCode, DeepMind focused on the following three objectives:

  • Finding a clean dataset to work with; since coding competitions are plentiful, the data was easily acquired.
  • Developing an efficient algorithm, similar to the transformer-based architectures used in natural language processing and image recognition.
  • Making numerous example solutions and then filtering them to locate work that is relevant to the problem at hand.

The emphasis was placed on a transformer-based neural architecture because such models can usually learn in a semi-supervised setting, with unsupervised pretraining followed by supervised fine-tuning. The transformers are first exposed to “unknown” data for which no previously specified labels exist. They are then trained on labeled datasets during the fine-tuning phase to learn specific tasks such as answering queries, assessing sentiment, and paraphrasing documents.
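The two-phase recipe described above can be sketched with a toy example. Everything here is invented for illustration: the “pretraining” is a simple co-occurrence statistic standing in for a real transformer, and the “fine-tuning” is a tiny logistic-regression head trained on a handful of labeled sentences.

```python
import numpy as np

# Invented toy corpora: unlabeled text for "pretraining",
# a few labeled examples for "fine-tuning".
unlabeled = ["good great fine", "bad awful poor", "good fine", "bad poor"]
labeled = [("good great", 1), ("awful poor", 0)]

# --- Phase 1: unsupervised "pretraining" ---
# Learn word representations from unlabeled data alone
# (here, row-normalised co-occurrence counts).
vocab = sorted({w for s in unlabeled for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}
cooc = np.zeros((len(vocab), len(vocab)))
for s in unlabeled:
    words = s.split()
    for a in words:
        for b in words:
            if a != b:
                cooc[idx[a], idx[b]] += 1
embed = cooc / (cooc.sum(axis=1, keepdims=True) + 1e-9)

# --- Phase 2: supervised "fine-tuning" ---
# Fit a tiny classifier on top of the pretrained representations.
def featurize(sentence):
    return np.mean([embed[idx[w]] for w in sentence.split() if w in idx], axis=0)

X = np.stack([featurize(s) for s, _ in labeled])
y = np.array([label for _, label in labeled], dtype=float)
w = np.zeros(X.shape[1])
for _ in range(200):  # plain logistic regression via gradient descent
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / len(y)

def predict(sentence):
    """Classify a sentence using pretrained features + fine-tuned head."""
    return int(featurize(sentence) @ w > 0)
```

The point of the split is data efficiency: the representations come almost entirely from cheap unlabeled text, so only a small labeled set is needed for the task-specific phase.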

DeepMind does concede, though, that AlphaCode’s abilities are not yet representative of the kinds of problems a typical programmer encounters; it was not designed to address the same tasks an average developer faces day to day. It is also worth noting that AlphaCode was never intended to replace software engineers, but to assist those who wish to code.


According to Oriol Vinyals, principal research scientist at DeepMind, the research is still in its early stages, but the results bring the company closer to creating a flexible problem-solving AI.

DeepMind produced AlphaCode by training a neural network on a large number of coding samples gathered from GitHub’s software repositories in the programming languages C++, C#, Go, Java, JavaScript, Lua, PHP, Python, Ruby, Rust, Scala, and TypeScript. In addition, DeepMind fine-tuned and tested the AlphaCode system using CodeContests, a new dataset the lab constructed that combines public programming datasets with challenges, answers, and test cases collected from Codeforces. With 41.4 billion parameters, AlphaCode generates multiple candidate solutions in C++ and Python when given a new problem to solve. The DeepMind team then automatically filtered and tested these candidates to identify ten solutions worth evaluating and potentially submitting.
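The sample-then-filter step can be sketched roughly as follows. The candidate “programs” here are stand-in Python functions rather than real model samples, and the example tests and extra inputs are invented; the idea is to keep only candidates that pass the problem’s example tests, then group behaviourally identical survivors so the final submissions are diverse.

```python
# Invented candidate programs for a toy problem ("double the input").
def solve_a(x): return x * 2          # correct
def solve_b(x): return x + x          # correct, behaviourally identical to solve_a
def solve_c(x): return x ** 2         # wrong (happens to pass one example test)

candidates = [solve_a, solve_b, solve_c]
example_tests = [(2, 4), (3, 6)]      # (input, expected output) pairs from the problem
extra_inputs = [0, 1, 5, 7]           # used only to compare candidate behaviour

# Step 1: filter -- discard any candidate that fails an example test.
survivors = [f for f in candidates
             if all(f(i) == expected for i, expected in example_tests)]

# Step 2: cluster -- group survivors by their outputs on extra inputs,
# then take at most one representative per behaviour cluster, capped at ten.
clusters = {}
for f in survivors:
    signature = tuple(f(i) for i in extra_inputs)
    clusters.setdefault(signature, []).append(f)

submissions = [group[0] for group in clusters.values()][:10]
```

Here `solve_c` passes the first example test but fails the second and is filtered out, while `solve_a` and `solve_b` collapse into a single behaviour cluster, so only one of them is submitted.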

AlphaCode was evaluated against ten challenges curated by Codeforces, a competitive coding site that offers weekly tasks and ranks coders using a system akin to the Elo rating system in chess. These tasks are not the same as those a coder would typically encounter, e.g., while working on a commercial app. They are more self-contained and demand a broader understanding of both algorithms and theoretical computer science. In short, solving these advanced puzzles requires a blend of logical reasoning, coding, critical thinking, and natural-language understanding. Further, each contest had more than 5,000 participants on the Codeforces site. Averaging within the top 54.3% of responses, AlphaCode earns an estimated Codeforces Elo of 1238, which DeepMind says places it within the top 28% of users who have competed on the site in the last six months. Meanwhile, on CodeContests, given up to a million samples per problem, AlphaCode solved 34.2% of problems.
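For readers unfamiliar with Elo-style ratings, the core idea is simple: a player’s expected score against an opponent follows a logistic curve in the rating gap, and ratings move toward the observed result after each event. The K-factor of 32 below is a common illustrative choice, not Codeforces’ exact update rule.

```python
def expected_score(rating_a, rating_b):
    """Expected score (0..1) of player A against player B under Elo."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating, expected, actual, k=32):
    """Move the rating toward the actual result by K times the surprise."""
    return rating + k * (actual - expected)
```

For example, a 1238-rated entrant facing a 1638-rated one has an expected score of 1 / (1 + 10^1) ≈ 0.09, so a win against such an opponent moves the rating up far more than a win against an equal.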

An example interface of AlphaCode tackling a coding challenge. The input is given as it is to humans on the left and the output generated on the right. 
Image Credit: DeepMind

Mike Mirzayanov, the founder of Codeforces, says that AlphaCode’s results exceeded his expectations. He admitted, however, that he was initially skeptical, because even simple competitive problems require a coder not only to implement an algorithm but also, most difficult of all, to invent it.

At the same time, DeepMind believes it must address several critical issues before AlphaCode is SaaS-ready, including interoperability, bias, generalization, and security concerns. Further, as is common with all large-scale models, training this transformer-based code generator requires a significant amount of compute. On the plus side, while running neural network models normally requires accelerators, once AlphaCode has generated a program, that program can usually be run inexpensively on any computer. This also means the system could be scaled more conveniently to serve various applications.

DeepMind, which Google acquired in 2014, has made headlines for projects like AlphaGo, which defeated the world champion in the game of Go in a five-game match, and AlphaFold, which solved a 50-year-old grand challenge in biology. With AlphaCode, the company is set to bring another revolutionary milestone in problem-solving AI technologies.



Preetipadma K
Preeti is an Artificial Intelligence aficionado and a geek at heart. When she is not busy reading about the latest tech stories, she will be binge-watching Netflix or F1 races!

