
Meta Introduces CICERO, the First AI That Plays Diplomacy at a Human Level


Meta has created CICERO, the first AI to play the strategy board game Diplomacy at a human level. CICERO demonstrated its competence by playing the online version on webDiplomacy.net, where it achieved more than double the average score of human players.

Diplomacy is a strategic board game for seven players. It blends elements of games like Risk and Poker, and even the TV show Survivor. Cooperation is essential: the only way to win is to work with other players to conquer as much territory as possible, and alliances are forged through natural-language negotiation before every move.

Meta selected Diplomacy because researchers had already been developing simplified variants of the game, but without natural language capabilities. To close that gap, Meta set out to build an agent that could negotiate within the game.

Read More: Chinese Platforms to Test Metaverse During Qatar FIFA World Cup 2022

CICERO combines two major areas of artificial intelligence: strategic reasoning and natural language processing. By integrating the two, CICERO can reason about and plan players' moves, then communicate in natural language to achieve shared objectives.
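To make the planner-plus-language split concrete, here is a toy sketch of that two-part loop. This is not Meta's actual architecture: the value estimates, the template-based message generator, and all function names are invented for illustration; CICERO itself uses a trained strategic-reasoning engine and a large dialogue model.

```python
# Toy sketch of a "plan first, then talk" agent loop (illustrative only).

def plan_move(options):
    """Strategic-reasoning stand-in: pick the option with the highest expected value."""
    return max(options, key=lambda o: o["expected_value"])

def generate_message(move, ally):
    """Language-model stand-in: phrase the chosen plan as a negotiation message."""
    return f"{ally}, I plan to move to {move['territory']}. Will you support me?"

# Hypothetical candidate moves with made-up value estimates.
options = [
    {"territory": "Munich", "expected_value": 0.4},
    {"territory": "Berlin", "expected_value": 0.7},
]
move = plan_move(options)
print(generate_message(move, "France"))
```

The point of the structure is that the message is generated *after* and *consistent with* the plan, which is the property Meta highlights in CICERO.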


Meta has released the code, methods, and models behind the AI agent on its GitHub page so developers can gain more insight into the advancement.


IIT Madras Offering 12-week Free Online Course on Machine Learning


IIT Madras is offering a free online course on machine learning, 'Introduction to Machine Learning', for students and interested professionals.

The Introduction to Machine Learning course by IIT Madras aims to equip students with the skills needed to use machine learning and data analytics to make sense of the growing volume of data from various sources. The course will run from 23 January 2023 to 14 April 2023.

Professor Balaraman Ravindran, a Mindtree Faculty Fellow and a member of the Computer Science department at IIT Madras, will teach the course. The professor has almost two decades of machine learning research expertise, particularly in reinforcement learning. His research interests also include data mining and social network analysis.

Read More: IIT Kharagpur announces free online course in artificial intelligence and machine learning

Applications are being accepted for the course, which will address the fundamental concepts of machine learning from a mathematical perspective. It will also cover the main learning paradigms, along with some of the common algorithms and structures employed in each. Topics include:

  • Probability theory, linear algebra, and convex optimization.
  • Linear regression, multivariate regression, subset selection, shrinkage methods, principal component regression, and partial least squares.
  • Linear classification, logistic regression, and linear discriminant analysis.
  • Learning theory, introduction to reinforcement learning, and optional videos.
  • Neural networks, statistical decision theory, and Gaussian mixture models, among other topics.
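As a small taste of the linear regression material listed above, here is a minimal ordinary least squares fit in Python using NumPy. This is just an illustration of the technique, not material from the course itself.

```python
import numpy as np

# Fit y = w*x + b by ordinary least squares.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])           # generated by y = 2x + 1

X = np.column_stack([x, np.ones_like(x)])    # design matrix with a bias column
w, b = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares solution

print(w, b)  # recovers slope 2 and intercept 1
```

The course covers this derivation from the normal equations; `lstsq` solves the same minimization numerically.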

IIT Madras is also inviting enrollment for a course on Reinforcement Learning by Professor Balaraman Ravindran. Enrollment for both courses ends on 30 January 2023.


Kenya on Fast-track to Pass Bill on Crypto Taxation


Kenya has recently revealed its intention to regulate the cryptocurrency industry, with an emphasis on taxing digital currency transactions as the sector grows.

Under the Capital Markets (Amendment) Bill, 2022, cryptocurrency exchanges, digital wallets, and individual transactions could all be taxed. The tax would be similar to the 20% excise duty charged by banks on all commissions and fees for cryptocurrency trading.

If authorized by Parliament, Kenyan crypto investors would be required to pay capital gains tax to the Kenya Revenue Authority whenever they sell or use their crypto in a transaction. Cryptocurrency held for less than a year would be subject to income tax (10% to 30%), while holdings of more than a year would attract capital gains tax. The bill would also mandate that investors disclose key information about their cryptocurrency holdings to the Capital Markets Authority, the government's financial regulator.
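The holding-period rule described in the bill can be sketched as a simple function. The rates below are placeholders for illustration: the article gives a 10% to 30% income tax band but does not specify the capital gains rate, so that parameter is a made-up assumption.

```python
# Illustrative sketch of the proposed holding-period rule (rates are placeholders).

def crypto_tax(gain, held_days, income_tax_rate=0.30, cgt_rate=0.15):
    """Tax due on a crypto gain under the proposed rule.

    cgt_rate is a hypothetical value; the bill's actual rate is not
    stated in the article.
    """
    if gain <= 0:
        return 0.0
    # Held under a year: income tax applies; otherwise capital gains tax.
    rate = income_tax_rate if held_days < 365 else cgt_rate
    return gain * rate

print(crypto_tax(1000, held_days=100))  # short-term holding: income tax
print(crypto_tax(1000, held_days=400))  # long-term holding: capital gains tax
```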

The proposed bill would designate digital currencies as securities, allow for the registration of individual cryptocurrency traders, and create a centralized computerized registry of all digital currency transactions. A fund would be established to safeguard investors from financial loss originating from the collapse of a licensed broker or dealer, and privacy guarantees would be implemented as additional consumer protection measures.

Kenya, whose central bank has warned residents against cryptocurrency usage and trading, will follow in the footsteps of Ethiopia, which banned cryptocurrency before regulating it.

Read More: El Salvador and Bitcoin Mayhem: Is this crypto experiment failing?

Many Kenyan crypto traders are outraged by the government's latest move. Having suffered losses in cryptocurrency earlier this year and been unable to withdraw their balances after the recent FTX collapse, most Kenyan crypto pundits did not welcome the news.

Kenya has the largest proportion of crypto-owning citizens in Africa. Ukraine, Russia, Venezuela, and Singapore are the only four nations, according to the United Nations Conference on Trade and Development (UNCTAD), whose residents hold more cryptocurrency than Kenya. However, the nation’s crypto community may begin to dwindle in the coming months due to eroding investor confidence in the sector, which has seen over US$2 trillion wiped off the market globally this year.


AI Code Generator Kite Shuts Down

Kite, an AI tool that helped developers write code, is shutting down. On November 16, the founder announced on its official website that he has stopped working on Kite and will no longer support the Kite software.

Adam Smith founded Kite in 2014, and since June 2021, Kite has been temporarily unavailable. According to Smith, Kite failed for two main reasons: the technology was not ready, and the team could not build a business around it. On the technology side, Kite could not deliver the 10x improvement needed to break through the market because the state of the art for machine learning on code was not good enough: state-of-the-art models struggle to understand the structure of code, such as non-local context.

Read more: VR-controlled robots are being designed to treat injured UK soldiers

Kite began building better models for code, but doing so was extremely engineering-intensive and would have required about $100 million. According to Smith, Kite also could not generate enough revenue, as the 500k developers and engineering managers using the product preferred not to pay for it.

Teams at Kite also considered pivoting toward AI-powered code search with a bottom-up development strategy. However, the new strategy did not help, and Kite eventually shut down. Most of Kite's code, including the Python type inference engine, Python public package analyzer, desktop software, editor integrations, and GitHub crawler, is open source on GitHub.


India, EU sign agreement on cooperation in climate modeling and quantum tech


On Monday, the European Union (EU) and India signed an agreement on cooperation in hi-tech fields, such as climate modeling and quantum technologies, as a part of the Trade and Technology Council initiated by the two parties earlier this year.

The ‘Intent of Cooperation on High-Performance Computing, Weather Extremes, Climate Modelling and Quantum Technologies’ was officially signed by the European Commission’s Directorate General for Communications Networks, Content and Technology and the Ministry of Electronics and IT (MeitY) during a virtual ceremony.

The agreement builds on commitments by both parties for deepening technological cooperation on high-performance and quantum computing during the EU-India leaders meeting on May 8. “The signing of the agreement is significant in the context of the earlier decision to set up the EU-India Trade and Technology Council,” the EU said.

Read More: US Air Force Teams Up With SandboxAQ For Post-Quantum Cryptography Deal

The agreement aims to facilitate partnership on high-performance computing applications using European and Indian supercomputers in Covid-19 therapeutics, mitigating climate change, bio-molecular medicines, predicting natural disasters, and quantum computing.

The EU initiated the proposal for setting up the council. India agreed to the proposal as it will allow the two parties to work on issues such as quantum computing, 5G, artificial intelligence, climate modeling, and health-related technology.


VR-controlled robots are being designed to treat injured UK soldiers

Researchers at the Department of Automatic Control Systems Engineering and Advanced Manufacturing Research Center (AMRC), Sheffield, have planned to develop a VR-controlled robotics system to treat injured UK soldiers during wars.

The robotic system will enable trained medics to check on soldiers using a virtual reality (VR) headset and remotely control a robot to perform medical triage. It will send photos and videos of injuries to medics and collect patient data such as temperature, blood pressure, mouth swabs, and blood samples.

Read more: Galileo launches its free machine learning platform to debug unstructured data instantly

Injured soldiers in war zones are often first checked by a medical technician, similar to a paramedic. Equipment and facilities on the battlefield are limited, and moving injured soldiers to hospitals can take hours or even days. To overcome this challenge, researchers at the AMRC plan to design a remotely operated robot that can save a soldier's life in extreme situations.

Professor Sanja Dogramadzi, the head of Digital Design at the University of Sheffield AMRC, will lead the VR-controlled, remotely operated robots project. Professor Dogramadzi stated that the remotely operated robotic system would improve safety by reducing the danger soldiers face during treatment. The project is funded by the Defence Science Technology Laboratory and the Nuclear Decommissioning Authority with the Defence and Security Accelerator.


MIT Researchers Solve Differential Equation Problem in Liquid Neural Networks


Last year, researchers at MIT created a unique kind of neural network that learns while performing tasks. Dubbed the liquid neural network, this deep learning model can adjust its inherent behavior after the initial training phase, and it was seen as a key to major advances in dynamic scenarios where conditions can change quickly, such as autonomous driving, controlling robots, or diagnosing medical conditions. In other words, a liquid neural network can actively adapt to new data inputs in real time to anticipate future behavior, allowing algorithms to make decisions based on data streams that change frequently.

The research team eventually discovered that as the models’ number of neurons and synapses grows, they become computationally costly and necessitate cumbersome computer programs to solve the underlying, complex math necessary for the algorithms. Due to the magnitude of the equations, the math problems become increasingly challenging to solve, frequently taking multiple computing steps to arrive at a solution.

On Tuesday, MIT researchers reported that they had developed a solution to that constraint, not by expanding the data pipeline, but by solving a differential equation that had puzzled mathematicians since 1907. This differential equation explains how two neurons connect through synapses and could be the key to developing new, quick artificial intelligence systems. These models are orders of magnitude quicker and more scalable than liquid neural networks, yet they share the same flexible, causal, robust, and explainable properties. Because they are small and flexible even after training, unlike many traditional models, these neural networks could be applied to any task that requires gaining insight into data over time.

The team calls the new network the “closed-form continuous-time” neural network (CfC). In their paper published in Nature Machine Intelligence, the researchers describe a type of machine learning system called continuous-time neural networks that can handle representation learning on spatiotemporal decision-making tasks. These models are generally defined by continuous differential equations, which describe how the state of a system evolves over time. For instance, a differential equation can describe how a body X moves from point A to point B in space over time.

The ordinary differential equation (ODE) based continuous neural network designs are expressive models helpful in modeling data with complicated dynamics. These models enable parameter sharing, adaptive computations, and function approximation for non-uniformly sampled data by transforming the depth dimension of static neural networks (SNNs) and the temporal dimension of recurrent neural networks (RNNs) into a continuous vector field.

On comparatively small benchmarks, ODE-based neural networks with careful memory and gradient propagation design outperform advanced discretized recurrent models. However, because they rely on complex numerical differential equation solvers, their training and inference are slow. Consider the same body X now having to move from point A to point B via point C, then point D, and back to point A before pausing at point E: each leg implies more costly and complex calculations. This becomes increasingly evident as the complexity of the data, task, and state space rises, as in open-world problems like processing medical data, operating self-driving vehicles, analyzing financial time series, and simulating physics. In simple words, numerical differential equation solvers impose a limit on expressive power when used in advanced computation applications. This restriction has significantly slowed the scaling and interpretation of many physical processes that occur in nature, such as understanding the dynamics of nervous systems.

Read More: Mechanical Neural Network: Architectured Material that adapts to changing conditions

The closed-form continuous-time neural network models preserve the impressive characteristics of liquid networks without the need for numerical integration by replacing the differential equation governing the computation of the neuron with a closed-form approximation. These networks can scale exceptionally well compared to other deep learning instances, which is a significant improvement over conventional differential equation-based continuous networks. Moreover, since these models are developed from liquid networks, they outperform advanced, recurrent neural network models in time-series modeling.
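The speedup from replacing a numerical solver with a closed form can be seen on a toy example. The equation below is a simple leaky decay, dx/dt = -x/tau, chosen only because its closed-form solution is well known; it is not the actual CfC neuron equation from the paper.

```python
import math

# Toy leaky system dx/dt = -x / tau. A numerical ODE solver needs many
# small steps; the closed form x(t) = x0 * exp(-t / tau) needs one
# evaluation. This illustrates the cost gap, not the CfC model itself.

tau, x0, t_end = 1.0, 1.0, 5.0

# Numerical route: explicit Euler with 5000 steps.
dt, x = t_end / 5000, x0
for _ in range(5000):
    x += dt * (-x / tau)

# Closed-form route: a single expression.
x_closed = x0 * math.exp(-t_end / tau)

print(x, x_closed)  # nearly identical values, but one took 5000 steps
```

The CfC work applies the same trade to the neuron dynamics of liquid networks: once a closed-form approximation exists, inference no longer pays the per-step cost of a solver.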

Closed-form continuous-time neural network models are causal, compact, explainable, and economical to train and predict, according to MIT Professor Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and senior author of the paper. They also pave the way for reliable machine learning in safety-critical applications, and they can solve the same problems with even fewer neural nodes, making the process quicker and less computationally costly.

In evaluations of prediction quality and task completion, the CfC has already outperformed a number of other artificial neural networks. It also executes faster and with better accuracy when identifying human activities from motion sensors, simulating the physical dynamics of a walker robot, and performing event-based sequential image processing. On a sample of 8,000 patients, the CfC's medical predictions were 220 times faster than those of comparable models.

MIT researchers are optimistic that closed-form continuous-time neural networks will let them build models of the human brain that capture its millions of synaptic connections, something that is not currently feasible. The team also speculates that the model could generalize outside of its training distribution (out-of-distribution generalization), using visual training acquired in one environment to solve problems in an entirely different one.


British PM announces new scheme for world’s 100 most talented young professionals in AI


British Prime Minister Rishi Sunak announced a new scheme for the world’s 100 most talented young professionals in AI as part of the vision to make the UK a center to attract the brightest talent from around the world.

He pledged to set up one of the world’s most attractive visa regimes for highly skilled people/entrepreneurs and use the Brexit freedoms to initiate trade deals with the world’s fastest-growing economies.

The UK is currently negotiating a free trade agreement (FTA) with India, which the PM previously told Parliament he wants completed as quickly as possible. “We simply cannot allow the world’s top AI talent to be taken by America or China,” said Sunak.

Read More: India To Take Over The Chair Of Global Partnership Of Artificial Intelligence From France

“That is why, building on the Master’s conversion courses and AI scholarships I instigated as the chancellor, we are launching a program to attract the world’s top 100 young talents on AI,” Sunak said.

Sunak said that harnessing innovation to boost economic growth, incorporating invention into public services, and helping people learn the skills to become innovators is how he believes these challenges can be overcome.


Chinese Platforms to Test Metaverse During Qatar FIFA World Cup 2022


The FIFA World Cup 2022 in Qatar has been a hub for several artificial intelligence-driven technologies to improve the viewer experience. Some of these include an AI Metaverse League, AI-based cameras to monitor facial expressions, etc. With the same goal, many Chinese platforms are looking into and planning to test the metaverse. 

Earlier in November, the Chinese government announced its plans to explore the metaverse and innovate in the virtual reality domain. The idea is to enhance VR headsets and make them more functional, including odor simulation, eye tracking, gesture tracking, and many other components.

As the FIFA World Cup is one of the biggest sporting events in the world and is around the corner, it provides a window of opportunity for many firms to test their ideas and implementations of metaverse technology.

Read More: Soccer’s Governing Body FIFA Aims to Improve Offside Decisions with AI

Migu, a Chinese platform, is developing and testing what it calls a “world-first” virtual environment for users to enjoy the tournament. Migu’s CCO, Gan Yuqing, announced the FIFA metaverse. Gan previously announced a “World Cup Music Festival” to be held in the metaverse, hosting a visitor from 2070.

ByteDance, the parent company of TikTok, also announced that it would enable its VR goggles for users to enjoy soccer matches in digital spaces. The goggles would also allow users to invite other people for a shared viewing experience.


NVIDIA Magic3D Generates 3D Mesh Models from Text


Researchers from NVIDIA have announced Magic3D, an AI model that generates 3D mesh models from text inputs. Once given a prompt, Magic3D generates a model with colored textures and contours in about 40 minutes. 

NVIDIA is positioning Magic3D as a response to Google’s DreamFusion, another text-to-3D AI model. DreamFusion generates 2D images with a text-to-image model and optimizes them into volumetric NeRF (neural radiance field) data. Magic3D uses a similar method but splits it into a two-stage process: it first produces a coarse, low-resolution model and then optimizes it to a higher resolution.

In the first stage, Magic3D uses a base diffusion model similar to the one in DreamFusion, computing gradients of the scene model via a loss defined on rendered images at a low resolution of 64 × 64. In the second stage, a latent diffusion model (LDM) is used to backpropagate gradients into rendered images at a higher resolution of 512 × 512.
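The coarse-to-fine idea behind those two stages can be shown with a toy optimization in NumPy: fit a low-resolution signal first, then upsample the result and refine it at full resolution. This is purely illustrative; Magic3D's actual stages optimize a 3D scene representation through diffusion-model losses, not a 1D array.

```python
import numpy as np

# Toy coarse-to-fine optimization in the spirit of Magic3D's two stages
# (illustrative only; not NVIDIA's actual pipeline).

target_hi = np.sin(np.linspace(0, np.pi, 64))  # "high-res" target signal
target_lo = target_hi[::8]                     # 8x-downsampled target

# Stage 1: gradient descent on a coarse 8-sample model
# (gradient of the loss 0.5 * ||coarse - target_lo||^2 is coarse - target_lo).
coarse = np.zeros(8)
for _ in range(200):
    coarse -= 0.5 * (coarse - target_lo)

# Stage 2: upsample the coarse result, then refine at full resolution.
fine = np.repeat(coarse, 8)
for _ in range(200):
    fine -= 0.5 * (fine - target_hi)

print(float(np.abs(fine - target_hi).max()))  # residual error after refinement
```

The design intuition is the same as in the paper: the coarse stage is cheap and gives the fine stage a good initialization, so most of the expensive high-resolution work is spent polishing rather than searching.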

Read More: TorchOpt: A New Python Library for Optimization

Magic3D is a significant enhancement over DreamFusion, improving several design aspects. It uses both low- and high-resolution diffusion priors to learn the 3D representation of the target content. Magic3D synthesizes content at 8x higher resolution and runs 2x faster than DreamFusion.


Researchers hope that Magic3D will enable 3D model creation without prior model training and could accelerate video game development and VR-based applications. They concluded the research paper by saying, “We hope with Magic3D, we can democratize 3D synthesis and open up everyone’s creativity in 3D content creation.”
