Researchers from the Department of Automatic Control and Systems Engineering and the Advanced Manufacturing Research Centre (AMRC) at the University of Sheffield plan to develop a VR-controlled robotic system to treat UK soldiers injured on the battlefield.
The robotic system will enable trained medics to assess soldiers through a virtual reality (VR) headset and remotely control a robot to perform medical triage. It will send photos and videos of injuries to the medics and collect patient data such as temperature, blood pressure, mouth swabs, and blood samples.
Soldiers injured in war zones are often first assessed by a medical technician, similar to a paramedic. Equipment and facilities on the battlefield are limited, and moving injured soldiers to hospital can take hours or even days. To overcome this challenge, the AMRC researchers plan to design a remotely operated robot that could save a soldier’s life in extreme situations.
Professor Sanja Dogramadzi, Head of Digital Design at the University of Sheffield AMRC, will lead the VR-controlled, remotely operated robot project. She said the remotely operated robotic system would improve safety by reducing the danger soldiers face while receiving medical treatment. The project is funded by the Defence Science and Technology Laboratory and the Nuclear Decommissioning Authority, together with the Defence and Security Accelerator.
Last year, researchers at MIT created a new kind of neural network that keeps learning while performing tasks. Dubbed the liquid neural network, this deep learning model can adjust its underlying behavior after the initial training phase, which is believed to be key to major advances in dynamic scenarios where conditions can change quickly, such as autonomous driving, controlling robots, or diagnosing medical conditions. In other words, a liquid neural network can adapt to new data inputs in real time and anticipate future behavior, allowing algorithms to make decisions based on data streams that change frequently.
The research team eventually discovered that, as the number of neurons and synapses in these models grows, they become computationally expensive and require cumbersome computer programs to solve the complex math underlying the algorithms. Because of the scale of the equations, the problems become increasingly difficult to solve, often requiring many computing steps to arrive at a solution.
On Tuesday, MIT researchers reported that they had found a way around that constraint, not by expanding the data pipeline, but by solving a differential equation that has puzzled mathematicians since 1907. The equation describes how two neurons interact through synapses and could be the key to a new class of fast artificial intelligence systems. These models are orders of magnitude quicker and more scalable than liquid neural networks, yet share the same flexible, causal, robust, and explainable properties. Because they remain small and flexible even after training, unlike many traditional models, these networks could be applied to any task that requires gaining insight into data over time.
The team calls the new network the “closed-form continuous-time” (CfC) neural network. In their paper, published in Nature Machine Intelligence, the researchers describe a class of machine learning systems called continuous-time neural networks that can handle representation learning on spatiotemporal decision-making tasks. These models are defined by continuous differential equations, which describe how the state of a system evolves over time rather than at distinct, discrete points or stages of a process. For instance, a differential equation can describe how a body X moves from point A to point B through space as time passes.
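In a continuous-time network, the same idea is applied to the hidden state itself. A generic form (an illustration of the concept, not the exact equations from the paper) is:

```latex
\frac{dx(t)}{dt} = f\big(x(t), I(t), t;\, \theta\big),
\qquad
x(t_1) = x(t_0) + \int_{t_0}^{t_1} f\big(x(t), I(t), t;\, \theta\big)\, dt,
```

where x(t) is the hidden state, I(t) is the input, and f is a small neural network with parameters θ. Evaluating the integral is what makes the state continuous in time, and it is also what makes a numerical solver necessary when no closed form is known.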
Ordinary differential equation (ODE)-based continuous neural networks are expressive models well suited to data with complicated dynamics. They enable parameter sharing, adaptive computation, and function approximation for non-uniformly sampled data by transforming the depth dimension of static neural networks and the temporal dimension of recurrent neural networks (RNNs) into a continuous vector field.
On comparatively small benchmarks, ODE-based neural networks with careful memory and gradient propagation design outperform advanced discretized recurrent models. However, because they rely on complex numerical differential equation solvers, their training and inference are slow. Consider the same body X now having to move from point A to point B via point C, then point D, and back to point A before pausing at point E; the calculations quickly become costly and complex. This becomes increasingly evident as the complexity of the data, task, and state space rises, as in open-world problems such as processing medical data, operating self-driving vehicles, analyzing financial time series, and simulating physics. In simple terms, numerical differential equation solvers impose a limit on expressive power in advanced computing applications. This restriction has significantly slowed the scaling and interpretation of many physical processes that occur in nature, such as understanding the dynamics of nervous systems.
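To make that cost concrete, here is a minimal sketch, assuming a toy hidden size and a simple fixed-step Euler solver (real systems typically use more sophisticated, often adaptive solvers), of an ODE-based recurrent cell. Note that every time step requires many evaluations of the underlying network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes chosen purely for illustration.
HIDDEN, INPUT = 32, 8
W_h = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
W_i = rng.normal(scale=0.1, size=(HIDDEN, INPUT))
b = np.zeros(HIDDEN)

def f(x, u):
    """Neural vector field f(x, I): how quickly each hidden unit is changing."""
    return np.tanh(W_h @ x + W_i @ u + b) - x   # leaky continuous-time dynamics

def ode_rnn_step(x, u, dt=1.0, solver_steps=20):
    """Advance the hidden state by dt with explicit Euler sub-steps.
    Every sub-step is one full evaluation of f, so the cost of a single
    time step grows with the number of solver steps."""
    h = dt / solver_steps
    for _ in range(solver_steps):
        x = x + h * f(x, u)
    return x

x = np.zeros(HIDDEN)
for u in rng.normal(size=(100, INPUT)):    # a toy input sequence
    x = ode_rnn_step(x, u)                 # 20 network evaluations per input
```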
The closed-form continuous-time models preserve the impressive characteristics of liquid networks without the need for numerical integration, replacing the differential equation governing each neuron’s computation with a closed-form approximation. These networks scale exceptionally well compared with other deep learning models, a significant improvement over conventional differential equation-based continuous networks. Moreover, because they are derived from liquid networks, they outperform advanced recurrent neural network models at time-series modeling.
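By contrast, a closed-form update computes the next state in a single expression. The sketch below is a simplified gated interpolation in the spirit of the CfC idea, with hypothetical layer sizes; it is not the exact formulation from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
HIDDEN, INPUT = 32, 8   # hypothetical sizes, for illustration only

def layer(out_dim):
    """A tiny random layer acting on the concatenated [state, input] vector."""
    W = rng.normal(scale=0.1, size=(out_dim, HIDDEN + INPUT))
    b = np.zeros(out_dim)
    return lambda z: np.tanh(W @ z + b)

f_gate, g_head, h_head = layer(HIDDEN), layer(HIDDEN), layer(HIDDEN)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def cfc_step(x, u, t=1.0):
    """Closed-form step: a learned, time-dependent gate blends two learned targets.
    One expression per time step, with no numerical ODE solver loop."""
    z = np.concatenate([x, u])
    gate = sigmoid(-f_gate(z) * t)          # how far the state has relaxed after time t
    return gate * g_head(z) + (1.0 - gate) * h_head(z)

x = np.zeros(HIDDEN)
for u in rng.normal(size=(100, INPUT)):
    x = cfc_step(x, u)                      # a single pass per input
```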
Closed-form continuous-time neural network models are causal, compact, explainable, and economical to train and predict with, according to MIT Professor Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and senior author of the paper. They also pave the way for reliable machine learning in safety-critical applications and can solve tasks with even fewer neural nodes, making the process quicker and less computationally costly.
In evaluations of prediction quality and task completion, the CfC has already outperformed a number of other artificial neural networks. It also runs faster and more accurately when identifying human activities from motion-sensor data, modeling the physical dynamics of a walker robot, and performing event-based sequential image processing. On a sample of 8,000 patients, the CfC’s medical predictions were 220 times faster than their equivalents.
MIT researchers are optimistic that closed-form continuous-time networks could one day be used to build models of the human brain that capture its millions of synaptic connections, something that is not currently feasible. The team also speculates that the model might be capable of out-of-distribution generalization, using visual training acquired in one environment to solve problems in a completely different one.
British Prime Minister Rishi Sunak announced a new scheme for the world’s 100 most talented young AI professionals as part of his vision to make the UK a hub that attracts the brightest talent from around the world.
He pledged to set up one of the world’s most attractive visa regimes for highly skilled people and entrepreneurs, and to use Brexit freedoms to strike trade deals with the world’s fastest-growing economies.
The UK is currently negotiating a free trade agreement (FTA) with India, which the PM has previously told Parliament he wants concluded as quickly as possible. “We simply cannot allow the world’s top AI talent to be taken by America or China,” said Sunak.
“That is why, building on the Master’s conversion courses and AI scholarships I instigated as the chancellor, we are launching a program to attract the world’s top 100 young talents on AI,” Sunak said.
Sunak said he believes these challenges can be overcome by harnessing innovation to boost economic growth, incorporating invention in public services, and helping people learn the skills to become innovators.
The FIFA World Cup 2022 in Qatar has become a hub for artificial intelligence-driven technologies aimed at improving the viewer experience, including an AI Metaverse League and AI-based cameras that monitor facial expressions. With the same goal, many Chinese platforms are exploring and planning to test the metaverse.
Earlier in November, the Chinese government announced plans to explore the metaverse and innovate in the virtual reality domain. The idea is to make VR headsets more functional, adding odor simulation, eye tracking, gesture tracking, and many other components.
As one of the biggest sporting events in the world, the FIFA World Cup provides a window of opportunity for many firms to test their ideas and implementations of metaverse technology.
Migu, a Chinese platform, plans to develop and test a “world-first” virtual environment for users to enjoy the tournament. Migu’s CCO, Gan Yuqing, announced the FIFA metaverse; Gan has also organized a “World Cup Music Festival” to be held in the metaverse, which will host a visitor from the year 2070.
ByteDance, the parent company of TikTok, also announced that it would enable its VR goggles for users to enjoy soccer matches in digital spaces. The goggles would also allow users to invite other people for a shared viewing experience.
Researchers from NVIDIA have announced Magic3D, an AI model that generates 3D mesh models from text inputs. Once given a prompt, Magic3D generates a model with colored textures and contours in about 40 minutes.
NVIDIA positions Magic3D as an answer to Google’s DreamFusion, another text-to-3D AI model. DreamFusion generates 2D images with a text-to-image model and optimizes them into volumetric NeRF (neural radiance field) data. Magic3D uses a similar method but splits it into a two-part process: it first produces a coarse, low-resolution model and then optimizes it to a higher resolution.
In the first stage, Magic3D uses a base diffusion model similar to the one in DreamFusion, computing gradients of the scene model via a loss defined on rendered images at a low resolution of 64 × 64. In the second stage, a latent diffusion model (LDM) is used to backpropagate gradients into rendered images at a higher resolution of 512 × 512.
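The shape of that pipeline can be sketched with a toy stand-in. Nothing below is NVIDIA’s code or API; the “scene”, “render”, and “prior gradient” are deliberately trivial placeholders, but the coarse-then-fine optimization loop mirrors the two stages described above:

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyScene:
    """Stand-in for a differentiable 3D representation (a coarse NeRF-like model
    in stage one, a textured mesh in stage two). Here it is just a parameter grid."""
    def __init__(self, resolution):
        self.params = np.zeros((resolution, resolution))

    def render(self, jitter):
        return self.params + jitter            # pretend view-dependent rendering

    def upsample(self, resolution):
        fine = ToyScene(resolution)
        scale = resolution // self.params.shape[0]
        fine.params = np.kron(self.params, np.ones((scale, scale)))
        return fine

def toy_prior_gradient(image, target):
    """Stand-in for the gradient a frozen diffusion prior supplies for a rendered
    image, given the text prompt (score-distillation-style guidance)."""
    return image - target

def optimize(scene, steps, lr, target):
    for _ in range(steps):
        jitter = rng.normal(scale=0.01, size=scene.params.shape)  # random "camera"
        image = scene.render(jitter)
        scene.params -= lr * toy_prior_gradient(image, target)
    return scene

# Stage 1: coarse optimization supervised at 64x64 by a base diffusion prior.
coarse = optimize(ToyScene(64), steps=200, lr=0.1, target=0.5)
# Stage 2: upsample and refine at 512x512 against a high-resolution LDM-style prior.
fine = optimize(coarse.upsample(512), steps=50, lr=0.05, target=0.5)
```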
Magic3D is a significant enhancement of DreamFusion, improving on several of its design choices. It uses both low- and high-resolution diffusion priors to learn the 3D representation of the target content, and it synthesizes content at 8x higher resolution while being 2x faster than DreamFusion.
Researchers hope that Magic3D will enable 3D model creation without prior model training and could accelerate video game development and VR-based applications. They concluded the research paper by saying, “We hope with Magic3D, we can democratize 3D synthesis and open up everyone’s creativity in 3D content creation.”
The Department of the Air Force recently awarded SandboxAQ, an enterprise SaaS company that provides governments and the Global 1000 with the cumulative benefits of AI and quantum (AQ) technology, a Phase 1 Small Business Innovation Research (SBIR) contract to carry out post-quantum cryptographic inventory analysis and performance benchmarking.
As part of the agreement, SandboxAQ will evaluate the encryption currently in use and identify software enhancements that offer an end-to-end, crypto-agile framework to protect Air Force and Space Force data networks from potential quantum technology-based attacks. Phase 1 SBIR awards typically range from $225,000 to $350,000 for projects lasting 6 to 12 months, though the company chose not to reveal the contract’s financial details.
The partnership is SandboxAQ’s first military contract since it was spun off from Alphabet in March of this year, and it forms part of the Air Force’s effort to prepare for the Quantum Computing Cybersecurity Preparedness Act, which mandates that US federal agencies upgrade to post-quantum encryption. More broadly, the collaboration between the Air Force and SandboxAQ shows that the risk quantum computing poses to current cryptography is real, and that businesses must start preparing for it.
The SandboxAQ software package simplifies the implementation of post-quantum cryptography. The startup says the package includes both conventional and quantum-resistant encryption techniques, along with a number of tools that make it simpler for businesses to integrate the algorithms into their applications.
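As an illustration of what a crypto-agile, hybrid approach can look like in practice (a generic sketch, not SandboxAQ’s actual software), a classical key exchange can be combined with a post-quantum key-encapsulation mechanism so the derived key stays secure as long as either scheme holds. The post-quantum half below is a labeled placeholder:

```python
# A minimal hybrid key-agreement sketch. The classical half uses the Python
# 'cryptography' package (X25519 + HKDF); the post-quantum half is a labeled
# placeholder, since a real deployment would call an ML-KEM/Kyber implementation.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical half: X25519 Diffie-Hellman between two parties.
alice_priv, bob_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()
classical_secret = alice_priv.exchange(bob_priv.public_key())

# Post-quantum half: HYPOTHETICAL placeholder for a shared secret produced by a
# quantum-resistant KEM; random bytes stand in for the real decapsulation output.
pq_secret = os.urandom(32)

# Crypto-agile combination: derive one session key from both secrets, so the key
# remains safe as long as at least one of the two schemes is unbroken.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-key-agreement-demo",
).derive(classical_secret + pq_secret)
```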
Following the FTX meltdown, the US agreed to begin experimenting with a central bank digital currency (CBDC) digital dollar over the following 12 weeks. On November 15, the Regulated Liability Network (RLN), a proof-of-concept digital currency platform, was made public by the New York Innovation Center (NYIC) at the Federal Reserve Bank of New York (New York Fed).
According to the New York Fed, the initiative will investigate the viability of a shared multi-entity distributed ledger on a regulated liability network that operates as an interoperable network of central bank wholesale digital money and commercial bank digital money. It will engage central banks, commercial banks, and regulated non-banks to enhance financial settlements.
As part of the pilot, major financial institutions, including BNY Mellon, Citi, HSBC, Mastercard, PNC Bank, TD Bank, Truist, U.S. Bank, and Wells Fargo, will issue tokens and settle transactions on a shared multi-entity distributed ledger, using simulated central bank reserves to represent US dollars. The technology for the pilot, which will run in a test environment, is being provided by SETL and Digital Asset.
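To illustrate the mechanics being simulated (a toy model only, not the RLN design or its actual ledger technology), the core idea is that participating banks hold simulated central bank reserves on a shared ledger, issue tokens against them, and settle transfers by updating both balances in one atomic step:

```python
from dataclasses import dataclass, field

@dataclass
class SharedLedger:
    """A toy shared ledger: banks hold simulated central bank reserves and issue
    tokens (commercial bank digital money) against them."""
    reserves: dict = field(default_factory=dict)
    tokens: dict = field(default_factory=dict)

    def issue(self, bank, amount):
        """Issue tokens backed one-for-one by the bank's reserve balance."""
        assert self.reserves.get(bank, 0) >= amount, "insufficient reserves"
        self.tokens[bank] = self.tokens.get(bank, 0) + amount

    def settle(self, payer, payee, amount):
        """Settle a payment by moving tokens and backing reserves in one atomic step."""
        assert self.tokens.get(payer, 0) >= amount, "insufficient tokens"
        self.tokens[payer] -= amount
        self.tokens[payee] = self.tokens.get(payee, 0) + amount
        self.reserves[payer] -= amount
        self.reserves[payee] = self.reserves.get(payee, 0) + amount

ledger = SharedLedger(reserves={"BankA": 1_000, "BankB": 500})
ledger.issue("BankA", 400)
ledger.settle("BankA", "BankB", 250)
```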
According to the group, the initiative will operate within a regulatory framework aligned with current laws requiring know-your-customer (KYC) and anti-money-laundering measures. The viability of expanding the platform to include additional digital assets, such as stablecoins, will also be investigated.
Following the completion of the study, the group says it will disclose the results of the pilot program, though participants are not obligated to take part in future phases.
Michelle Neal, head of the New York Fed’s Markets Group, said earlier this month that the central bank sees promise in a digital dollar as a way to shorten settlement times in currency markets. Banking authorities have long been interested in CBDCs, which, like stablecoins, are digital representations of a state’s fiat currency pegged 1:1 to it. But while digital currencies issued by central banks might aid the fight against fraud, they ultimately preserve the centralized structure of fiat currency and plainly include a mass-surveillance component.
It is unclear what kind of regulatory structure the US will adopt for cryptocurrencies, especially given that the collapse of FTX appears to have set the tone for an outright prohibition. Although the EU and UK are also investigating CBDCs, their legislative initiatives at least seem to favor crypto assets. Meanwhile, China has been testing the digital yuan in numerous regions since April, and the currency is even available to WeChat users; it recently added four provinces to its list of CBDC testing regions. In September, Australia announced a pilot project for a digital dollar based on Quorum, an enterprise-grade, private variant of Ethereum. Last month, the Reserve Bank of India (RBI) announced a pilot of its own CBDC.
The NYIC pilot initiative was introduced shortly after the center released research on its wholesale central bank digital currency initiative on November 4. Project Cedar, the first phase of that CBDC experiment, investigated foreign exchange spot trades to see whether a blockchain solution could make cross-border wholesale payments faster, cheaper, and easier to access.
In a related development, on November 10, the New York Fed and the Monetary Authority of Singapore (MAS) launched a collaborative experiment to see how wholesale CBDCs could simplify multi-currency cross-border payments.
Advait Danke, an Indian DJ, spiritual catalyst, and sound alchemist, has officially launched the ‘Resonance Living Mindful Metaverse,’ billed as the first sound-meditation metaverse to launch in India.
The newly launched metaverse is user-friendly and easy to navigate. It can be accessed on mobiles, laptops, and desktops through the Spatial app. To keep the experience hassle-free, the developers require no login and charge no subscription fee, giving everyone free access.
The platform offers immersive visuals and mindful movements that give users a spiritual and meditative experience, which can be further enhanced with a VR headset. The metaverse was co-developed with the teams at Wow Labz, Xarvel, and Metawood.
Since its launch, Advait Danke’s platform has been appreciated for its unique, engaging features and for showcasing how the combined science of sound, music, meditation, and consciousness affects the body’s and mind’s energies.
Advait also integrated blockchain technology into the metaverse through “The Sounds of Universe,” an enduring audio NFT collection based on the science of brainwave technology. The audio NFTs induce mindful musical patterns and vibrations to impact an individual’s mental and emotional state positively.
Abu Dhabi will launch a metaverse version of its entertainment capital, Yas Island, whose entertainment arena includes theme parks, homes, aquariums, and malls. The digital island will be accessible to a global audience.
Yas Island is the entertainment capital of the UAE. The country has around 200 islands, and in 2023 one more will be added: a unique one, because it will be entirely digital.
Once it is set up, audiences will be able to meet up or play games while exploring the digital island. The initiative was announced by Mohammed Abdalla Al Zaabi, CEO of Miral.
According to him, VR is the best medium for a global audience to discover Yas Island in a unique way. He added that visitors would be able to create virtual homes and even theme parks.
The central tourist attraction will be Ferrari World, where visitors will get a chance to ride the world’s fastest rollercoaster, themed around an F1 racetrack.
India will take over the chair of the Global Partnership on Artificial Intelligence (GPAI) from France today, November 21. Rajeev Chandrasekhar, Minister of State for Electronics and Information Technology, will represent India at the GPAI meeting in Tokyo for the symbolic handover from France.
The development comes as India assumes the Presidency of the G20, a league of the world’s largest economies. GPAI is an international initiative to support the human-centric development and use of artificial intelligence.
The GPAI comprises 25 members, including the US, the UK, Italy, Japan, Mexico, New Zealand, South Korea, Singapore, the European Union, Australia, Canada, France, and Germany.
India joined the GPAI in 2020 as one of its founding members. Artificial intelligence is expected to contribute $957 billion to the Indian economy by 2035, and it may add $450-500 billion to India’s GDP by 2025, accounting for 10% of the country’s $5 trillion GDP target.
India’s taking of the chair also signals that today’s world views it as a trusted technology partner, one that has consistently advocated the ethical use of technology to transform citizens’ lives.