DeepMind has introduced a framework for building AI agents that can perform human-like actions in video game worlds. With its paper titled “Improving Multimodal Interactive Agents with Reinforcement Learning from Human Feedback,” DeepMind is taking early steps toward video game AIs that understand human concepts and can interact with people on their own.
Mimicking human behavior is considerably challenging for artificial intelligence. It requires a deep understanding of natural language and situated intent. Most researchers consider it practically impossible to hand-code every nuance of such interactions. As an alternative to extensive coding, they now rely on modern machine learning so that models can learn these behaviors from data.
DeepMind developed a research paradigm that improves agent behavior through grounded, open-ended interaction with humans. Though new, the paradigm can already produce AI agents that listen, talk, search, ask questions, and navigate in real time.
DeepMind created a virtual “playhouse” with recognizable objects in randomized configurations, designed for navigation and search. The interface also includes a chat channel for unconstrained communication. The process begins with agents imbued with an unrefined set of prior behaviors interacting with people. This “prior behavior” enables humans to judge the agents’ interactions.
These judgments are then used, via reinforcement learning, to optimize the agents toward better behavior. To learn more goal-oriented behavior, for example, an agent must pursue a target object and master movement around it.
Ultimately, measuring the effectiveness of the agents in the game requires a reward model. DeepMind researchers trained a reward model on human preferences, then placed the agents in the simulator and ran them through a set of question-and-answer interactions, scoring their behavior with the reward model as the agents listened and responded in the environment.
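The paper does not ship production code, but the core loop it describes can be sketched roughly: train a small reward model on human judgments of agent behavior, then use its scores to nudge the agent’s policy with reinforcement learning. The following is a minimal, illustrative sketch (assuming trajectories summarized as feature vectors and binary good/bad judgments), not DeepMind’s actual implementation:

```python
# Illustrative sketch only (not DeepMind's code): a reward model trained on
# binary human judgments of agent behaviour, then used to score rollouts
# for a simple policy-gradient update.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a feature summary of an (instruction, observation, action) trajectory."""
    def __init__(self, feature_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features).squeeze(-1)  # one scalar score per trajectory

def train_reward_model(model, features, human_labels, epochs=10, lr=1e-4):
    """human_labels: 1.0 where annotators judged the behaviour good, else 0.0."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(features), human_labels)
        loss.backward()
        opt.step()
    return model

def reinforce_step(policy_opt, log_probs, trajectory_features, reward_model):
    """One REINFORCE-style update: increase the probability of actions taken in
    trajectories that the learned reward model scores highly."""
    with torch.no_grad():
        rewards = torch.sigmoid(reward_model(trajectory_features))  # scores in [0, 1]
    loss = -(rewards * log_probs.sum(dim=-1)).mean()
    policy_opt.zero_grad()
    loss.backward()
    policy_opt.step()
```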
The technique is still in its infancy, and the researchers welcome comments and feedback on it.
Vinci Protocol, a company providing NFT data and web3 development services, has raised US$2.1 million in a seed funding round for its NFT infrastructure. The company provides infrastructure for users, developers, and other organizations to interact and trade in the NFT space. Vinci Protocol recently launched its official mainnet after completing a CertiK audit.
In its announcement, Vinci Protocol wrote: “To all Vincians, we have completed a 2.1M seed round fundraising to build an #NFT infrastructure to empower #Web3 builders and applications.”
People at Vinci Protocol believe the future of NFTs is uncertain yet full of opportunity, as the space still holds a vast pool of untapped potential. After launching its first community NFT in 2022, the company has cemented its place in the NFT space with lending pools on the Ethereum mainnet.
The new funding will enable Vinci Protocol to expand its resources and continue developing a world-class NFT-driven product. It will also help extend the company’s NFT research into more non-standard use cases.
Florian, CEO of Vinci Protocol, said, “It’s easy to have a vision of a complete NFT infrastructure. However, our approach is to identify the most needed use cases and build them from the ground up.” He added that the company would focus on the financialization of NFTs and NFT data oracles to design more developer and organizational tools with democratized property rights.
Imagine a scenario where a robot could assemble itself, cutting out the human effort involved. Recently, MIT researchers made great strides toward developing robots that could effectively and inexpensively manufacture almost anything, including objects considerably larger than themselves, such as automobiles, buildings, and larger robots. The most recent accomplishment is detailed in a paper by professor and Center for Bits and Atoms (CBA) Director Neil Gershenfeld, doctoral student Amira Abdel-Rahman, and three other authors in the journal Nature Communications Engineering.
Robots that can put together structures on their own are called self-assembling robots. Many tend to believe that they are self-building robots, which is a legitimate interpretation of the phrase ‘self-assembling’, but it is not what people in the robotics industry mean.
The team acknowledges that it will be years before they achieve their real goal of a completely autonomous self-replicating robot assembly system that is capable of both planning the optimum building sequence and creating larger structures, including larger robots. However, the new research makes significant progress in that direction by figuring out difficult problems like when to produce more robots, how big to make them, and how to coordinate swarms of robots of various sizes to construct a building effectively without colliding with one another.
This advancement draws on years of research, including studies showing that deformable airplane wings and working race cars could be assembled from small, identical, lightweight pieces, and that some of this assembly work could be done by robots. The new work demonstrates that both the assembler bots and the components of the structure being built can be made of the same subunits, and that the robots can move autonomously in large numbers to complete large-scale assemblies swiftly.
As in earlier tests, MIT’s new self-assembling robot technology uses an assortment of small, identical subunits called voxels (the volumetric equivalent of a 2-D pixel) to create large, usable structures. This time, the team created more advanced voxels, each of which can transfer both power and data from one unit to the next, unlike the earlier voxels, which were purely mechanical and structural. This could enable structures that not only bear loads but also perform tasks such as lifting, moving, and manipulating materials, including the voxels themselves.
(Image credit: Amira Abdel-Rahman/MIT Center for Bits and Atoms)
The robots themselves are made up of a string of many voxels linked end to end. These can migrate like inchworms to the desired position, where they can grasp another voxel using attachment points on one end, connect it to the developing structure, and release it there.
Earlier, an MIT team was developing ElectroVoxels, which are tiny, intelligent, self-assembling robots created for space. These robots were tested on NASA’s vomit comet, a large padded airplane with the seats removed for the purpose of giving scientists and pilots a brief moment of zero gravity during looping parabolic flights. The goal of this research was to employ them as unique tools, reorganize mass for spinning motions that would provide a kind of artificial gravity by centrifugal force, or put mass in the way of a potentially harmful solar flare.
While previous systems designed by the institution could theoretically build arbitrarily large structures, MIT notes that once the size of a structure passed a certain threshold relative to the size of the assembler robot, the process became increasingly impractical because of the ever-longer journeys each bot had to make to carry each piece to its destination, which also drove up path-planning complexity. In other words, these systems were certainly an upgrade over conventional monolithic robots (competent but rigid) and modular robots (flexible but less capable), but because their numbers and sizes were fixed, scaling led to decreased performance and throughput.
With the new voxels available, the bots can determine when it is time to build a larger version of themselves that can travel farther and faster. While parts of a structure with lots of fine detail may call for more of the smallest robots, an even larger structure might require another stage of the same kind, with the new larger robots building even larger ones.
The new voxel-based robot system can assemble robots sequentially, recursively, and hierarchically to create larger robots. To do this, the construction is discretized into a series of simple, basic building parts that can be rearranged to produce a variety of capabilities. The discretization makes coordination, navigation, and error correction for the swarm considerably easier. An algorithm assembles the building blocks into swarms and plans the best possible construction sequence.
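MIT has not released the planner itself, but the recursive decision described here (keep building with small robots until trips get long enough that assembling a bigger robot pays off) can be illustrated with a simple heuristic. The cost model, constants, and function names below are assumptions for the sketch, not the paper’s algorithm:

```python
# Illustrative heuristic only (not MIT's actual planner): decide how many times
# a voxel swarm should pause construction to assemble a larger robot, based on
# per-voxel transport cost. The cost model and constants are assumptions.

def transport_cost(robot_size: int, structure_extent: float) -> float:
    """Rough per-voxel travel cost: a larger robot strides farther per move,
    so assume cost scales as structure size divided by robot size."""
    return structure_extent / robot_size

def plan_hierarchy(structure_extent: float, base_robot_size: int = 1,
                   build_robot_cost: float = 20.0, scale_factor: int = 2):
    """Greedily double the robot size while the per-voxel transport saving
    still outweighs the one-off cost of assembling the larger robot."""
    size = base_robot_size
    plan = [size]
    while True:
        bigger = size * scale_factor
        saving = transport_cost(size, structure_extent) - transport_cost(bigger, structure_extent)
        if saving <= build_robot_cost:
            break  # building a bigger robot no longer pays off
        size = bigger
        plan.append(size)
    return plan  # robot sizes to assemble, smallest first

print(plan_hierarchy(structure_extent=10_000.0))  # -> [1, 2, 4, 8, 16, 32, 64, 128, 256]
```

In the paper’s terms, each appended size would correspond to another hierarchical stage in which the current swarm assembles its larger successors before construction resumes.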
The hardware is based on an earlier system that used passive structural lattice voxels as a foundation for the mobility of specially designed inchworm robots that could place and rearrange additional voxels. Through targeted registration to the underlying lattice, coupling voxels with robots creates a material-robot system that enables the exact assembly of massive structures using simple robots. The new system tightens that material-robot coupling with a modular robotic toolkit in which active lattice voxels serve as the main structural building elements. Integrated with actuators, control, and power, these active voxels enable novel capabilities such as robotic self-replication and hierarchical robot construction.
The six sides of each voxel were created by laminating an acetal face to a printed circuit board (PCB) substrate. These sides are then joined together to form the whole unit cell, which includes soldered intravoxel electrical connections and epoxy reinforcement. The voxel circuit boards were fabricated with a 1.6 mm FR4 substrate and 1 oz copper layers via PCBWay custom prototyping. A Trotec Speedy-100 Flexx was used to laser-cut the acetal faces, and Loctite SF-770 primer and Loctite 401 adhesive were used to laminate the acetal faces to the voxel circuit boards.
Power, ground, and a single serial communication line are routed through each voxel face. A pair of 4.7625 mm × 3.175 mm (3/16 in. × 1/8 in.) magnets of opposite polarity creates an orientation-independent structural connection, while a 6-pin spring connector forms a hermaphroditic interface for the three electrical circuits. The maximum transmission capacity of a face-to-face connection is 8 A at 10 V, with 50 N of tensile force. The structural-robotic system is completed with supplemental active components, including an ESP32-based microcontroller with a 7.4 V lithium-polymer battery pack, two rotary actuators (an elbow that rotates parallel to the plane of attachment and a wrist that rotates perpendicular to it), and a gripper made to clamp lattice components for placement, mobility, and assembly.
The voxel-based robot assembling algorithm was governed by two conditions. First, following assembly, the new robot has to be able to move freely. Although this requirement may seem unimportant, the nature of the magnetic connections prevents assembling flat on the lattice substrate and then lifting the finished robot into place. In order to get around this, child carrier robots are built from a foundation that consists of a control voxel and a gripper that can attach to the lattice.
Second, the gripper can only directly handle the base voxels, control and power voxels, and elbow joints from the robotic construction kit. The wrist and gripper modules are both made to snap onto a free voxel face. A carrier robot first picks up one of these three component types, then maneuvers to connect wrists and grippers to the base part before placing it in the assembly, a process known as accessorizing.
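Viewed as pseudocode, the accessorizing routine amounts to a fetch-attach-place loop. The sketch below uses a hypothetical carrier-robot API invented for illustration; none of these function or part names come from the MIT paper:

```python
# Hypothetical sketch of the "accessorizing" step; the carrier-robot API and
# part names are invented for illustration, not MIT's actual control code.

BASE_PARTS = {"base_voxel", "control_power_voxel", "elbow_joint"}  # directly grippable
ACCESSORIES = ("wrist", "gripper")  # designed to snap onto any free voxel face

def accessorize_and_place(carrier, part_type, pickup_station, target_site):
    """Fetch a base part, snap a wrist and gripper onto it, then place it in the assembly."""
    if part_type not in BASE_PARTS:
        raise ValueError("the gripper can only directly handle base part types")
    part = carrier.pick_up(pickup_station, part_type)   # fetch the base component
    for accessory in ACCESSORIES:
        face = part.free_face()                         # any unoccupied voxel face
        carrier.attach(accessory, face)                 # snap-on magnetic/electrical connection
    carrier.move_to(target_site)                        # inchworm locomotion over the lattice
    carrier.place(part, target_site)                    # join the accessorized part to the new robot
    carrier.release()
```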
The team believes these robot swarms have a wide range of possible applications in sectors that currently demand huge capital investments in fixed infrastructure, or that are otherwise infeasible, including seismic metamaterials, automotive assembly lines, aircraft components, and airframes.
While the currently described algorithms are centralized, MIT also noted in the paper that scalable compilers and decentralized control mechanisms would be required as the system size increases. Although these algorithms provided useful examples, they were not shown to be optimal. More advanced path-planning and collision-avoidance strategies could shorten the construction period, and the number and location of pickup stations is a crucial design factor that significantly influences the robots’ throughput. The team will also continue working to ensure the continuity of metamaterial behavior in higher-performance structures.
Harvey, a startup founded by Winston Weinberg and Gabriel Pereyra, uses artificial intelligence (AI) to answer legal questions through a natural language interface. The startup received US$5 million in funding from the OpenAI Startup Fund.
Inspired by OpenAI’s GPT-3 text- and code-generating system, Weinberg realized such systems could be powerful in legal workflows. Pereyra said, “Our product provides lawyers with a natural language interface for their existing legal workflows.”
Harvey enables paralegals to describe tasks they wish to complete and receive the generated output. The AI saves time because lawyers can simply instruct the model instead of manually editing legal documents. Under the hood, Harvey leverages large language models to understand users’ intent and generate output.
For instance, Harvey can respond to questions like “Tell me if this clause in a lease is in violation of California law, and if so, rewrite it so it is no longer in violation.” Despite how well Harvey performs, it cannot replace human lawyers.
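Harvey’s internals have not been published, but a natural-language interface of this kind is typically built by prompting a large language model. A minimal, illustrative sketch using the OpenAI Python client follows; the model name, prompt wording, and review_clause helper are assumptions, not Harvey’s actual system:

```python
# Illustrative only: this is not Harvey's implementation. A minimal natural-language
# legal-review call using the OpenAI Python client; the model name, prompt, and
# helper function are assumptions for the sketch.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_clause(clause: str, jurisdiction: str = "California") -> str:
    """Ask the model whether a lease clause violates the given jurisdiction's law
    and, if so, to rewrite it."""
    prompt = (
        f"Tell me if this clause in a lease is in violation of {jurisdiction} law, "
        f"and if so, rewrite it so it is no longer in violation.\n\nClause:\n{clause}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; Harvey's actual models are not public
        messages=[
            {"role": "system",
             "content": "You assist licensed attorneys. Your output is not legal advice."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

# Example (requires an API key):
# print(review_clause("Tenant waives any right to the return of the security deposit."))
```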
Pereyra says that with Harvey, they want to serve as an “intermediary,” not a “replacement.” Harvey would bridge the gap between the legal and tech landscapes by making lawyers more efficient and reallocating their time to the more valuable parts of the job.
Given the intensely private nature of legal disputes, attorneys and law firms could be hesitant to grant Harvey access to case files. Additionally, language models can reproduce harmful information and social biases. In light of these concerns, Harvey comes with a disclaimer that it should not be used to provide legal advice to non-lawyers and should always be used under the supervision of licensed attorneys.
RedBrick AI, a health-tech platform that harnesses artificial intelligence, has raised $4.6 million in seed funding led by Sequoia India and Surge, its startup accelerator in Southeast Asia. Y Combinator and angel investors also took part in the round.
The company aims to facilitate solutions for building medical imaging AI. It was founded in 2021 by University of Michigan alumni Derek Lukacs and Shivam Sharma. Its SaaS platform offers web-based annotation tools for 3D and 2D data, giving experts access to specialized tooling right from their browsers.
“With the growth of AI in clinical settings, researchers need excellent tools to create high-quality models and datasets at scale. Our customers oversee this growth, pioneering everything from the automated detection of cancers to surgical robots. The new funds will be indispensable to the growth of our engineering team in India and to diversifying our suite of products,” said Sharma, the company’s CEO.
RedBrick AI’s tools are built to address challenges that are unique to medical data annotation, such as quality control, machine learning integration, and the complexity of existing annotation tools.
The company has joined a growing list of health-tech startups that have received significant funding recently, including diabetes care startup Beato and D2C health-tech startup Good Health Company.
As per a report by MIXED, a German VR publication, residents of Germany may be able to order the Quest 2 and Quest Pro by the end of this year. With the antitrust case moving toward a resolution, Meta may resume sales in Germany.
For the past two years, Meta has been blocked from selling its VR headsets in Germany over concerns about tying its social media accounts to the devices. After sales came to a halt following a probe into the company’s plan to tie Facebook accounts to Oculus, consumers had to procure the VR devices from neighboring European countries.
In the probe, Germany’s Federal Cartel Office (FCO) followed up on those concerns and pushed Meta to untie its VR devices from its social data over the company’s so-called ‘superprofiling’ of users, in which Meta pooled usage data and linked it to a single user ID for ad-targeting purposes.
On the ongoing issue, the FCO said that Meta had expressed interest in finding a middle ground on the Facebook/Oculus matter in August 2022. The suggested solution would allow users to use the Quest 2 and Quest Pro without social media accounts.
The FCO asked Meta to tweak its proposal so that users can decide how to set up the headsets with as little manipulative design as possible. It added, “Following corresponding adjustments, particularly with regard to user dialogues, the Quest 2 and Quest Pro headsets are also expected to be available in Germany soon.”
The NFT marketplace Magic Eden has announced a collaboration with Polygon, an Ethereum layer-2 scaling blockchain, to expand into the blockchain gaming and NFT ecosystems.
In addition to hosting some of the biggest Web3 game projects and publishers, including well-known brands like Ubisoft, Atari, Animoca, and Decentraland, Polygon also serves as a scaling solution for the Ethereum blockchain. It is the perfect choice for projects that need a lot of digital assets since it enables transactions that are considerably faster and cheaper than those possible on Ethereum’s own mainnet.
Since adding the Ethereum blockchain in August, Magic Eden, which was previously only recognized as the leading marketplace for Solana NFTs, has grown into a multi-chain platform. Magic Eden claims that it chose Polygon as the next addition for two main reasons: the platform’s potential for hosting massive Web3 gaming events and the rising acceptance among leading Web2 companies. Meanwhile, the Solana blockchain is facing an uncertain future since the value of its native token, SOL, has dropped by more than 50% following the collapse of one of the blockchain’s largest investors, FTX, in recent weeks.
After expanding to Polygon, Magic Eden will be able to collaborate with strategic intellectual property owners, world-class game developers, and emerging creators, who will be eager to tap into Magic Eden’s cross-chain audience.
The collaboration has already brought significant partnerships to Magic Eden, with major game development companies, including BORA by Kakao Games, IntellaX, Boomland, Block Games, Planet Mojo, and Taunt Battleworld, poised to launch on its Polygon launchpad coming out in December. These games will benefit from Magic Eden’s distribution and go-to-market competence, which includes project/concept positioning, pre-launch timetable planning, and links to Web3 groups.
Following the launch of Magic Eden’s Polygon Launchpad, a MATIC-compatible NFT market will be introduced. This marketplace will respect royalties while looking at additional cutting-edge strategies to encourage other sources of creator monetization.
After that, Magic Eden plans to roll out additional Polygon support for NFTs, including Magic Eden List (an audience targeting and allowlist tool), Drop Calendar, and analytics, which will boost NFT discovery and trading.
There was a brief decline in trading volumes last month as a result of Magic Eden switching to an optional royalty model. Despite conflicting opinions over the move away from enforced royalty payments, the integration of Polygon NFTs may help restore greater trading activity to the platform.
In July, Magic Eden announced the launch of Magic Ventures, a new fund dedicated to investing in innovative blockchain games and gaming infrastructure. In the same month, it also started Eden Games, its own gaming subsidiary, as the platform advances into blockchain gaming.
Tianjin University in China, together with the startup Suishi Intelligent Technology and CEC Cloud Brain, a joint venture between the university and the state-owned China Electronics Corporation, has developed and released a brain-computer interface (BCI) platform called MetaBCI.
Brain-computer interface technology enables signal exchange between the human brain and an external device, allowing direct human control of machines without being constrained by physical limitations. In simple terms, it is a system that converts the electrical impulses created when neurons fire and communicate into commands sent to an output device to perform the desired action.
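As a rough illustration of that signal-to-command path (and not MetaBCI’s actual API), a minimal non-invasive pipeline might band-pass-filter an EEG window, extract band-power features, and classify them into a device command. The sampling rate, frequency bands, and command labels below are placeholder choices:

```python
# Generic non-invasive BCI pipeline sketch (not MetaBCI's API): filter an EEG
# window, extract band-power features, and map the prediction to a command.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # sampling rate in Hz (placeholder)

def bandpass(eeg: np.ndarray, low: float, high: float, fs: int = FS) -> np.ndarray:
    """eeg: array of shape (channels, samples)."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def band_power_features(eeg: np.ndarray) -> np.ndarray:
    """Mean power in the mu (8-12 Hz) and beta (13-30 Hz) bands per channel."""
    mu = bandpass(eeg, 8, 12)
    beta = bandpass(eeg, 13, 30)
    return np.concatenate([np.mean(mu ** 2, axis=-1), np.mean(beta ** 2, axis=-1)])

def fit_decoder(X_train: np.ndarray, y_train: np.ndarray) -> LinearDiscriminantAnalysis:
    """X_train: labelled windows of shape (n_windows, channels, samples);
    y_train: command labels (e.g. imagined left- vs right-hand movement)."""
    feats = np.stack([band_power_features(w) for w in X_train])
    return LinearDiscriminantAnalysis().fit(feats, y_train)

COMMANDS = {0: "steer_left", 1: "steer_right"}  # placeholder output commands

def decode_command(decoder, window: np.ndarray) -> str:
    """Turn a new EEG window into a command for an output device."""
    label = decoder.predict(band_power_features(window).reshape(1, -1))[0]
    return COMMANDS[int(label)]
```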
The primary goal of BCI technology development was to enable paralyzed individuals to use their thoughts to operate assistive devices. However, new use cases are always being found. For instance, Neurable garnered much media attention in 2017 for developing the first virtual reality (VR) game ever to be controlled by the brain. Players were instructed to operate a remote-controlled car with their minds while sitting in front of a computer, wearing an EEG (electroencephalography) headset. Recently, Science Eye announced it is working on a brain-computer interface device that targets retinitis pigmentosa (RP) and dry age-related macular degeneration (AMD), both of which can cause serious vision loss.
By analyzing signals and connecting devices, MetaBCI aims to disrupt the software side of the technology. According to Xu Minpeng, a professor at Tianjin University who announced the project on Sunday at a seminar in Tianjin, the open-source platform can consolidate BCI data structures and processing pipelines and provide a common decoding-algorithm framework, offering a whole-process solution for BCI software. Xu said the system can already accommodate 14 BCI datasets, apply 16 data analysis methods, and use 53 decoding models, and the team plans to keep adding capabilities.
There are typically two kinds of brain-computer interface: invasive and non-invasive. Invasive BCI involves surgically implanting electrodes or other specialized equipment into the human brain or onto certain neuron sets. In contrast, non-invasive BCI devices are simple to install and remove. These devices monitor and record brain activity using sensors placed on or close to the head. Researchers often favor non-invasive BCI since it is the safer and more affordable option.
MetaBCI, written in the popular Python programming language, is now available on GitHub, a code-hosting site for version control and collaboration. According to its GitHub introduction, the platform only applies to non-invasive BCI devices and excludes, for instance, chips placed under a person’s scalp.
Many people hesitate to contact a digital marketing agency. Several thoughts run through business owners’ minds while making this decision. It may not be an easy sum of money to spend, but it is worth it when you see the results. Getting digital marketing from edgeonline.com.au or similar services can be an excellent choice if you are looking forward to business expansion. The most important suggestion to remember is to choose an experienced agency.
A digital marketing agency can help you with SEO, Google Ads, social media marketing, and more. Select an expert who offers complete transparency about their marketing operations. A poor first experience with the wrong agency might leave you hesitant to contact another. However, the right services will help you regain confidence in your business and in digital marketing tools. From graphic design to website development, much can be accomplished with the help of a digital marketing firm. Read on to learn about the top benefits of working with one.
Perks of Working With a Digital Marketing Agency
If you are still unsure about hiring an agency, the points below will help you make a sound decision.
Staying on Top of the Latest Trends
If you are aware of trends and algorithm updates in the digital world, you already know how important they are. There are different aspects of digital marketing, and only experts can implement them correctly. When your business is in the hands of a marketing firm, it is their responsibility to ensure you stay on top of the latest trends. They will employ different strategies tailored to your business and help it stay ahead.
Extended Marketing Team
Hiring a digital marketing agency will step up your game if you already have an internal team working on marketing. You will have an extended team working toward a common goal: acquiring more customers at a higher conversion rate. Even if you don’t have an internal team, hiring a firm ensures you have specialists on your side. It is often more convenient and wiser for small and medium-sized companies to hand over their marketing responsibilities to a digital marketing agency.
Getting Advanced Insights
There are so many perks to working with a team of experts; one is getting access to advanced insights. They will help you get more details about your customers and overall performance on the digital platforms. These tools can make or break your marketing campaign. They can help you dive deeper into your data and offer suggestions that can be game-changers.
Getting Accountability
A quality agency will be accountable and reliable. Reliability means you can call the experts and learn what is going on with the strategy. They are answerable to you and can tell you why an ad is not working out. If a paid campaign doesn’t achieve the desired results, you have someone to ask questions of and hold accountable. A reputable agency will deliver the desired results and be transparent about its operations.
Bottom Line
Now that you know the advantages of having a digital marketing agency on your team, it’s time to get started. You can learn about digital marketing from edgeonline.com.au or a similar provider. Incorporating these strategies into your business operations will help you achieve your sales targets.
Times have changed, and they demand new tools and techniques for business expansion. Digital marketing is one of the essential tools in today’s times. Your marketing strategy is incomplete without it. So, contact a marketing expert and see how it makes a difference in the overall business.
On Tuesday, the UK government’s Department for Business, Energy and Industrial Strategy (BEIS) announced an innovation program to support artificial intelligence for reducing carbon emissions. The program, known as AI for Decarbonization, is part of the UK government’s Net Zero Innovation Portfolio.
The program comprises two separate grant funding streams that will be launched in two initial phases. Stream 1, worth up to £500,000, will co-fund a virtual center of excellence on AI innovation and decarbonization until March 2025. Stream 2, worth up to £1 million, is designed to fund projects that advance the development of AI technologies to enable decarbonization.
Science Minister George Freeman said, “The AI for decarbonization program offers an exciting opportunity to use and develop the UK’s AI expertise. Putting AI into action can help save energy costs for businesses and households.”
The program aims to stimulate further innovation in the UK’s AI sector to drive growth and achieve net zero ambitions by encouraging collaboration across the technology, energy, and industry sectors. The AI for decarbonization program builds on ideas developed in the National AI Strategy published last year.