
Bionaut Labs Develop Robots to Deliver Drugs Directly into the Brain


Bionaut Labs, whose team includes researchers behind the technology underlying Apple’s Face ID, has developed robots that deliver drugs directly into the brain. The trials aim to use tiny robots to deliver drugs that treat certain types of brain tumors at complex locations, as well as a rare neurological disorder called Dandy-Walker Syndrome. The robots will poke holes in the located cyst and release a drug at the targeted area.

Bionaut Labs is an LA-based research lab working to revolutionize treatments that require medical professionals to reach deep locations in the human body safely and precisely. The company has raised approximately US$43.2mn in a funding round led by Khosla Ventures to fund its latest clinical trials.

Read More:  DeepMind Builds AI Agents That Can Perform Actions in Video Game Worlds

Michael Shpigelmacher, Bionaut’s chief executive, said that the robots have the potential to become a “holy grail,” opening more pathways to treating brain diseases, diseases across the central nervous system, and much more.

These robots, a few millimeters long with a robust micro-magnet, could also perform biopsies. They use external control frameworks and, under predetermined magnetic fields, poke a hole in the targeted area and release the drug.


How’s AI changing the fashion industry narrative?


Fashion is one of the largest industries globally. The revenue in the global apparel market is expected to be $2.25 trillion by 2025. However, it’s not the first industry that pops into your head when you think about artificial intelligence.

Yet brands and designers that embrace the latest technology to diversify design and production have a greater chance of coming out on top in the fast-changing fashion world. Here are a few ways artificial intelligence is transforming the apparel industry.

Trend Forecasting with Machine Learning

Fashion brands are reshaping their product design and development approach by predicting what customers want to wear next, and artificial intelligence is helping them do exactly that. Trend forecasting is usually labor-intensive, involving digital or manual observation and data collection from fashion influencers and designers. By gathering data directly from users, brands like Stitch Fix and Finery can easily access the information they need to plan the styles people will love and the quantities to manufacture.

Stylumia deploys its machine-learning platform to help fashion and lifestyle brands forecast demand, manage inventory, and spot trends so they can make informed business decisions. Shoppers can also benefit directly from AI during their online browsing experiences. True Fit uses behind-the-scenes data to help customers find the perfect fit, whereas the B2B solution Virtusize helps brands build virtual sizing tools to raise customer satisfaction and reduce returns.

Read More: US Air Force Teams Up With SandboxAQ For Post-Quantum Cryptography Deal

Virtual Models for Designers

AI in the fashion industry is setting a unique precedent for models. Virtual models not only help designers showcase their latest creations worldwide, but they also reduce waste by showing clothes that have not yet been manufactured.

Shudu Gram is a CGI model developed by Cameron-James Wilson, a visual artist and fashion photographer. Since her creation, Shudu has gained a following of over 200,000 on Instagram. Many followers assumed she was an actual human after Wilson launched her online presence, and they were surprised to find out that Shudu is entirely digital. Wilson is now the CEO of the world’s first all-digital modeling agency, The Diigitals.

Sustainable Digital Fashion

Fashion is one of the world’s most polluting industries. As a result, several companies are using technology to make fashion more sustainable and eco-friendly. The Fabricant creates interactive experiences for brands through photorealistic 3D fashion designs and animation. The clothes are never made in the physical world, making this a more sustainable way for companies of all sorts to make a statement without creating a large footprint.

Meanwhile, with DressX, one can buy digital clothes for an online persona on social media: simply upload a photo of yourself, pay for the garment, and get back an image of yourself wearing the new item. You’ll never see the article in person or hang it in your closet, but you can give your online persona a makeover while cutting down on textile waste in the fashion industry.

Conclusion

It is evident from the above-mentioned examples that technological advances are transforming fundamental aspects of the fashion industry, from the initial sketches to fashion shows and individual online shopping experiences. Predictive modeling and automation are creating better customer service experiences, while the industry takes innovative steps to reduce waste through digital fashion. With the help of AI in the fashion industry, we will soon be able to purchase a new outfit for our Zoom persona and start sporting the latest in fashion on those remote conference calls.


Does India need the RBI-issued e-rupee? 


The Reserve Bank of India (RBI) recently rolled out its first-ever pilot program to review and improve the functionality of India’s digital currency, the e-rupee, launching it on November 1 for the wholesale segment. Although India’s digital currency is gaining traction, some are still asking what the e-rupee is, how it differs from cryptocurrency, and, most importantly, how it will benefit the Indian population. Let’s have a look.

What is a digital currency?

Digital currency is an electronic form of money that can be used for contactless transactions. In India, the Central Bank Digital Currency (CBDC), or e-rupee, is a digital form of the rupee issued by the central bank, i.e., the RBI. According to the RBI, “CBDC is the legal tender issued by the central bank in a digital form. It is similar to a fiat currency and is exchangeable one-to-one with the same. Only its form is different.” As per the central bank, there will be two types of digital currency: Retail CBDC (e₹-R), which will be available to the public, and Wholesale CBDC (e₹-W), which is designed for restricted access by select financial institutions.

How is digital currency different from cryptocurrency?

A CBDC cannot be compared to cryptocurrencies exactly. “Unlike cryptocurrencies, a CBDC is not a commodity or claims on commodities/digital assets. Cryptocurrencies have no issuer. They are not money and certainly not a currency, as the word is understood historically,” said the RBI.

Read More: FIFA Will Launch An AI Metaverse League In Collaboration With Altered State Machine

A CBDC is the digital version of the paper currency issued by a central bank such as the RBI and will be exchangeable with cash. The digital rupee, issued by the RBI, will serve the same function as cash; however, it will not be a decentralized asset like cryptocurrencies.

The digital rupee will be a currency issued by the central bank, which remains responsible for managing and governing the asset, meaning one can use it to buy whatever they want. Through blockchain technology, a person can safely send money to another person without going through a bank or financial services provider. In the case of digital currency, a blockchain acts as a decentralized and distributed digital ledger that records transactions across many computers. The record cannot be altered retroactively without altering all subsequent blocks and obtaining network consensus. A blockchain-based CBDC would enable the central bank to control the currency while protecting the privacy and independence of its use for end users.
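The tamper-evidence property described above can be sketched in a few lines of Python. The following is an illustrative toy of a hash-linked ledger, not the RBI’s actual design; all function names and the block layout are invented for this example:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, which include the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    """Append a block whose hash commits to the entire prior history."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev_hash": prev, "transactions": transactions}
    block = dict(body, hash=block_hash(body))
    chain.append(block)

def is_valid(chain: list) -> bool:
    """Check every hash link; a retroactive edit breaks all later links."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i > 0 else "0" * 64
        body = {"prev_hash": block["prev_hash"],
                "transactions": block["transactions"]}
        if block["prev_hash"] != expected_prev or block["hash"] != block_hash(body):
            return False
    return True
```

Changing a transaction in any earlier block invalidates every hash that follows it, which is why a retroactive edit cannot go unnoticed without rewriting all subsequent blocks.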

Does India need digital currency?

The RBI’s most important reason for launching a digital rupee is to push India forward in the virtual currency race. The e-rupee will also reduce transaction costs. A digital currency makes it easier for the government to access all transactions within authorized networks and to control how money enters and leaves the country. It is hard to escape the gaze of the government with a CBDC; moreover, unlike physical notes, it can neither be torn, burnt, physically damaged, nor lost, making it more durable. Using the digital rupee can also make the interbank market more efficient and help reduce dependence on the dollar.

Besides the above-mentioned advantages, the digital rupee will maximize transparency and efficiency through blockchain technology, which also enables ledger maintenance and real-time tracking. The payment system will be available to retail and wholesale customers 24/7. Indian buyers can pay without an intermediary, and one does not have to open a bank account to use the digital rupee. Lower transaction costs, real-time account settlements, fast cross-border transactions, and no risk of volatility are some of the other benefits of the e-rupee.

But with popular payment systems like UPI around, can CBDCs be a game changer? According to an RBI survey, cash remains the most preferred mode of payment for receiving and sending money for regular expenses. In India, cash is used predominantly for small-value transactions (amounts up to INR 500).

Conclusion

By introducing the digital rupee, the RBI aims to address problems associated with existing physical currency and cross-border transactions. Converting money into foreign currency and transferring it across borders is expensive and tedious. Instant cross-border transfers enabled by the digital rupee are set to make banks’ cash operations and management more seamless.

In India, cash placement and tracking are a challenge. A CBDC can address anonymity and resolve the issue in a non-intimidatory way, as well as reduce the demand for cash. The government will save the operational costs of storing, printing, and distributing cash, advancing its vision of a cashless economy. Considering all the benefits the e-rupee will render, it is clear that India does need a digital currency.


Korean researchers develop a new deep-learning model to generate social behaviors in robots

Researchers at the Electronics and Telecommunications Research Institute (ETRI), Korea, have developed a new deep-learning model to produce engaging social behaviors in robots, such as hugging or shaking someone’s hand. Their research is presented in a paper on arXiv.

According to the researchers, deep learning techniques have shown exciting results in computer vision and natural language processing. They wanted to apply deep learning to social robotics to allow robots to learn social behavior from human interactions.

Read more: The University of Birmingham and IIT Madras launch Master’s programs

The deep-learning model’s architecture combines the sequence-to-sequence model introduced by Google researchers with a generative adversarial network (GAN), a machine-learning setup in which two neural networks compete to become more accurate in their predictions. The architecture is trained on the AIR-Act2Act dataset, which consists of 5,000 human interactions occurring in 10 different scenarios.
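The paper’s actual model pairs trained sequence-to-sequence networks with a GAN objective; the pure-Python sketch below only illustrates the data flow of that kind of generator and discriminator. All names, sizes, and arithmetic here are invented toy stand-ins, not the researchers’ code:

```python
import math

SEQ_LEN, POSE_DIM, BEHAVIOR_DIM = 10, 8, 4  # illustrative sizes only

def encode(human_poses):
    """Seq2seq encoder (toy): summarize a pose sequence into a context vector."""
    context = [0.0] * POSE_DIM
    for pose in human_poses:
        context = [c + p for c, p in zip(context, pose)]
    return [c / len(human_poses) for c in context]

def generate(human_poses, weights):
    """Generator: decode a robot behavior sequence from the context vector.

    `weights` stands in for learned decoder parameters
    (BEHAVIOR_DIM rows of POSE_DIM values)."""
    context = encode(human_poses)
    return [[sum(w * c for w, c in zip(row, context)) for row in weights]
            for _ in range(SEQ_LEN)]

def discriminate(behavior):
    """Discriminator (toy): probability that a behavior looks human-like."""
    score = sum(sum(frame) for frame in behavior) / (SEQ_LEN * BEHAVIOR_DIM)
    return 1.0 / (1.0 + math.exp(-score))
```

In an adversarial setup, the generator is trained to push the discriminator’s score toward 1 on generated behaviors, while the discriminator learns to separate generated behaviors from the human interactions in the dataset.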

The Korean researchers have tested the new deep-learning model on a simulated version of Pepper, a humanoid robot. The model generates five non-verbal behaviors for robots: bowing, staring, shaking hands, blocking their face, and hugging. The new model will help make social robots more adaptive and socially responsive. In the future, it could be tested on many robotics systems, such as home service robots, guide robots, delivery robots, educational robots, and more.


The University of Birmingham and IIT Madras launch Master’s programs

The University of Birmingham and IIT Madras have announced the establishment of joint master’s programs in data science, biomedical engineering, and energy systems.

The two universities have decided to launch the first postgraduate program next year before developing further study programs in subsequent years. The partnership was finalized by Professor Adam Tickell, Principal and Vice-Chancellor of the University of Birmingham, during a visit to Chennai.

Read more: UK government launches a new innovation program, AI decarbonization

The joint master’s programs will be carried out at the campuses of both institutions, and mutual recognition of academic credits will lead to a single degree certificate. As per IIT Madras, the joint master’s program is the first education partnership at the master’s level between any UK Russell Group university and an IIT. The new partnership will give students the academic flexibility to learn and work with the latest technology to define the future of global engineering.

Adam Tickell stated that the University of Birmingham is a global ‘civic’ institution committed to shaping education and research partnerships in India. The new postgraduate programs will offer students better opportunities to pursue a world-class education and have their educational achievements recognized by both institutions.


DeepMind Builds AI Agents That Can Perform Actions in Video Game Worlds


DeepMind has introduced a framework to build AI agents that can perform human actions in video game worlds. With its paper titled “Improving Multimodal Interactive Agents with Reinforcement Learning from Human Feedback,” DeepMind is putting together early steps in building video game AIs that are familiar with human concepts and can interact with people on their own.

Mimicking human behavior is considerably challenging for artificial intelligence, as it requires a deep understanding of natural language and situated intent. Most researchers agree that hand-coding all the nuances of interaction is practically impossible, so, as an alternative to extensive coding, they are now turning to modern machine learning to make models learn from data.

DeepMind developed a research paradigm that enables agent behavior to improve via grounded and open-ended human interaction. Although the paradigm is new, it can already create AI agents that listen, talk, search, ask questions, and navigate in real time.

Read More: Harvey Uses AI to Answer Legal Questions, Receives Funding from OpenAI

DeepMind created a virtual “playhouse” with recognizable objects and random configurations designed for navigation and search. The interface also includes a chat for unconstrained communication. The process begins with agents imbued with an unrefined set of prior behaviors interacting with people; this “prior behavior” enables humans to judge the agents’ interactions.

These human judgments are then used, via reinforcement learning, to produce better agent behaviors. To learn more goal-oriented behavior, the AI agent must also pursue an object and master movements around it.

Ultimately, a reward model is necessary to measure the effectiveness of the agents in the game. DeepMind researchers trained a reward model on human preferences, then placed the agents in a simulator and ran them through a question-answer set, with the reward model scoring their behavior as the agents listened and responded in the environment.
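One common way to train a reward model from human preference judgments is a Bradley-Terry style logistic loss over pairs of behaviors, where the preferred behavior should receive the higher score. The sketch below is a minimal illustration of that general technique, not DeepMind’s implementation; the linear model, function names, and hyperparameters are all invented:

```python
import math

def reward(features, w):
    """Linear reward model (toy): score a behavior's feature vector."""
    return sum(wi * fi for wi, fi in zip(w, features))

def train_reward_model(preferences, dim, lr=0.1, epochs=200):
    """Fit weights so human-preferred behaviors score higher.

    `preferences` is a list of (preferred_features, rejected_features)
    pairs collected from human judgments. Uses gradient ascent on the
    log-likelihood of P(preferred) = sigmoid(r_preferred - r_rejected)."""
    w = [0.0] * dim
    for _ in range(epochs):
        for good, bad in preferences:
            p = 1.0 / (1.0 + math.exp(reward(bad, w) - reward(good, w)))
            scale = lr * (1.0 - p)  # gradient of the logistic loss
            w = [wi + scale * (g - b) for wi, g, b in zip(w, good, bad)]
    return w
```

Once fitted, such a model can score new behaviors automatically, providing the reinforcement-learning signal without a human in the loop for every episode.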

The technique is still in its infancy, and researchers welcome all comments and feedback on the same.


Vinci Protocol Raised US$2.1M in Seed Funding for its NFT Infrastructure


Vinci Protocol, a company providing NFT data and web3 development services, raised US$2.1M in a seed funding round for its NFT infrastructure. The company provides infrastructure for users, developers, and other organizations to interact and trade in the NFT space. Vinci Protocol recently launched its official mainnet after completing a CertiK audit.

People at Vinci Protocol believe that the future of NFTs is uncertain yet full of opportunity. After launching its first community NFT in 2022, the company has cemented its place in the NFT space with lending pools on the Ethereum mainnet.

Read More: Meta AI predicts over 600M protein structures with ESMFold, approximately 60x faster than DeepMind’s AlphaFold AI

The new funding will enable Vinci Protocol to expand its resources and continue developing a world-class NFT-driven product. It will also help extend the company’s NFT research into more non-standard use cases.

Florian, CEO of Vinci Protocol, said, “It’s easy to have a vision of a complete NFT infrastructure. However, our approach is to identify the most needed use cases and build them from the ground up.” He added that the company will focus on the financialization of NFTs and NFT data oracles to design more tools for developers and organizations with democratized property rights.


MIT Ups the Robotics Domain With Self-Assembling Robots

Credits: MIT

Imagine a robot that could assemble itself, eliminating the human effort otherwise required. MIT researchers have recently made great strides toward developing robots that could effectively and inexpensively manufacture almost anything, even objects considerably larger than themselves, such as automobiles, buildings, and larger robots. The most recent accomplishment is detailed in a paper in the journal Nature Communications Engineering by professor and Center for Bits and Atoms (CBA) director Neil Gershenfeld, doctoral student Amira Abdel-Rahman, and three other authors.

Robots that can put together structures on their own are called self-assembling robots. Many tend to read the phrase as meaning robots that build themselves, which is a legitimate interpretation of ‘self-assembling’ but not what people in the robotics industry mean.

The team acknowledges that it will be years before they achieve their real goal of a completely autonomous self-replicating robot assembly system that is capable of both planning the optimum building sequence and creating larger structures, including larger robots. However, the new research makes significant progress in that direction by figuring out difficult problems like when to produce more robots, how big to make them, and how to coordinate swarms of robots of various sizes to construct a building effectively without colliding with one another.

This advancement draws on years of research, including studies showing that deformable airplane wings and working race cars could be put together from small, identical, lightweight pieces, and that some of this assembly work might be done by robots. The new work demonstrates both that the assembler bots and the components of the structure being built can be made of the same subunits, and that the robots can move autonomously in vast numbers to complete large-scale assemblies swiftly.

As in earlier tests, MIT’s new self-assembling robot technology uses an assortment of small identical subunits called voxels (the volumetric equivalent of a 2D pixel) to create large, usable structures. This time, however, the team has created advanced voxels that can each transfer both power and data from one unit to the next, unlike the purely mechanical, structural voxels of earlier work. This might enable the building of structures that can not only bear loads but also perform tasks, such as lifting, moving, and manipulating materials, including the voxels themselves.

Credit: Amira Abdel-Rahman/MIT Center for Bits and Atoms

The robots themselves are made up of a string of many voxels linked end to end. These can migrate like inchworms to the desired position, where they can grasp another voxel using attachment points on one end, connect it to the developing structure, and release it there.

Earlier, an MIT team developed ElectroVoxels, tiny, intelligent, self-assembling robots created for space. These robots were tested on NASA’s “vomit comet,” a large padded airplane with the seats removed that gives scientists and pilots brief moments of zero gravity during looping parabolic flights. The goal of that research was to employ the robots as reconfigurable tools, to reorganize mass for spinning motions that would provide a kind of artificial gravity through centrifugal force, or to put mass in the way of a potentially harmful solar flare.

Read More: MIT Researchers Solved Differential Equation Problem in Liquid Neural Network

While previous systems designed by the institution could theoretically build arbitrarily large structures, MIT notes that once the size of those structures reached a certain threshold relative to the size of the assembler robot, the process became impractical due to the ever-longer journeys each bot had to traverse to transport each piece to its destination, which also increased path-planning complexity. In other words, these systems were certainly an upgrade over conventional monolithic robots (competent but rigid) and modular robots (flexible but less capable), but because their numbers and sizes were fixed, scaling led to decreased performance and throughput.

With the new voxels available, the bots can determine when it is time to build a larger version of themselves that can travel farther and faster. While parts of a building with lots of fine detail may need more of the smallest robots, an even larger structure might call for yet another similar stage, with the new larger robots generating even larger ones.

The new voxel-based robot system can assemble robots sequentially, recursively, and hierarchically to create larger robots. To do this, the construction is discretized into a series of simple, basic building parts that can be rearranged to generate a variety of capabilities. This discretization makes the coordination, navigation, and error correction of the swarm considerably easier. An algorithm assists with component composition, assembling the building blocks into swarms and planning the best possible construction path.
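The trade-off behind the decision to build a larger robot, namely ever-longer per-voxel traversals versus the one-time cost of assembling a bigger robot, can be illustrated with a toy cost model. The formulas, costs, and function names below are invented for illustration and are not taken from the paper:

```python
def assembly_steps(structure_size, robot_size):
    """Toy cost model: each voxel placement needs a traversal whose length
    scales with the structure-to-robot size ratio."""
    voxels = structure_size ** 3
    return voxels * (structure_size / robot_size)

def plan_scales(structure_size, robot_size=1, upscale_cost=500.0):
    """Greedily decide when it pays to build a robot twice as large.

    Returns the sequence of robot sizes used; `upscale_cost` is the
    assumed one-time cost of assembling the next-larger robot."""
    scales = [robot_size]
    while True:
        current = assembly_steps(structure_size, robot_size)
        doubled = upscale_cost + assembly_steps(structure_size, robot_size * 2)
        if doubled >= current:
            break  # a bigger robot no longer saves steps
        robot_size *= 2
        scales.append(robot_size)
    return scales
```

Under this model, a small structure is assembled by the smallest robot alone, while a large one triggers a hierarchy of progressively larger robots, mirroring the recursive staging the researchers describe.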

The hardware is based on an earlier system that used passive structural lattice voxels as a foundation for the mobility of specially designed inchworm robots that could place and rearrange additional voxels. Through targeted registration to the underlying lattice, coupling voxels with robots creates a material-robot system that enables the precise assembly of massive structures using simple robots. The new system improves the material-robot connection by introducing a modular robotic toolkit with active lattice voxels serving as the main structural building elements. When these active voxels are integrated with actuators, control, and power, they enable novel capabilities such as robotic self-replication and hierarchical robot construction.

Read More: Humanoid Robots: Hype for Economic Gains or threat to Humans?

The six sides of each voxel were created by laminating an acetal face to a printed circuit board (PCB) substrate. These sides are then joined together to form the whole unit cell, which includes soldered intra-voxel electrical connections and epoxy reinforcement. The voxel circuit boards were printed on a 1.6 mm FR4 substrate with 1 oz copper layers using PCBWay custom prototyping. A Trotec Speedy-100 Flexx was used to laser-cut the acetal faces, and Loctite SF-770 primer and Loctite 401 adhesive were used to laminate the acetal faces to the voxel circuit boards.

Power, ground, and a single serial communication line are routed through each voxel face using a pair of 4.7625 mm × 3.175 mm (3/16 in. × 1/8 in.) magnets of opposite polarity to create an orientation-independent structural connection, while a 6-pin spring connector creates a hermaphroditic interface for the three electrical circuits. A face-to-face connection can transmit a maximum of 8 A at 10 V and withstand 50 N of tensile force. The structural-robotic system is then finished with supplemental active components: an ESP32-based microcontroller with a 7.4 V lithium-polymer battery pack; two rotary actuators, an elbow that rotates parallel to the plane of attachment and a wrist that rotates perpendicular to it; and a gripper made to clamp lattice components for placement, mobility, and assembly.

The voxel-based robot-assembly algorithm is governed by two conditions. First, following assembly, the new robot has to be able to move freely. Although this requirement may seem trivial, the nature of the magnetic connections prevents assembling a robot flat on the lattice substrate and then lifting the finished robot into place. To get around this, child carrier robots are built from a foundation consisting of a control voxel and a gripper that can attach to the lattice.

Second, the gripper can only directly manipulate the base voxels, control and power voxels, and elbow joints from the robotic construction kit. The wrist and gripper modules are both made to snap onto a free voxel face. A carrier robot first picks up one of those three component types, then maneuvers to connect wrists and grippers to the base part before placing it in the assembly, a process known as accessorizing.

The team believes that these robot swarms have a wide range of possible applications in sectors that currently demand huge capital investments in permanent infrastructure or are otherwise infeasible, including seismic metamaterials, automotive assembly lines, aircraft components, and airframes.

While the currently described algorithms are centralized, MIT also noted in the paper that as system size increases, scalable compilers and decentralized control mechanisms will be required. Although these algorithms provided useful examples, they were not shown to be optimal: more advanced path-planning and collision-avoidance strategies could shorten the construction period, and the number and location of pickup stations is a crucial design factor that significantly influences robot throughput. Further, the team will continue working to ensure the continuity of metamaterial behavior in higher-performance structures.


Harvey Uses AI to Answer Legal Questions, Receives Funding from OpenAI


Harvey, a startup founded by Winston Weinberg and Gabriel Pereyra, uses artificial intelligence (AI) to answer legal questions using a natural language interface. The startup received funding of US$5m from the OpenAI Startup Fund. 

After being inspired by OpenAI’s GPT-3 text- and code-generating system, Weinberg realized such systems could be influential in legal workflows. Pereyra said, “Our product provides lawyers with a natural language interface for their existing legal workflows.”

Harvey enables paralegals to describe the tasks they wish to complete and receive the generated output. The AI saves time because lawyers can simply instruct the model instead of manually editing legal documents. Harvey leverages large language models to understand users’ intent and generate output.

Read More: Meta Introduces CICERO, the First AI That Plays Diplomacy at a Human Level

For instance, Harvey can respond to questions like “Tell me if this clause in a lease is in violation of California law, and if so, rewrite it so it is no longer in violation.” Despite how well Harvey performs, it cannot replace human lawyers. 

Pereyra says that with Harvey, they want to serve as an “intermediary” and not as a “replacement.” Harvey would bridge the legal and tech landscape gap by making lawyers more efficient and reallocating their time to more valuable parts of the job. 


Given the intensely private nature of legal disputes, attorneys and law firms could be hesitant to grant Harvey access to case files. Additionally, language models are prone to spreading harmful information and social biases. In light of these concerns, Harvey carries a disclaimer that it should not be used to provide legal advice to non-lawyers and should always operate under the supervision of licensed attorneys.


Redbrick AI raises $4.6 million to harness AI in healthcare


RedBrick AI, a health-tech platform that harnesses artificial intelligence, has raised $4.6 million in seed funding led by Sequoia India and Surge, its startup accelerator in Southeast Asia. Y Combinator and angel investors also took part in the round.

The company aims to facilitate solutions for building medical imaging AI. It was founded in 2021 by University of Michigan alumni Derek Lukacs and Shivam Sharma. Its SaaS platform offers web-based annotation tools for 2D/3D data, giving experts access to specialized tooling right from their browsers.

“With the growth of AI in clinical settings, researchers need excellent tools to create high-quality models and datasets at scale. Our customers oversee this growth, pioneering everything from the automated detection of cancers to surgical robots. The new funds will be indispensable to the growth of our engineering team in India and to diversify our suite of products,” said Sharma, the company’s CEO.

Read More: NeurIPS 2022 Announces Winner Of The Test Of Time Award

RedBrick AI’s tools are built to address challenges that are unique to medical data annotation, such as quality control, machine learning integration, and the complexity of existing annotation tools.

The company has joined a growing list of health-tech startups that have received significant funding recently, including diabetes care startup Beato and D2C health-tech startup Good Health Company. 
