
Meta to build a Digital Voice Assistant for Metaverse


Meta, formerly known as Facebook, announced its plan to build a digital voice assistant to support the growth of, and demand for, the metaverse. 

Meta claims that the digital voice assistant will allow users to interact with physical gadgets such as the company’s Portal video-calling device and augmented-reality glasses without using their hands. 

The announcement was made by the CEO of Meta, Mark Zuckerberg, during a recently held live event. A Meta representative mentioned that this is the first time Meta has dedicated an entire event to presenting its AI developments. 

Read More: Meta announces Free AI Learning Alliance to help Talents enter AI Industry

Meta wants to develop a robust digital assistant that will be able to detect contextual cues in conversations, as well as other data points about our physical bodies, such as our facial expressions, hand movements, and more. 

According to Zuckerberg, in order to assist consumers in navigating this new online world, digital assistants will need to “understand how people do.” 

Zuckerberg said, “When we have glasses on our faces, that will be the first time an AI system will be able to really see the world from our perspective – see what we see, hear what we hear, and more.” 

Apart from a digital voice assistant, Meta is also building a new universal language translator for the metaverse to streamline user interactions in the virtual world. 

The AI-powered translation system would not only be able to provide instantaneous translation of widely spoken languages but also dialects that do not have a standardized writing format. 

As of now, Meta has not decided on a name for its new digital assistant, but it calls the development program ‘Project CAIRaoke.’ 

People will soon be able to access Zuckerberg’s envisioned metaverse hands-free by wearing a pair of smart glasses, thanks to AI technology that can learn and anticipate people’s behavior.


WellSky to Acquire TapCloud to Strengthen Patient Engagement Technology


Healthcare software and services company WellSky has announced plans to acquire TapCloud, a leading online healthcare platform, to further strengthen its patient engagement technology. 

The acquisition will allow WellSky to use TapCloud’s expertise in virtual patient engagement technology to provide better services to its customers. 

WellSky plans to integrate TapCloud with its in-house developed healthcare technology solutions to increase its capabilities. To date, more than 5 million caregivers use WellSky’s healthcare solutions each day.

Read More: Meta plans to build a Universal Language Translator for Metaverse 

TapCloud’s AI-driven, interoperable platform provides real-time, patient-generated information, allowing providers to implement care interventions focused on minimizing avoidable hospital readmissions and emergency care. 

TapCloud’s technology will considerably help WellSky improve consumer experience and deliver value-based care. Using TapCloud’s patented, EHR-agnostic technology, patients can share their symptoms and other pertinent data with clinicians through virtual visits, secure messaging, and remote symptom screening. 

CEO of WellSky, Bill Miller, said, “WellSky is connecting every part of health and community care, and TapCloud represents a significant addition to our suite of solutions. By adding these robust capabilities, WellSky will further extend our position as the leading technology and analytics partner across the continuum.” 

He further added that WellSky and TapCloud would work together to help providers make evidence-based decisions using actionable insights. 

United States-based online healthcare platform TapCloud was founded by Tom Riley in 2013. The company is known for its platform, which connects patients and doctors through real-time data, critical care information, and records. 

“TapCloud has worked tirelessly to close the communication gap between patients and providers through the use of data and technology. With WellSky, we gain access to a larger network and increased investment, which will broaden our reach and allow even more patients and families to be active participants in their care journeys,” said CEO of TapCloud, Phil Traylor. 

He also mentioned that they are in a good position to grow the ways they can assist clients to succeed, regardless of the EHR platform they employ.


Meta announces Free AI Learning Alliance to help Talents enter AI Industry


Meta, formerly known as Facebook, has announced its new free AI Learning Alliance (AILA) to help diverse talent enter the artificial intelligence industry. 

The new initiative from Meta and Georgia Tech will help the artificial intelligence industry grow further by bringing in new talent to encourage and support innovation. 

The company is collaborating with a group of colleges to train more people from underrepresented groups in artificial intelligence and to make free online education available. 

Read More: Meta plans to build a Universal Language Translator for Metaverse

The program’s centerpiece is a semester-long deep learning course that has been meticulously designed to teach learners the principles of neural networks and applications like computer vision and language understanding. 

The curriculum covers most of the critical aspects of artificial intelligence that the global industry currently demands. Earlier, Meta had also collaborated with Georgia Tech to build a deep learning course curriculum that has been taken online by 2,400 students since 2020. 

This new AILA initiative is the company’s next step in educating students in the field of artificial intelligence. Chair of Computing at Georgia Tech, Charles Isbell, said, “By moving AI instruction online, we can reach more people from a wider range of backgrounds than ever before. This is not only a great opportunity for learners, but also for the field as a whole, which needs a diverse set of voices if it is to responsibly serve a diverse set of communities.” 

Meta aims to teach millions of people with this new initiative by making the AILA Education Hub available to everyone, including educators, students, researchers, and hobbyists alike, through Meta’s online education platform named Meta Blueprint. 

Meta has tied up with numerous colleges and universities, such as Georgia Tech, Florida Agricultural and Mechanical University, Morgan State University, Florida International University, the University of California, Irvine, and many more, to offer study material in its AILA program. 

Additionally, Meta has collaborated with Georgia Tech’s Dr. Kira to design a series of webinars that will be available on the AILA Education Hub and will assist professors in teaching the course content. 

Interested candidates can register for this program from the official website of Meta. 


Meta plans to build a Universal Language Translator for Metaverse


CEO of Meta, Mark Zuckerberg, recently announced that the company plans to build a universal language translator for the metaverse. 

The new universal translator will play a vital role in streamlining user interactions in the virtual world of the metaverse. Zuckerberg unveiled the plans to create the translator during a live virtual event held on 23 February 2022. 

He claimed that the technology would use artificial intelligence solutions to deliver language translations for metaverse users. Meta believes that when people start to get virtually connected with individuals from different parts of the world, it will be crucial to deploy an effective and accurate translator for the users to interact with each other seamlessly. 

Read More: Staqu’s AI systems can now spot Crimes and listen to Gunshots

Mark Zuckerberg said, “This will enable a vast scale of predictions, decisions, and generation as well as whole new architectures, training methods, and algorithms that can learn from a vast and diverse range of different inputs.” 

He further added that the primary objective is to create a universal model that can combine knowledge from various modalities by collecting data through rich sensors. 

Additionally, he mentioned that the system would not only be able to provide instantaneous translation of widely spoken languages but also dialects that do not have a standardized writing format. 

Facebook, in the past, has always tried to develop technologies that help connect individuals from different parts of the world through the internet, and the company plans to expand this approach in its metaverse too. 

“The ability to communicate with anyone in any language — that’s a superpower people have dreamed of forever – and AI is going to deliver that within our lifetimes,” said Zuckerberg. 

However, one of the major challenges Meta currently faces is the unavailability of quality data to train its algorithms to offer instantaneous translation across multiple languages. 

To date, most machine translation systems have been developed to translate only a handful of languages, causing a data scarcity that poses a challenge for Meta in developing a universal language translator.


Staqu’s AI systems can now spot Crimes and listen to Gunshots


Staqu Technologies has developed a one-of-a-kind artificial intelligence-powered surveillance system that uses CCTV cameras and microphones to accurately and effectively spot crimes and also listen to gunshots. 

The audio feature integrated into the system named Jarvis makes it a highly capable and useful technology that can be used by government agencies to make Indian roads more secure for pedestrians and travelers. 

The company has submitted a bid for audio and video monitoring as part of the Lucknow Smart City project’s tender to boost security in the city. 

Read More: General Motors to invest $7 billion in Michigan facilities for EV Production

People familiar with the matter say that it is likely that Staqu would win the tender as the company’s previous generation of Jarvis technology has already been deployed at various locations by the Uttar Pradesh Police and other state police forces. 

CEO and founder of Staqu Technologies, Atul Rai, said, “We have used audio analytics to detect incidents such as prison fights in Uttar Pradesh on a pilot basis. Our target is to implement it in smart cities.” 

The massive leap in its capabilities exponentially increases the efficiency of Jarvis, as authorities can now also analyze the audio to take better and quicker action against criminals. Jarvis uses convolutional neural networks to identify various kinds of sounds, such as gunshots, human screams, and more. 
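Staqu has not published Jarvis’s model details; as a hedged illustration of the underlying idea (flagging short, high-energy transients such as gunshots in an audio stream), a hand-written energy detector can stand in for what the CNN learns automatically. All names, thresholds, and signal values below are invented:

```python
import numpy as np

def frame_energy(signal, frame_len=100):
    """Split a 1-D audio signal into fixed-size frames and compute per-frame energy."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    return (frames ** 2).mean(axis=1)

def detect_transients(signal, frame_len=100, threshold=10.0):
    """Flag frames whose energy exceeds `threshold` times the median energy.

    A real system learns richer spectral features with a CNN; this
    hand-written detector only illustrates the idea of spotting
    loud, short-lived events against a quiet background.
    """
    energy = frame_energy(signal, frame_len)
    baseline = np.median(energy) + 1e-12  # avoid division by zero
    return np.where(energy / baseline > threshold)[0]

# Quiet background noise with one loud burst (a crude stand-in for a gunshot).
rng = np.random.default_rng(0)
audio = rng.normal(0, 0.01, 10_000)
audio[4_000:4_100] += rng.normal(0, 1.0, 100)  # loud transient lands in frame 40

print(detect_transients(audio))  # only the burst frame is flagged
```

In a learned system the threshold and features come from training data rather than being hand-tuned, but the pipeline shape (frame the audio, extract features, classify frames) is the same.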

Additionally, organizations in retail and manufacturing are also using the audio analytics tool to detect distress sounds. The previous version of Jarvis incorporated technology like closed-circuit television (CCTV) that captures video footage, which then gets analyzed by an artificial intelligence-powered facial recognition system to spot criminal activities within its range. 

Gurgaon-based artificial intelligence startup Staqu Technologies was founded by Abhishek Sharma, Anurag Saini, Atul Rai, Chetan Rexwal, and Pankaj Sharma in 2015. During its seed funding round, the company received funding from investors like Ajay Gupta and Neeraj Sangal. Staqu specializes in providing solutions for challenges involving analyzing images and extracting valuable information from those images. 


Stanford Team uses AI to set World Record for Fastest Genome Sequencing


A team of Stanford scientists set the first Guinness World Record for the fastest DNA sequencing technology, which took only 5 hours and 2 minutes to sequence a human genome. The research team led by Stanford University collaborated with NVIDIA, Oxford Nanopore Technologies, Google, Baylor College of Medicine, and the University of California at Santa Cruz to use AI to speed up the end-to-end process, from collecting a blood sample to sequencing the entire genome and identifying disease-linked variants. The record was certified by the Genome in a Bottle group of the National Institute of Standards and Technology, and it is documented by Guinness World Records.

Sequencing genomes entails extracting short sequences of DNA from the 6 billion pairs of nucleobases inherited from our parents, namely adenine (A), thymine (T), guanine (G), and cytosine (C). Using a typical human genome as a reference, the sequences are then replicated and reattached together. This method, however, does not always capture the full genome of a patient, and the data it gives can sometimes leave out variations in genes that point to a diagnosis. This means that locating mutations that occur throughout a wide portion of DNA can be difficult, if not impossible. Hence, researchers use long-read sequencing, which preserves significantly longer segments of the patient’s genome, increasing the chances of finding mutations, minimizing errors, and correctly diagnosing the patient.
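The advantage of long reads can be shown with a toy alignment experiment (all sequences below are made up): a short read drawn from a repeated region maps to several places in the reference and is therefore ambiguous, while a longer read spanning a unique segment maps to exactly one place.

```python
def mapping_positions(reference: str, read: str):
    """Return every position where `read` occurs exactly in `reference`."""
    return [i for i in range(len(reference) - len(read) + 1)
            if reference[i:i + len(read)] == read]

# A toy reference with a repeated region flanking a unique middle segment.
reference = "ACGT" * 3 + "TTGACCA" + "ACGT" * 3

short_read = "ACGTACGT"            # lies entirely inside the repeats
long_read = "ACGTACGTTTGACCAACGT"  # spans the unique middle segment

print(mapping_positions(reference, short_read))  # multiple hits: ambiguous
print(mapping_positions(reference, long_read))   # a single hit: unambiguous
```

Real aligners tolerate sequencing errors and work at genome scale, but the core reason long reads help is exactly this: the longer the read, the less likely it fits the reference in more than one place.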

Genome sequencing is a vital tool for clinicians diagnosing uncommon genetic illnesses. It aids them in determining whether their patients’ genes are mutated and, if so, what genetic disorders such mutations could cause. However, it is not an easy task due to oddities such as variances in sequencing techniques and technologies, as well as data storage formats and data exchange protocols. Machine learning and deep learning are two AI technologies that are already well-known for their remarkable data processing and pattern recognition prowess. As a result, AI frameworks are used in healthcare research to allow for the efficient interpretation of massive complicated datasets, such as genomes.

The researchers were able to reach the record-breaking speed by refining each step of the sequencing process. Stanford researchers used a DNA sequencing platform from Oxford Nanopore Technologies, called PromethION Flow Cells. This device reads genomes by pulling large strands of DNA through pores that are similar in size and composition to the openings in biological cell membranes. It detects the DNA sequence by reading small electrical changes specific to each DNA letter as a DNA strand travels through the pore. Thousands of these pores are dispersed over a flow cell device. The researchers sequenced a single patient’s genome simultaneously over 48 flow cells, allowing them to read the full genome in a record duration of 5 hours and 2 minutes (7 hours and 18 minutes in total, including diagnosing it). The device also supports “long-read sequencing.” 
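Real nanopore base callers run neural networks over noisy raw current traces produced by k-mers passing through the pore; as a purely illustrative sketch (the current levels below are invented, not Oxford Nanopore’s), base calling can be thought of as mapping measured signal levels back to the nearest nucleotide:

```python
# Toy model: pretend each base produces one characteristic current level
# (made-up values). Base calling then maps noisy measurements back to
# the nearest base.
LEVELS = {"A": 1.0, "C": 2.0, "G": 3.0, "T": 4.0}

def call_bases(currents):
    """Assign each current measurement to the base with the closest level.

    Real nanopore base calling is far harder: the signal depends on
    several bases in the pore at once and is decoded with neural
    networks. This nearest-level lookup only conveys the idea.
    """
    bases = []
    for level in currents:
        base = min(LEVELS, key=lambda b: abs(LEVELS[b] - level))
        bases.append(base)
    return "".join(bases)

measured = [1.1, 3.9, 2.2, 2.8, 0.9]  # noisy readings from the pore
print(call_bases(measured))           # decoded nucleotide string
```

The GPU acceleration described in the article matters because the real decoding step runs a neural network over billions of such measurements, not a simple lookup.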

They generated more than 100 gigabases of data every hour (a gigabase is one billion nucleotides) using high-throughput nanopore sequencing on Oxford Nanopore’s PromethION Flow Cells, then expedited base calling and variant calling using NVIDIA GPUs on Google Cloud. At this stage, the device’s raw data are converted into a string of A, T, G, and C nucleotides, which are then aligned in near real-time. The scientists quickly realized that streaming the data directly to cloud-based storage gave them enough computational power to handle everything the nanopore device generated, and dispersing the data among cloud GPUs immediately reduced latency. 

The next step was to look for little variations in the DNA sequence that could lead to a hereditary disease. The researchers used the NVIDIA Clara Parabricks computational genomics application framework for both base calling and variant calling. Clara Parabricks used a GPU-accelerated version of PEPPER-Margin-DeepVariant, a pipeline developed by UC Santa Cruz’s Computational Genomics Laboratory in partnership with Google, to speed up this stage. DeepVariant employs convolutional neural networks for very accurate variant calling. Clara Parabricks’ GPU-accelerated DeepVariant Germline Pipeline software produces findings at ten times the speed of native DeepVariant instances, reducing its time to find disease-causing variants.
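DeepVariant itself turns read pileups into image-like tensors and classifies them with a convolutional neural network; as a much-simplified stand-in, a naive majority-vote caller over an aligned pileup conveys what "variant calling" means. All sequences and names below are illustrative:

```python
from collections import Counter

def call_variants(reference: str, reads: list[str]):
    """Naive pileup variant caller.

    At each position, take the majority base across aligned reads and
    report it if it differs from the reference. Real callers such as
    DeepVariant model sequencing error, read quality, and diploid
    genotypes with a CNN; this majority vote is only an illustration.
    Assumes every read is already aligned to reference position 0.
    """
    variants = []
    for pos, ref_base in enumerate(reference):
        column = [r[pos] for r in reads if pos < len(r)]
        if not column:
            continue
        consensus, _ = Counter(column).most_common(1)[0]
        if consensus != ref_base:
            variants.append((pos, ref_base, consensus))
    return variants

reference = "ACGTACGT"
reads = ["ACGTACGT", "ACGAACGT", "ACGAACGT"]  # two of three reads disagree at one position
print(call_variants(reference, reads))
```

The hard part the CNN solves is deciding whether a disagreement like this is a true variant or a sequencing error; the toy caller simply trusts the majority.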

Read More: Stanford’s ML Algorithm Accurately Predicts the Structure of Biological Macromolecules

Using this rapid genome sequencing approach, scientists scanned the 3-month-old patient’s entire genome in just eight and a half hours. They discovered that the baby’s CSNK2B gene was altered. CSNK2B is a gene linked to Poirier-Bienvenu syndrome, a rare neurodevelopmental condition characterized by early-onset epilepsy. Doctors diagnosed the patient with Poirier-Bienvenu within a few days, administered the appropriate antiseizure prescription, and provided disease-specific counseling and a prognosis to the patient’s family. By contrast, an epilepsy gene panel (which did not include CSNK2B) that had been ordered at the time of presentation returned results two weeks later, revealing only several nondiagnostic variants of questionable significance.

This highlights a huge milestone in genome sequencing and health diagnostics. With the ability to sequence a person’s entire DNA in just hours, super rapid genome testing could become a life-saving technology for detecting inheritable disorders in humans. This will also enhance patient prognosis by discovering certain disorders early. Rapid genome sequencing could also be the key to identifying and classifying undiagnosed adult patients with unknown genetic disorders.


Cnvrg.io announces AI Blueprints, an open-source suite of ML Pipelines


Full-stack data science platform company cnvrg.io has announced AI Blueprints, an open-source suite of machine learning pipelines that allows users to quickly deploy artificial intelligence applications. 

It is a very easy-to-use and user-friendly platform that can work on any infrastructure. AI Blueprints are designed for developers who wish to create machine learning-powered apps and services for typical commercial applications such as recommender systems, sentiment analysis, object detection, and more. 

According to the company, AI blueprints are based on cnvrg.io’s experience working with some of the world’s top data and AI teams, studying and recognizing recurring bottlenecks and technical difficulties that arise when deploying machine learning. 

Read More: Aurora and U.S. Xpress collaborates to develop Driverless Truck Networks

Cnvrg.io has packed AI Blueprints with multiple features and resources, including a complete library of data connectors, pre-built ML pipelines, and ready-to-use cnvrg.io blueprints. 

CEO and Co-founder of Cnvrg.io, Yochay Ettun, said, “The cnvrg.io AI Blueprints are developer-friendly, open-source, and fully customizable – enabling any developer to easily add ML to their applications.” 

He further added that AI Blueprints is a way of allowing data scientists to easily share their work while also assisting organizations in keeping up with demand and applying AI to a broader range of applications. 

Israel-based artificial intelligence and machine learning company Cnvrg.io was founded by Leah Kolben and Yochay Ettun in 2016. The firm specializes in providing end-to-end solutions that allow organizations to accelerate innovation and build high-impact machine learning models. 

Cnvrg.io has a vast customer base of companies of all sizes, including multiple Fortune 500 organizations. To date, the firm has raised $8 million in a funding round held in 2019 from investors like Hanaco Venture Capital and Jerusalem Venture Partners. 

In 2020, semiconductor manufacturing giant Intel acquired cnvrg.io; however, no information was provided regarding the deal’s valuation. 

VP of AI Strategy and Execution at Intel, Kavitha Prasad, said, “We’re excited to work closely with cnvrg.io to enable developers to get more value from their AI initiatives.” 

Interested users can check out AI Blueprints from the official website of Cnvrg.io. 


General Motors to invest $7 billion in Michigan facilities for EV Production


Global automobile manufacturing giant General Motors (GM) announces its plans to invest $7 billion in four Michigan facilities to accelerate the production of electric vehicles (EV). 

With the new investment, GM plans to start production of electric pickup trucks by 2024. General Motors says the investment will help create more than 4,000 new job opportunities while exponentially increasing its production capacity for electric trucks and battery cells. 

It is a milestone development as this is the single largest investment that GM has ever made in its history. CEO and Chair of General Motors, Mary Barra, said, “Today we are taking the next step in our continuous work to establish GM’s EV leadership by making investments in our vertically integrated battery production in the US, and our North American EV production capacity.” 

Read More: Top 10 Python Data Science Libraries

She further added that GMC HUMMER EV, Cadillac LYRIQ, Chevrolet Equinox EV, and Chevrolet Silverado EV have all received excellent customer feedback and reservations since their recent electric vehicle releases and debuts. 

Therefore, GM will construct a new Ultium cell battery plant in Lansing and convert its Orion Township assembly factory to support the production of its highly rated electric vehicles. 

Once the facilities start operating at full scale, GM will be able to manufacture 600,000 electric trucks. The company expects to have the biggest EV portfolio of any carmaker, solidifying its route to EV leadership in the United States by the middle of this decade. 

This investment, according to GM, is the next step in the company’s plan to become the EV market leader in the US by 2025. 

“These important investments would not have been possible without the strong support from the Governor, the Michigan Legislature, Orion Township, the City of Lansing, Delta Township, as well as our collaboration with the UAW and LG Energy Solution,” said Barra.


Aurora and U.S. Xpress collaborate to develop Driverless Truck Networks


Self-driving vehicle software developer Aurora has partnered with trucking company U.S. Xpress to develop driverless truck networks. 

According to the agreement, the companies will determine the best deployment tactics for Aurora Driver-powered vehicles to meet the demand while improving operational efficiency and productivity. 

Aurora is developing Aurora Horizon, a service that comprises a fully automated driving system and a suite of fleet management tools and services. The newly announced partnership will allow Aurora to further refine this autonomous Driver-as-a-Service product, enabling the companies to begin commercializing it. 

Read More: MeitY’s Data Policy unlocks Government Data for all

Aurora, which has been in business for six years, has previously formed partnerships with multiple vehicle manufacturing companies, including truck makers PACCAR, Volvo Truck, and others like FedEx Corp and Uber Technologies’ Uber Freight. 

The new strategic partnership will enable the companies to identify the sectors and areas where autonomous technology can make the most difference, leveraging Aurora and U.S. Xpress’ expertise in the subject. 

CEO and President of U.S. Xpress, Eric Fuller, said, “Professional truck drivers will always have a place with our company, while autonomous trucks will supplement and help provide much-needed capacity to the supply chain.” 

He further added that Aurora is creating new technology for the future of trucking, which is the reason they are cooperating early to ensure they are the first to market with self-driving trucks.

Additionally, Aurora and U.S. Xpress will also integrate application programming interfaces (APIs) into Variant’s platform to improve dispatching and dynamic routing following the launch of Aurora Horizon. 

United States-based autonomous driving software firm Aurora was founded by Chris Urmson, J. Andrew Bagnell, and Sterling Anderson in 2017. The company specializes in providing a platform that brings software and data services to operate passenger vehicles. 

“Aurora carefully designs its industry collaborations to enhance the value and maximize the impact our product can deliver for our partners’ businesses,” said Co-founder and Chief Product Officer of Aurora, Sterling Anderson. 

He also mentioned that they are pleased to work with U.S. Xpress to provide the benefits of this game-changing technology for their company and customers. 


Another Phishing attack on OpenSea: Are Phishing threats on the rise in NFT Marketplaces?


In yet another alarming development, OpenSea, the world’s largest NFT marketplace, disclosed that it had been hit by a phishing attack, with at least 32 customers losing NFTs valued at US$1.7 million. This came after Devin Finzer, the co-founder and CEO of OpenSea, had rebutted reports that the NFT marketplace had been breached. 

The incident occurred while OpenSea was migrating to its new Wyvern smart contract system, a process that started on Friday and was expected to finish by February 25. The Wyvern smart contract is an open-source standard commonly used in NFT smart contracts and Web3 applications, notably OpenSea. As part of the contract upgrade, OpenSea users were required to migrate their listed NFTs on the Ethereum blockchain to the new smart contract. Users who did not migrate risked losing their old, inactive listings; the migration itself required no gas fees. The contract upgrade, intended to remove inactive NFT listings from the platform, came with a one-week deadline. To assist users, the platform emailed everyone advice on how to confirm their listings’ migration.

The news of attackers hijacking to-be-listed NFTs broke just hours after OpenSea announced its update. The phishing actors took advantage of the migration process, sending messages that imitated OpenSea’s emails to authenticated individuals from their own addresses, fooling victims into thinking their original confirmation had failed.

According to an explanatory thread posted by Finzer, the victims were asked to sign half of a Wyvern order. Except for call data and a target of the attacker’s contract, the order was practically empty, with the victim signing half and the attacker signing the other.

Following signature, the attacker calls their own contract listed in the double-signed order, which initiates the transfer of the victim’s NFTs to the attacker.
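The mechanics of the half-signed order can be sketched in miniature. Real Wyvern orders are matched on-chain using ECDSA signatures over structured order data; the toy "signatures" below are plain hashes, and every field name is illustrative. The point the sketch makes is that the victim’s signature covers only the maker half of the order, so it remains valid no matter what the attacker later fills in:

```python
import hashlib

def sign(fields: dict, signer: str) -> str:
    """Toy 'signature': a hash over the given fields plus the signer's name.

    Real Wyvern orders use ECDSA signatures over structured order data;
    this stand-in only models which fields a signature commits to.
    """
    payload = repr(sorted(fields.items())) + signer
    return hashlib.sha256(payload.encode()).hexdigest()

def verify(fields: dict, signer: str, signature: str) -> bool:
    """Check that `signature` commits to exactly these fields and signer."""
    return sign(fields, signer) == signature

# The victim is tricked into signing only the maker half of an order:
# it names the attacker's contract as the target but leaves the taker
# and price unspecified.
maker_half = {"maker": "victim", "target": "attacker_contract"}
victim_sig = sign(maker_half, "victim")

# The attacker later supplies the taker half (themselves as recipient,
# price zero) and co-signs it. The victim's signature still verifies,
# because it never covered the fields the attacker filled in.
taker_half = {"taker": "attacker", "price": 0}
attacker_sig = sign(taker_half, "attacker")

print(verify(maker_half, "victim", victim_sig))  # True
# The completed order (maker half + taker half) then executes, moving
# the victim's NFTs to the attacker for nothing.
```

Because the signature commits only to the maker half, no tampering is detectable: the attacker never alters anything the victim signed, they merely complete it on terms of their own choosing.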

According to Finzer, OpenSea determined that neither its website nor a previously unknown weakness in the platform’s NFT minting, purchasing, selling, or listing functions was used in the attack. Clicking on the site’s banner, signing the new Wyvern smart contract, and migrating listings to the new Wyvern contract system through OpenSea’s listing migration tool were all found to be secure.

As per PeckShield, a blockchain security firm, up to 254 tokens were taken, including NFTs from Decentraland, the Azuki collection, and the Bored Ape Yacht Club. Molly White, the creator of the Web3 is Going Great blog, estimated the loot to be worth 641 Ethereum. In addition, according to the security firm, the OpenSea hacker(s) allegedly used the privacy mixer Tornado Cash to launder 1,100 ETH. Tornado Cash can mask the final destination of Ether tokens.

The phishing attack is currently being investigated by OpenSea. The examination so far indicates that the NFTs were stolen using phishing emails before being moved to OpenSea’s new smart contract. At the moment, OpenSea denies that the attack was caused by the new contracts and says the phishing emails originated from outside the platform.

Finzer stated that they have yet to determine which websites were fooling users into maliciously signing messages. In the meantime, OpenSea is notifying affected users to offer assistance with the next steps. 

NFTs are digital tokens that serve as proofs of authenticity for, and in certain cases, ownership of, assets ranging from high-end ape paintings to collectibles such as celebrity signatures and tangible commodities such as a case of rare whiskey.

With over one million active user wallets and a market capitalization of $13 billion, OpenSea is one of the largest NFT marketplaces. According to Dune Analytics, a blockchain analytics firm, its average daily trade volume is over $260 million, with a monthly volume of over $2 billion in January 2022. According to blockchain tracking service DappRadar, the platform has done $21.8 billion in lifetime trades, which is around $5 billion more than the second-largest platform — LooksRare.

Read More: Polygon and GameOn to Develop NFT based Games: How will it impact the market?

CheckPoint Research issued a security alert in October 2021 regarding a vulnerability in OpenSea that, if abused, may have let attackers take over user accounts and empty their crypto wallets by sending malicious NFTs. In fact, according to a report released earlier this month by Chainalysis, illicit wallets amassed nearly $11 billion in cryptocurrency in 2021 alone.

Last year, a phishing scam led to the loss of 15 NFTs worth $2.2 million from Todd Kramer’s Ethereum wallet. Among the NFTs hacked were four from the Bored Ape Yacht Club. The British heavy metal legend, Ozzy Osbourne debuted his CryptoBatz collection in January, which consists of 9,666 digital bats modeled after Osbourne’s personality. However, only two days after the tokens were issued, collectors reported being targeted by a phishing scam that drains cryptocurrency from their wallets, via a faulty link posted by the project’s official Twitter account. Wormhole Portal, a crypto platform, was hacked in February of this year, losing $322 million, making it the second-largest hack in the Defi industry.

The Chainalysis report also noted that the tendency of hackers to create enthusiasm around a project in order to inflate prices before abandoning it has become more common in the last year or so. 
