
Introducing Voice NFTs: World’s first collection gets sold out in 10 minutes


On January 29th, the world’s first voice NFTs, offered by Voiceverse, went on sale as a collection of 8,888 NFTs. In a rare occurrence for NFT collections stored on the Ethereum blockchain, the collection, titled ‘Voiceverse Origins’, sold out less than 10 minutes after going on sale to the general public.

Voiceverse, founded by three BAYC (Bored Ape Yacht Club) members, aims to lead the NFT space into the next generation of high-intrinsic-value NFTs, allowing holders to possess a voice in the metaverse. People can now add a voice to their Profile Picture (PFP) NFTs with Voice NFTs, giving them a new level of personalization and pseudonymity. Voiceverse further claims that, contrary to popular belief, Voice NFTs will disrupt the voice industry by opening a new stream of possibilities for voice actors.

Holders of Voice NFTs can use their voices in a variety of metaverse settings, including games, video calls, and other social media. They can also mint a voice NFT using their own voice or, with explicit permission, the voices of those near and dear to them. Owners can even generate a new NFT by combining different voice NFTs. Users can also give their existing PFP NFTs a new life by adding voices to them.

LOVO, the parent company of Voiceverse, is headquartered in San Francisco and Seoul and got its start in the AI speech synthesis space. Many celebrity voice actors have already collaborated with the venture capital-backed startup on the Voice NFT project. Voiceverse promises that the actors who lent their voices to Voice NFTs will receive royalties.


Voiceverse Origins sold out in 10 minutes after a week of pre-sale and has been trending across all categories on OpenSea. It has also consistently ranked in the top ten in terms of sales volume (by count). Voiceverse, with its background in AI and speech synthesis, is poised to disrupt the industry and kickstart a trend toward second-generation NFTs.

Read more: YetAi to Launch SOLANA blockchain-based AI-Generated NFTs in 2022

While the mass-market potential of NFTs is yet to unfold, the introduction of Voice NFTs after the recent trend of iNFTs does bring a new dimension to the NFT bubble. Meanwhile, proponents of the metaverse believe that blockchain-based products will completely dominate the technology hype in the coming years. As NFTs slowly find application and adoption in IoT, healthcare, and smart city projects, the advent of voice NFTs can unlock new opportunities.


Indian Government Announces Launch of Digital Rupee From RBI


The government of India has announced plans to launch a new Digital Rupee, a central bank digital currency (CBDC) to be issued by the Reserve Bank of India (RBI), in 2022-2023. India has become one of the few countries to announce the launch of an official digital currency.

The groundbreaking announcement was made by India’s Finance Minister Nirmala Sitharaman while presenting the country’s new budget yesterday. A CBDC is a digital form of legal tender issued by a central bank, similar to paper currency.

According to the announcement, the new Digital Rupee will be based on blockchain technology like other cryptocurrencies, although an official name for the digital currency is yet to be decided. This new development is a bold move to support the government’s Digital India initiative.

Read More: Diem Shuts Down and Confirms Asset Sale to Silvergate

Nirmala Sitharaman said, “The introduction of central bank digital currency will give a big boost to the digital economy. Digital currency will also lead to a cheaper and more efficient currency management system.”

She further added that as a result, it is recommended that the Reserve Bank of India create a Digital Rupee based on blockchain and other technologies, beginning in 2022-23. However, the newly announced Digital Rupee will be different from mainstream cryptos like Bitcoin, which are decentralized. 

The minister then explained that the Digital Rupee would be regulated differently than other digital assets and cryptocurrencies. In comparison to the current digital payment experience, a benefit of the Digital Rupee is that transactions will be immediate. 

Apart from India, nine other countries, including Nigeria and the Eastern Caribbean, have officially launched their digital currencies. 

“Taking forward this agenda, and to mark 75 years of Independence, it’s proposed to set up 75 digital banking units in 75 districts of the country by scheduled commercial banks. The financial support for the digital payment ecosystem – announced in the previous budget – will continue in 2022-23 as well,” added Sitharaman.


Diem Shuts Down and Confirms Asset Sale to Silvergate


After multiple congressional hearings, rebrands, and several high-profile staff departures, Diem, the cryptocurrency project launched by Facebook founder Mark Zuckerberg, is finally calling it quits. The Diem Association has announced the sale of its intellectual property and other assets to Silvergate Capital Corp.

The cryptocurrency initiative was initially known as Libra. When it launched in 2019, Facebook planned to use stablecoins (stable-value digital currencies) to revolutionize global financial services. Libra originally had several dozen partners, but many of them left soon after Congress and other regulatory bodies began to scrutinize the project. Libra was eventually rebranded as Diem as part of a scaled-back ambition.

Facebook, now Meta Platforms Inc., started Diem to make payments and money transfers cheaper and faster. However, Diem never got off the ground because of resistance from federal regulators. 

Read more: Clearview AI receives US Patent for its Facial Recognition Platform

Diem was considering selling its assets to return capital to its investors. But now, Diem will sell its assets to Silvergate for about $200 million. Silvergate is a crypto-focused bank working on launching a stablecoin pegged to the U.S. dollar. 

The decision to sell Diem to Silvergate was made after it “became clear from our dialogue with federal regulators that the project could not move ahead,” Stuart Levey, Diem CEO, said in a press release.

There’s always the chance that Silvergate or another player revives Diem, because Diem’s design was more transparent and regulator-friendly than many existing stablecoins. However, with its founder, David Marcus, and most of Libra’s founding team gone from Meta, the odds of Diem ever reemerging with the same level of backing seem slim.


Clearview AI receives US Patent for its Facial Recognition Platform


Clearview AI, a company developing artificial intelligence-powered facial recognition systems, has received a U.S. patent for its face recognition platform. The technology was tested in the National Institute of Standards & Technology (NIST) Facial Recognition Vendor Test (FRVT), where it showed outstanding performance.

The patent, “Methods for Providing Information About a Person Based on Facial Recognition” (U.S. Patent No. 11,250,266), makes Clearview’s platform the first facial recognition system of its kind to be patented in the United States.

The highly capable platform developed by Clearview AI uses data from publicly available sources and accurately matches similar photos using its proprietary artificial intelligence-powered facial recognition algorithm. 

Read More: Yellow.ai recognized in the Gartner Magic Quadrant for Enterprise Conversational AI Platforms

CEO and Co-founder of Clearview AI, Hoan Ton-That, said, “This distinction is more than an intellectual property protection; it is a clear acknowledgment of Clearview AI’s technological innovation in the artificial intelligence industry.” 

Clearview AI announced in December 2021 that it would soon be awarded a U.S. patent for its one-of-a-kind facial recognition system. The patent will allow other organizations to use Clearview AI’s technology after paying the required fee.

Clearview AI received patent protection for the way its technology acquires information from the public internet and for its facial recognition capabilities. New York-based technology company Clearview AI was founded by Hoan Ton-That and Richard Schwartz in 2017.

The firm specializes in providing a research tool primarily used by several law enforcement agencies to identify perpetrators and victims of crimes. To date, Clearview AI has raised more than $38 million over three funding rounds from investors like Kirenaga Partners, Hal Lambert, and many more. 

Though the company has been involved in several controversies in the past over practices alleged to violate individual privacy, it has managed to secure its patent, allowing Clearview AI to expand further.


Cruise raises $1.35 billion from SoftBank’s Vision Funds


Self-driving vehicle developer Cruise has raised $1.35 billion from SoftBank’s Vision Fund in a funding round of undisclosed series. According to Cruise, it plans to use the fresh funds to quickly scale its self-driving technology in San Francisco and expand into other locations.

The investment comes while the company is making arrangements to launch its robotaxi service in the United States. SoftBank had earlier invested $900 million in GM’s majority-owned subsidiary, Cruise, and has promised to invest more when its autonomous vehicles get ready for commercial deployment. 

The public launch of its autonomous vehicles was initially planned for 2019 but got delayed due to several factors. Cruise has been providing ride-hailing services to its employees in San Francisco for many years.

Read More: Data2vec: Meta’s new Self-supervised algorithm for Multiple Modalities

Apart from SoftBank, Cruise has received financial support from multiple other investors like Honda and Microsoft. Chief Executive of General Motors, Mary Barra, said, “There is still so much that can be accomplished with a frictionless environment between Cruise and GM.” 

She further added that Cruise is not seeking to raise more funds from the capital markets in the near term. Interested customers can visit Cruise’s official website and sign up to join the waiting list to use Cruise’s services in San Francisco.

Co-founder and CEO of Cruise, Kyle Vogt, said regarding the ride experience, “Most people experience childlike delight during the first ride, but then the ride quickly becomes boring. One of them even fell asleep.” 

United States-based self-driving car developer Cruise was founded by Daniel Kan, Jeremy Guillory, and Kyle Vogt in 2013. Cruise believes that the best way to bring autonomous driving technology to the world is by exposing it to the same unique and complex traffic scenarios that human drivers face every day. 

General Motors, one of the largest automobile manufacturers in the world, acquired Cruise back in 2016 to expand GM’s autonomous vehicle technology.


Yellow.ai recognized in the Gartner Magic Quadrant for Enterprise Conversational AI Platforms


Customer experience automation platform Yellow.ai announced that it had been recognized in the 2022 Gartner Magic Quadrant for Enterprise Conversational AI Platforms. The Gartner Magic Quadrant evaluates a provider’s vision and execution capability. 

This new market segment of Enterprise Conversational AI Platforms will debut in 2022. Out of thousands of players in the market, the Gartner Magic Quadrant report highlights 21 of the most advanced vendors. 

The new market category shows that companies are increasingly looking for platform-based approaches to address multiple enterprise use-case requirements. Companies have understood that this new approach allows them to better leverage their investments. 

Read More: Vidhya.ai Successfully taught AI to 5000 Students in their Local Languages

CEO and Co-founder of Yellow.ai, Raghu Ravinutala, said, “We believe this recognition strongly validates the power of our platform capabilities, the momentum we’ve experienced in addressing the unique demands across the markets we operate, and the disruption we’re bringing to the Conversational AI market.” 

He further added that they are honored to be named a Niche Player in Gartner’s Magic Quadrant for Enterprise Conversational AI Platforms for 2022. This is the first Magic Quadrant from Gartner for the Conversational AI market, which Yellow.ai believes has witnessed tremendous growth and adoption in the last year. 

Last year, Yellow.ai raised $38 million in its Series C funding round led by WestBridge Capital. Bengaluru-based customer experience automation startup Yellow.ai was founded by Anik Das, Jaya Kishore Reddy Gollareddy, Raghu Ravinutala, and Rashid Khan in 2016.

The firm specializes in providing natural language processing-based customer experience automation platforms. Yellow.ai has a customer base of nearly 700 companies and users spread across over 70 countries worldwide.

Many industry-leading organizations, including Domino’s, Sephora, Hyundai, Biogen International, and Edelweiss Broking, use Yellow.ai’s solution for customer communication.

“In just five years, our solutions have enabled over 1000 enterprises to find their niche for automation needs across Customer Experience and Employee Experience with us, driving higher competencies and ROI,” said Ravinutala.


Data2vec: Meta’s new Self-supervised algorithm for Multiple Modalities


Last month, Meta released data2vec, its first high-performance self-supervised algorithm that works across multiple modalities. The moniker data2vec is a play on “word2vec,” a Google technique for language embedding released in 2013. Word2vec is an example of a neural network developed for one specific kind of input, in this case text, since it predicted how words cluster together.

In the paper “Data2vec: A General Framework for Self-supervised Learning in Speech, Vision, and Language,” the research team explained that data2vec is trained to predict the model representations of the whole input data given only a partial view of the input. Meta AI has released the data2vec source code and pre-trained models for speech and natural-language processing on GitHub under the permissive MIT license.

Earlier, AI machines were made to learn from labeled data. But things changed with the advent of self-supervised learning, which allows machines to learn about their surroundings by observing them and then decoding the structure of images, speech, or text. This technique allows computers to tackle new, complicated data-related jobs more efficiently, such as understanding text in more spoken languages.

However, most existing models are proficient at performing only a single task. For example, a facial recognition system cannot generate textual content, nor can a credit card fraud detection system help detect tumors in patients. In simpler words, while we have built state-of-the-art machines for particular applications, each is confined to its niche, and its AI prowess may not be transferable. Self-supervised learning research today nearly always concentrates on a single modality. As a result, researchers who work on one modality frequently use a completely different technique from those who specialize in another.

This deficit in the AI industry motivated Meta to develop data2vec, which not only unifies the learning process but also trains a neural network to recognize images, text, or speech. data2vec surpassed existing approaches across a variety of model sizes on the primary ImageNet computer vision benchmark. It outperformed two prior Meta AI self-supervised speech algorithms, wav2vec 2.0 and HuBERT. On the popular GLUE text benchmark suite, it was found to be on par with RoBERTa, a reimplementation of BERT.


Data2vec employs a single model but offers two modes: teacher and student. The student mode learns from the teacher mode and updates the model parameters at each time step. In the teacher mode, a given sample is used to produce a representation of the joint probability of the input data, be it images, speech, or text. The student mode is given a block-wise masked version of the same sample and is tasked with predicting representations of the whole input data while only being provided a portion of it. This prediction is based on internal representations of the input data, which eliminates the need to operate in a single modality.
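
A minimal sketch of this teacher/student setup is shown below, assuming PyTorch; the `SimpleEncoder` class, the masking scheme, and all hyperparameters are invented for illustration and are not Meta's released implementation. The teacher is an exponential-moving-average copy of the student, produces targets from the full input by averaging its top layers, and the student regresses those targets at the masked positions.

```python
# Illustrative teacher/student training step in the spirit of data2vec.
# SimpleEncoder, the masking scheme, and all hyperparameters are placeholders,
# not Meta's actual implementation.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleEncoder(nn.Module):
    """Stand-in Transformer encoder that returns the output of every layer."""
    def __init__(self, dim=64, depth=4, heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.layers = nn.ModuleList([copy.deepcopy(layer) for _ in range(depth)])

    def forward(self, x):
        outs = []
        for layer in self.layers:
            x = layer(x)
            outs.append(x)
        return outs  # list of per-layer representations

student = SimpleEncoder()
teacher = copy.deepcopy(student)              # teacher weights track the student via EMA
for p in teacher.parameters():
    p.requires_grad_(False)

def ema_update(decay=0.999):
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.data.mul_(decay).add_(ps.data, alpha=1 - decay)

def train_step(batch, optimizer, top_k=2, mask_ratio=0.5):
    # Teacher mode: representations of the *full* input, averaged over the top-K layers.
    with torch.no_grad():
        target = torch.stack(teacher(batch)[-top_k:]).mean(dim=0)

    # Student mode: the same sample with part of the timesteps masked out.
    mask = torch.rand(batch.shape[:2]) < mask_ratio
    masked = batch.clone()
    masked[mask] = 0.0

    pred = student(masked)[-1]
    loss = F.smooth_l1_loss(pred[mask], target[mask])  # regress targets at masked positions only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update()
    return loss.item()

# Toy usage: a batch of 8 sequences, 32 timesteps, 64-dim features.
opt = torch.optim.Adam(student.parameters(), lr=1e-4)
print(train_step(torch.randn(8, 32, 64), opt))
```

In the released data2vec models, masked positions are replaced with learned mask embeddings rather than zeros and the targets are normalized before regression; the sketch keeps only the core teacher/student loop.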


Since data2vec relies on the self-attention mechanism of the Transformer, the representations are contextualized in nature, i.e., they encode a specific timestep as well as other information from the sample. This is the most significant distinction between this work and prior ones, whose targets lacked context.

Unlike other Transformer-based models such as Google’s BERT and OpenAI’s GPT-3, data2vec does not focus on producing a particular type of output data. Instead, data2vec focuses on the inner neural network layers that represent the data before it is produced as a final output. This is made possible by the self-attention mechanism, which allows inputs to interact with each other (i.e., to calculate the attention of every input with respect to every other input).
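
For readers unfamiliar with the mechanism, the toy sketch below (in PyTorch, with arbitrary dimensions and random projection matrices) shows the scaled dot-product attention that lets every position in a sequence mix in information from every other position, which is what makes the inner representations contextual.

```python
# Toy scaled dot-product self-attention: every token attends to every other token.
# Dimensions and the random projection matrices are arbitrary illustrations.
import torch
import torch.nn.functional as F

x = torch.randn(1, 5, 16)                      # 1 sequence, 5 tokens, 16-dim features
Wq, Wk, Wv = torch.randn(3, 16, 16)            # query/key/value projections

q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = q @ k.transpose(-2, -1) / 16 ** 0.5   # (1, 5, 5): each token scored against all tokens
weights = F.softmax(scores, dim=-1)            # each row sums to 1
contextualized = weights @ v                   # each output mixes the whole sequence

print(weights.shape, contextualized.shape)     # torch.Size([1, 5, 5]) torch.Size([1, 5, 16])
```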

The researchers trained data2vec on 960 hours of speech audio, millions of words from books and Wikipedia pages, and pictures from ImageNet-1K, using a combination of 16 Nvidia V100 and A100 GPUs. For images, Meta leveraged the ViT, or Vision Transformer, a neural network particularly intended for visual applications built by Alexey Dosovitskiy and colleagues at Google; it encodes a picture as a series of patches, each spanning 16×16 pixels, that are fed into a linear transformation. The speech data is encoded with a multi-layer 1-D convolutional neural network that converts 16 kHz waveforms into 50 Hz representations. The text is pre-processed into sub-word units, which are embedded in a distributional space through learned embedding vectors.
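
As a rough illustration of these modality-specific front ends, the PyTorch sketch below builds a ViT-style patch embedding, a 1-D convolutional waveform encoder with an overall stride of 320 (so one second of 16 kHz audio becomes roughly 50 frames), and a sub-word embedding table; the layer sizes are placeholders rather than data2vec's published configurations.

```python
# Rough sketches of the three modality-specific front ends described above.
# All layer sizes are placeholders, not data2vec's published configurations.
import torch
import torch.nn as nn

# Vision: ViT-style patch embedding as a strided convolution over 16x16 patches.
patch_embed = nn.Conv2d(3, 768, kernel_size=16, stride=16)
img = torch.randn(1, 3, 224, 224)
patches = patch_embed(img).flatten(2).transpose(1, 2)      # (1, 196, 768): 14x14 patch tokens

# Speech: stacked 1-D convolutions with a total stride of 5*4*4*2*2 = 320,
# turning a 16 kHz waveform into roughly 50 feature vectors per second.
audio_encoder = nn.Sequential(
    nn.Conv1d(1, 512, kernel_size=10, stride=5), nn.GELU(),
    nn.Conv1d(512, 512, kernel_size=8, stride=4), nn.GELU(),
    nn.Conv1d(512, 512, kernel_size=4, stride=4), nn.GELU(),
    nn.Conv1d(512, 512, kernel_size=4, stride=2), nn.GELU(),
    nn.Conv1d(512, 512, kernel_size=4, stride=2), nn.GELU(),
)
wave = torch.randn(1, 1, 16000)                             # one second of 16 kHz audio
frames = audio_encoder(wave).transpose(1, 2)                # roughly (1, 48, 512)

# Text: sub-word token ids mapped to learned embedding vectors.
token_embed = nn.Embedding(32000, 768)
tokens = torch.randint(0, 32000, (1, 12))
text_feats = token_embed(tokens)                            # (1, 12, 768)

print(patches.shape, frames.shape, text_feats.shape)
```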

Read More: Understanding The Need to Include Signed Language in NLP training Dataset

Multi-modal systems have already been proven vulnerable to adversarial attacks. If the word “iPod” appears in the image, OpenAI’s CLIP model, which is trained on pictures and text, will mistakenly classify an image of an apple as an iPod. However, it is uncertain whether data2vec has the same flaws.

According to the official statement, Meta has not specifically examined how data2vec will respond to adversarial examples, but because current models are trained separately for each modality, it believes that existing research on adversarial attack analysis for each modality would apply to data2vec as well.


NVIDIA to Drop its $40 billion Acquisition deal of Arm


Technology company NVIDIA plans to drop its earlier announced $40 billion acquisition deal for multinational semiconductor company Arm. In 2020, NVIDIA announced plans to acquire Arm from SoftBank Group, which had acquired the semiconductor firm back in 2016.

NVIDIA was not able to make much progress in getting approval for this $40 billion deal, which has led to multiple speculations regarding the dilution of this acquisition deal. 

Last year, NVIDIA said that the transaction was projected to boost NVIDIA’s non-GAAP gross margin and non-GAAP profits per share immediately. 

Read More: China’s Cyberspace Administration of China (CAC) Announces new proposal to curb Deepfakes

According to one individual, Nvidia warned partners that it does not anticipate the acquisition’s finalization. The source preferred to remain unnamed as the matter is still private. 

Bob Sherbin, an NVIDIA spokesperson, said, “We continue to hold the views expressed in detail in our latest regulatory filings — that this transaction provides an opportunity to accelerate Arm and boost competition and innovation.” 

The acquisition of Arm would have been the biggest deal in the semiconductor industry to date. In December, the US Federal Trade Commission filed a lawsuit to stop the deal, claiming that NVIDIA would become too powerful if it obtained control of Arm’s chip designs.

While NVIDIA plans to cancel the deal, SoftBank is making arrangements for Arm’s initial public offering (IPO). Officials say both NVIDIA and Arm are in close contact with their regulators, and a final decision related to the acquisition deal is yet to be made. A spokesperson from SoftBank said that they remain hopeful that the transaction will be approved soon.


Vidhya.ai Successfully taught AI to 5000 Students in their Local Languages


One of the leading EdTech companies, Vidhya.ai, announces that it has successfully trained 5000 students in artificial intelligence and machine learning. Out of the 5000 learners, 800 students have completed an advanced certification course in AI and ML. 

The company plans to boost its efforts and train nearly one lakh (100,000) students in artificial intelligence by 2024. The primary USP of Vidhya.ai is that it offers relevant training to learners in several local languages, which allows it to teach students from remote and previously inaccessible locations in India.

CEO of Vidhya.ai Navya Jain said, “Every student in India should get an opportunity to learn artificial intelligence irrespective of socio-economic background and language proficiency. We want to break the myth that one should be an IT professional and proficient in English to develop artificial intelligence-based applications.” 

Read More: Airbnb’s AI Software blocked People from Renting Houses for Parties

She further added that they think anybody can learn artificial intelligence and contribute to the AI revolution, regardless of educational level and language competence. The training was provided by multiple industry experts, helping learners gain the critical skills required to kick-start their careers in the artificial intelligence and machine learning domain.

Gurgaon-based EdTech firm Vidhya.ai was founded in 2021 by Delhi University student Navya Jain. The company collaborates with universities and NGOs to provide top-notch training to students, especially from underprivileged sections of society. Vidhya.ai has held seminars, webinars, workshops, and training sessions to teach students about artificial intelligence, machine learning, and data science. 

Navya said, “Our talent lies in remote parts and villages of India. The students from these areas are closer to pressing issues such as environmental, agricultural, social, water preservation, and sanitation.” She added that they are trying to bring technology closer to the challenges and discover unique solutions to the problems through training students.


Airbnb’s AI Software blocked People from Renting Houses for Parties


Online accommodation marketplace Airbnb says that its artificial intelligence-enabled system has blocked thousands of people from renting houses for parties in Florida, United States. The company has already officially banned rentals for parties in its properties. 

According to the company, its AI-powered system has restricted numerous property bookings that showed potential intent to organize house parties. When a would-be party-house renter tries to book a property using Airbnb, the platform automatically refuses the booking.

Data released by Airbnb shows that the platform blocked nearly 49,600 would-be house renters from booking a property in Florida in 2021. This includes the festive season, including Halloween, making 2021 the first full year of Airbnb’s anti-party-house program.

Read More: DoD selects Scale AI to Accelerate US Government’s AI Capabilities

The company mentioned, “We believe it worked. Those weekends were generally quiet, and these initiatives were well-received by our host community.” Last year, besides restricting house parties, Airbnb also capped maximum occupancy at 16 individuals.

This move aims to minimize the chances of damage to listed properties and drastically reduce neighborhood nuisance. Airbnb also launched a new 24/7 support helpline that allows neighbors to communicate directly with Airbnb officials to help enforce the house party ban.

According to Airbnb, anyone under the age of 25 who does not have a history of positive reviews as an Airbnb guest is prohibited from renting an entire, vacant residence in the same city where they live. Renters are not told directly that they have been banned; they are simply notified that the desired property is unavailable.

Additionally, banned renters are redirected to other available properties where the owner is present on-site, ensuring that no house parties get organized.
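
A purely hypothetical sketch of such a rule is shown below; Airbnb's actual model and the signals it uses are not public, so every field name and threshold here is invented solely to illustrate the kind of check the article describes.

```python
# Hypothetical booking rule in the spirit of the policy described above.
# All fields and thresholds are invented; Airbnb's real system is not public.
from dataclasses import dataclass

@dataclass
class BookingRequest:
    guest_age: int
    positive_review_count: int
    guest_city: str
    listing_city: str
    entire_home: bool
    host_on_site: bool

def allow_booking(req: BookingRequest) -> bool:
    """Block local, entire-home bookings by young guests with no positive history."""
    high_risk = (
        req.guest_age < 25
        and req.positive_review_count == 0
        and req.entire_home
        and not req.host_on_site
        and req.guest_city == req.listing_city
    )
    return not high_risk

# Example: a 22-year-old with no reviews booking a vacant home in their own city is refused.
print(allow_booking(BookingRequest(22, 0, "Miami", "Miami", True, False)))  # False
```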
