
Yellow.ai recognized in the Gartner Magic Quadrant for Enterprise Conversational AI Platforms


Customer experience automation platform Yellow.ai announced that it has been recognized in the 2022 Gartner Magic Quadrant for Enterprise Conversational AI Platforms. The Gartner Magic Quadrant evaluates each provider's completeness of vision and ability to execute. 

This new market segment of Enterprise Conversational AI Platforms debuted in 2022. Out of thousands of players in the market, the Gartner Magic Quadrant report highlights 21 of the most advanced vendors. 

The new market category shows that companies are increasingly looking for platform-based approaches to address multiple enterprise use-case requirements. Companies have understood that this new approach allows them to better leverage their investments. 

Read More: Vidhya.ai Successfully taught AI to 5000 Students in their Local Languages

CEO and Co-founder of Yellow.ai, Raghu Ravinutala, said, “We believe this recognition strongly validates the power of our platform capabilities, the momentum we’ve experienced in addressing the unique demands across the markets we operate, and the disruption we’re bringing to the Conversational AI market.” 

He further added that they are honored to be named a Niche Player in Gartner’s Magic Quadrant for Enterprise Conversational AI Platforms for 2022. This is the first Magic Quadrant from Gartner for the Conversational AI market, which Yellow.ai believes has witnessed tremendous growth and adoption in the last year. 

Last year, Yellow.ai raised $38 million in its Series C funding round, led by WestBridge Capital. The Bengaluru-based customer experience automation startup was founded in 2016 by Anik Das, Jaya Kishore Reddy Gollareddy, Raghu Ravinutala, and Rashid Khan. 

The firm specializes in providing natural language processing-based customer experience automation platforms. Yellow.ai has a customer base of nearly 700 companies, with users spread across over 70 countries worldwide. 

Many industry-leading organizations, such as Domino's, Sephora, Hyundai, Biogen International, and Edelweiss Broking, use Yellow.ai's solution for customer communication. 

“In just five years, our solutions have enabled over 1000 enterprises to find their niche for automation needs across Customer Experience and Employee Experience with us, driving higher competencies and ROI,” said Ravinutala.


Data2vec: Meta’s new Self-supervised algorithm for Multiple Modalities

Image Credits: Analytics Drift Design Team

Last month, Meta released its first high-performance self-supervised algorithm for multiple modalities, called data2vec. The moniker data2vec is a pun on "word2vec," a Google technology for language embedding released in 2013. Word2vec is an example of a neural network built for one particular kind of input, in this case text, since it learned to predict how words cluster together. 

In the paper "data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language," the research team explained that data2vec is trained by predicting model representations of the full input data given only a partial view of the input. Meta AI has released the data2vec source code and pre-trained models for speech and natural-language processing on GitHub under the permissive MIT license.

Earlier, AI models were made to learn from labeled data. But things changed with the advent of self-supervised learning, which allows machines to learn about their surroundings by observing them and then decoding the structure of images, speech, or text. This technique allows computers to tackle new, complicated data-related jobs more efficiently, such as understanding text in more spoken languages. 

However, most existing models are proficient at performing only a single task. For example, a facial recognition system cannot generate text, nor can a credit card fraud detection system help detect tumors in patients. In simpler words, while we have built state-of-the-art machines for particular applications, each is confined to its niche, and its AI prowess may not be transferable. Moreover, self-supervised learning research today nearly always concentrates on a single modality. As a result, researchers who work on one modality frequently use a completely different technique from those who specialize in another.

This deficit in the AI industry motivated Meta to develop data2vec, which not only unifies the learning process but also trains a single neural network to recognize images, text, or speech. data2vec surpassed existing approaches across a variety of model sizes on the primary ImageNet computer vision benchmark. It outperformed two prior Meta AI self-supervised speech algorithms, wav2vec 2.0 and HuBERT. On the popular GLUE text benchmark suite, it was found to be on par with RoBERTa, a reimplementation of BERT.

Image Credit: Meta

Data2vec employs a single model but operates in two modes: teacher and student. In the teacher mode, a given sample, whether images, speech, or text, is used to produce target representations of the full input. The student mode is given a block-wise masked version of the same sample and is tasked with predicting the teacher's representations of the whole input while seeing only a portion of it; the student learns from the teacher and updates the model parameters at each time step. Because these predictions are based on internal representations of the input data, the method does not need to be tied to a single modality.
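A minimal sketch of this teacher-student setup may help: here a toy linear encoder and random data stand in for data2vec's actual Transformer, loss function, and layer-averaging details, and the EMA decay value is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "encoder" standing in for data2vec's Transformer.
def encode(x, w):
    return x @ w

dim = 8
w_student = rng.normal(size=(dim, dim))
w_teacher = w_student.copy()       # the teacher starts as a copy of the student
tau = 0.999                        # EMA decay (illustrative value)

x = rng.normal(size=(4, dim))      # one batch of inputs (any modality)

# Teacher mode: sees the full sample and produces the target representations.
targets = encode(x, w_teacher)

# Student mode: sees a block-wise masked version of the same sample.
mask = rng.random(x.shape) < 0.5   # True = masked out
predictions = encode(np.where(mask, 0.0, x), w_student)

# Training minimizes the gap between the student's predictions and the targets...
loss = np.mean((predictions - targets) ** 2)

# ...and the teacher's weights track the student's via an exponential moving average.
w_teacher = tau * w_teacher + (1 - tau) * w_student
```

The key design point is that the prediction target comes from the model itself (the teacher's representations), not from labels or raw pixels, words, or waveforms, which is what makes the recipe modality-agnostic.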


Here, because data2vec relies on the Transformer's self-attention mechanism, the representations are contextualized: they encode a specific timestep as well as information from the rest of the sample. This is the most significant distinction between this work and prior ones, which lacked such context. 

Unlike other Transformer-based models such as Google's BERT and OpenAI's GPT-3, data2vec does not focus on producing a particular output data type. Instead, it targets the inner neural network layers that represent the data before a final output is produced. This is possible due to the self-attention mechanism, which allows inputs to interact with each other (i.e., the attention of every input is computed with respect to all the others).
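The self-attention computation described above, in which every input position attends to every other, can be sketched as follows (a single attention head, with random weights purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

seq_len, dim = 5, 8
x = rng.normal(size=(seq_len, dim))           # a sequence of input representations

# Learned query/key/value projections (random matrices here, for illustration only).
wq, wk, wv = (rng.normal(size=(dim, dim)) for _ in range(3))
q, k, v = x @ wq, x @ wk, x @ wv

# Every position attends to every other: scores are all-pairs dot products.
scores = q @ k.T / np.sqrt(dim)

# Softmax over the sequence turns scores into attention weights.
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

# Each output row mixes value vectors from the whole sequence,
# which is why the resulting representations are contextualized.
out = weights @ v
```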

The researchers trained data2vec on 960 hours of speech audio, millions of words from books and Wikipedia pages, and pictures from ImageNet-1K, using a combination of 16 Nvidia V100 and A100 GPUs. For images, Meta leveraged ViT, the vision Transformer built by Alexey Dosovitskiy and colleagues at Google in 2020 specifically for visual applications. It entails encoding a picture as a series of patches, each spanning 16×16 pixels, which are fed into a linear transformation. Speech data is encoded with a multi-layer 1-D convolutional neural network that converts 16 kHz waveforms into 50 Hz representations. Text is pre-processed into sub-word units, which are embedded in a distributional space through learned embedding vectors.
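As an illustration of the ViT-style patchification described above, here is a minimal sketch, with random values standing in for a real image and a random matrix standing in for the learned projection:

```python
import numpy as np

rng = np.random.default_rng(0)

patch = 16
img = rng.normal(size=(224, 224, 3))   # random values standing in for an RGB image

# Split the image into non-overlapping 16x16 patches and flatten each one.
h, w, c = img.shape
patches = (img.reshape(h // patch, patch, w // patch, patch, c)
              .transpose(0, 2, 1, 3, 4)
              .reshape(-1, patch * patch * c))   # 14*14 = 196 patches of 768 values

# A learned linear projection maps each flattened patch to the model dimension;
# a random matrix stands in for the learned weights here.
dim = 768
proj = rng.normal(size=(patch * patch * c, dim))
tokens = patches @ proj                 # the token sequence fed to the Transformer
```

A 224×224 image thus becomes a sequence of 196 tokens, which the Transformer can process exactly as it would a sequence of sub-word embeddings.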

Read More: Understanding The Need to Include Signed Language in NLP training Dataset

Multi-modal systems have already been proven vulnerable to adversarial attacks. OpenAI's CLIP model, which is trained on pictures and text, will mistakenly classify an image of an apple as an iPod if the word "iPod" appears in the image. However, it is uncertain whether data2vec has the same flaws.

According to the official statement, Meta has not specifically examined how data2vec will respond to adversarial examples, but because current models are trained separately for each modality, it believes that existing research on adversarial attack analysis for each modality would apply to data2vec as well.


NVIDIA to Drop its $40 billion Acquisition deal of Arm


Technology company NVIDIA plans to drop its earlier announced $40 billion acquisition deal of multinational semiconductor company Arm. Last year, NVIDIA planned to acquire Arm from SoftBank Group, which acquired the semiconductor firm back in 2016. 

NVIDIA was not able to make much progress in getting approval for this $40 billion deal, which has led to multiple speculations regarding the dilution of this acquisition deal. 

Last year, NVIDIA said that the transaction was projected to boost NVIDIA’s non-GAAP gross margin and non-GAAP profits per share immediately. 

Read More: China’s Cyberspace Administration (CAC) Announces new proposal to curb Deepfakes

According to one individual, Nvidia warned partners that it does not anticipate the acquisition’s finalization. The source preferred to remain unnamed as the matter is still private. 

Bob Sherbin, an NVIDIA spokesperson, said, “We continue to hold the views expressed in detail in our latest regulatory filings — that this transaction provides an opportunity to accelerate Arm and boost competition and innovation.” 

The acquisition of Arm would have been the biggest deal in the semiconductor industry to date. In December, the US Federal Trade Commission filed a lawsuit to stop the deal, claiming that NVIDIA would become too powerful if it obtained control of Arm’s chip designs. 

While NVIDIA plans to cancel the deal, SoftBank is making arrangements for Arm’s initial public offering (IPO). Officials say that both NVIDIA and Arm are in close contact with their regulators, and a final decision on the acquisition is yet to be made. A SoftBank spokesperson said they remain hopeful that the transaction will be approved soon.


Vidhya.ai Successfully taught AI to 5000 Students in their Local Languages


One of the leading EdTech companies, Vidhya.ai, announces that it has successfully trained 5000 students in artificial intelligence and machine learning. Out of the 5000 learners, 800 students have completed an advanced certification course in AI and ML. 

The company plans to boost its efforts and train nearly one lakh (100,000) students in artificial intelligence by 2024. The primary USP of Vidhya.ai is that it offers relevant training in several local languages, allowing it to reach students in remote and previously inaccessible locations in India. 

CEO of Vidhya.ai Navya Jain said, “Every student in India should get an opportunity to learn artificial intelligence irrespective of socio-economic background and language proficiency. We want to break the myth that one should be an IT professional and proficient in English to develop artificial intelligence-based applications.” 

Read More: Airbnb’s AI Software blocked People from Renting Houses for Parties

She further added that they believe anybody can learn artificial intelligence and contribute to the AI revolution, regardless of educational level or language competence. The training was provided by industry experts, allowing learners to gain the critical skills required to kick-start their careers in the artificial intelligence and machine learning domain. 

Gurgaon-based EdTech firm Vidhya.ai was founded in 2021 by Delhi University student Navya Jain. The company collaborates with universities and NGOs to provide top-notch training to students, especially from underprivileged sections of society. Vidhya.ai has held seminars, webinars, workshops, and training sessions to teach students about artificial intelligence, machine learning, and data science. 

Navya said, “Our talent lies in remote parts and villages of India. The students from these areas are closer to pressing issues such as environmental, agricultural, social, water preservation, and sanitation.” She added that they are trying to bring technology closer to the challenges and discover unique solutions to the problems through training students.


Airbnb’s AI Software blocked People from Renting Houses for Parties


Online accommodation marketplace Airbnb says that its artificial intelligence-enabled system has blocked thousands of people from renting houses for parties in Florida, United States. The company has already officially banned rentals for parties in its properties. 

According to the company, its AI-powered system has restricted numerous property bookings suspected of being intended for house parties. When a potential party-house renter tries to book a property through Airbnb, the platform automatically refuses the reservation. 

Data released by Airbnb shows that the platform blocked nearly 49,600 prospective renters from booking properties in Florida in 2021. This included the festive season around Halloween, making 2021 a full year of Airbnb’s anti-party house program. 

Read More: DoD selects Scale AI to Accelerate US Government’s AI Capabilities

The company mentioned, “We believe it worked. Those weekends were generally quiet, and these initiatives were well-received by our host community.” Last year, besides restricting house parties, Airbnb also capped the maximum occupancy to up to 16 individuals. 

This move aims to minimize damage to listed properties and drastically reduce neighborhood nuisance. Airbnb also launched a new 24/7 support helpline that allows neighbors to communicate directly with Airbnb officials to enforce the house party ban. 

According to Airbnb, anyone under the age of 25 who does not have a positive history as a guest at an Airbnb property is prohibited from renting any entire, vacant residence in their own city. Renters are not directly informed of the ban; instead, they are notified that the desired property is unavailable. 

Additionally, banned renters are redirected to other available properties where the owner is present on-site, ensuring that no house parties get organized. 


DoD selects Scale AI to Accelerate US Government’s AI Capabilities


The United States Department of Defense (DoD) has selected data infrastructure company Scale AI to accelerate the government’s artificial intelligence capabilities. The DoD’s Joint Artificial Intelligence Center (JAIC) signed a $250 million blanket purchase agreement with Scale AI to help the DoD expand its AI abilities. 

According to the contract, all the federal agencies will receive complete access to the cutting-edge technology of Scale AI. This will allow government officials to solve their most critical challenges using artificial intelligence and machine learning solutions. 

CEO and founder of Scale AI, Alexandr Wang, said, “AI is not a one-and-done technology, and we’re thrilled to see the JAIC embrace the continuous approach to T&E that Scale was founded on.” 

Read More: China’s Cyberspace Administration (CAC) Announces new proposal to curb Deepfakes

He further added that the government’s AI initiatives would be more robust, accountable, and equitable if this framework is adopted, guaranteeing that US AI expenditures result in effective deployments of innovative, significant technologies. 

Scale AI will develop Test & Evaluation (T&E) capabilities for the DoD, focusing on use cases such as autonomous systems, deep learning-based image analysis, human-machine augmentation, methods to measure warfighter cognitive workloads, natural language processing-powered products, and more. 

San Francisco-based artificial intelligence company Scale AI was founded by Alexandr Wang and Lucy Guo in 2016. The company specializes in providing a platform that manages the entire ML lifecycle, from data annotation and curation to model testing and evaluation, enabling any organization to effectively develop and deploy AI solutions. 

To date, the firm has raised more than $600 million from investors like Dragoneer Investment Group, Empede Capital, Durable Capital Partners, and several others. “I think Scale is very lucky where even early on we have been able to achieve meaningful business and production business with the DoD,” said Wang. He also mentioned that according to him, their federal government business is already viable.


China’s Cyberspace Administration (CAC) Announces new proposal to curb Deepfakes

Source: Axios

On Friday, China’s internet regulator announced new guidelines for content providers that modify face and voice data, the latest step in the country’s fight against “deepfakes.” In addition, the Cyberspace Administration of China (CAC) proposed the creation of a cyberspace that supports Chinese socialist ideals. The regulations are open for public comment through February 28, with the final version subject to revision.

According to the CAC’s statement, fraudsters will be increasingly motivated to employ digitally generated voice, video, chatbots, or facial or gesture manipulation content in the coming years. As a result, the proposal prohibits the use of such fakes in any application that might upset the social order, infringe on people’s rights, spread false information, or portray sexual activity. It also advises obtaining permission before what China refers to as “deep synthesis” can be used, even for legal purposes. Here, deep synthesis is defined as “Using deep learning and virtual reality to generate and synthesize algorithms to produce text, images, audio, video, virtual scenes, and other information.”

The “Internet Information Service Deep Synthesis Management Regulations” proposal vows to control technologies that generate deepfakes. Deepfake service providers must authenticate their users’ identities before providing them access to relevant items, according to the proposed regulation. Companies are also obliged to follow the correct political direction and respect social morality and ethics. The regulations also make it illegal to make deepfakes without the consent of the person or individuals who appear in them. The proposal also includes a user complaints system and procedures to prevent deepfakes from being used to spread false information. Providers of deep synthesis technology will be forced to suspend or delete their apps if required.

Deep synthesis service providers are now required to improve training data management, ensure legal and proper data processing, and take the required steps to secure data security. 

According to the proposal, in case, training data contains personal information, it should also adhere to the corresponding personal information protection regulations, and personal information must not be processed illegally. As per Article 12 of the draft, “Where a deep synthesis service provider provides significant editing functions for biometric information such as face and human voice, it shall prompt them (provider) to notify and obtain the individual consent of the subject whose personal information is being edited.”

For first-time violators, the laws mandate penalties of 10,000 to 100,000 yuan (US$1,600 to US$16,000), although violations can also result in civil and criminal lawsuits.

Read More: China releases Guidelines on AI ethics, focusing on User data control

China is already struggling to regulate the use of deepfakes, which have taken the nation by storm in the past few years. For instance, in August 2019, a new app called ZAO went viral, allowing users to swap their faces with those of celebrities. Meanwhile, Chinese individuals are paying for deepfake films in which the face of their choosing, whether a celebrity or a person they know, is superimposed over the body of a porn star. Avatarify, a Russian AI app that converts static portrait images into videos, became popular on Douyin, China’s equivalent of TikTok, in February last year. Chinese users were quick to come up with creative ways to exploit the software to make humorous videos, including one in which Elon Musk and Jack Ma appeared to be singing the famous tune Dragostea Din Tei in unison.

Source: Avatarify

According to a deepfake white paper issued by Nisos, a cybersecurity intelligence organization, the three most prominent nations where deepfake developers live are Russia, Japan, and China.

Worried about the alarming popularity of deepfakes, Chinese regulators last March summoned 11 domestic technology companies, including Alibaba Group, Tencent, and ByteDance, for talks on the use of ‘deepfake’ technologies on their content platforms. Regulators also instructed the companies to perform security evaluations and submit reports to the government when adding new functionalities or information services that “have the ability to mobilize society.”

While the current proposed draft will not immediately trigger action against deepfakes and deep synthesis service providers, it will allow the government to get ahead of manipulated and misleading content. 


Elliott and Vista to acquire Citrix in a $13 billion deal


Elliott Management Corp and Vista Equity Partners have announced their plan to acquire Citrix in a deal worth $13 billion, Reuters reported. Recently, Elliott and Vista used the loan market to fund their $104-per-share cash bid for Citrix. 

After the acquisition closes, the companies plan to merge Citrix with Vista’s own data analytics firm, Tibco. Vista is a Texas-based company that specializes in software buyouts, and this new deal will be its largest acquisition to date. 

According to various sources, hedge fund Elliott has been looking for partners to help take Citrix private since last October. United States-based cloud computing company Citrix Systems was founded by Ed Iacobucci and Srikanth Tirumala in 1989. 

Read More: National Language Institute to use AI for testing Writing Proficiency

The firm specializes in providing a complete and integrated portfolio of Workspace-as-a-Service, application delivery, virtualization, mobility, network delivery, and file sharing solutions. 

Citrix aims to keep people, organizations, and things securely connected and accessible to make the extraordinary possible. 

Citrix failed to capitalize on the increase of remote working during the ongoing pandemic as it spent too much on its salesforce and very little on its distribution partners, according to Citrix interim Chief Executive Robert Calderoni. Higher operating costs dragged on the company’s operating profits, which fell to $84.5 million in the third quarter from $128.3 million a year ago.


National Language Institute to use AI for testing Writing Proficiency


The National Institute of Korean Language plans to use artificial intelligence for testing writing proficiency. The announcement was made by the National Language Institute’s director Chang So-won. 

According to the plans, the institute will use AI to develop a system for diagnosing Korean language proficiency. A fund of approximately $8.39 million has been allotted for developing the language proficiency test. 

The institute will develop the artificial intelligence-powered language proficiency test over the next five years. Currently, South Korea’s population has an average writing skill score of 48 out of 100. The new AI-powered diagnostic test is expected to help improve the overall writing skills of the country’s citizens. 

A former professor at the Seoul National University’s Department of Korean Language and Literature, Chang, said, “While the importance of essay writing is emphasized around the world, there are no indicators to evaluate writing in Korea, so less and less colleges are having essay tests.” 

Read More: Scientists create AI Nanny to look after Babies in an Artificial Womb

He further added that if they can develop a nationwide AI-enabled evaluation system, it will be useful for various entrance exams. The AI-powered language test will help provide an objective assessment of overall language skills, which no other available tests in the country provide. 

“When I evaluated university entrance essays as a professor, I found the need for objective evaluation indicators,” said Chang. He also mentioned that the grading criteria for the SATs in the United States and the Baccalaureate in France were highly precise when he looked into them. 

According to the institute, they are expecting an initial investment of around $5 million during the first phase of the development process. Apart from the test, a new Korean language training program will also be developed to meet the ever-increasing demand for Korean language teachers in foreign countries.


Scientists create AI Nanny to look after Babies in an Artificial Womb


Scientists from China have developed a new artificial intelligence-powered nanny to take care of babies in an artificial womb. The artificial womb provides an environment for babies to grow safely, and the AI-powered nanny can monitor the entire process. 

Researchers in Suzhou, China’s eastern Jiangsu province, built this artificial intelligence-powered system to safely take care of embryos. Professor Sun Haixuan led the research team at the Suzhou Institute of Biomedical Engineering and Technology, a Chinese Academy of Sciences subsidiary. Currently, the artificial intelligence-powered nanny is monitoring the health of several animal embryos. 

The technology includes a container that is filled with nutritious fluids that support the growth of mouse embryos inside it. The study was published last month in the domestic peer-reviewed Journal of Biomedical Engineering. 

Read More: Study uses Explainable AI to detect Lung and Bronchus Cancer Mortality Rates

Various surveys point out that young Chinese women are increasingly abandoning the conventional goals of marriage and children. Hence, this newly developed system could help raise birth rates in a country where citizens are increasingly unwilling to bear children. 

The research paper mentioned that the new technology “not only helps further understand the origin of life and embryonic development of humans but also provide a theoretical basis for solving birth defects and other major reproductive health problems.” 

Additionally, the artificial womb can completely eliminate the need for women to bear babies as it provides a safer and more controlled growing atmosphere for babies. 

Researchers had to manually monitor each embryo during the early stages of the research, making it difficult to keep track of large numbers of embryos simultaneously. The AI nanny was developed to tackle this issue of monitoring multiple embryos at once.
