Soul Machines Raises $70 million from SoftBank

Virtual beings startup Soul Machines has raised $70 million in its recently closed Series B funding round, led by SoftBank Vision Fund 2. New investors Cleveland Avenue, Liberty City Ventures, and Solasta Ventures also participated in the round.

Additionally, the round witnessed participation from multiple existing investors, including Temasek, Salesforce Ventures, and Horizons Ventures. According to the company, it plans to use the fresh funds to accelerate its growth in the enterprise market.

Soul Machines will focus primarily on deepening its research into its Digital Brain technology. Additionally, the firm aims to launch the future of digital entertainment for the metaverse with hyper-realistic digital twins of real-life celebrities. 

Read More: Indian Engineering Student develops AI model to turn American Sign Language into English

Chief Business Officer of Soul Machines, Greg Cross, said, “We are in a transformational era where brands need to introduce different ways of personalization and ways to deliver unique brand experiences to customers in a very transactional digital world.” 

He further added that he is excited to continue working with forward-thinking global businesses that recognize the power of digital people to communicate, engage, and interact with the rest of the world. New Zealand-based technology company Soul Machines was founded by Greg Cross and Mark Sagar in 2016.

The firm specializes in designing intelligent and emotionally responsive avatars that change the way people interact with machines. Soul Machines’ product allows companies to offer customer service, advertising, and entertainment using artificial intelligence attached to synthetic voices and visuals in the metaverse. 

To date, the company has raised total funding of $135 million over three funding rounds. One of the most popular products of Soul Machines is Ella, a virtual assistant that the New Zealand Police Department uses. The highly capable virtual officer can interact with visitors to a police station via a standing kiosk. 

“With strong R&D capabilities and advanced back-end solutions, we believe that Soul Machines is at the cutting edge for creating digital people that can support companies across functions including customer service, training, and entertainment,” said Investment Director at SoftBank Advisers, Anna Lo. 

Mastercard AI Modeling Enhances Milliman Payment Integrity Solution

Global payments technology company Mastercard has enhanced the Milliman Payment Integrity (MPI) solution using its expertise in artificial intelligence modeling.

The MPI solution will now be able to detect suspected healthcare fraud, waste, and abuse (FWA) more effectively with the help of the newly deployed artificial intelligence models.

Recognizing the importance of the combined solution for their clients, Mastercard and Milliman have also signed a formal reseller agreement.

Read More: Sam Altman Invites Meta AI researchers to join OpenAI

Mastercard’s data scientists collaborated with Milliman to develop three artificial intelligence models based on the outputs of the Milliman Payment Integrity solution, using Mastercard’s six-step AI model creation process, AI Express.

According to Milliman, it intended to use modern technologies such as artificial intelligence to find incremental FWA savings for its clients using non-discrete testing methodologies. Milliman and Mastercard therefore jointly developed the AI models to meet these needs.

Chief Marketing, Communications Officer, and President of Healthcare at Mastercard, Raja Rajamannar, said, “Leveraging our proprietary technology to build a custom AI model helped them (Milliman) to do just that – provide enhanced fraud detection and operational efficiencies to improve their customers’ experience.” 

He further added that the healthcare business could benefit from Mastercard’s solutions designed to make payment systems run more efficiently. According to reports, annual financial losses due to healthcare fraud run into billions of dollars.

Moreover, some government agencies believe the cost to be as high as 10% of annual health expenditures in the United States, or more than $300 billion. It has therefore become necessary to have tools that can considerably reduce these ever-increasing losses.

David Cusick from Milliman said, “Our proof-of-concept with Mastercard shows there is a very compelling value proposition when coupling our existing technology solution with Mastercard’s advanced AI and machine learning capabilities.” 

He also mentioned that the Milliman Payment Integrity team is pleased about the expanded fraud, waste, and abuse detection capabilities that the Mastercard FWA AI model now allows it to provide to clients.

Policybazaar Launches AI-powered WhatsApp Chatbot

Online life and general insurance comparison portal Policybazaar has launched a new artificial intelligence-powered WhatsApp chatbot to improve the claim settlement process for group health insurance.

According to the company, its newly launched AI chatbot will be used to automate and accelerate the claim settlement process for consumers. The COVID-19 pandemic caused health turmoil in the country, leading to a drastic increase in the number of claims. 

Using the new technology, Policybazaar plans to tackle the skyrocketing number of claims without compromising the quality of service its customers receive.

Read More: Sway AI launches No-Code Artificial Intelligence Platform

The chatbot is specifically designed to provide seamless support with claim intimation and settlement over WhatsApp to all employees and family members covered under a group health plan.

Customers can now effortlessly upload documents, hospitalization details, expense records, and more through the AI chatbot simply by using WhatsApp, making it highly accessible and user-friendly.

Chief Business Officer of General Insurance at Policybazaar, Tarun Mathur, said, “Conventionally, consumers have had to follow up over phone calls or emails regarding claims with insurers. This is not only cumbersome but also leads to unpredictable wait time, which is even more complicated in a remote work setup. To iron out the friction during the crucial moment of truth, we have launched an automated communications platform.” 

He further added that the chatbot is integrated with APIs from insurers and TPAs, removing the need for human intervention to accept claim data and paperwork. Once an ID is generated after documentation, users can track the status of their claim.

To date, more than 15,000 Policybazaar customers have already started enjoying the benefits of this AI-enabled WhatsApp chatbot, and the number is expected to grow exponentially over time. An added advantage of this chatbot is that it can be used to provide round-the-clock assistance to customers 365 days a year. 

Indian Engineering Student develops AI model to turn American Sign Language into English

Indian engineering student Priyanjali Gupta has developed an artificial intelligence model capable of translating American Sign Language into English in real time.

Priyanjali is a third-year computer science student at the Vellore Institute of Technology (VIT) in Tamil Nadu.

According to Priyanjali, the model was inspired by data scientist Nicholas Renotte’s video on real-time sign language detection. She built it using the TensorFlow Object Detection API, applying transfer learning from a pre-trained model named ssd_mobilenet to translate hand signs.

Read More: Sam Altman Invites Meta AI researchers to join OpenAI

Priyanjali said, “The dataset is manually made with a computer webcam and given annotations. The model, for now, is trained on single frames. To detect videos, the model has to be trained on multiple frames, for which I’m likely to use LSTM. I’m currently researching on it.” 

She further added that it is pretty challenging to build a deep learning model dedicated to sign language detection, and she believes that the open-source community, which is much more experienced than her, will find a solution soon. 

Additionally, she mentioned that it might be possible in the future to build deep learning models solely for sign languages. In her GitHub post, she said the dataset was created by running an Image Collection Python file that captures webcam images for several American Sign Language signs, including Hello, I Love You, Thank You, Please, Yes, and No.
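The collection step she describes maps to a short script. The sketch below is an illustrative reconstruction, not her actual code: the sign labels come from the article, while the directory layout, filename scheme, image count, and OpenCV capture calls are assumptions.

```python
# Illustrative sketch of webcam image collection for a small sign-language
# dataset, loosely following the approach described in the article.
# The label list comes from the article; paths, counts, and OpenCV usage
# are assumptions for illustration.
import os
import time
import uuid

LABELS = ["hello", "iloveyou", "thankyou", "please", "yes", "no"]
IMAGES_PER_LABEL = 15

def image_paths(label, count, root="workspace/images"):
    """Generate unique .jpg file paths for captured frames of one sign."""
    folder = os.path.join(root, label)
    return [os.path.join(folder, f"{label}.{uuid.uuid1()}.jpg")
            for _ in range(count)]

def collect(label, count, root="workspace/images"):
    """Capture `count` webcam frames for one sign (requires OpenCV)."""
    import cv2  # imported here so the path logic above stays dependency-free
    os.makedirs(os.path.join(root, label), exist_ok=True)
    cap = cv2.VideoCapture(0)
    for path in image_paths(label, count, root):
        time.sleep(2)            # give the signer time to pose
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(path, frame)
    cap.release()

# Example (needs a webcam): collect(LABELS[0], IMAGES_PER_LABEL)
```

The annotated images would then be fed to the TensorFlow Object Detection API for transfer learning from the pre-trained ssd_mobilenet checkpoint, as the article describes.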

Although ASL is the third most widely spoken language in the United States, little effort has been made to create translation tools for it. This newly developed AI model is a big step towards creating translation models for sign languages.

Priyanjali said in an interview, “She (mother) taunted me. But it made me contemplate what I could do with my knowledge and skill set. One fine day, amid conversations with Alexa, the idea of inclusive technology struck me. That triggered a set of plans.”

Expert.ai expands Research and Support Capabilities at Los Alamos National Laboratory

Artificial intelligence company Expert.ai announces that it plans to expand research and support capabilities at Los Alamos National Laboratory.

According to Expert.ai, the United States National Security Research Center (NSRC) will use Expert.ai’s technology as a fundamental component of Titan Technologies’ Compendia solution to make digitized documents easier to search. 

The National Security Research Center (NSRC) is one of the world’s largest libraries with millions of items such as film, audio, journals, pictures, papers, drawings, microfiche, aperture cards, and several others. 

Read More: Intel plans to Acquire Tower Semiconductor for $5.4 billion

Expert.ai’s cutting-edge technology would allow the Compendia system from Titan Technologies to organize unstructured content into a safe digital library. 

The Compendia solution integrates knowledge and learning in the new NSRC system, Titan on the Red, with artificial intelligence-powered natural language understanding (NLU) and machine learning offered by Expert.ai.

Director, National Security Research Center, Rizwan Ali, said, “One of the greatest assets at Los Alamos National Laboratory is the information that we have generated in over 75 years of nuclear weapons work. This is what distinguishes us from any other weapons laboratory in existence. The Titan on the Red system will make this valuable information discoverable.” 

Los Alamos National Laboratory Weapons Program scientists and engineers underwent a successful six-month proof-of-concept before Expert.ai made this announcement. The PoC program provided the researchers with automated research support using AI-based natural language processing and machine learning. 

According to the company, its technology performs natural language understanding and user interface functions within Compendia by using proprietary semantics-based processes to generate granular metadata. 

Additionally, the technology can place textual content into context and detect even hidden information. 

Artificial intelligence company Expert.ai, formerly known as Expert System, was founded by Marco Varone, Paolo Lombardi, and Stefano Spaggiari in 1989. The firm specializes in developing and deploying cutting-edge NLU solutions for the public sector, including law enforcement, the military, and others.

Sway AI launches No-Code Artificial Intelligence Platform

Scalable artificial intelligence solutions developer Sway AI has announced the launch of its new no-code artificial intelligence platform.

The newly announced AI platform will help boost the adoption rate of artificial intelligence in enterprises across the globe. Sway AI’s no-code platform would allow companies to quickly build and deploy artificial intelligence solutions without prior AI experience or investment. 

It is a highly efficient and easy-to-use platform, as users need no prior coding knowledge to develop and deploy AI solutions.

Read More: United Kingdom invests £23 million to boost Skills and Diversity in AI jobs

Data scientists and AI professionals can interact with stakeholders and design prototypes faster using Sway AI’s newly launched platform. 

CEO and Co-founder of Sway AI, Amir Atai, said, “By using Sway AI – enterprises can expect the best AI capabilities available without going through complex evaluation exercises and committing to inflexible technology choices. With Sway AI, an enterprise can reduce development and deployment costs by up to 10x and deployment time from months to hours.” 

He further added that businesses face the difficult challenge of deciding which AI tools, technologies, and models to use from an expanding and complex marketplace.

Sway AI’s platform comes as a turning point, simplifying this ever-changing AI environment with best-in-class AI tooling. According to Sway AI, enterprise AI is a trillion-dollar business; however, 85% of artificial intelligence projects fail.

In contrast, Sway AI developed this highly scalable AI platform to let organizations enjoy the benefits of artificial intelligence solutions without much effort. Sway AI’s no-code platform comes with collaboration features that help organizations improve business alignment, reduce risk, and drive increased ROI.

Chief Product Officer and Co-founder of Sway AI, Jitendra Arora, said, “Sway AI offers best-in-class open-source capabilities as a future-proof platform. This minimizes investment and adoption risks, especially as the growth of AI tools accelerate.” 

Sam Altman Invites Meta AI researchers to join OpenAI

Sam Altman, CEO of OpenAI, tweeted an open invitation for Meta AI researchers to join OpenAI because of Meta AI’s aversion to Artificial General Intelligence (AGI). He says that the certainty of a ‘No’ from Meta’s chief AI scientist regarding AGI explains the last five years of Meta’s AI lab.

AI research at Meta focuses on self-supervised learning for building intelligent models that can acquire new skills and perform multiple tasks without labeled data. OpenAI, on the other hand, focuses on artificial general intelligence (AGI), aiming to develop highly autonomous systems that benefit humanity and outperform humans.

An AGI is a system capable of understanding the world and of learning to carry out a wide range of tasks the way humans do. In theory, an AGI could carry out any task a human can by combining human-like flexible reasoning and thinking with computational advantages.

OpenAI has been working on developing AI systems like the GPT-3 that can do anything humans can through deep learning algorithms that use neural networks to understand what it is seeing or hearing. The goal is to create AGI that can speak, listen, write, read, and learn independently. OpenAI has been in the headlines for developing the smart GPT-3 model. However, Turing award winner and Chief AI Scientist at Meta, Yann LeCun, trashed OpenAI’s GPT-3 model and its capabilities in a Facebook post.

According to LeCun, “… trying to build intelligent machines by scaling up language models is like building high-altitude airplanes to go to the moon.” He believes that one can beat altitude records with GPT-3, but going to the moon will require an altogether different approach. Yann LeCun also responded to Sam Altman’s tweet, discrediting OpenAI’s AGI approach similarly by stating, “…But if one’s goal is to get to orbit, one must work on things like cryogenic tanks, turbopumps, etc. Not as flashy.”

Unlike OpenAI, the objective of Meta’s AI Lab is to match human intelligence rather than develop artificial human intelligence. Jerome Pesenti, vice president of AI at Meta, has publicly stated that the concept of AGI is not exciting and does not mean much. He also says that deep learning and current AI have limitations and are far from achieving human intelligence. Meta AI lab believes that designing non-reproducible systems is lost investment and does not bring much value in the field.

Today, Meta has become synonymous with self-supervision and believes that this is the right path to achieving human-level intelligence in the long run. Meta AI recently introduced data2vec, the first high-performance self-supervised algorithm that learns the same way across multiple modalities, including vision, speech, and text, without labeled data. Since Meta AI focuses on self-supervised learning rather than AGI, the model can predict its own representations without the input data being labeled as text, speech, or audio.

Ilya Sutskever, chief scientist of OpenAI, tweeted on 10th February that “it may be that today’s large neural networks are slightly conscious.” This is the first time Sutskever has claimed that consciousness in machines has already arrived, even if he was speaking facetiously. OpenAI has become one of the leading artificial intelligence research labs in the world and has consistently produced headline-grabbing research on large AI models. In his tweet, Sam Altman said that Meta’s approach to achieving human-level intelligence is not exactly the right way. However, the fact that Meta AI is passionate about designing reproducible AI systems through self-supervised learning, an approach different from OpenAI’s, does not mean its method is wrong.

Study Finds Humans Fail to Distinguish Real Image from Deepfakes: An Impending Concern

Every day we come across headlines highlighting how government bodies and privacy experts air their concerns about the misuse of deepfakes. For instance, Barack Obama never publicly called Donald Trump a complete dipshit, but a YouTube video alleges otherwise. Actor Tom Cruise never had a solo dance-off, yet last year many TikTok users saw videos in which a deepfake version of Tom Cruise performed dance-offs and magic tricks. These are benign examples of how deepfakes are taking over the world with their misleading and manipulative side. To make things worse, a recent study published in the Proceedings of the National Academy of Sciences USA sheds light on how humans find generated images to be more realistic than actual ones.

Dr. Sophie Nightingale of Lancaster University and Professor Hany Farid of the University of California, Berkeley, performed experiments in which participants were asked to tell the difference between state-of-the-art StyleGAN2 generated faces and real faces, as well as the level of trust the faces evoked. The findings showed that synthetically generated faces are highly realistic and practically indistinguishable from real faces and that they are perceived as more trustworthy than the latter.

For their experiment, the researchers recruited 315 untrained and 219 trained participants on a crowdsourcing website to study whether they could distinguish a selection of 400 fake photos from 400 photographs of real people. Each set of 400 faces contained 100 faces from each of four ethnic groups: white, Black, East Asian, and South Asian. They also recruited another 223 volunteers to judge the trustworthiness of a group of the same faces on a scale of 1 (very untrustworthy) to 7 (very trustworthy).

In the first experiment, 315 participants each classified 128 faces, drawn from the total of 800, as either real or fake. Their average accuracy was 48 percent, close to the 50 percent expected by chance. In a subsequent experiment, 219 new volunteers received training and feedback on how to identify synthetic faces. They classified 128 faces from the same collection of 800 as in the previous trial, but despite the training, accuracy rose only to 59 percent.

The researchers next set out to see if people’s perceptions of trustworthiness might aid in the detection of fake images. Therefore, the third set of participants was asked to rate the trustworthiness of the 128 faces taken from the first experiment. It was observed that the average rating for synthetic faces was 7.7 percent more trustworthy than the average rating for actual faces, a statistically significant difference. On the brighter side, apart from a modest tendency for respondents to assess Black faces as more trustworthy than South Asian faces, there was little difference between ethnic groupings.

The researchers were surprised by their observations. According to Nightingale, “We initially thought that the synthetic faces would be less trustworthy than the real faces.” Nightingale emphasizes the need for stricter ethical rules and a stronger legal framework in place as there will always be those who want to exploit deepfake images for malicious purposes, which is concerning. 

Read More: Misinformation due to Deepfakes: Are we close to finding a solution?

Deepfakes, a portmanteau of “deep learning” and “fake,” first appeared on the Internet in late 2017, powered by generative adversarial networks (GANs), a then-new deep learning technique. A GAN pits two algorithms against each other. The first, known as the generator, is fed random noise and turns it into a synthetic image. That image is then mixed into a stream of real images, say, of celebrities, fed to the second algorithm, known as the discriminator, which tries to tell real from fake. Whenever the discriminator can discern the difference, the generator is made to repeat the process. After numerous iterations, the generator starts producing utterly realistic faces of completely nonexistent persons.
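The adversarial loop described above can be sketched in a few dozen lines. The toy below is a sketch of the idea rather than anything production-grade: it swaps the image networks for a one-dimensional setup in which a linear generator tries to mimic samples from a Gaussian while a logistic discriminator tries to tell real samples from generated ones. All hyperparameters and the target distribution are illustrative choices, not from the study.

```python
# Toy 1-D GAN illustrating the alternating generator/discriminator
# updates described above, using numpy only. Both "networks" are
# deliberately tiny (linear generator, logistic discriminator);
# real deepfake GANs use deep convolutional networks.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0            # generator: G(z) = a*z + b
w, c = 0.0, 0.0            # discriminator logit: D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for step in range(2000):
    z = rng.standard_normal(batch)
    x_real = 4.0 + 1.25 * rng.standard_normal(batch)  # "real" data
    x_fake = a * z + b

    # Discriminator update: push D(real) toward 1, D(fake) toward 0.
    p_real = sigmoid(w * x_real + c)
    p_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean((p_real - 1) * x_real) + np.mean(p_fake * x_fake)
    grad_c = np.mean(p_real - 1) + np.mean(p_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator update: push D(fake) toward 1, fooling the discriminator.
    p_fake = sigmoid(w * (a * z + b) + c)
    grad_a = np.mean((p_fake - 1) * w * z)
    grad_b = np.mean((p_fake - 1) * w)
    a -= lr * grad_a
    b -= lr * grad_b

samples = a * rng.standard_normal(10_000) + b  # draw from the trained generator
```

After many such alternating updates the generator's output distribution drifts toward the real one, which is the same dynamic that, with deep networks and image data, yields faces of nonexistent people.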

Earlier neural networks were only capable of detecting an image or object in a video, but with the introduction of GANs they can now generate their own content. While impressive, early deepfake technology was not nearly up to the standard of actual video footage and could easily be detected by examining it attentively. However, as the technology advances at breakneck speed, deepfakes have slowly become indistinguishable from actual photographs to the human eye.

Not only are deepfakes a blessing to purveyors of fake news, they are also a national security threat and a medium for producing content without consent (deepfake pornography). Though misinformation existed before, the advent of deepfakes has accelerated its volume by whopping numbers. The above-cited example about Barack Obama is just the tip of the iceberg of how deepfakes can brainwash people through mere doctored clips. In less than a decade, deepfakes have gone from internet sensation to notorious breeder of identity and security threats. For example, in Gabon, a deepfake video sparked an attempted military coup in the Central African country.

Acknowledging the ramifications, several nations have started taking measures to stop misinformation by deepfakes and regulate their generation by companies. Recently, the Cyberspace Administration of China has proposed a draft that promises to regulate technologies that generate or manipulate text, images, audio, or video using deep learning. The US state of Texas has prohibited deepfake videos designed to sway political elections. Although these are just the preliminary actions against artificially generated fake media, it is high time that governing bodies also come up with plans to tackle new forms of deepfake content, especially ones where humans fail to distinguish real from fake.

United Kingdom invests £23 million to boost Skills and Diversity in AI jobs

The United Kingdom government announced its plans to invest £23 million to boost the skills of its citizens and increase the diversity of jobs in the artificial intelligence sector. 

More AI and data conversion courses will be created as a part of this new initiative, allowing young people from underrepresented groups such as women, black people, and individuals with disabilities to participate in the UK’s world-leading artificial intelligence industry. 

The government will introduce multiple scholarship programs to encourage students to enroll in courses and programs related to artificial intelligence.

Read More: Jio announces to offer Satellite-based Internet Services

Science Minister of the UK, George Freeman, said, “The UK is one of the world’s most advanced AI economies, with AI already playing a key role in everything from climate science and medical diagnostics to factory robotics and smart cities.” 

He further added that it is critical that they continue to provide our workforce with the necessary skills in this critical technology, as well as make the industry accessible to brilliant people from all walks of life. 

The United Kingdom has a third of Europe’s total AI companies and ranks third in the world for private venture capital investment in artificial intelligence firms. This new government investment would considerably help in further increasing these numbers.

By match-funding AI scholarships for conversion courses, the government has been encouraging employers to create a future pipeline of AI expertise. Experts believe the next generation of AI researchers must be representative of the world around us.

Chris Philip, DCMS Minister for Tech and Digital Economy, said, “The UK is already a world leader in AI. Today we’re investing millions to ensure people from all parts of society can access the opportunities and benefits AI is creating in this country.” 

He also mentioned that they plan to double the number of AI scholarships previously available to build a diverse and inclusive workforce in the country. 

Intel plans to Acquire Tower Semiconductor for $5.4 billion

World-leading microchip manufacturer Intel has announced its plan to acquire analog integrated circuit maker Tower Semiconductor in a whopping $5.4 billion deal.

According to a definitive agreement signed by the companies, Intel will buy Tower Semiconductor for $53 per share in cash. 

This development will considerably help Intel accelerate its manufacturing capacity and expand its product portfolio to meet market demand amid a global chip shortage.

Read More: Intel’s Mobileye to launch Self-Driving Taxis in US by 2024

Intel will use Tower Semiconductor’s expertise to further strengthen its newly established Intel Foundry Services subsidiary, which aims to provide chip manufacturing facilities to third-party organizations.

Intel also announced a $20 billion investment in a microchip manufacturing facility in Ohio. The investment made by Intel would strengthen its motive of manufacturing essential electronics products within the United States. 

CEO of Intel, Pat Gelsinger, said, “Tower’s specialty technology portfolio, geographic reach, deep customer relationships, and services-first operations will help scale Intel’s foundry services and advance our goal of becoming a major provider of foundry capacity globally.” 

He further added that, as an outcome of the deal, Intel will be able to offer a compelling range of cutting-edge nodes as well as specialized technologies on established nodes. Additionally, the acquisition of Tower Semiconductor will open up new opportunities for current and potential customers in an era of unprecedented semiconductor demand.

Israel-based semiconductor company Tower Semiconductor was founded in 1993. The company specializes in multiple technologies, including radiofrequency (RF), power, silicon germanium (SiGe), electronic design automation, industrial sensors, and many more. 

CEO of Tower Semiconductor, Russell Ellwanger, said, “With a rich history, Tower has built an incredible range of specialty analog foundry solutions based upon deep customer partnerships, with worldwide manufacturing capabilities. I could not be prouder of the company and of our talented and dedicated employees.” 

He also mentioned that, through a broad suite of technology solutions and nodes and a considerably enlarged global manufacturing base, they would drive new and vital growth prospects and deliver even more value to their customers in collaboration with Intel.
