
Two of Google’s Ethical AI Members Leave to Join Timnit Gebru’s Institute


Bloomberg reported that two members of Google’s Ethical AI team recently left to join Timnit Gebru’s new nonprofit research institute. The departures are unwelcome news for the technology giant, as both researchers belonged to a unit studying an area that is vital to Google’s future plans.

The two members who left Google are Alex Hanna, a research scientist, and Dylan Baker, a software engineer. Timnit Gebru’s new nonprofit research institute, DAIR (Distributed AI Research), was founded in December to examine AI from diverse points of view and to prevent harm from artificial intelligence.

Hanna, who will soon serve as Director of Research at DAIR, said she had grown dejected by the toxic work culture at Google and also pointed out the underrepresentation of Black women at the company.

Read More: OpenAI Introduces three new embedding model families in OpenAI API

A Google spokesperson said, “We appreciate Alex and Dylan’s contributions — our research on responsible AI is incredibly important, and we’re continuing to expand our work in this area in keeping with our AI Principles.” 

The spokesperson added that they are also dedicated to building an environment where people with varied viewpoints, backgrounds, and experiences can do their best job and support one another. 

Google’s Ethical AI team was already embroiled in controversy in 2020, when the team’s co-lead spoke out about the company’s work culture toward women and Black employees.

The severity of the matter became clear when Sundar Pichai, CEO of Google’s parent company Alphabet, stepped in and launched an investigation. Google later fired two employees in connection with the allegations.

“A high-level executive remarked that there had been such low numbers of Black women in the Google Research organization that they couldn’t even present a point estimate of these employees’ dissatisfaction with the organization, lest management risk deanonymizing the results,” said Hanna.


OpenAI Introduces three new embedding model families in OpenAI API


OpenAI recently announced three new families of embedding models in its API – text similarity, text search, and code search – each geared to excel at different tasks. The models take text or code as input and return an embedding vector. They make natural language and code tasks such as clustering, semantic search, and classification much easier to build.

Embeddings are numerical representations of concepts, expressed as sequences of numbers. They are useful for working with natural language and code because they can be easily consumed and compared by machine learning models and algorithms such as clustering and search.

The new endpoint maps text and code to a vector representation – “embedding” them in a high-dimensional space using neural network models. These models are descendants of GPT-3, and each dimension of an embedding captures some aspect of the input.

Read More: T-AIM invites Applications from AI Startups for Revv Up Acceleration Program

Text similarity models provide embeddings that represent the semantic similarity of texts and also help in tasks such as clustering, data visualization, and classification. 

Text search models provide embeddings that enable large-scale search tasks, such as finding the relevant document in a collection given a short text query.

Code search models provide code and text embeddings aimed at finding the relevant code block for a natural language query within a collection of code blocks.
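
To make this concrete, here is a minimal Python sketch of how such embeddings might be used: it requests vectors from the embeddings endpoint and compares them with cosine similarity. It assumes the pre-1.0 openai Python client and the text-similarity-babbage-001 engine name from the launch announcement, so treat the exact call and model name as assumptions rather than a definitive integration.

```python
# Minimal sketch (not an official OpenAI example): fetch embeddings and
# compare them with cosine similarity. Assumes the pre-1.0 `openai` client
# and that the OPENAI_API_KEY environment variable is set.
import numpy as np
import openai


def get_embedding(text: str, engine: str = "text-similarity-babbage-001") -> np.ndarray:
    """Request an embedding vector for a piece of text (or code)."""
    response = openai.Embedding.create(input=[text], engine=engine)
    return np.array(response["data"][0]["embedding"])


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Higher values mean the two inputs are semantically closer."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


if __name__ == "__main__":
    e1 = get_embedding("The food was delicious.")
    e2 = get_embedding("The meal tasted great.")
    e3 = get_embedding("The server crashed overnight.")
    print("related sentences:  ", cosine_similarity(e1, e2))
    print("unrelated sentences:", cosine_similarity(e1, e3))
```

The same pattern applies to the search models: embed the query and each document (or code block), then rank candidates by similarity.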


AI system Accurately Predicts How two Proteins will Attach


Researchers at the Massachusetts Institute of Technology (MIT) have developed a unique artificial intelligence and machine learning model named Equidock that can rapidly and accurately predict the complex that will form when two proteins bind together. 

The model is timely because scientists all over the world are currently trying to understand and battle the COVID-19 pandemic, and developing a successful synthetic antibody requires understanding how protein attachments take place.

During Equidock’s development, the researchers faced the significant challenge of scarce training data. The newly developed AI model will nonetheless considerably help researchers in the process of identifying protein attachments.

Read More: Amazon Rolls out Alexa Skill A/B testing tool to Boost Voice App Engagement

According to the researchers, the AI model is between 80 and 500 times faster than traditional software used for this purpose. MIT postdoc Octavian Eugen Ganea said, “Deep learning is very good at capturing interactions between different proteins that are otherwise difficult for chemists or biologists to write experimentally. Some of these interactions are very complicated, and people haven’t found good ways to express them.”

Ganea added that this AI-powered deep learning model could learn these types of interactions from data. Equidock has a high accuracy rate, making it very reliable. Apart from analyzing antibodies, the model can also be very beneficial for other biological processes involving protein interactions, like DNA replication and repair, which can help boost the speed of medication development. 

“If we can understand from the proteins which individual parts are likely to be these binding pocket points, then that will capture all the information we need to place the two proteins together,” said Ganea. If scientists can discover these two sets of points, they just have to figure out how to rotate and translate the proteins so that one set corresponds to the other.
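
Ganea’s description boils down to a classic rigid-body alignment problem: given two sets of corresponding binding-pocket points, find the rotation and translation that superimposes one set on the other. The sketch below is not Equidock’s code; it only illustrates that final rotate-and-translate step with the standard Kabsch algorithm in NumPy, using made-up points.

```python
# Toy illustration (not Equidock's implementation): given two sets of
# corresponding "binding pocket" points, recover the rotation and translation
# that superimposes one protein's points onto the other's (Kabsch algorithm).
import numpy as np


def kabsch_align(P: np.ndarray, Q: np.ndarray):
    """Return rotation R and translation t such that R @ P_i + t ≈ Q_i."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)          # centroids of each point set
    H = (P - cP).T @ (Q - cQ)                        # 3x3 covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P = rng.normal(size=(10, 3))                     # pocket points on protein A
    theta = np.pi / 5
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
    Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])    # same points in protein B's frame
    R, t = kabsch_align(P, Q)
    print("max alignment error:", np.abs(P @ R.T + t - Q).max())
```

The model itself learns where those points are from data, as Ganea notes above; the geometry step is the straightforward part once the corresponding points are known.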


Amazon Rolls out Alexa Skill A/B testing tool to Boost Voice App Engagement


Amazon has released a new testing tool for Alexa developers who want to increase the number of people using their voice apps. The Alexa Skill A/B Testing tool allows developers to set up experiments to learn how to get customers to spend more time and money with their app, and to increase the number of distinct sessions in which users return to it.

In general, A/B testing is a type of experiment in which users are randomly shown two or more variants, and statistical analysis is performed to see which one works better for a certain conversion objective. A/B testing is used extensively across sectors, including retail, marketing, and SaaS, and can be set up on a number of channels and mediums, including Facebook advertisements, search results, email newsletter flows, email subject lines, marketing campaigns, sales scripts, and so on. You design variants, choose a metric, and see which variant has the highest conversion rate. Commonly, two alternatives are presented to an audience, with each user receiving one of them, and the audience’s reaction helps determine which one should be used universally.
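
For illustration only, the sketch below shows the kind of statistical comparison described above, using a two-proportion z-test to judge whether one variant converts better than the other. The counts are hypothetical and the choice of test is an assumption; it is not Amazon’s tooling.

```python
# Illustrative only (not Amazon's tooling): compare the conversion rates of
# two variants with a two-proportion z-test, the kind of analysis an A/B
# test relies on to pick a winner.
from math import erf, sqrt


def ab_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return conversion rates, the z statistic, and a two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))    # two-sided p-value
    return p_a, p_b, z, p_value


if __name__ == "__main__":
    # Hypothetical counts: variant B (e.g. a longer prompt) vs. variant A.
    p_a, p_b, z, p = ab_test(conv_a=200, n_a=5000, conv_b=260, n_b=5000)
    print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.3f}")
```

A small p-value suggests the difference in conversion rates is unlikely to be due to chance alone.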

Voice apps employ this idea to evaluate the best key phrases to react to and the best responses to encourage repeat usage by decreasing friction in navigating the app. Setting up these tests may be time-consuming and inconvenient, which is why Alexa’s new A/B testing aims to make it easier. The setup takes only a few hours and can be evaluated over several weeks.

Amazon highlighted UK-based voice design studio Vocala as an example of a customer that used A/B testing to discover that a certain prompt was 15% more effective at driving paid conversions. The test took the company less than two hours to set up.

“We analyzed the results of our experiment through a panel. After a few weeks, we could clearly see that the longer prompt was 15% more effective at generating paid conversions,” said James Holland, lead voice developer at Vocala.

According to a post on the Alexa developer blog, developers can use Alexa A/B Testing to run experiments that measure customer engagement, retention, churn, and monetization across a variety of metrics. These include customer-perceived friction rate, in-skill purchasing (ISP) offers, ISP sales, ISP accepts, ISP offer accept rate, skill next-day retention, skill dialogs, and skill active days. The new features follow up on Amazon’s announcement of Alexa Skill A/B Testing last summer, which let developers try out new or updated Alexa skills, alongside the Alexa Skill Design Guide, which was unveiled at the same event.

Read More: Key Announcements From Amazon re:Invent 2021

Last year, during its third annual Alexa Live event, Amazon introduced Alexa Skill Components to help developers build skills more quickly by inserting basic skill code into existing speech models and code libraries. It also announced that the Alexa Skill Design Guide, which codifies lessons learned from Amazon’s developers and the larger skill-building community, had been improved.


To get started with the Alexa A/B testing, go to the Alexa Developer Console and choose the skill for which you wish to perform an experiment, then click “Certification.” Look for the “A/B Testing” section and select “Create” from the drop-down menu. In the Experiment Analytics area, you can see experiment-related statistics for both the live skill version (control) and the certified skill version (treatment).


Max Hospital Saket launched AI-powered Cancer Treatment


Max Hospital Saket has launched its new AI-powered cancer treatment technology, Radixact X9 Tomotherapy. It is a highly capable, artificial intelligence-enabled treatment system integrated with a second-generation respiratory motion synchronization and management system.

The Department of Radiation Oncology recently launched the technology at Max Institute of Cancer Care (MICC), Saket. 

The Radixact X9 uses artificial intelligence to track the tumor and deliver treatment in real time, ensuring that the tumor is not missed because of chest or abdominal breathing movement during radiation and that targeting remains highly accurate.

Read More: Gupshup acquires Cloud Communication Startup Knowlarity

Dr. Charu Garg, Director of Radiation Oncology, MICC at Max Hospital Saket, said, “New radiation technologies have made it possible to precisely target and destroy the cancer cells while preserving normal cells of the body. With this new technique, the Department of Radiation Oncology at Max Institute of Cancer Care (MICC) Saket has added another milestone in its journey towards providing best-in-class healthcare services in the field of oncology.” 

She further added that the launch of the new AI-based technology demonstrates the hospital’s commitment to cancer treatment and the welfare of cancer patients. The newly introduced technology eliminates or shrinks tumors by combining the precision of intensity-modulated radiation therapy with an image-guided scan.

It targets the tumor slice by slice (helical IMRT), allowing better management of the radiation dose received by surrounding healthy cells. Experts say TomoTherapy is a next-generation radiation therapy system that can deliver extremely precise radiation even in the most severe cancer cases.

Dr. Dodul Mondal from Max Hospital Saket said, “This technology not only allows us to perform high definition of IGRT but also enables monitoring of the changes in a tumor on a day-to-day basis.” 

He also mentioned that this form of treatment turns out to be very beneficial in several cases where patients have earlier received radiation but require re-radiation. 


Gupshup acquires Cloud Communication Startup Knowlarity


Conversational messaging platform Gupshup has acquired artificial intelligence-powered cloud communication startup Knowlarity Communications. Company officials did not disclose the valuation of the deal.

DC Advisory served as the financial adviser to Knowlarity in this acquisition deal, which is expected to close by February 2022. With this acquisition, Gupshup plans to use Knowlarity’s voice-based AI solutions for call centers and customer care to further expand the capabilities of its chatbot and artificial intelligence-powered messaging service. 

Last year, Gupshup achieved unicorn status after its latest funding round led by Tiger Global. Conversational AI platform Gupshup was founded by Beerud Sheth, Dr. Milind R Agarwal, and Rakesh Mathur in 2004. 

Read More: T-AIM invites Applications from AI Startups for Revv Up Acceleration Program

Gupshup’s platform processes more than 4 billion messages per month and has delivered over 150 billion messages in total. The company’s massive customer base includes many industry-leading companies, such as Truecaller, DishTV, HDFC Bank, OYO, Ola, Zomato, HSBC, ICICI Bank, and many more.

CEO and Co-founder of Gupshup, Beerud Sheth, said, “As business-to-consumer engagement becomes conversational, Gupshup is busy enabling more ways for businesses to deliver rich experiences. With the addition of Knowlarity’s products, businesses will now be able to build seamless conversational experiences across both messaging and voice channels.” 

Gurgaon-based cloud telephony firm Knowlarity was founded by Ambarish Gupta and Pallav Pandey in 2009. The startup specializes in providing automated communication by enabling operators to work online via Cloud. 

Knowlarity’s services include click-to-call, number masking, multi-level IVR system, AI-powered solutions like Speech Analytics & VoiceBot, and several more. Before getting acquired by Gupshup, the company had raised more than $42 million from investors like Sequoia Capital, Delta Partners Capital Limited, Emergic Ventures, and others. 

CEO of Knowlarity, Yatish Mehrotra, said, “Our customer-centric, innovation-focused cultures are perfectly aligned, and we see significant synergies and new products emerging from the combination of two great teams.” 

He further mentioned that this agreement would provide their current and prospective customers with enhanced experiences, product enhancements, and significant geographic expansion opportunities.


Open-source database vendor MariaDB set to go public via SPAC


MariaDB, a popular open-source database provider, is the latest company to capitalize on an IPO trend. This morning, MariaDB announced that it intends to become a public company through a merger with Angel Pond Holdings Corporation after closing its $104 million Series D venture round. 

MariaDB’s deal with Angel Pond Holdings Corporation was disclosed in an S-1 filing with the US Securities and Exchange Commission, which describes the Cayman Islands-based firm as a special purpose acquisition company (SPAC).

Major companies like Red Hat, Samsung, and Google have used MariaDB to store, manage, and manipulate data across their applications. Its open-source nature is a significant selling point, giving companies greater control over, and visibility into, their data.

Read more: Atlassian acquires Percept.AI, a U.S. based AI chatbot vendor 

MariaDB will hit the public market on the New York Stock Exchange (NYSE) through the SPAC, which was set up by former Goldman Sachs partner Theodore Wang and Alibaba co-founder Shihuang Simon Xie.

A SPAC is a shell company that raises money, goes public on a stock exchange, and then acquires a private company, turning it into a public one while bypassing the traditional IPO process.

The transaction, which is expected to close in the second half of 2022, will give MariaDB an enterprise valuation of $672 million. After closing, the combined entity will be called MariaDB plc and will be led by current CEO Michael Howard.


Qpisemi launches AI 2.0 Processors based on Integrated Photonics and Newer Software Paradigm


Bengaluru-based semiconductor manufacturing company Qpisemi has announced the launch of its new AI 2.0 processors, which are based on the company’s innovations in integrated photonics and a newer software paradigm.

The processor can be used for a variety of purposes, such as bioinformatics, AI modeling, drug discovery, the metaverse, and manufacturing. According to the company, its newly announced AI 2.0 processors rely on optical processing to carry out neural-network calculations with photons.

The processor is unique as other traditional semiconductors use electrons instead of photons to perform the same operation. Qpisemi claims that AI 2.0 will significantly impact manufacturing, advanced metaverse applications, and supply chain management. 

Read More: Introducing Voice NFTs: World’s first collection gets sold out in 10 minutes

“AI 2.0 is more advanced technology than current DL/ML technologies available. AI 2.0 would enable efficient actionable information generation in real-time that would match close to human intelligence at certain tasks at the edge. AI 2.0 would model emergent behavior accurately, which is not possible currently with DL and ML technologies.” said Dr. Nagendra Nagaraja. 

He further added that this would allow for more precise modeling of megastructures such as pandemic transmission, economies, transportation networks, advanced metaverse applications, and advanced manufacturing. 

Qpisemi’s new AI 2.0 processor, codenamed AI20PXX, is claimed to be a hundred times faster than the traditional GPUs used in datacenters. The processor is intended for automotive and metaverse applications, offering teraflops of operations to enable fully autonomous vehicles on an AI 2.0 technological foundation.

Director and Co-founder of Qpisemi, Pinakin Padalia, said, “Along with discrete cryo electronics, which is getting taped out in 2022, we hope to have a working chip for AI20P and also Quantum secure CPU ‘Prakhar’ at test chip and simulations levels respectively this year.”

He also mentioned that this year, they plan to construct high-quality R&D and commercial development teams at Qpisemi in preparation for their product launch in 2023-25.


T-AIM invites Applications from AI Startups for Revv Up Acceleration Program


The Telangana AI Mission (T-AIM) has begun accepting applications from early-stage AI startups as a part of the second cohort of the Revv Up acceleration program. 

The artificial intelligence accelerator program named Revv Up was jointly launched by the Telangana government and NASSCOM last year as a part of the state’s ‘2020 Year of AI’ initiative. 

With this program, the government aims to make Telangana a global hub for artificial intelligence. The initiative is aimed at AI startups from any industry that are based in Telangana or want to open a facility there.

Read More: Indian Government announces to Launch Digital Rupee From RBI

This Accelerator program allows startups and the government to collaborate on developing solutions to complicated business problems. Selected startups are also given assistance from the government and industry in order to expand their operations. 

Principal Secretary, Government of Telangana, Jayesh Ranjan, said, “The Revv Up accelerator is now synonymous with innovative and impactful solutions to solve real-world problems. Through T-AIM, the Govt. of Telangana is committed to providing a conducive ecosystem for AI startups.”

He further added that Telangana welcomes startups from various parts of the country to apply for this AI accelerator program. Last year 42 startups were selected under the Revv Up program to let them work alongside the government. Selected startups also received help from the government and industry to set up their business on a larger scale. 

The Revv Up program has allowed more than 20 startups to expand into the American market with the help of T-AIM’s partner organizations. Now that Revv Up is calling for applications for its second phase, interested startups can submit their form from the official website of Revv Up. 


Atlassian acquires Percept.AI, a U.S. based AI chatbot vendor 


Atlassian Corporation Plc, the Australian software giant behind Confluence, Jira Software, Bitbucket, and Trello, is a leading provider of productivity and team collaboration software. The company has acquired Percept.AI, a US-based artificial intelligence company specializing in virtual agent technology powered by natural language processing.

Percept.AI’s platform helps companies and teams automate their tier-1 support interactions via multiple channels like email, chat, web, and portal. Percept.AI’s acquisition will help Atlassian to better understand the context behind a support query. The conversational AI engine analyses and understands the sentiment, context, intent, and profile information to personalize interactions.

Ahead of the acquisition, Percept.AI had raised seed funding of an undisclosed sum from investors including Builders VC, Cherubic Ventures, Tribe Capital, and Hike Ventures.

Read more: Diem Shuts Down and Confirms Asset Sale to Silvergate

Edwin Wong, Head of Product Management for Jira Service Management, said that integrating Percept.AI with Jira Service Management will help the Atlassian support team deliver services faster and at scale. It’ll also create a unified platform for seamless chat-based conversations between customers and agents. 

Over the past several years, Atlassian has been investing in solutions that help build predictive, smart experiences into its products. It previously acquired Halp, Think Tilt, and Mindville to improve the functionality of its Jira Service Management software. 

The acquisition of Percept.AI, along with previous investments in AI and machine learning-based software, will help Atlassian enhance the efficiency of the Jira management system and provide an excellent customer support experience to its users. 
