
Cognigy launches Conversational AI Analytics Suite Cognigy Insights

Cognigy, a company that develops customer support automation solutions, has launched its new conversational AI analytics suite, Cognigy Insights. 

When combined with the company’s existing conversational artificial intelligence solution CognigyAI, the new platform will enable clients to better understand their conversational data and, in turn, deliver a better customer support experience. 

Analyzing enormous amounts of data and generating accurate insights is challenging for conversational AI platforms, but Cognigy’s new platform can create actionable insights that help optimize workflows and enable better responses to customer interactions. 

Read More: Washington State University engineer builds world’s lightest crawling robot

Cognigy Insights uses machine learning to provide a feature called Step Explorer, which lets users scrutinize conversations step by step and better understand the intent behind each conversation. 

Senior technology evangelist at Cognigy, Sebastian Glock, said, “Cognigy Insights helps customers make sense of their conversational data and give them the means to act upon insights, all within one best-in-class suite.” 

He further mentioned that improving conversational artificial intelligence analytics is the only way forward to provide enhanced customer communication services. The platform has a user-friendly interface for generating reports and monitoring complex conversational data. 

The Germany-based conversational AI firm Cognigy was founded by Benjamin Mayr, Philipp Heltewig, and Sascha Poggeman in 2016. The company has gained much popularity in the global market for developing its low-code conversational artificial intelligence platform that uses chat and voice bots to automate customer and employee communication. 

Cognigy has raised a total funding of $54 million in two funding rounds from investors like Insight Partners, Global Brain Corporation, DN Capital, and Inventures. The company’s platform is used by many global leaders, including Lufthansa, Daimler, Bosch, Salzburg AG, and many others.

Researchers create new Optical Technology that Answers High Energy Demands of AI

Credits: Alain Herzog 2021 EPFL

A team of engineers at the Optics Laboratory of EPFL (Swiss Federal Institute of Technology Lausanne), in collaboration with the Laboratory of Applied Photonic Devices, has created a machine learning approach called SOLO, for Scalable Optical Learning Operator, that can detect and categorize information structured as two-dimensional pictures. The research was recently published in the journal Nature Computational Science.

Lead researchers Christophe Moser and Demetri Psaltis © Alain Herzog 2021 EPFL

Computers have leveraged deep learning to identify objects in images, transcribe speech, translate between languages, diagnose medical conditions, drive cars, and more. Deep learning is a form of machine learning inspired by the structure of the human brain. Through a multi-layered arrangement of algorithms known as neural networks, deep learning seeks to reach the same conclusions a human would. Training deep neural networks, however, requires datasets that are huge in both quantity and quality, and preparing that data for AI training is a scientific feat in itself. All of this comes at a significant cost in computing resources and energy consumption, so it is no surprise that engineers and computer scientists are working feverishly to find better techniques for training and running deep neural networks.

Current data center networks are witnessing an exponential rise in network traffic as a result of this massive data expansion. With the fast growth of server-to-server communication, meeting the expanding data storage and processing requirements with present technology has become a challenge. 

To ease this crisis, optical networks, which transfer data using light-encoded signals across many kinds of networks, are emerging as a viable alternative.

A neural network can be thought of as a collection of interconnected nodes, where each node acts like a neuron in a human brain. In a neural network, each neuron holds a numeric value, and the connection between two neurons carries another numeric value called a weight. The weights, and hence the strength of each connection, change as the network learns during training. 

To accomplish a task, say image classification, scientists seek a set of weights that will allow the neural network to do so. Each task and dataset has a unique set of weights. The values of these weights cannot be predicted in advance, so the neural network must learn them. Each neuron’s state is determined by the weighted sum of its inputs, which is then passed through a nonlinear function known as an activation function. This neuron’s output subsequently serves as an input to a variety of other neurons.
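
To make the weighted sum and activation concrete, here is a minimal sketch in Python with NumPy. The input values, weights, bias, and the choice of ReLU as the nonlinear activation are arbitrary assumptions for illustration; they are not taken from the study.

```python
# A minimal sketch of the neuron model described above (illustrative only,
# not the SOLO system): each neuron computes a weighted sum of its inputs
# and passes it through a nonlinear activation function.
import numpy as np

def relu(x):
    # A common choice of nonlinear activation function.
    return np.maximum(0.0, x)

# Hypothetical inputs from three upstream neurons and the learned weights
# connecting them to this neuron (values chosen arbitrarily).
inputs = np.array([0.5, -1.2, 3.0])
weights = np.array([0.8, 0.1, -0.4])
bias = 0.2

weighted_sum = np.dot(weights, inputs) + bias   # the neuron's pre-activation state
output = relu(weighted_sum)                     # output fed to downstream neurons
print(weighted_sum, output)
```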

In June 2019, researchers at the University of Massachusetts Amherst published a startling paper on the energy consumed by neural networks. The study estimated that training and architecture-searching a BERT-style (Bidirectional Encoder Representations from Transformers) network results in the emission of around 626,155 pounds, roughly 284 metric tons, of carbon dioxide, about five times the lifetime emissions of an average car. 

Optical computing has long promised better performance at far lower energy than traditional electronic computers. However, scientists have struggled to create the light-based components required to outperform conventional hardware, putting the idea of a viable optical computer in jeopardy. Amid these difficulties, the SOLO team’s research has promised new insights.

The objective of the study was to use different processing methods, particularly photonics, to minimize the energy requirement. As a result, the team considered utilizing optical fibers to do some of the calculations, which are carried out automatically by light pulses propagating inside the fiber. “This simplifies the computer’s architecture, retaining only a single neuronal layer, making it a hybrid system,” says Ugur Tegin, the lead co-author of the study.

Read More: Optical Chips Paves The Way For Faster Machine Learning

A feasible optical computer (including a neural one) must include linear and nonlinear parts and input-output interfaces while maintaining the speed and power efficiency of optical interconnections, according to the paper.

As a result, the engineers devised SOLO, which combines the linear and nonlinear components of an optical system within a shared volume confined in a multimode fiber (MMF).

Illustration of the neuromorphic computing architectures and the experimental set-up (Source: Nature)

The data to be processed is entered into the left-hand two-dimensional spatial light modulator (SLM). The light from a pulsed source is nonlinearly transformed as it propagates through the fiber at sufficiently high peak illumination power, and the output of the computation is projected onto a two-dimensional camera. The MMF’s input-output operation is fixed and extremely nonlinear due to the properties of the fiber and laser source. Using a huge dataset of input-output pairs, the team combines this fixed nonlinear MMF mapping in the optical domain with a single-layer digital neural network (decision layer) that interprets the output captured on the camera.
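
As a rough software analogue of this division of labor, the sketch below (Python/NumPy) pairs a fixed, untrained nonlinear transform, standing in for the fiber’s fixed nonlinear mapping, with a single trainable linear decision layer fit on its outputs. The toy data, the random projection, the tanh nonlinearity, and the least-squares fit are all assumptions for illustration, not the authors’ actual model or data.

```python
# A rough software analogue of the hybrid architecture described above
# (illustrative only): a fixed, untrained nonlinear transform stands in for
# the multimode fiber, and a single trainable linear "decision layer" is fit
# on its outputs. The real system's mapping is physical and far more complex.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 200 samples of 64-dimensional "images", two classes.
X = rng.normal(size=(200, 64))
y = (X[:, :4].sum(axis=1) > 0).astype(float)

# Fixed nonlinear mapping (analogous to light propagating through the fiber):
# its parameters are random and never trained.
P = rng.normal(size=(64, 256))
def fixed_nonlinear_map(x):
    return np.tanh(x @ P)

H = fixed_nonlinear_map(X)

# Single-layer digital decision layer, fit by regularized least squares.
w = np.linalg.solve(H.T @ H + 1e-3 * np.eye(H.shape[1]), H.T @ y)

preds = (H @ w > 0.5).astype(float)
print("training accuracy:", (preds == y).mean())
```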

The engineers tested this novel technique on a collection of X-ray pictures of lungs afflicted by various illnesses, including COVID-19, using SOLO to identify the coronavirus-affected organs in the data. For comparison, they ran the same data through a standard neural network system with 25 layers of neurons.

While the X-rays were classified equally effectively by both methods, SOLO used 100 times less energy. This was the first time the team showcased quantifiable energy savings. They believe that SOLO’s improved energy efficiency may potentially offer up new possibilities in other fields of ultra-fast optical computing.

Washington State University engineer builds world’s lightest crawling robot

Nestor Perez Arancibia, an engineer from Washington State University, has set a world record by building a tiny, beetle-inspired crawling robot named Robeetle. Unlike mainstream robots, the uniquely designed Robeetle uses airflow and pneumatics for locomotion.

The robot is extremely lightweight, weighing about 88 milligrams. Perez integrated it with a catalytic combustion system that enables it to perform various tasks, such as climbing slopes and moving across multiple surfaces. 

Guinness World Records recently certified Robeetle as the lightest crawling robot ever developed. Since the robot relies on methanol combustion for its movements, it can operate in very low-temperature conditions, as methanol has a low freezing point. 

Read More: Zebra Technologies acquires US-based artificial intelligence company Antuit.ai

The robot can load 95 milligrams of fuel that can power it for two hours. Perez said, “This is the main contribution because fuel like methanol or hydrocarbons has high energy density. The main problem in micro-robotics is that the energy density of batteries is very low.”

He further added that this technology enables the robot to carry far more energy in a single load than traditional batteries would allow. According to him, the robot uses an alloy “muscle” that changes shape with the temperature fluctuations caused by methanol combustion, producing the crawling motion. 

Speaking about the applications of Robeetle, he said the tiny robot could aid operations such as surgeries and search missions, and could also be used to promote artificial pollination in plants. 

Before joining the School of Mechanical and Materials Engineering at Washington State University, Nestor Perez completed his Ph.D. at the University of California and worked with the Wyss Institute for Biologically Inspired Engineering at Harvard University.

Volkswagen Develops New Applications of Automotive Quantum Computing

For almost five years now, German automotive company Volkswagen has been finding new ways to apply quantum computing to the automotive industry. VW has had a dedicated quantum computing research team since 2016, and has since teamed up with Canadian quantum computing firm D-Wave and Google’s quantum computing unit to advance its research. 

In fact, Volkswagen was the first automaker to show a practical application of quantum computing for route and traffic management. In 2019, VW, under the leadership of data scientist Florian Neukart, demonstrated the first use of quantum computing in tackling traffic congestion using a D-Wave 2000Q™ quantum computer. The team collected data from 418 taxi cabs in Beijing and developed a mobile app for predicting the best route to any given destination. They tested the model with participants of the 2019 WebSummit in Lisbon to minimize wait times for passengers and bus travel times while avoiding traffic jams. 

“We see great potential for quantum computing across our entire business,” said Florian Neukart, Director, Volkswagen Group. “Many challenges in the automotive industry can benefit from the inherent power quantum computing can generate.” He also added, “We’re not interested in doing research for research’s sake. We want to bring this technology into the real world.” 

Read more: Google, Microsoft, and others to invest billions to improve Cybersecurity

The Volkswagen Quantum Computing team is now applying the technology to vehicle pricing to strike the right balance for customer demand. Further, the VW team sees the potential to apply the tech for developing new materials, figuring out the location of electric vehicle charging stations for maximizing their usefulness, and much more. 

Now, they are using the algorithms to maximize efficiency in factory paint shops. Traditionally, a small number of vehicles were painted with a particular primer before the line was stopped and switched to another primer. Since every vehicle requires one of two primers depending on its final color, this increased costs and slowed down production. The new algorithm, however, allows factories to run significantly more vehicles in a row before switching. The new system is expected to go online at Volkswagen factories in Germany soon, and eventually worldwide. 
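
To illustrate why primer grouping matters, here is a toy Python sketch (not Volkswagen’s algorithm, and no quantum hardware involved): it simply counts primer switches for an arbitrary, made-up arrival order and for a regrouped order. In a real plant, reordering is heavily constrained by the production sequence, which is what makes the optimization hard.

```python
# A toy illustration of the scheduling idea: each vehicle needs primer "A" or
# "B", and every change of primer along the line is costly, so reordering the
# queue to group vehicles with the same primer reduces the number of switches.
def count_switches(sequence):
    return sum(1 for prev, cur in zip(sequence, sequence[1:]) if prev != cur)

# Hypothetical incoming order of required primers.
incoming = ["A", "B", "A", "A", "B", "B", "A", "B", "A", "B"]

# Naive baseline: paint in arrival order.
print("switches in arrival order:", count_switches(incoming))

# Simple grouping heuristic: run all "A" vehicles, then all "B" vehicles.
grouped = sorted(incoming)
print("switches after grouping:  ", count_switches(grouped))
```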

Google, Microsoft, and others to invest billions to improve Cybersecurity

In a meeting at the White House with United States President Joe Biden, many technology giants, including Google, Microsoft, and Facebook, expressed their willingness to invest billions in strengthening the country’s cybersecurity efforts. 

Microsoft and Facebook said they would invest $20 billion and $10 billion, respectively, over the next five years to contribute towards this new initiative. Google has also announced that it will invest $10 billion and will work to secure the software supply chain. 

Representatives from IBM, Amazon, and Apple were also present in the meeting that was held last week at the White House. The investment will be used to train new skilled specialists who will strengthen the country’s cybersecurity ecosystem. 

Read More: Inside World’s largest Chip: Introducing Cerebras CS-2

This new development comes after the recent cyber threats faced by the United States government, including attacks on critical infrastructure and high-value fraudulent money transactions. 

According to Joe Biden, the federal government alone cannot cater to the needs of cybersecurity using existing infrastructure. “I’ve invited you all here today because you have the power, the capacity, and the responsibility, I believe, to raise the bar on cybersecurity,” said Biden. 

Recently, IBM announced that it would take responsibility for training more than 150,000 individuals in cybersecurity over the next three years. The meeting was arranged to analyze the root causes of such ransomware cyberattacks and to come up with solutions for tackling them effectively. 

Kent Walker, Google’s Senior Vice President of Global Affairs, said the meeting “comes at a timely moment, as widespread cyber attacks continue to exploit vulnerabilities targeting people, organizations, and governments around the world.” 

However, experts believe that this initiative taken by most industry-leading information technology companies will not entirely stop future cyberattacks but will play a vital role in reducing vulnerabilities.

PyTorch Announces PyTorch Developer Day 2021

PyTorch announced its 2nd virtual PyTorch Developer Day (#PTD2) that’ll bring together a community of Machine Learning researchers, academics, and developers to discuss and learn more about PyTorch software releases and developments, use cases in academia and industry, and networking opportunities.

Register for #PTD2 for technical discussions, project talks, and poster exhibitions, with the opportunity to learn more about recent PyTorch projects and network with the machine learning community. Participants will also have the chance to find out more about the research being done around Responsible AI.

Read more: Zebra Technologies acquires US-based artificial intelligence company Antuit.ai

The first PyTorch Developer Day was held in 2020. The second will take place on December 1 and 2, 2021. The first day will feature technical talks on topics ranging from new PyTorch tools and libraries to the core frameworks that support development of responsible AI. The second day will host a poster exhibition and networking session, where participants can learn more about each PyTorch project, meet the authors, and network with community members. 

The PyTorch networking day is an invite-only event limited to experts, long-time stakeholders, contributors, and PyTorch maintainers. You can apply for an invitation to the networking event. Community members can submit poster abstracts with titles and summaries of their tools, projects, and libraries before September 24, 2021. 

Inside World’s largest Chip: Introducing Cerebras CS-2

Image Credit: Cerebras Systems

Almost two years ago, American semiconductor company Cerebras challenged Moore’s Law with the Cerebras Wafer Scale Engine (WSE). Now it has unveiled the world’s first multi-million-core AI cluster architecture, built on its second-generation platform. At its heart is the new Cerebras CS-2’s Wafer Scale Engine 2 (WSE-2) processor, also the world’s largest chip: a “brain-scale” processor that can power AI models with over 120 trillion parameters. 

A chip starts as a cylindrical ingot of crystallized silicon that is roughly a foot in diameter and is sliced into circular wafers that are a fraction of a millimeter thick. A method known as photolithography is then used to “print” circuits onto the wafer. Chemicals that are sensitive to ultraviolet light are meticulously layered on the surface; UV beams are then projected through intricate stencils called reticles, causing the chemicals to react and form circuits.

Traditionally, a 12-inch (30 cm) silicon disc called a wafer is used to produce hundreds or even thousands of computer chips, which are then split up into individual chips. Cerebras, on the other hand, utilizes the complete wafer.

Comparison of WSE-2 with the largest GPU (Image credit: Cerebras)

The Cerebras CS-2 is designed for supercomputing workloads, and this is the second time since 2019 that Cerebras, based in Los Altos, California, has presented a chip that is essentially a complete wafer. A single WSE-2 chip is the size of a wafer, measuring 21 cm across, and packs 2.6 trillion transistors and 850,000 AI-optimized cores onto a single wafer-sized 7nm processor. The largest GPU, by comparison, is less than 3 cm across and contains only 54 billion transistors and roughly 123 times fewer cores. Cerebras’ latest interconnect technology allows it to chain together numerous CS-2 systems (machines powered by the WSE-2) to support much larger AI networks, several times the size of the brain.

In addition to boosting parameter capacity, Cerebras has announced technology that permits the construction of very large clusters of up to 192 CS-2s. With 850,000 cores per CS-2, a 192-system cluster amounts to roughly 163 million cores. Like its predecessor, the Cerebras CS-2 occupies one-third of a standard rack, draws roughly 20 kilowatts, uses a closed-loop liquid cooling system, and requires sizeable cooling fans.

Most AI algorithms are now trained on GPUs, a type of processor that was originally built for generating computer graphics and playing video games but is now well suited for the simultaneous processing required by neural networks. As neural networks are too big for any single chip to hold, most of the large AI models are split amongst dozens or hundreds of GPUs, which are connected by high-speed wiring. 

However, one of the most significant constraints has been data transfer between the processor and external DRAM memory, which consumes time and energy and becomes even more tedious when dealing with vast datasets. The original Wafer Scale Engine’s creators reasoned that the solution was to make the chip large enough to hold all the data its AI processing cores require.

Cerebras has incorporated four new technologies into the world’s largest chip:

  1. Cerebras MemoryX, 
  2. Cerebras SwarmX, 
  3. Cerebras Weight Streaming, 
  4. Selectable Sparsity.

Weight Streaming is a new software execution architecture that keeps model parameters off-chip while maintaining on-chip training and inference performance. By separating compute from parameter storage, it allows unprecedented flexibility and independent scaling of model size and training performance, and it solves the latency and memory-bandwidth problems that plague big clusters of small processors. This dramatically simplifies the workload distribution model, allowing users to scale from a single CS-2 to as many as 192 CS-2s with no software changes.

These parameters are saved in the external MemoryX cabinet, which expands the 40GB of on-chip SRAM memory with up to 2.4PB of extra capacity, making it behave as if it were on-chip. The additional memory can help companies run larger brain-scale AI models. MemoryX also offers the ability to process weight updates via internal computation. This is accomplished by streaming the weights onto the CS-2 systems one layer at a time to calculate each layer of the network. On the backward pass, the gradients are sent in the opposite direction back to MemoryX, where the weight update is done in time to be used for the next training iteration.
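
The sketch below is a simplified, purely illustrative simulation of that layer-by-layer flow in Python/NumPy: an off-chip parameter store (standing in for MemoryX) holds one weight matrix per layer, weights are streamed in for the forward pass, and gradients flow back so each update is applied at the store. The linear layers, loss, learning rate, and data are hypothetical; this is not Cerebras’ software stack.

```python
# A simplified simulation of layer-by-layer "weight streaming":
# parameters live in an external store, and only the current layer's
# weights plus the running activations are needed at any one time.
import numpy as np

rng = np.random.default_rng(0)

# "MemoryX" stand-in: an off-chip parameter store with one matrix per layer.
external_store = [rng.normal(scale=0.1, size=(8, 8)) for _ in range(3)]

x = rng.normal(size=(4, 8))        # a small batch of input activations
target = rng.normal(size=(4, 8))
lr = 0.01

# Forward pass: stream each layer's weights in, one layer at a time.
activations = [x]
for W in external_store:
    activations.append(activations[-1] @ W)

# Backward pass: gradients flow back layer by layer, and each weight
# update is applied in the external store for the next iteration.
delta = activations[-1] - target   # gradient of 0.5 * squared error
for i in reversed(range(len(external_store))):
    grad_W = activations[i].T @ delta
    delta = delta @ external_store[i].T    # propagate before updating
    external_store[i] -= lr * grad_W       # update happens at the store
```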

With 20 petabytes of memory capacity and 220 petabits of aggregate fabric bandwidth at play, communication across many chips is difficult using standard approaches that split the whole workload amongst processors. Furthermore, because the chip consumes roughly 15kW, scaling performance across multiple systems is extremely difficult: the power draw necessitates specialized cooling and power supplies, making it almost impossible to cram more wafer-sized processors into a single system.

Cerebras leverages the new SwarmX fabric technology, which enables multi-system scalability, to address these challenges. This is an AI-optimized communication fabric that uses Ethernet at the physical layer but employs a customized protocol to transport compressed and reduced data across the fabric. SwarmX effectively transforms the CS-2 hardware into a black box to which weights and gradients are delivered and processed. Since each CS-2 has 850,000 cores and 40 GB of on-die memory to execute the whole model, there is no requirement for model parallelism in this method. 

Overall, SwarmX allows clusters of CS-2s to achieve near-linear performance scaling and can connect up to 163 million AI-optimized cores across as many as 192 CS-2s.
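
As a bare-bones illustration of that data-parallel pattern, the NumPy sketch below broadcasts one set of weights to several simulated systems, lets each compute a gradient on its own data shard, and then sums the gradients before a single update, which is the kind of reduction the fabric performs. The model, data, and reduction here are made up for illustration and are not the actual SwarmX protocol.

```python
# Data-parallel broadcast-and-reduce, simulated: weights go out to every
# "system", local gradients come back and are summed before one update.
import numpy as np

rng = np.random.default_rng(1)
num_systems = 4
W = rng.normal(scale=0.1, size=(8, 8))          # shared model weights

# Each simulated system gets its own shard of the batch.
shards = [rng.normal(size=(16, 8)) for _ in range(num_systems)]
targets = [rng.normal(size=(16, 8)) for _ in range(num_systems)]

# Broadcast: every system receives the same weights and computes a local
# gradient on its shard (here, gradient of 0.5 * squared error of x @ W).
local_grads = []
for x, t in zip(shards, targets):
    err = x @ W - t
    local_grads.append(x.T @ err)

# Reduce: the fabric sums (and could compress) the gradients before the
# single weight update is applied.
total_grad = sum(local_grads)
W -= 0.01 * total_grad / num_systems
```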

Cerebras also lowers computational complexity with Selectable Sparsity, demonstrating up to 90% weight sparsity with nearly linear speedups, whereas most AI processors can only exploit around 50% sparsity. Selectable Sparsity lets users choose how much weight sparsity they want in their model, directly reducing FLOPs and time to solution. It helps the CS-2 produce faster answers by reducing the amount of computation needed to reach a solution.
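
The generic NumPy sketch below shows the basic arithmetic behind that claim (it is not Cerebras’ Selectable Sparsity implementation): zeroing out 90% of a weight matrix leaves roughly a tenth of the multiplications to perform, provided the hardware can actually skip the zeros.

```python
# Counting how much arithmetic weight sparsity can save.
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(1000, 1000))

sparsity = 0.9                               # fraction of weights zeroed out
mask = rng.random(W.shape) >= sparsity       # keep roughly 10% of the weights
W_sparse = W * mask

dense_multiplies = W.size
effective_multiplies = int(np.count_nonzero(W_sparse))

# Note: standard dense hardware still performs all the multiplies; the
# speedup only materializes on hardware that skips zero-valued weights.
print("dense multiply count:    ", dense_multiplies)
print("effective multiply count:", effective_multiplies)
print("reduction factor: ~%.1fx" % (dense_multiplies / effective_multiplies))
```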

Read More: Baidu Announces Mass Production of Kunlun II AI Chip at its Annual Event

Unlike prominent names like NVIDIA, Cerebras is marching to the beat of its own drum, innovating against the physical realities and economic limitations of Moore’s law and the AI chip industry. With the CS-2, the company aims to go beyond using chips for neural network training to performing massively parallel mathematical operations. Its customers include Argonne National Laboratory, Lawrence Livermore National Laboratory, the Pittsburgh Supercomputing Center, GlaxoSmithKline, AstraZeneca, and “military intelligence” organizations. 

Zebra Technologies acquires US-based artificial intelligence company Antuit.ai

Software development firm Zebra Technologies has announced its plans to acquire artificial intelligence company Antuit.ai in a press release. Antuit.ai’s Software-as-a-Service (SaaS) platform enables businesses to analyze and understand demand for their products across various markets. 

With this acquisition, Zebra Technologies wants to integrate its platform with Antuit.ai’s SaaS solution to help its customers effectively plan and execute consumer product development for higher revenue generation. 

The artificial intelligence technology developed by Antuit.ai enables enterprises to accurately predict and analyze product demand at the shelf and store level and to optimize product pricing. 

Read More: Zendesk Acquires Cleverly.ai to Boost Automated Customer Services

Chief Executive Officer of Zebra Technologies, Anders Gustafsson, said, “Through its synergies with our retail store execution portfolio, the acquisition of antuit.ai will further drive our ability to bring the power of AI to our customers and meet the demands of today’s consumer. It will also enable us to offer our customers in the CPG industry an analytics, AI, and automation solution that supports more efficient planning and operations with greater visibility across the supply chain.” 

He further added that they are incredibly excited to work with the team of Antuit.ai to benefit their customers. Texas-based Antuit.ai was founded by Arijit Sengupta, Kumar Ritesh, and Neeraj Bhargava in 2013. The company has raised a total funding of $60 million in its series A funding round from investors like Goldman Sachs and Zodius Capital. 

Yogesh Kulkarni, co-CEO of Antuit.ai, said, “Our AI solutions will influence planning and bridge the gap to execution, enhancing Zebra’s retail and CPG solutions that address associate productivity and inventory management.” 

The acquisition deal is expected to close by the end of this year. Company officials have not disclosed the value of the deal. Jenner & Block LLP and Goodwin Procter LLP are serving as legal counsel for Zebra Technologies and Antuit.ai, respectively.

Wipro and DataRobot Announce Partnership to Offer Augmented Intelligent Solutions

Image Credit: Analytics Drift Team

Wipro, a global IT powerhouse located in Bengaluru, has announced that it would collaborate with Boston-based AI solutions provider DataRobot to develop augmented intelligence products.

The strategic alliance will help customers become AI-driven organizations and expedite their business impact by delivering augmented intelligence at scale. According to Wipro, DataRobot’s Augmented Intelligence platform complements Wipro’s expertise in enterprise AI, so the partnership will aid in executing AI strategies and provide faster data-to-value for enterprises. The collaboration will also make it easier for customers to incorporate AI-driven intelligence into their business decisions, benefiting their bottom line.

Harish Dwarkanhalli, President – Applications & Data, iDEAS, Wipro Limited said in an official statement that the company is committed to helping clients in their journey to become intelligent enterprises and implement AI at scale. Harish elaborates, “Our approach is to simplify AI deployment in enterprises using a democratized methodology and utilizing diverse skill sets to collaborate with our technology partners along with our Wipro Holmes AI platform. We are excited to work with DataRobot, a leader in this segment, to further enhance the value we create for our customers.”

This partnership will strengthen Wipro’s partner ecosystem in the dynamic Enterprise AI area and underline the company’s commitment to making AI accessible. Furthermore, DataRobot’s Augmented Intelligence platform will enable key stakeholders across enterprises to undertake enterprise-wide cutting-edge data science.

Augmented intelligence combines machine learning, natural language processing, and data analytics with the goal of producing relevant data for targeted decisions. According to Gartner, augmented intelligence is a human-centered partnership model in which humans and artificial intelligence collaborate to improve cognitive performance, including learning, decision making, and new experiences. In fact, amid fears of artificial intelligence replacing human workers, augmented intelligence promises to work in conjunction with humans.

Read More: DataRobot 7.1 Introduced Enhancements To Enhance Artificial Intelligence

Wipro’s iDEAS stands for Integrated Digital, Engineering, and Application Services and is one of the company’s two new business lines. Its other business line is the iCORE, which covers cloud infrastructure, digital operations, and cyber security services.

“As leaders in AI, Wipro and DataRobot are perfectly suited for collaboration,” said Gardner Johnson, Vice President, Worldwide Channels, DataRobot. “We couldn’t be more excited about our partnership with Wipro as we bring the power of AI to more organizations. We look forward to helping customers across every industry and geography achieve more value from their data.”

Zendesk Acquires Cleverly.ai to Boost Automated Customer Services

Image Credit: Zendesk

Zendesk announced the acquisition of Cleverly.ai, a Lisbon, Portugal-based platform that creates a knowledge layer on top of apps to find solutions to customer concerns. Zendesk says it would incorporate Cleverly’s technology into its existing offerings, allowing teams to automate additional procedures while meeting consumer demand. 

Zendesk writes on its blog that giving agents the proper tools and processes to offer outstanding client experiences is a vital component of customer service. However, support staff are struggling to keep up with conversation volumes surging more than 20% year over year across a growing number of channels. As a result, businesses are increasingly turning to artificial intelligence (AI) to provide faster, more dependable service while also enhancing workforce efficiency.

Cleverly.ai’s product platform, according to Techcrunch, has a number of AI-powered features, including a triage function that automatically tags incoming service requests to help categorize workflow. With its agent-assist capability, the company also offers what it calls AI-powered human augmentation, which aims to help customer support agents provide the correct answers to inquiries via automated replies. It also employs machine learning to classify, prioritize, and route customer support tickets based on their intentions and queries.
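
As a generic illustration of this kind of ticket triage (a plain scikit-learn text classifier, not Cleverly.ai’s or Zendesk’s actual system), the sketch below routes made-up support tickets into made-up categories based on their text.

```python
# A generic sketch of ML-based ticket triage: TF-IDF features plus a linear
# classifier, a common baseline for routing requests to the right queue.
# The example tickets and categories are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tickets = [
    "I was charged twice for my subscription this month",
    "My invoice shows the wrong amount",
    "The package never arrived at my address",
    "Where is my order, it has been two weeks",
    "I want to return this item and get a refund",
    "How do I send a product back for an exchange",
]
labels = ["billing", "billing", "shipping", "shipping", "returns", "returns"]

triage_model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
triage_model.fit(tickets, labels)

# Route a new, unseen ticket to a predicted category.
print(triage_model.predict(["I never received my package"]))
```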

Zendesk already has a number of AI-enabled tools that help enterprises automate customer interactions and improve agent productivity and operational efficiency, using features like Answer Bot, macro suggestions, and Content Cues. Answer Bot is a chatbot that provides answers extracted from Zendesk’s knowledge base. Macros use machine learning to recommend a response based on ticket context, and prebuilt connections with Zoom, Microsoft Teams, and Monday.com keep teams connected. Further, Zendesk’s AI-powered Content Cues feature helps automatically review support issues and identify where the content in a company’s help center can be updated to be more beneficial to users.

Read More: Deepen AI launches Artificial Intelligence-Powered Annotation Tool

Together, Zendesk and Cleverly.ai share a vision of democratizing AI and creating better AI applications to help in the provision of enhanced customer service. Cleverly.ai says in its newsroom blog that AI democratization will result in customers benefiting from the technology without struggling with its endless complexity and pitfalls.

Zendesk will use Cleverly to give a suite of features that will automate important insights, eliminate manual tasks, and optimize processes, resulting in happier, more productive support teams that can keep up with customer demand. It will also provide quick responses to agents, proactive workflow advice to managers, and operational insights to administrators.
