
How Scanta Is Fortifying Machine Learning-Based Chatbots From Cyberattacks

Scanta

The adoption of artificial intelligence-based conversational systems is at an all-time high as businesses look to maintain continuity during the pandemic. According to Statista, chatbot market revenue is projected to reach $454.8 million by 2027, up from $40.9 million in 2018. The trend has also opened up opportunities for hackers to attack chatbots and disrupt customer service. These attacks are no longer limited to dated practices; today, hackers leverage cost-efficient machine learning techniques to attack conversational AI systems at scale. One of the biggest challenges of AI attacks is their sophistication and ability to adjust behavior based on a system’s defences. To address these problems, Scanta Inc., an AI-based cybersecurity firm, is helping businesses protect virtual assistant chatbots against machine learning attacks.

Scanta secures conversational systems such as chatbots, robo-advisors, and virtual assistants against adversarial attacks. “By using AI against AI attacks, we hope to lessen the burden on security professionals. We provide a SaaS solution for integration, which acts as a proxy to provide inline defence,” says Anil Kaushik, CTO of Scanta. The company carries out significant research on machine learning attacks and collects numerous signals to gauge the sophistication of malicious activities. By gathering metadata, Scanta is able to identify cyberattacks in which an attacker’s AI learns a chatbot’s behavior and stays below detection thresholds to avoid being caught.

Powered by machine learning algorithms, Scanta’s solution, VA Shield, builds deep models that capture features of every input and output in a conversational system, creating unique behavioral signatures in real time. Because VA Shield models the chatbot’s responses in addition to user inputs, it can also isolate malicious insider activity within chatbots. To make the solution resilient, its multiple layers capture the device used for interaction, network characteristics, end-user input, and system output, giving each conversational system a unique DNA that protects it from within. “Both the input and output are extremely critical aspects of protecting conversational AI. VA Shield performs deep inspection of each conversation in systems to defend inputs and outputs efficiently,” explains Kaushik. The solution collects multiple attributes and analyzes differences across conversations; identifying these differences enables VA Shield to isolate malicious activity in the system.

Equipped with state-of-the-art machine learning techniques, Scanta works closely with organizations that either offer chatbots or have integrated them to provide a superior customer experience. Since this is most companies’ first foray into monitoring conversational systems at scale, Scanta simplifies onboarding by helping clients implement VA Shield effectively. As one of the first companies to focus on cybersecurity for conversational platforms, Scanta offers enterprises the opportunity to protect conversational AI systems in ways that were not possible before. By using an AI-based system to uncover and block these attacks, Scanta hopes to automate routine security tasks, leaving professionals more time to focus on higher-order security issues.

Scanta, a pioneer in AI-based cybersecurity for conversational systems, credits its team for the strength of its solutions. To hire the right people, Scanta uses a referral system to maximize access to proficient cybersecurity talent, and being location-independent helps it source talent across a wide geographic footprint. “We have current executives with expertise in cybersecurity giving us a large pool to recruit from,” says Chaitanya Hiremath, CEO of Scanta. “We also provide an opportunity for engineers to work in the cutting-edge field of AI security and so we believe we are in a good position to build an industry-leading team to drive product innovation.”

In the future, Scanta is committed to protecting conversational systems like chatbots, robo-advisors, virtual assistants, social media, group chats, virtual agents, and email from bad actors.


Walmart Joins Hands With Cruise To Deliver Orders Through Self-Driving Cars

Walmart Cruise Deliver

Walmart is collaborating with Cruise, a self-driving car company, to deliver orders starting in early 2021. The pilot program will allow Walmart to offer contactless delivery to its customers, reducing the risk of coronavirus transmission. The partnership between the world’s largest retailer and Cruise is also a step toward Walmart’s goal of zero emissions by 2040 to protect the planet from harmful pollutants.

Since the pandemic began, Walmart has doubled down on autonomous delivery of groceries and health and wellness products through drones in the US. While a pilot program was launched in Fayetteville, North Carolina, on September 9, Walmart deployed more drones to deliver COVID-19 self-collection kits on September 22.

In late 2020, autonomous vehicle providers, especially self-driving car companies like Waymo, Tesla, and Cruise, gained momentum after receiving approval to offer robotaxi services without safety drivers. After years of delays in deploying self-driving cars, these companies are finally able to deliver on the promise of revamping the transport industry with complete autonomy.

Also Read: Top Data Science Podcasts You Should Follow

However, for delivering Walmart orders, Cruise will have a safety driver in place during the pilot phase. On November 14, 2018, Walmart struck a similar deal with Ford for autonomous grocery delivery in Miami, and in 2019 the retailer joined forces with Nuro for grocery delivery in Texas.

Several companies, including food delivery firms like DoorDash and Postmates, have adopted self-driving cars, but Walmart has been at the forefront of trying to revamp delivery for a superior customer experience.
“You’ve seen us test drive with self-driving cars in the past, and we’re continuing to learn a lot about how they can shape the future of retail. We’re excited to add Cruise to our lineup of autonomous vehicle pilots as we continue to chart a whole new roadmap for retail,” notes Tom Ward, SVP of Customer Product, Walmart US, in a press release.


IBM Is Offering Free Certification On Coursera For Attending Its Data & AI Conference

IBM Data & AI Conference

IBM’s Digital Developer Conference Data & AI will be held on November 10 for the Americas & Europe and November 24 for India & Asia Pacific, where you can earn a free specialization or professional certification by completing a data science course on Coursera. The free conference has four tracks: AI in Production, the Data & AI Essentials course, five hands-on labs, and Data Competitions & Open Source.


The Digital Developer Conference Data & AI is ideal for machine learning beginners and practitioners alike to learn from experts on a wide range of data science topics. Among the most interesting sessions will be those on deploying models in production and overcoming the challenges of productizing. The conference will also cover best practices for developing machine learning models, including design patterns used by developers, fairness in AI, bias detection and mitigation, building an AutoAI pipeline for cyber threat detection, and more.

Unlike many conferences, the IBM Digital Developer Conference Data & AI does not overwhelm attendees with a plethora of information, which is what makes it a must-attend. The sessions are highly industry-relevant, with topics such as the future of open source and optimizing models for accuracy. Besides, the conference will host a data competition, the Call for Code Spot Challenge on Wildfires. Further, IBM will release a new geospatial dataset from the IBM Weather Operations Center, going back to 2005, for machine learning enthusiasts to blaze their own trails.

Also Read: Real-Time Collaboration Tool–Deepnote–Is Now Open For All

As part of its fourth Digital Developer Conference, IBM is providing a special digital badge and certification for data science courses on Coursera. After the event, you will receive an offer, redeemable before March 30, 2021, for a specialization or course from any of these programs: IBM AI Enterprise Workflow Specialization, IBM Machine Learning Professional Certificate, IBM Data Science Professional Certificate, and Advanced Data Science with IBM Specialization.

Register for the Data & AI conference for free: India & Asia Pacific and Americas & Europe.


Top AI News Of The Week [November 9, 2020]

AI News

You might have heard of robots replacing humans, but this week it was the other way around, as Walmart removed robots and replaced them with humans. Nevertheless, other machine learning-based systems keep penetrating human workflows and freeing people from repetitive or non-creative tasks like driving cars and extracting values from documents: while Google launched the DocAI platform to process unstructured data, AutoX will expand its autonomous vehicle testing to four more cities. This week in AI news has more exciting announcements and developments; read on.

Here are the top AI news of the week:

Google Introduces Document AI Platform

Google introduced the DocAI platform to automatically extract information from documents. Unstructured data contains valuable information, but organizations fail to streamline the process of extracting value from it, as processing unstructured data is a strenuous task. Addressing this challenge, Google is offering the DocAI platform through GCP to quickly collect information like addresses, names, dates, and more. To collect data, you can either use default parsers or build a customized template. General parsers such as OCR, the form parser, and the document splitter are available now, but you will have to request access to use specialized parsers.

Walmart Replaces Robots With Humans

When it terminated its contract with Bossa Nova Robotics, Walmart had robots in 500 of its 4,700 stores to check product stock on shelves. The robots wandered the aisles and flagged probable out-of-stock items. However, Walmart believes that humans can scan shelves more efficiently than robots while remaining cost-effective. In a recent interview with Squawk Box, Doug McMillon, CEO of Walmart, noted that keeping products in stock is one of the most challenging tasks.

Intel Acquires cnvrg.io

Intel acquired cnvrg.io, an end-to-end machine learning platform, to help data scientists deploy models at scale. cnvrg.io is an Israel-based company that helps organizations quickly bring AI-based products to market. According to an Intel spokesperson, cnvrg.io will remain an independent Intel company and continue to serve its clients. With cnvrg.io, Intel will now compete with the likes of DataRobot, Databricks, Dataiku, and others on end-to-end data science platforms. cnvrg.io is Intel’s second acquisition in two weeks, after SigOpt.

AutoX Expands Self-Driving Car Tests

AutoX tested its autonomous vehicles in California without a safety driver in July, becoming the second firm after Waymo to do so. Backed by Alibaba Group Holding Ltd, the company is about to extend testing to four more cities. AutoX already offers a robotaxi service in Shanghai and is preparing to test level 4 autonomous vehicles in China. This has been a year of self-driving cars, as numerous companies like Cruise, Waymo, and Tesla have received approval to offer ride-hailing with autonomous vehicles.

AWS Announces The Expansion Of Its Cloud Service In India

AWS will add another AWS Region in Hyderabad, Telangana, by 2022. This will be the second region in India after Mumbai, which opened on June 27, 2016. AWS currently has 24 regions across the globe to ensure the speed and reliability of its services. The Hyderabad region will have three Availability Zones (AZs), bringing the total in India to six. AWS plans to add another 15 AZs across India, Indonesia, Japan, Spain, and Switzerland. Google is also working on its second cloud region in India, which will launch next year.


AWS Will Launch Second Cloud Region In India By 2022

AWS Second Region In India

AWS announced the expansion of its cloud presence in India as part of its Global Infrastructure initiative. The new AWS data center will be commissioned in Hyderabad, Telangana, making it the second AWS Region in India after the Mumbai region, which opened on June 27, 2016, and was expanded in 2019 with a third Availability Zone (AZ). The AWS Hyderabad region will allow organizations and developers to leverage cloud computing with low latency to build superior products.

The second region will consist of three Availability Zones, joining the three existing zones in Mumbai. AWS has a total of 24 infrastructure regions across the globe and plans 15 more AZs across India, Indonesia, Japan, Spain, and Switzerland.

“Businesses in India are embracing cloud computing to reduce costs, increase agility, and enable rapid innovation to meet the needs of billions of customers in India and abroad. Together with our AWS Asia Pacific (Mumbai) Region, we’re providing customers with more flexibility and choice, while allowing them to architect their infrastructure for even greater fault tolerance, resiliency, and availability across geographic locations,” said Peter DeSantis, Senior Vice President of Global Infrastructure and Customer Support, Amazon Web Services.

In its continuing effort to deliver the best cloud computing platform to businesses, AWS has also expanded its service through edge locations in Bengaluru, Chennai, Hyderabad, Mumbai, New Delhi, and Kolkata. These edge locations cache copies of content to reduce delays in data delivery.

Also Read: Machine Learning Behind Google Meet Replaced Background

Prominent startups and big tech companies like Aditya Birla Capital, Axis Bank, Mahindra Electric, Ola, OYO, Swiggy, Tata Sky, Zerodha, and more use AWS for performance, availability, security, scalability, and flexibility of their products and services. 

Earlier this year, in July, Google Cloud Platform also announced the launch of its second Indian cloud region in 2021, after its first opened in Mumbai in 2017.


Microsoft Announces The Support Of Hindi For Sentiment Analysis

Hindi Sentiment Analysis

Microsoft has announced support for Hindi in its sentiment analysis services. Hindi is the most widely used language in India and the fourth most spoken language in the world, so the addition allows organizations to understand customers who communicate in Hindi. With Hindi, Azure services now support more than 20 languages for sentiment analysis, including French, Italian, German, Russian, Greek, Chinese, Dutch, Spanish, Danish, and more.

Screenshot of Microsoft Text Analytics in Hindi language

According to Chris Wendt, program manager for Azure language services, text analysis gives broad insight into how products are perceived, helping organizations take corrective action based on the analysis.

With text analysis, organizations can score entire documents, or even individual sentences, between 0 and 1 for positive, neutral, or negative sentiment. When combined with the Azure speech-to-text service, companies can also analyze sentiment in audio.
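To illustrate the scoring scheme, a classifier’s raw outputs can be normalized into 0-to-1 confidence scores that sum to one with a softmax. The sketch below is a toy illustration of this idea, not the Azure Text Analytics API; the logits, labels, and example sentence are assumptions for demonstration.

```python
import math

def sentiment_scores(logits):
    """Toy softmax: map raw classifier outputs to 0-1 scores that sum to 1."""
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    labels = ("positive", "neutral", "negative")
    return {label: e / total for label, e in zip(labels, exps)}

# Hypothetical raw outputs for a sentence like "The service was great."
scores = sentiment_scores([2.0, 0.5, -1.0])
print(max(scores, key=scores.get))  # prints "positive"
```

A real service would produce the logits with a trained language model; the normalization step is the same.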

“Underlining our commitment to helping empower every business to achieve more, Microsoft has added Hindi to the already robust set of international languages supported by Text Analytics service. We are helping brands break language barriers and reach out to Hindi-speaking customers to understand the customer’s sentiment about their products, services, and broaden their user feedback reach. With this release, we are bringing in cutting edge cloud services, AI, and natural language processing to deepen the trust between brands and customers in India,” says Sundar Srinivasan, general manager — AI & Search of Microsoft India.

Also Read: Real-Time Collaboration Tool–Deepnote–Is Now Open For All

Natural language processing in Hindi can help organizations analyze feedback, opinions, and other customer support interactions, shedding light on how people respond to products and services. Microsoft’s Hindi sentiment analysis is not limited to Azure; organizations can also use it with on-premises deployments. To further streamline the process, results can be displayed in Power BI dashboards to track trends in real time.


Intel Acquires cnvrg.io, An MLOps & Model Management Firm

Intel Acquires cnvrg.io

Intel has acquired the Israel-based end-to-end machine learning platform cnvrg.io for an undisclosed amount. cnvrg.io allows data scientists to quickly deploy machine learning models at scale. Trusted by companies like NVIDIA, NetApp, LogMeIn, and more, cnvrg.io is a robust platform for data-driven organizations that want to turn ideas into products effortlessly. Compatible with NVIDIA DGX, cnvrg.io helps organizations maximize the returns on their machine learning investments.

According to TechCrunch, an Intel spokesperson said that cnvrg.io will be an independent Intel company and will continue to serve its existing and future customers. Although Intel has not released any official details of the acquisition, cnvrg.io was valued at $17 million after raising $8 million in Series A, which was led by Hanaco VC, on November 12, 2019.

Since the change in Intel’s technology management structure in July, the company has been making significant moves by either acquiring or selling its business. While the chipmaker agreed to sell its NAND SSD business to SK Hynix for $9 billion on October 19, Intel acquired SigOpt in an undisclosed deal on October 29.

With cnvrg.io and SigOpt, both of which have a proven record of unlocking organizations’ AI capabilities, Intel will double down on growing its AI revenue from $3.8 billion in 2019. Intel has been diversifying its portfolio by aggressively tapping several markets. For one, the chipmaker is now trying to penetrate the GPU business with the Iris® Xe MAX for laptops. “It is Intel’s first Xe-based discrete graphics processing unit (GPU) as part of the company’s strategy to enter the discrete graphics market,” according to Intel.

With the acquisition of cnvrg.io, Intel will now compete with prominent data science platforms like Amazon SageMaker, Databricks, Dataiku, and DataRobot.


Data Science Real-Time Collaboration Tool–Deepnote–Is Now Open For All

Deepnote

Deepnote–a Jupyter-compatible notebook–is now open to all to enhance collaboration on data science projects. Collaboration is one of the most tedious parts of data science: whether it is version control or real-time editing, data scientists have struggled for years to streamline these workflows. With Deepnote, however, you can collaborate in real time without setting up specific environments; simply share the notebook and ask for help, expediting the collaboration process.

Founded in 2019, Deepnote is also working to fix various data science challenges such as versioning, code review, and reproducibility. These features are still under development and will be released in the coming months, but real-time collaboration is now available to everyone. Previously, it was limited to selected enthusiasts who had joined a private beta waitlist.

Although Deepnote is still in beta, our experience with it was smooth, and it looks suitable for general use. Deepnote currently comes in three plans: Free, Team, and Enterprise. The free plan includes 750 standard machine-hours and up to three collaborators, the Team plan allows unlimited collaborators for $49 per month, and the Enterprise plan is a bespoke service based on requirements.

Also Read: Machine Learning Behind Google Meet’s Blurry and Replaced Background

The free plan, though, is enough for any hobbyist to get started: 750 standard machine-hours, according to Deepnote, is enough to run one machine non-stop for a whole month. You can also add integrations such as MongoDB, PostgreSQL, Amazon S3, BigQuery, Google Cloud Storage, and more to your notebook, though you cannot yet share an integration along with a notebook. Further, you can connect your GitHub repository to use its code, make commits, and open pull requests. “Any project collaborator will be able to read and write to the project repository using the generated deploy key,” notes Deepnote.

Try Deepnote for free here.


Machine Learning Behind The Blurry & Replaced Background In Google Meet

Machine Learning in Google Meet

After releasing a background blur feature in Google Meet in mid-September 2020, Google is now rolling out a more advanced version in which you can replace the background with hand-picked images. Although late to the feature, Google’s machine learning approach to blurring and replacing backgrounds looks superior to earlier adopters like Zoom and Skype. But how is it better? Unlike other video conferencing applications, which fail to segment your image precisely, especially while you move around, Google’s effect does not look artificial: your image never looks detached from the background. So what machine learning techniques power Google Meet’s background blur and replacement?


According to Google, existing solutions require installing software, whereas Google Meet works seamlessly in the browser thanks to state-of-the-art technology built with MediaPipe, an open-source framework for building and running machine learning pipelines on mobile, web, and edge devices.

WebML Pipeline

Google released MediaPipe for the web on January 28, 2020, enabled by WebAssembly and accelerated by the XNNPack ML inference library. Running machine learning in a browser is not straightforward when execution speed matters. While Skype and Zoom rely on desktop applications to get the computational power for the best experience, Google Meet runs entirely in the web browser. To keep it fast, Google Meet relies on WebAssembly, whose instructions browsers compile into native machine code that executes much faster than traditional JavaScript.

Google Meet’s machine learning pipeline runs inference to compute a low-resolution mask that segments the user from the background. After the mask is refined, the video is rendered with WebGL2 (Web Graphics Library 2), a JavaScript API for rendering high-performance interactive 3D and 2D graphics without plugins.

The segmentation model runs in real time on the web using the client’s CPU, accelerated with the XNNPACK library and SIMD. To balance speed across hardware, the pipeline varies with the device: the full pipeline runs on high-end devices, while mask refinement is bypassed on low-end ones. Further, to reduce the floating-point operations (FLOPs) needed to process every frame in the browser, the researchers downsample the image before feeding it to the model.
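The shape of this pipeline, downsample, segment at low resolution, upsample the mask, then blend, can be sketched as follows. This is a minimal one-dimensional illustration: a brightness threshold stands in for the learned MobileNetV3 segmentation model, and the pixel values and downsampling factor are made up.

```python
def downsample(pixels, factor):
    """Average neighboring pixels to shrink the input before inference."""
    return [sum(pixels[i:i + factor]) / factor
            for i in range(0, len(pixels), factor)]

def upsample(mask, factor):
    """Nearest-neighbor upsampling of the low-resolution mask."""
    return [v for v in mask for _ in range(factor)]

def segment(pixels):
    """Stand-in for the real model: call anything brighter than 0.5 'person'.
    The actual segmentation model is a learned MobileNetV3 network."""
    return [1.0 if p > 0.5 else 0.0 for p in pixels]

def composite(frame, background, mask):
    """Per-pixel blend: keep the person, replace everything else."""
    return [m * f + (1 - m) * b for f, b, m in zip(frame, background, mask)]

frame = [0.9, 0.8, 0.9, 0.7, 0.1, 0.2, 0.1, 0.0]  # bright "person" on the left
background = [0.5] * 8                             # uniform replacement background
mask = upsample(segment(downsample(frame, 2)), 2)  # infer at half resolution
out = composite(frame, background, mask)
```

Inferring at half resolution quarters the work per frame in 2D; the cheap upsampling afterward is what the mask-refinement stage then cleans up on capable devices.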

Model architecture with MobileNetV3 encoder (light blue) and symmetric decoder (light green)

“We modified MobileNetV3-small as the encoder, which has been tuned by network architecture search for the best performance with low resource requirements. To reduce the model size by 50%, we exported our model to TFLite using float16 quantization, resulting in a slight loss in weight precision but with no noticeable effect on quality. The resulting model has 193K parameters and is only 400KB in size,” the researchers mention.

Blurry Background

After segmentation, the researchers used OpenGL (Open Graphics Library) for video processing and effect rendering. For refinement, they applied a bilateral filter, a non-linear, edge-preserving, noise-reducing smoothing filter, to smooth the low-resolution mask. Pixels were also weighted by their circle of confusion (CoC) to keep the background and foreground visually separate, and separable filters were used instead of the popular Gaussian pyramid to remove halo artifacts around the person.

Background replacement with custom image

For backgrounds replaced with custom images, however, the researchers used a light wrapping technique. Light wrapping softens segmentation edges by letting background light spill over onto foreground elements, making the composite more immersive. The method also helps minimize halo artifacts when there is high contrast between the foreground and the replacement background.
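One common formulation of light wrapping adds a fraction of the blurred background onto foreground pixels near the matte edge before compositing. The sketch below is a generic one-dimensional illustration, not Google’s implementation; the `wrap` strength and the edge-weighting formula are assumptions.

```python
def blur(pixels):
    """3-tap box blur with edge clamping, standing in for a proper blur pass."""
    n = len(pixels)
    return [(pixels[max(i - 1, 0)] + pixels[i] + pixels[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def light_wrap(fg, bg, alpha, wrap=0.3):
    """Alpha composite plus a fraction of the blurred background 'wrapped'
    onto foreground pixels near the matte edge. `wrap` sets the spill
    strength; exact formulations vary between implementations."""
    bg_soft = blur(bg)
    composited = []
    for f, b, bs, a in zip(fg, bg, bg_soft, alpha):
        edge = a * (1 - a) * 4        # peaks at soft segmentation edges (a ~ 0.5)
        spill = wrap * edge * bs      # background light bleeding onto the subject
        composited.append(a * min(f + spill, 1.0) + (1 - a) * b)
    return composited

# Bright subject against a dark replacement background; the middle pixel
# sits on the matte edge and picks up a little background light.
composited = light_wrap([0.8] * 5, [0.2] * 5, [1, 1, 0.5, 0, 0])
```

Because the edge weight vanishes where alpha is 0 or 1, fully-foreground and fully-background pixels are untouched; only the transition band is softened, which is exactly where halo artifacts would otherwise appear.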


Top AI News Of The Week [November 1, 2020]

Top AI News of The Week

This week, we witnessed tech giants’ efforts to simplify the use of artificial intelligence for developers and non-experts alike. While Microsoft released a no-code platform for creating ML models within minutes, Google is working on converting web pages into videos. However, the most defining news of the week was Yann LeCun’s opinion on GPT-3. After OpenAI released the largest language model, hype around AI’s capabilities accelerated, fueled by trivial demonstrations of GPT-3. Experts remain divided on GPT-3’s superiority, but because LeCun is a Turing Award winner, his views on the NLP model made the biggest splash.

Here are the top AI news of the week (November 1, 2020):

Yann LeCun Busts GPT-3 Bubble

Yann LeCun GPT-3

In a recent Facebook post, Yann LeCun, VP and chief AI scientist at Facebook, pinpointed the flaws in OpenAI’s GPT-3, the largest language model. Citing Nabla’s evaluation of GPT-3, LeCun wrote that the model does not know how the world works.

LeCun’s cynicism came after a simple evaluation study was enough to conclude that OpenAI’s model is not a breakthrough in the development of artificial intelligence. Training a model on a colossal amount of data is not the approach that will deliver what the world has been promised about AI’s abilities, and this is apparent from models with 99% fewer parameters outperforming GPT-3.

“But trying to build intelligent machines by scaling up language models is like building high-altitude airplanes to go to the moon. You might beat altitude records, but going to the moon will require a completely different approach,” wrote LeCun of GPT-3.

Read more here.

Google AI Converts Web Pages Into Videos

Google researchers introduced URL2Video, which converts web pages into videos of various aspect ratios. The tool extracts text, images, and videos from a web page and composes them into a sequence of shots. It also retains an organization’s brand design and theme by reusing the colors, fonts, and layouts from its web pages.

“Given a user-specified aspect ratio and duration, it then renders the repurposed materials into a video that is ideal for product and service advertising,” the authors mention in a blog post.

The team also evaluated the generated videos with designers at Google to gauge the effectiveness of the final product. The results show that URL2Video effectively extracts design elements from a web page and supports designers by bootstrapping the video creation process.

Read more here.

Microsoft & Netflix Launched Data Science Training Modules

Microsoft Over The Moon

Microsoft and Netflix have collaborated to release three data science modules that help beginners learn data science by working on a real-world project: planning a moon mission with genuine data provided by NASA. The modules are inspired by the Netflix original Over the Moon, in which a girl uses science, technology, engineering, and mathematics.

The three modules: Plan a Moon Mission Using the Python Pandas Library; Predict Meteor Showers Using Python and VS Code; and Use AI to Recognize Objects in Images Using Azure Custom Vision, are a perfect way for beginners to get started with data science because of their real-world, project-driven training.

Read more here.

Intel Acquires SigOpt

In a transaction with undisclosed terms, Intel acquired SigOpt, a firm that offers a platform for optimizing artificial intelligence software models at scale. “SigOpt’s AI software technologies deliver productivity and performance gains across hardware and software parameters, use cases and workloads in deep learning, machine learning and data analytics. Intel plans to use SigOpt’s software technologies across Intel’s AI hardware products to help accelerate, amplify and scale Intel’s AI software solution offerings to developers,” according to Intel’s press release. The deal is expected to close this quarter, with Scott Clark, CEO, and Patrick Hayes, co-founder, of SigOpt joining Intel’s Machine Learning Performance team.

Read more here.

Microsoft Released Lobe, A No-Code Machine Learning Platform

Microsoft Lobe

Microsoft has released Lobe to enable enthusiasts and developers alike to quickly build machine learning models. Currently, the platform supports only image classification models, but Microsoft plans to enhance Lobe to handle other data types.

Lobe’s ability to manage end-to-end ML model development workflows uniquely positions it to democratize artificial intelligence even among non-experts. With Lobe, you can also export models as a local API or in formats like TensorFlow, TensorFlow Lite, and Core ML for use in application development.

Read more here.
