
Microsoft Releases Deep Learning Model BugLab for Better Bug Detection in Code


Miltos Allamanis, a principal researcher at Microsoft Research, and Marc Brockschmidt, a senior principal research manager at Microsoft Research, recently unveiled their newly built deep learning model, BugLab. According to the researchers, BugLab is a Python implementation of a new approach for self-supervised learning of both bug detection and repair. The model will help developers discover flaws in their code and troubleshoot their applications. 

Finding bugs and algorithmic flaws is crucial, as it enables developers to remove bias, improve the technology, and reduce the risk of AI-based discrimination against certain groups of people. As a result, Microsoft is developing AI bug detectors that are trained to find and resolve flaws without using data from actual bugs. According to the researchers, this design choice was driven by the scarcity of annotated real-world bugs available to train bug-finding deep learning models: while a vast amount of source code is available, most of it is not annotated.

BugLab’s current goal is to uncover difficult-to-detect flaws rather than critical bugs that can be quickly detected using traditional software analysis. Researchers assert that the deep learning model saves money by eliminating the time-consuming process of manually developing a model to discover faults.

BugLab employs two competing models that learn by playing a “hide and seek” game inspired by generative adversarial networks (GANs). A bug selector model decides whether to introduce a bug, where to introduce it, and what form it should take (for example, replacing a specific “+” with a “-”). The code is then rewritten to introduce the bug based on the selector’s choice. The bug detector then tries to determine whether a flaw has been introduced in the code, and if so, where it is and how to repair it.
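As a toy illustration of this selector/detector loop (a deliberately simplified sketch, not Microsoft's actual models, which are neural networks trained over program representations), consider a selector that injects a single operator-swap bug into a correct snippet and a detector that searches for the repair that restores correct behavior. The function name `add` and its reference check are invented for the example:

```python
import random

def bug_selector(code, rng):
    """Rewrite one '+' into '-' (one of BugLab's operator-swap rewrite kinds)."""
    positions = [i for i, ch in enumerate(code) if ch == "+"]
    if not positions:
        return code, None
    pos = rng.choice(positions)
    return code[:pos] + "-" + code[pos + 1:], pos

def passes(src):
    """Reference check standing in for 'the original snippet is correct'."""
    ns = {}
    try:
        exec(src, ns)
        return ns["add"](2, 3) == 5
    except Exception:
        return False

def bug_detector(code):
    """Try flipping each '-' back to '+'; keep the repair that passes the check."""
    for i, ch in enumerate(code):
        if ch == "-":
            candidate = code[:i] + "+" + code[i + 1:]
            if passes(candidate):
                return i, candidate
    return None, code

rng = random.Random(0)
correct = "def add(a, b):\n    return a + b"
buggy, where = bug_selector(correct, rng)   # the selector "hides" a bug
loc, repaired = bug_detector(buggy)         # the detector "seeks" and repairs it
print(repaired == correct)  # True: the original snippet is recovered
```

In BugLab the detector never sees which rewrite was applied; here the analogous constraint is that `bug_detector` only receives the buggy snippet and must rediscover both the location and the fix.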

Animated diagram of BugLab deep learning system.
Source: Microsoft

These two models are jointly trained on millions of code snippets without labeled data, i.e., in a self-supervised manner. The bug selector tries to “hide” interesting defects within each code snippet, while the detector seeks to outsmart the selector by detecting and repairing them.

The detector improves its ability to discover and correct defects as a result of this process, while the bug selector improves its ability to create progressively difficult training samples.

A chart showing the workflow of BugLab. Code is entered into the bug selector, which modifies the code and passes it to the bug detector. The bug detector then determines whether a bug exists, and if so, where it is and how it should be fixed.
Source: Microsoft

While this training process is conceptually similar to GANs, the BugLab bug selector does not create a new code snippet from scratch but rather rewrites an existing one (assumed to be correct). Furthermore, code rewrites are, by definition, discrete, so gradients cannot be propagated from the detector to the selector.

Beyond knowing various programming languages, a programmer must also devote time to the arduous task of correcting errors in code, some simple and others subtle enough to go unnoticed even by large artificial intelligence models. BugLab aims to relieve programmers of the burden of these trivial errors, giving them more time to focus on the complex bugs that an AI could not detect.

Read More: Microsoft launches Tutel, an AI open-source MoE library for model training

To assess performance, Microsoft manually annotated a small dataset of 2,374 real-world bugs from packages in the Python Package Index. The researchers observed that models trained with the “hide-and-seek” strategy outperformed alternatives, such as detectors trained on randomly inserted flaws, by up to 30%. 

The findings are encouraging, indicating that around 26% of defects may be detected and corrected automatically. However, they also revealed a high number of false-positive alarms: while several known flaws were uncovered, just 19 of BugLab’s 1,000 warnings were indeed true bugs. Eleven of the 19 previously unknown faults discovered were reported on GitHub; six of the reports were merged, and five were still awaiting approval. According to Microsoft, the approach seems promising, although further work is needed before such models can be used in practice. Furthermore, there is a possibility that this technology will be available for commercial usage at some point soon.

The findings have been published in a paper titled Self-Supervised Bug Detection and Repair, which was presented at the 2021 Conference on Neural Information Processing Systems (NeurIPS 2021).


South Korea to use AI-powered Facial Recognition for COVID-19 testing


South Korea is all set to use a new artificial intelligence-powered facial recognition system to track the movements of COVID-19 patients across the country. The new AI system will use thousands of CCTV cameras installed at various locations to track the activity of patients. 

The South Korean government plans to roll out the AI system in one of the most densely populated cities of the country, Bucheon, by the beginning of 2022. According to officials, CCTV footage from over 10,000 CCTV cameras will be analyzed by an artificial intelligence-enabled algorithm to monitor the movements of infected individuals as a precautionary measure to prevent a rise in COVID-19 cases in the country. 

However, this new development has been criticized by opposition parties due to various privacy concerns. Park Dae-chul, a lawmaker from the main opposition party in South Korea, said, “It is absolutely wrong to monitor and control the public via CCTV using taxpayers’ money and without the consent from the public.” 

Read More: Fujitsu and MIT Center for Brains, Minds, and Machines Build AI model to Detect OOD data

He further added that the South Korean government’s plan on the pretext of COVID is a neo-totalitarian idea. The newly developed AI system will also check whether infected individuals are following the pandemic protocols like wearing masks and maintaining social distance in public areas. 

A 110-page document outlining the plan was submitted to the Ministry of Science and ICT. According to government officials, the AI-powered facial recognition system will help reduce the workload of deployed contact-tracing workers and control the spread of COVID-19 in densely populated areas of the country. 

Regarding the criticisms encompassing the facial recognition system, an official said, “There is no privacy issue here as the system traces the confirmed patient based on the Infectious Disease Control and Prevention Act. Contact tracers stick to that rule so there is no risk of data spill or invasion of privacy.” 

Earlier this year, Greece also deployed an artificial intelligence system called Eva to determine which travelers entering the country should be tested for COVID-19.


Fujitsu and MIT Center for Brains, Minds, and Machines Build AI model to Detect OOD data


Generally, deep neural networks (DNNs) are trained under the closed-world assumption, which holds that the test and training data distributions are similar. When the models are used in real-world tasks, however, this assumption does not hold, resulting in a significant drop in performance. While these AI models may sometimes match or even outperform humans, recognition accuracy still suffers when contextual circumstances like lighting and perspective differ dramatically from those in the training datasets. 

Though this performance loss may be acceptable for applications like AI-based recommendation, it can lead to fatal outcomes in domains such as healthcare. To be deployed successfully, deep learning systems must be able to recognize data that is aberrant or considerably different from that used in training. When feasible, an ideal AI system should flag out-of-distribution (OOD) data, i.e., data that deviates from the original training distribution, without human assistance.
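The core idea can be sketched in a few lines: score how far an input sits from the training distribution and refuse to trust predictions past a threshold. This is a deliberately simple one-dimensional illustration (a z-score test on an invented feature), not the Fujitsu/CBMM method:

```python
import statistics

def fit(train_features):
    """Summarize the training distribution of a single scalar feature."""
    mean = sum(train_features) / len(train_features)
    std = statistics.stdev(train_features)
    return mean, std

def is_ood(x, mean, std, threshold=3.0):
    """Flag inputs more than `threshold` standard deviations from the mean."""
    return abs(x - mean) / std > threshold

# Hypothetical training-set feature values, e.g. an image statistic
train = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]
mean, std = fit(train)
print(is_ood(10.1, mean, std))  # in-distribution -> False
print(is_ood(25.0, mean, std))  # far from training data -> True
```

Real OOD detectors operate on learned high-dimensional representations rather than a single scalar, but the decision structure, a distance score plus a rejection threshold, is the same.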

This inspired Fujitsu Limited and the Center for Brains, Minds and Machines (CBMM) to make collaborative progress in understanding AI principles enabling recognition of OOD data with high accuracy by drawing inspiration from the cognitive characteristics of humans and the structure of the brain. The Center for Brains, Minds, and Machines (CBMM) is a multi-institutional NSF Science and Technology Center headquartered at the Massachusetts Institute of Technology (MIT). It is committed to the study of intelligence. In other words, it focuses on how the brain produces intelligent behavior and how we might be able to reproduce intelligence in machines.

At NeurIPS 2021 (the Conference on Neural Information Processing Systems), the team will present highlights of their research paper demonstrating advancements in AI model accuracy. According to the paper, the group developed an AI model that divides deep neural networks into modules to enhance accuracy; the model ranked as the most accurate in an evaluation of image recognition accuracy against the “CLEVR-CoGenT” benchmark.

The data distribution in real-world activities generally drifts with time, and tracking a developing data distribution is expensive. As a result, OOD identification is critical in preventing AI systems from generating predictions that are incorrect.

“There is a significant gap between DNNs and humans when evaluated in out-of-distribution conditions, which severely compromises AI applications, especially in terms of their safety and fairness,” said Dr. Tomaso Poggio, the Eugene McDermott Professor in the Department of Brain and Cognitive Sciences at MIT and Director of the CBMM. Dr. Poggio also adds that this neuroscience-inspired research may lead to novel technologies capable of overcoming dataset bias. “The results obtained so far in this research program are a good step in this direction.”

The study builds on the observation that the human brain can accurately record and classify visual information even when the shapes and colors of the objects we encounter change. The novel method creates a distinctive index based on how neurons respond to an object and how the deep neural network classifies the input images; growing this index improves the model’s ability to recognize OOD examples.

Read More: Researchers Use Neural Network To Gain Insight Into Autism Spectrum Disorder

It was previously thought that training a deep neural network as a single module, without dividing it up, was the best way to construct an AI model with high recognition accuracy. Researchers at Fujitsu and CBMM achieved greater recognition accuracy by separating the deep neural network into different modules based on the newly created index, covering the shapes, colors, and other aspects of the objects.

Fujitsu and CBMM intend to improve the findings to create an AI capable of making human-like flexible decisions, with the goal of using it in fields such as manufacturing and medical care.


Lenovo’s ThinkEdge SE450 Enables Business Transformation with AI at the Edge


Lenovo Infrastructure Solutions Group (ISG) has announced the addition of the new ThinkEdge SE450 server to the Lenovo ThinkEdge portfolio, bringing an artificial intelligence (AI) platform to the edge for faster business insights. The ThinkEdge SE450, according to Lenovo, extends intelligent edge capabilities with best-in-class, AI-ready technology to bring quicker insights and processing performance to more environments for real-time edge decision-making.

Lenovo notes that edge-driven data sources are used by its customers to make real-time decisions on manufacturing floors, retail shelves, city streets, and mobile telephony sites. Lenovo’s ThinkEdge line of products extends outside the data center to provide enhanced computing capability.


According to Khaled Al Suwaidi, Vice President of Fixed and Mobile Core at Etisalat, “Expanding our cloud to on-premise enables faster data processing while adding resiliency, performance and enhanced user experiences. As an early testing partner, our current deployment of Lenovo’s ThinkEdge SE450 server is hosting a 5G network delivered on edge sites and introducing new edge applications to enterprises.” He also added, “It gives us a compact, ruggedized platform with the necessary performance to host our telecom infrastructure and deliver applications, such as e-learning, to users.”

Designed to surpass the constraints of typical server locations, the ThinkEdge SE450 offers real-time insights with greater computing power and flexible deployment features that can handle diverse AI workloads while allowing customers to grow. With a distinctive form factor and a reduced depth that lets it be readily installed in areas with limited space, the ThinkEdge SE450 fits the needs of a range of important workloads. The GPU-powered server is designed primarily to satisfy the demands of vertically defined edge environments, with a robust design that can endure a broader operating temperature range as well as increased dust, shock, and vibration in tough situations. 

The ThinkEdge portfolio also includes a new lock framework to help prevent unwanted access and advanced security capabilities to further safeguard data. Further, it provides a number of connection and security solutions that can be simply deployed and more securely maintained in today’s remote situations.

Read More: Will data encryption protect user privacy when using edge AI for personalized ads?

Lenovo’s companion edge solution, the ThinkEdge SE70, is powered by the NVIDIA Jetson Xavier NX platform and was developed in collaboration with Amazon Web Services (AWS) using AWS Panorama. The NVIDIA Jetson Xavier NX is a cloud-managed, production-ready, high-performance, compact, power-efficient system-on-module that can deploy a range of AI and machine learning models to the edge. It can handle data from many high-resolution sensors at up to 21 trillion operations per second (TOPS) and runs sophisticated neural networks in parallel. Jetson Xavier NX is built on NVIDIA CUDA-X, a full AI software stack with highly optimized, domain-specific libraries that minimizes complexity and accelerates time to market.

The Lenovo ThinkEdge SE70 will be available in selected regions throughout the world starting in the first half of 2022.


H2O.ai Announces its New H2O Document AI system to Automate Document Processing


Artificial intelligence software company H2O.ai has announced the launch of its new system, H2O Document AI. The newly launched system will help automate the task of document processing. 

The highly capable artificial intelligence-powered system can process, store, and manage extensive amounts of different documents and unstructured data that businesses manage every day across the globe. The AI system can also help companies reduce significant costs involved in document management while simultaneously streamlining and optimizing the process. 

Companies can use H2O Document AI to classify documents and extract text, tables, images, graphs, and many other elements. H2O Document AI will allow businesses to seamlessly extract data from documents, analyze it, and use the generated results to scale up their operations. 

Read More: Ophthalmic Sciences reveals World’s first AI device to measure Eye Fluid

Expert in Residence for AI, UCSF Center for Digital Health Innovation, Bob Rogers, said, “When we started this journey, we were hopeful that information extraction from semi-structured documents was possible, but we weren’t sure. Some in the industry told us it couldn’t be done. Working with H2O.ai has opened up many possibilities.” 

San Francisco-based artificial intelligence firm H2O.ai was founded by Cliff Click and Sri Satish Ambati in 2012. To date, the startup has raised $251 million over eight funding rounds from investors like Commonwealth Bank of Australia, Goldman Sachs, Celesta Capital, Crane Venture Partners, and many others. H2O.ai is known for its open-source machine learning platform that makes developing smart applications more accessible.

Founder and CEO of H2O.ai, Sri Ambati, said, “Our banking, insurance, health, audit, and public sector customers each process billions of documents every year. Documents are the fastest growing source of data in the enterprise, ranging from contracts, bank statements, invoices, payroll reports, regulatory reports, and medical referrals to customer conversations in text, chat, and email.”

He further added that the new system will enable customers to extract intelligence from a wide variety of document types accurately and at an unprecedented rate, which was not possible until now.


United States Department of Defense Awards C3 AI $500 Million Contract


Enterprise artificial intelligence software company C3 AI has been awarded a $500 million contract by the United States Department of Defense (DoD): a five-year Production-Other Transaction Agreement.  

The new contract will allow any agency of the DoD to acquire C3 AI’s suite of enterprise software. This development is part of the Department of Defense’s efforts to further strengthen its capabilities to tackle new threats. 

CEO of C3 AI, Thomas M. Siebel, said, “The new Agreement has a DoD-wide scope, accelerating research projects in simulation and modelling and production deployments for operations and sustainment.” 

Read More: Key Announcements From Amazon re:Invent 2021

He further added that the company is thrilled to have been selected for this Department of Defense initiative and looks forward to expanding its work to help the government of the United States of America. 

The US government identified that the use of artificial intelligence solutions could considerably help various departments scale up their operations. The contract between the DoD and C3 AI will help the department rapidly address additional use cases and scale AI applications across all branches. 

C3 AI’s suite of enterprise and defense AI solutions is already used by the RSO, the F-35 JPO, DISA, US Space Command, the United States Air Force, and many other agencies, for purposes such as security clearance adjudication, readiness, modeling and simulation, and AI-based predictive maintenance. 

United States-based software firm C3 AI was founded by Ed Abbo, Patricia House, and Thomas Siebel in 2009. To date, the company has raised total funding of $228.5 million over six funding rounds from investors like TPG Growth, Breyer Capital, Pat House, Sutter Hill Ventures, and many more.  


Key Announcements From Amazon re:Invent 2021

AWS CEO Adam Selipsky at re:Invent 2021. Image Credit: AWS

Every year at re:Invent, Amazon Web Services (AWS) showcases its latest products, innovations, and developer solutions. AWS opted to keep it modest last year, which resulted in a slew of announcements this year. AWS Private 5G, the new ARM-based Graviton 3 chips, new AWS data centers in the Netherlands and Belgium (local zones), AWS SageMaker machine learning solutions, a new low-code development tool, AWS Amplify Studio, and numerous AWS additions are just a few of the key announcements that dotted the event which was held in Las Vegas.


We have listed some of the key announcements from AWS re:Invent 2021. It is important to note that this is not a ranked list.

1. Graviton3 and Trn1

AWS introduced a new Graviton chip, Graviton3, the latest version of Amazon’s Arm-based server processor. According to Selipsky, Graviton3 is up to 25% quicker for general-compute tasks, with two times faster floating-point performance for scientific workloads, two times faster performance for cryptography workloads, and three times faster performance for machine learning workloads. Furthermore, Graviton3 uses up to 60% less energy than the previous version at the same performance.

Another key highlight of re:Invent 2021 was the announcement of Amazon’s latest machine learning chip, Trainium, and the Trn1 instance it powers. The new Trainium-powered Trn1 instance provides the best price-performance for deep learning model training in the cloud, as well as the fastest training on EC2. Trn1 is the first EC2 instance with up to 800 gigabits per second (Gbps) of networking bandwidth, according to Adam Selipsky, CEO of AWS, making it ideal for large-scale, multi-node distributed training use cases such as image recognition, natural language processing, fraud detection, and forecasting.

2. New SageMaker Features

Amazon SageMaker is a managed service for building, training, and deploying machine learning (ML) models. At re:Invent 2021, AWS announced numerous additions to the existing SageMaker tools and features. 

AWS introduced the SageMaker Ground Truth Plus service, which employs an expert workforce to produce high-quality training datasets more quickly. SageMaker Ground Truth Plus uses a labeling workflow that includes active learning, pre-labeling, and machine-validation techniques. The new service, according to the company, cuts costs by up to 40% and does not require customers to have extensive machine learning knowledge: users can have training datasets constructed without writing their own labeling applications. 
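The routing idea behind such labeling workflows can be sketched simply (an illustrative toy, not AWS's implementation): a pre-labeling model auto-labels the predictions it is confident about and sends only uncertain items to human annotators, which is where the cost savings come from. The item names and threshold below are invented:

```python
def split_for_labeling(predictions, confidence_threshold=0.9):
    """predictions: list of (item_id, label, confidence) from a pre-labeling model.

    Returns (auto_labeled, needs_human): confident predictions are accepted
    as-is; uncertain ones are routed to human annotators.
    """
    auto_labeled, needs_human = [], []
    for item_id, label, conf in predictions:
        if conf >= confidence_threshold:
            auto_labeled.append((item_id, label))
        else:
            needs_human.append(item_id)
    return auto_labeled, needs_human

preds = [("img1", "cat", 0.97), ("img2", "dog", 0.55), ("img3", "cat", 0.92)]
auto, human = split_for_labeling(preds)
print(auto)   # [('img1', 'cat'), ('img3', 'cat')]
print(human)  # ['img2']
```

In an active-learning loop, the human-corrected labels would then be fed back to retrain the pre-labeling model, shrinking the uncertain pool over time.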

A new SageMaker Inference Recommender tool was also released to assist customers in selecting the best available compute instance for deploying machine learning models for optimal performance and cost. According to AWS, the tool chooses the appropriate compute instance type, instance count, container settings, and model optimizations automatically. Except for AWS China, Amazon SageMaker Inference Recommender is available in all locations where SageMaker is offered.

AWS also unveiled Amazon SageMaker Training Compiler, a new SageMaker technology that can speed up deep learning (DL) model training by up to 50% by making better use of GPU instances.

Deep learning model fine-tuning might take days, resulting in exorbitant expenses and a slowdown in innovation. You may now employ SageMaker Training Compiler to speed up this process by making minor modifications to your existing training script. SageMaker Training Compiler is built into the most recent versions of PyTorch and TensorFlow in SageMaker and operates behind the scenes of both frameworks, requiring no further modifications to your workflow once enabled.

3. No-code and Low code Solutions

Several of AWS’s big announcements at re:Invent 2021 centered on low-code and no-code tooling. AWS touts Amazon SageMaker Canvas as a no-code tool for machine learning, claiming that it allows business analysts to develop ML models for predictions without knowing how to code or having ML experience. Business users can integrate files from cloud and on-premises data sources and generate predictions through its intuitive graphical user interface. The no-code Canvas can be thought of as a simple user interface over Amazon’s SageMaker AutoML features.

Meanwhile, Amplify Studio, another product unveiled at re:Invent 2021, is a low-code platform-as-a-service for web and mobile app development, now available in public preview on AWS. Amplify Studio is a visual development tool that lets developers take a designer’s Figma file and instantly transform it into React UI component code, which can then be connected to back-end resources, with further changes made through the visual interface. Amplify Studio is an extension of AWS’s earlier Amplify service, which focused on creating web and mobile apps but lacked Amplify Studio’s simple drag-and-drop interface.

4. Amazon Lex

AWS announced the Amazon Lex automated chatbot designer in preview, a new product that automates the chatbot training and creation process.

Lex accomplishes this by utilizing advanced natural language understanding capabilities aided by deep learning techniques. According to Swami Sivasubramanian, VP of Amazon AI, developers can now use historical call transcripts to build a basic chatbot in only a few clicks. Within a few hours, the Amazon Lex automated chatbot designer can scan 10,000 lines of transcripts to recognize intents like ‘submit a new claim’ or ‘check claim status.’ It ensures that these intents are adequately separated and that none of them overlap, removing the need for a trial-and-error approach.
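To make the notion of "recognizing intents from transcripts" concrete, here is a deliberately crude keyword-overlap sketch (not Amazon's algorithm, which uses deep-learning-based language understanding); the intent names and keyword sets are invented to mirror the claims example above:

```python
# Hypothetical intents and associated keywords for a claims bot
INTENT_KEYWORDS = {
    "SubmitNewClaim": {"submit", "new", "claim", "file"},
    "CheckClaimStatus": {"status", "check", "claim"},
}

def guess_intent(utterance):
    """Score each intent by keyword overlap with the utterance; None if no hit."""
    words = set(utterance.lower().split())
    scores = {intent: len(words & kw) for intent, kw in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(guess_intent("I want to submit a new claim"))   # SubmitNewClaim
print(guess_intent("can you check my claim status"))  # CheckClaimStatus
```

A real designer has the much harder job of discovering the intent inventory itself from raw transcripts and keeping intents from overlapping; this sketch only shows the final classification step.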

5. IoT RoboRunner

AWS IoT RoboRunner is a new robotics service from Amazon that makes it easier for businesses to create and deploy applications that allow fleets of robots to collaborate. IoT RoboRunner, now in preview, builds on robot-management technology already in use at Amazon warehouses. It enables AWS customers to connect robots and existing automation software to synchronize work across operations, merging data from each kind of robot in a fleet and unifying data types such as facility, location, and robotic job data in a single repository. IoT RoboRunner can also be used to deliver metrics and KPIs to administrative dashboards through APIs.

6. AWS Robotics Startup Accelerator

Along with IoT RoboRunner, Amazon also introduced the AWS Robotics Startup Accelerator at re:Invent 2021, a collaboration with the nonprofit MassRobotics to address difficulties in automation, robotics, and industrial internet of things (IoT) technology.

The Robotics Startup Accelerator will help robotics startups create, prototype, test, and market their products and services. AWS and MassRobotics specialists will advise startups selected into the program on business models, while AWS robotics engineers will provide technical support. Additional benefits include hands-on training on AWS robotics solutions, as well as up to $10,000 in promotional credits for AWS IoT, robotics, and machine learning services. Startups can also benefit from MassRobotics’ business development and investment advice, as well as co-marketing opportunities with AWS through blogs and case studies.

7. Karpenter

Karpenter, a new open-source autoscaling solution for Kubernetes clusters, was unveiled by AWS. One of the promises of cloud computing is the capacity to automatically scale to suit your resource requirements. In practice, however, administrators managing Kubernetes clusters have had to keep a close eye on them to ensure they had the correct amount of resources and to avoid service outages.

Karpenter was created to make that promise a reality. It improves application availability and cluster efficiency by deploying the relevant computing resources quickly in response to changing application demand. It provides just-in-time computing resources to fulfill your application’s requirements, and will soon optimize a cluster’s compute resource footprint to reduce cost and increase performance. When nodes are no longer needed, it automatically terminates them, thereby cutting infrastructure costs. 

Karpenter is installed into a cluster using Helm, the Kubernetes package manager, and needs permission to provision compute resources on your behalf.
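The just-in-time provisioning and termination behavior described above can be modeled in miniature (an illustrative toy, not Karpenter's real scheduler, which performs full bin-packing against pod constraints and instance types): size new capacity from the pending pods' resource requests, and reclaim nodes that have drained to empty. The node names and capacities are invented:

```python
import math

def nodes_to_provision(pending_cpu_requests, node_cpu_capacity):
    """Number of nodes needed to fit currently unschedulable pods (by CPU)."""
    total = sum(pending_cpu_requests)
    return math.ceil(total / node_cpu_capacity) if total > 0 else 0

def nodes_to_terminate(node_pod_counts):
    """Empty nodes are no longer needed and can be reclaimed to cut cost."""
    return [node for node, pods in node_pod_counts.items() if pods == 0]

# Three pending pods requesting 0.5, 1.0, and 2.5 vCPUs; nodes have 2 vCPUs
print(nodes_to_provision([0.5, 1.0, 2.5], node_cpu_capacity=2.0))  # 2
print(nodes_to_terminate({"node-a": 3, "node-b": 0}))              # ['node-b']
```

The key difference from a classic cluster autoscaler is responsiveness: this decision is made directly from the pending pods rather than from node-group utilization thresholds, which is the design point Karpenter emphasizes.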

Read More: Key Announcements at Microsoft Ignite Fall Edition 2021: Day 1

8. AWS Data Exchange for APIs

Companies can acquire or extract information that matches their needs by using an API. For example, to retrieve location information, you may use the Google Maps API. That’s fantastic for a single API, but if you’re employing many APIs, it might lead to a whole new set of issues with communication, authentication, and API governance.

To address this, AWS unveiled the AWS Data Exchange for APIs at re:Invent 2021, a new solution that automatically updates changing third-party APIs, eliminating the need to design the updating mechanism.

If a user is developing an application or data model on AWS, this tool allows them to use AWS SDKs and leverage AWS authentication and governance tooling to access and update third-party APIs in an automated manner. Third-party data providers can also publish their APIs in the Data Exchange catalog to make their data sources available to developers.

9. Textract

Textract, Amazon’s machine learning service that extracts text, handwriting, and data from scanned documents, now supports identity documents such as driver’s licenses and passports, according to the company. Users can automatically extract both explicit and inferred information from IDs, such as expiry date, date of birth, name, and address, without the need for templates or configuration.
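To see what "extracting fields without templates" replaces, here is the template-based approach in miniature: regex patterns hand-written per document layout (the patterns, field names, and sample OCR text below are invented). Textract's AnalyzeID does this with ML and needs no such patterns:

```python
import re

# Hand-written patterns for one hypothetical license layout -- exactly the
# per-template work that an ML-based extractor eliminates.
FIELD_PATTERNS = {
    "name": re.compile(r"Name[:\s]+([A-Za-z ]+)"),
    "date_of_birth": re.compile(r"DOB[:\s]+(\d{2}/\d{2}/\d{4})"),
    "expiry_date": re.compile(r"EXP[:\s]+(\d{2}/\d{2}/\d{4})"),
}

def extract_id_fields(ocr_text):
    """Pull each known field out of OCR'd text; skip fields that don't match."""
    fields = {}
    for field, pattern in FIELD_PATTERNS.items():
        match = pattern.search(ocr_text)
        if match:
            fields[field] = match.group(1).strip()
    return fields

ocr = "DRIVER LICENSE\nName: Jane Doe\nDOB: 01/02/1990\nEXP: 11/30/2026"
print(extract_id_fields(ocr))
```

Every new license layout would need its own pattern set here, which is why a template-free, ML-based extractor is the selling point.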

10. Container Security

AWS introduced pull-through cache repository support in Amazon Elastic Container Registry to help development teams that use containers from publicly available registries secure those containers.

For container images sourced from public registries, the feature will provide developers with increased speed, security, and availability of the Amazon Elastic Container Registry.

According to Amazon, images in pull-through cache repositories are automatically kept in sync with the upstream public registries, removing the need for manual image downloading and updating. Furthermore, pull-through cache repositories benefit from Amazon Elastic Container Registry’s built-in security features, such as AWS PrivateLink, which keeps all network traffic private; image scanning to detect vulnerabilities; encryption with AWS Key Management Service (KMS) keys; cross-region replication; and lifecycle policies.

11. AWS Lake Formation Data Lake Update

With the addition of row- and cell-level security capabilities, AWS announced new features for allowing safe access to sensitive data in the AWS Lake Formation data lake service.

AWS Lake Formation allows users to aggregate and classify data from databases and object storage, but users must decide how to safeguard access to different slices of data.

To facilitate that, AWS introduced row- and cell-level security capabilities for Lake Formation, which are now generally available. Previously, if you wanted customized access to slices of data, you had to create and manage several copies of the data, keep all the copies in sync, and handle complex data pipelines. With the new enhancements, you can enforce access limits for specific rows and cells instead.
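Conceptually, row- and cell-level security means one copy of the data filtered per principal at query time, rather than many pre-filtered copies. The following toy sketch illustrates the idea only (Lake Formation enforces this inside AWS, not in application code); the principals, policies, and table are invented:

```python
# Hypothetical per-principal policies: which rows a principal may see,
# and which columns are masked for them.
ROW_POLICIES = {
    "eu_analyst": lambda row: row["region"] == "EU",
    "admin": lambda row: True,
}
MASKED_COLUMNS = {"eu_analyst": {"ssn"}, "admin": set()}

def query(table, principal):
    """Filter rows by the principal's row policy and mask restricted cells."""
    allowed_rows = filter(ROW_POLICIES[principal], table)
    masked = MASKED_COLUMNS[principal]
    return [{k: ("***" if k in masked else v) for k, v in row.items()}
            for row in allowed_rows]

table = [
    {"id": 1, "region": "EU", "ssn": "123-45-6789"},
    {"id": 2, "region": "US", "ssn": "987-65-4321"},
]
print(query(table, "eu_analyst"))  # one EU row, with the ssn cell masked
```

The single `table` stands in for the one authoritative copy in the lake; the per-principal filtering replaces the synced-copies-and-pipelines approach the announcement describes.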

12. AWS IoT TwinMaker

AWS IoT TwinMaker is a digital-twin solution that allows customers to create cloud-based virtual replicas that closely mirror the state of their physical devices and visualize them. Through services like AWS IoT SiteWise, Amazon Kinesis Video Streams, and Amazon Simple Storage Service, AWS IoT TwinMaker can ingest data from physical devices. Customers can upload existing 3D models of actual equipment to visualize the digital twin, and multiple 3D models can be combined and placed into a scene that closely resembles the device topology. Customers can then view a graph that depicts the devices’ interdependence and relationships.


Ophthalmic Sciences reveals World’s first AI device to measure Eye Fluid


Ophthalmic Sciences reveals the world’s first artificial intelligence-powered contactless device to measure eye fluid pressure. The newly unveiled device, named IOPerfect, accurately measures intraocular pressure in humans.

Currently, the standard procedure for measuring intraocular pressure involves various tests, such as dilation and applying air pressure. The innovative product developed by Ophthalmic Sciences could prove to be a revolutionary device in its field.

The AI-powered system uses a VR-headset-like device to remotely monitor and diagnose glaucoma, the second leading cause of blindness in the world. The device will allow patients to check their IOP from the comfort of their homes, sparing them repeated trips to clinics. The diagnosis process is quick and generates an accurate, AI-based image-processing analysis of the vascular pressure response.

Read More: Zerodha says ‘Powered by AI’ is a Marketing Gimmick

CEO of Ophthalmic Sciences, Ariel Weinstein, said, “Growing exposure to phone and computer screens appears to be linked to increased glaucoma prevalence. Along with an ageing population, the risk keeps getting higher, increasing the need for early diagnosis. But most importantly, the past year has proven the value of tele-diagnosis and this fact has attracted significant attention from clinicians and investors in our venture.” 

He further added that they are extremely thrilled that their device could prevent thousands of glaucoma patients from going blind. The solution will help clinics and healthcare practitioners save much of the time consumed by lengthy diagnosis procedures and also increase their revenue.

Israel-based Ophthalmic Sciences was founded by Gabriel Dan and Noam Hadas in 2019. The startup specializes in developing artificial intelligence-powered image processing solutions for eyecare. According to company officials, the newly unveiled IOPerfect device will be available for sale in the United States and Europe by 2023.


Zerodha says ‘Powered by AI’ is a Marketing Gimmick


While AI- and ML-powered products have been dominating every sector, in a surprise statement, Nithin Kamath, founder and CEO of Zerodha, pointed out that there is no use case for them at Zerodha yet. Zerodha’s boss took to Twitter: “I keep getting asked how we use AI/ML/Blockchain at @zerodhaonline, and I keep saying we don’t and haven’t found any use-case yet. This time I asked our man behind the scene, Dr K, to comment. Couldn’t help but share his response.” He shared a screenshot of the response from Kailash Nadh, CTO at Zerodha. Many analysts, data scientists, experts, and technologists admired the honesty of Nithin and Kailash’s statements, while others thrashed the duo for underestimating AI technology and the potential it holds.

Is ‘powered by AI’ a marketing gimmick? To an extent, yes. The attention on AI development is remarkable, from improving healthcare to natural language processing. However, the hype has led many enterprises to casually attach the ‘AI-powered’ label to their products or services. It is also becoming increasingly difficult to distinguish products that genuinely use AI or ML algorithms from those that merely imitate the label to jump on the bandwagon.

There is no general or officially agreed-upon definition of artificial intelligence, allowing companies large and small to call their products AI-powered. Since AI is so poorly defined, it has become easy to claim that a company’s products use artificial intelligence and back that up with some mumbo-jumbo. ‘Powered by AI’ has become a marketing term used by companies across the globe to create a perception of intelligent, competent products, because most customers cannot tell competent AI from incompetent AI.

Read more: DeepMind Makes Huge Breakthrough by Discovering New Insights in Mathematics

In his blog post published on November 26, Kailash Nadh explains that the minimal use of AI in Zerodha’s KYC process does not justify putting ‘powered by AI’ on its website, because AI has no impact on the product offering. According to Nithin, Zerodha doesn’t use AI or ML apart from a commodity image-recognition tool for processing KYC images during onboarding: “We have not come across any problems yet where we have felt the need to turn to ML solutions.”

Most people have interacted with a chatbot, typically marketed as a ‘live chat’ option on many companies’ websites. However, chatbots work from a predetermined response list and can’t learn or adapt. A human can pick up on what a customer is really asking; a chatbot cannot. In this sense, chatbots aren’t AI-powered. Unfortunately, most businesses market chatbots as AI, causing immense confusion for customers. Additionally, because chatbots cannot learn from their mistakes, a bot will keep matching on a keyword and offering the same answer, even if the response has nothing to do with the customer’s query.
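The keyword-and-canned-response behavior described above can be sketched in a few lines of Python. This is a toy illustration; the keywords and responses are made up:

```python
# A minimal sketch of a rule-based "chatbot": it matches keywords against a
# fixed response table and cannot learn or adapt -- the behavior described
# above. Keywords and canned answers are invented for illustration.
RESPONSES = {
    "refund": "Please visit our refunds page to file a claim.",
    "hours": "We are open 9am-5pm, Monday to Friday.",
}
DEFAULT = "Sorry, I didn't understand that. Please rephrase."

def reply(message: str) -> str:
    """Return the canned answer for the first keyword found, else a fallback."""
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return DEFAULT

# The bot keys on "refund" even when the question is really about something
# else, so both of these queries get the identical canned answer.
reply("How do I get a refund?")
reply("Does the refund policy cover gift cards bought abroad?")
```

There is no model and no learning here, which is exactly why marketing such a system as ‘AI-powered’ is misleading.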

Kailash claims that innovations in artificial intelligence and machine learning have been commoditized to the point that applying them is a trivial task for most use cases. The proliferating ‘powered by AI/ML’ phrase has become a marketing gimmick, reduced to meaninglessness. He also mentioned how companies attach the prominent ‘powered by AI’ label even when the technology does not directly relate to the actual product. According to MMC, a London venture capital firm, 40% of European AI startups don’t actually use artificial intelligence in a way that is ‘significant’ to their businesses. Common uses of AI and ML include chatbots (26%) and fraud detection (21%), but it is tricky to judge exactly how the technologies affect the product and the customers.

Misleading marketing and outright fabricated features are a hallowed tradition in tech, used to attract more customers. Since its inception in 2010, Zerodha has captured a 15% share of India’s overall retail trading market without using AI or ML. The real measure of AI-powered products or tools is their functionality and how much value they bring to the product or service and its end users. Companies like Zerodha that are upfront about their use of AI show that it’s possible to build solutions and become a market leader without marketing products as ‘powered by AI.’


Active.ai to provide Whatsapp Banking Services for ADCB


Artificial intelligence services company Active.ai is set to provide WhatsApp banking services for Abu Dhabi Commercial Bank (ADCB). The move aims to give ADCB customers an enhanced banking experience.

ADCB and Active.ai have collaborated with Talisma Corporation to bring the WhatsApp banking feature to customers. The service is available 24/7 and uses various artificial intelligence and machine learning algorithms to offer a wide range of banking services.

Both existing and potential customers can enjoy the benefits of this AI-powered WhatsApp banking service. The feature will allow customers to interact seamlessly with the bank and carry out multiple banking-related tasks in English and Arabic.

Read More: Taiwan launches new Artificial Intelligence hub to upgrade Industries

Managing Director of Talisma Corporation, Raj Mruthyunjayappa, said, “WhatsApp is one of the most preferred channels for communication today. Integrating AI solutions into their business functions is not a choice anymore for enterprises. We are absolutely pleased to digitally empower ADCB in partnership with Active.Ai by providing cutting-edge Conversational AI WhatsApp chatbot for the bank’s customers to ensure seamless user experience.”

He further added that this development is a step towards the company’s aim of providing a superior experience by constantly catering to the changing needs of customers and helping brands to meet their customer expectations. 

The newly launched Whatsapp banking feature will allow customers to carry out the following tasks – 

  • Check account balance
  • Check credit card balance
  • Request checkbooks
  • View last ten credit and debit card transactions
  • Check loan balance and EMI status
  • Generate IBAN letter
  • Locate bank branches and ATMs
  • Activate and block cards
  • Check rewards
  • Explore various products and services
  • Check and calculate foreign currency rates

Singapore-based artificial intelligence company Active.ai was founded by Parikshit Paspulati, Ravi Shankar, and Shankar Narayanan in 2016. The firm specializes in developing advanced proprietary conversational AI solutions. 

“It’s an honor and privilege for Active.ai to enable WhatsApp Banking at Abu Dhabi Commercial Bank (ADCB) in Arabic and English on its conversational AI banking platform for their retail banking customers,” said the Co-founder and CEO of Active.ai, Ravi Shankar.
