
Leading AI Healthcare Startups in India


The revolutionary advancements in artificial intelligence are transforming almost every sphere of modern life, and its applications in the healthcare sector are particularly remarkable. Most of the greatest healthcare innovations today have a significant AI component. From the early diagnosis of life-threatening diseases to assistance in complex surgeries, artificial intelligence is contributing profoundly across the board.

Although AI is already a familiar technology in the healthcare sector of the Western world, it is still gaining ground in India, and AI-based startups focused on healthcare are doing a remarkable job of laying that groundwork. This article lists some of the pioneering AI healthcare startups in India that are transforming the face of medical care in the country.

1. HealthifyMe

HealthifyMe is a Bengaluru-based digital health and wellness company founded by Tushar Vashisht, Sachin Shenoy, and Mathew Cherian in 2012 with the aim of introducing digital healthcare to Indians. The company provides an app with a virtual assistant, ‘Ria’, billed as the world’s first AI nutritionist, which answers users’ queries about fitness, nutrition, and health in 10 different languages. The app also provides dietary recommendations and uses artificial intelligence to track calorie intake. Through a premium subscription, HealthifyMe offers professional advice on nutrition and fitness to everyone from healthcare professionals to individual users. The company claims to have access to one of the largest Indian food databases in the world.


2. PharmEasy

PharmEasy is one of the leading AI healthcare startups in India. Founded in 2015 by Dharmil Sheth, Mikhil Innani, and Dhaval Shah in Mumbai, it offers a smartphone app that connects users with pharmacies for seamless medicine deliveries and provides services such as teleconsultation and diagnostic test sample collection. The app uses machine learning and big data tools, including TensorFlow, Hadoop, Spark, and Hive, along with several data crunching and analytics solutions. PharmEasy has partnered with more than 80,000 pharmacies across 1,200+ cities in India.

3. Qure.ai   

Qure.ai is a breakthrough artificial intelligence solution provider that is enhancing imaging accuracy and improving healthcare outcomes with the assistance of machine learning-supported tools. Founded in 2016, this Mumbai-based company is one of the leading AI healthcare startups in India. Qure.ai employs deep learning to provide automated interpretation of radiology exams like X-rays, CT scans, and ultrasounds, enabling faster diagnosis and treatment of malignant diseases. Its tools can also outline and quantify specific findings, such as lung disease patterns. Qure.ai also offers digital pathology solutions that can distinguish malignant biopsies from benign ones and grade a variety of tumor types.

4. Axtria   

Founded in 2010, Axtria is a Noida-based company that provides online sales and marketing management tools for the life sciences industry. The company develops data analytics and software platforms that help clients manage customers, revenue, risk, and the supply chain. Axtria helps healthcare organisations and institutions with services such as planning, lead scoring, reporting, forecasting optimization, and visualization. Its risk solutions include portfolio risk management, regulatory compliance, operational risk management, and risk strategy development. The company also provides the healthcare sector with services such as HR analytics, spend analytics, and supply chain analytics.

5. SigTuple   

Another name among the leading AI healthcare startups in India is SigTuple, which has been providing AI-based healthcare diagnostic solutions since 2015. SigTuple democratizes microscopy by automating it through advanced AI and robotics. Its machine learning and deep learning platforms, such as Manthan, can detect and predict the chances of a person having a particular disease from medical diagnostic images. SigTuple’s peripheral blood smear analysis solution, Shonit, provides image processing-based solutions for a differential blood count. Screening for diseases like malaria and anemia on Shonit is under clinical trials.

6. Niramai 

Bengaluru-based Niramai Health Analytix is one of the pioneering AI healthcare startups in India. Founded in 2016 by Geetha Manjunath and Nidhi Mathur, the company provides Thermalytix, a novel software-based medical device that applies AI to high-resolution thermal sensing for early-stage detection of breast cancer. The device, which is automated, low-cost, and portable, is making cancer screening possible in clinics across India. Its core technology is based on machine learning algorithms. Niramai’s solution is also being used for regular preventive health checkups and large-scale screenings in rural and semi-urban areas.

7. Perfint Healthcare 

Chennai-based AI healthcare startup Perfint Healthcare provides solutions for image-guided interventional procedures, with special emphasis on oncology and pain care. Radiologists across the world use Perfint’s robotic solutions for biopsy, drug delivery, ablation, drainage, fine needle aspiration, and varied pain care procedures for both cancerous and non-cancerous pain. The company’s products include Robio EX, Maxio, and Navios. Robio EX is a CT- and PET-CT-guided robotic positioning system, whereas Maxio provides intraoperative guidance and post-procedure verification support. Navios is a computer-based workflow assistance solution for CT-guided percutaneous ablation procedures.

8. HealthPlix  

HealthPlix is another leading AI healthcare startup in India, focused on empowering healthcare professionals to make the right medical decisions using software customized for them. Its assistive AI-powered EMR software helps doctors drive better patient health outcomes by providing clinical decision support at the point of care. HealthPlix also provides an electronic medical record solution for chronic care management, with features including e-prescription generation, lab management, and billing. It also has dashboards that provide AI and machine learning-based insights on marketing, clinical test hypotheses, finance, and treatment outcomes.

9. Dozee   

Established in 2015, Dozee is a Bangalore-based AI healthcare startup that provides contactless health monitors that silently track heart rate, respiration, sleep patterns, stress levels, cardiac contractions, apnea, and more while the user sleeps. Its artificial intelligence algorithms enable early detection of any health deterioration. Medical professionals can access medical information remotely through Dozee’s patient monitoring systems and app. Dozee’s integration with pulse oximeters was used to provide continuous remote monitoring of Covid-19 patients and keep doctors safe from exposure to infection during the pandemic. Dozee has been endorsed by the Bill & Melinda Gates Foundation and the Indian government.

10. BlueSemi   

Founded in 2017, consumer health tech startup BlueSemi is one of the pioneering AI healthcare startups in India. The Hyderabad-based company provides IoT solutions for healthcare, including anti-counterfeiting and temperature measurement solutions, and offers a wireless temperature scanner for fever screening. Its energy harvesting device enables power independence for wearables and autonomous vehicles. Other capabilities include unique identification coding, data analytics, integration, customized access, and marketing. Its solutions have use cases in hospitals, medical institutions, wearables, retail, autonomous vehicles, smart homes, and more.

11. Tricog   

Tricog is a Bangalore-based AI healthcare startup and one of the world’s largest predictive healthcare analytics firms. It was founded in 2014 by interventional cardiologist Dr. Charit Bhograj to leverage deep learning-based medical and technology expertise and provide virtual cardiology services to distant clinics. Tricog has since evolved into a technology-driven cardiac care company run by a highly experienced team. Its flagship product, InstaECG, powers a network of 3,000 ECG diagnosis points at medical centers across the globe. Today, Tricog’s AI draws on a data store covering 200+ cardiac conditions, which significantly improves the detection of rare cardiac disorders.

12. DocTalk

DocTalk is a Mumbai-based AI healthcare startup that provides a software application that keeps track of medical reports and prescriptions and allows users to easily share them with doctors. Founded in 2016 by Akshat Goenka and Vamsee Chamakura, the startup focuses on an AI-based virtual assistant program to enhance India’s healthcare industry. DocTalk also offers subscribers a platform where they can get answers by speaking with their physicians over the app. The app also lets users save all their medical reports and photos to the cloud, so they can access them digitally from anywhere.

13. Lybrate

Lybrate is one of the pioneering AI healthcare startups in India, established in 2014 as the country’s first online doctor consultation platform. Through its AI-based platform of the same name, people can connect with doctors and consult them online. Users can also book lab tests and appointments online through the service. Lybrate additionally provides fitness, skincare, and hair care solutions, with doctors from a wide range of specialties across the country. Moreover, Lybrate provides a health consultant bot on Facebook Messenger. The startup’s goal is to make online consultation readily available to everyone.

14. HealthKart

Founded in 2011, HealthKart is one of the top AI healthcare startups in India and is based in Gurgaon. It offers many health products and services to help customers attain their fitness goals through its e-health store, and provides on-demand healthcare and fitness advice using AI. It is among the largest healthcare e-stores in India, selling items such as nutritional supplements, diabetes supplies, medical equipment, and infant care products. Products can be purchased online and are home-delivered by HealthKart.

This leading healthcare platform was originally launched as a mobile application and a website. Today it has grown into a marketplace offering many services, including browsing and buying prescription drugs online.


Top Data Centers in India 2022


According to Livemint, India today has more than 692 million internet users, a figure expected to reach 900 million by 2025. With this significant population and internet growth, ResearchAndMarkets expects the Indian data center market to grow at a compound annual growth rate (CAGR) of 15.07% between 2022 and 2027.
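
As a quick back-of-the-envelope check (a sketch in Python, independent of any base market figure), a 15.07% CAGR compounds to roughly a doubling over that five-year window:

```python
# What a 15.07% CAGR over 2022-2027 implies for any base market size.
cagr, years = 0.1507, 5
multiple = (1 + cagr) ** years
print(f"{multiple:.2f}x")  # ~2.02x: the market roughly doubles in five years
```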

Many hyper-scale operators announced data center projects in 2021, and that was only the beginning: this year, many more companies are pouring money into data centers to claim the position of best data center in India.

To be clear, this article does not rank the data centers. Here is a list of the top data centers in India in 2022.

1. NTT

NTT (Nippon Telegraph and Telephone) Global Data Centers, also known as Netmagic, is a popular managed hosting and multi-cloud IT solution provider. The company runs 160 carrier-neutral, hyper-scale, and high-density data centers across 20 countries, serving big firms worldwide; 12 of them are located in India, distributed over Mumbai, Bangalore, Chennai, and Delhi, making NTT one of the top data centers in India in 2022. In India, NTT is headquartered in Mumbai with eight working data centers: six in Mumbai and two in Navi Mumbai. In addition, two data centers are located in Bangalore and one each in Chennai and Delhi. The data centers currently carry a 150 MW (megawatt) IT load, with another 250 MW of IT load under development. NTT Group operates the data centers through NTT Data, an information technology company headquartered in Tokyo, Japan. NTT helps clients accelerate their growth and develop new business models, aiming to transform them through consulting, industry solutions, business process services, IT modernization, and managed services.

2. CtrlS

CtrlS is an Indian data center company founded in 2008 to provide services like data center colocation, DC build & consulting, internet bandwidth, managed services, cloud security services, and disaster recovery services. The company is focused on keeping clients’ applications online and their data secure. It has reportedly posted EBITDA (earnings before interest, taxes, depreciation, and amortization) growth almost double the industry standard. On this, Mr. Sridhar Pinnapureddy, founder and CEO of CtrlS, stated, ‘Our growth is driven by sharp focus on product innovation, customer delight, process efficiencies and a very passionate CtrlS team. Apart from these, we are committed to expanding our global footprint and are making strategic investments into Asia Pacific, Middle East, and North American markets.’ CtrlS is known as Asia Pacific’s largest Tier 4 data center operator and has facilities in Hyderabad, Mumbai, Noida, and Bangalore. CtrlS is promoted by the 18-year-old Pioneer group and has built the largest available infrastructure in the data center world.

3. Web Werks

Web Werks is one of the top data center companies in India, offering a range of services that provide fast and secure access to data of any complexity. It has six data centers in India: three operational, one each in Mumbai, Pune, and Delhi-NCR, and three under development in Mumbai, Bangalore, and Hyderabad. Web Werks’ facilities are high-density, hyper-scale, AI-powered infrastructure offering best-in-class service. The company aims to maintain and raise the standards of data center services with its new data centers in India, and ensures clients achieve business excellence by addressing their data center needs and problems. Web Werks believes that data centers hold the answer to the future of business; its team consists of data management professionals who meet data needs with colocation that maximizes efficiency at every turn.

4. Nxtra

Nxtra Data, a fully owned subsidiary of Bharti Airtel, is one of the best data centers in India in 2022. Nxtra provides colocation, managed services, and cloud services. The company is headquartered in Gurugram and has ten functional core data centers across seven locations in India: Noida, Manesar, Mumbai, Pune, Bhubaneswar, Bangalore, and Chennai. Associated with both the data center and telecom industries for more than two decades, Nxtra offers a platform of hyper-connected core and edge data centers across 120+ locations in India alone, enabling businesses to accelerate their digital journeys and host applications closer to their customers. The company has begun investing in multiple large data center parks, and in 2020, Nxtra struck a deal with Carlyle Group, one of the world’s largest investment firms, to sell up to 25% of its stake for $235 million.

5. Microsoft

Microsoft Corporation is a tech giant that produces computer software, consumer electronics, personal computers, and related services, and is a leading provider of cloud computing services, video games, computer & gaming hardware, search, and other online services. The American multinational is headquartered in Redmond, Washington, and operates offices in more than 60 countries. One of the most important IT companies behind numerous software products and services, Microsoft is now building the largest data center in India. It has well-equipped data centers in three locations: Pune, Mumbai, and Chennai. At the beginning of 2022, Microsoft announced that it would set up its fourth and largest data center in India, in Hyderabad, Telangana, by the end of 2025. Microsoft is investing a total of Rs 15,000 crore in the new facility, which will employ 18,000 full-time employees. The Hyderabad data center will be the company’s largest after the Redmond center.

6. Adani

Adani Enterprises is the flagship company of Adani Group, an Indian multinational conglomerate headquartered in Ahmedabad, Gujarat. Adani Group and EdgeConneX, one of the world’s largest private data center operators, formed a joint venture called AdaniConneX, which aims to empower digital India with 1 GW (gigawatt) of data center capacity in the next decade. Adani earns its place on the list of top data centers in India 2022 for its innovative ideas and sheer determination. AdaniConneX focuses on accelerating clients’ digital transformation by being their data sanctuary, providing the transparency, security, scale, and flexibility they need. Today, AdaniConneX has data centers in seven locations in India, including Chennai, Hyderabad, Mumbai, Noida, Pune, and Vizag.

7. Reliance Data Center

Reliance Data Center is one of the top data center companies in India, providing outsourced data center infrastructure for organizations with crucial IT operations. It is a division of Reliance Communications, the mobile network provider of Reliance Industries Limited. Currently, Reliance operates nine data centers located in Mumbai, Chennai, Hyderabad, and Bangalore, all built on proven IT systems and facility platforms with recognized expertise in the field. The company delivers security, reliability, scalability, networking, applications, and consulting services. Besides this wide range of services, Reliance offers a very high degree of network security and over 650,000 sq. ft of hosting space. Fortune 1000 companies trust Reliance for its vast experience in India’s data center business and its impeccable record of delivering services.

8. Sify Technologies

Sify Technologies Limited is an Indian information and communications technology (ICT) service and solution provider. The company delivers end-to-end ICT solutions, including telecommunications, data center, cloud & managed services, and transformation, integration & application integration services. Today, Sify is one of the best data centers in India in 2022. It has matured into India’s largest digital transformation company, having put together a converged ICT ecosystem comprising India’s largest multiprotocol label switching (MPLS) network, top data centers, and cloud-managed services. Sify has six data centers in India, located in Mumbai, Delhi, Bangalore, and Chennai, and connects over 45 data centers across India; this connectivity lets businesses in different verticals leverage Sify’s data centers, networks, and security services. The facilities support everything from a single cabinet to multi-MW deployments, are carrier-neutral, and sit within a multi-telecom ecosystem, offering shared & caged colocation, cloud solutions, remote infrastructure management, and migration services.

9. Equinix

Equinix is an American multinational company specializing in data centers and internet connections. It is the largest global data center and colocation provider, operating a network of 220+ International Business Exchange (IBX) data centers in 63 metropolitan areas worldwide. Currently, Equinix has only two data centers in India, both in Mumbai, but it makes this list of data centers in India 2022 because of its very high level of security and operational reliability, backed by an award-winning portfolio built over 20+ years. The company has worked to interconnect industry-leading organizations across a digital-first environment. Equinix recently announced plans to build a third IBX data center in Mumbai with an initial investment of $86 million, including the acquisition of a parcel of land of nearly four acres. The new project will allow Equinix to grow its ecosystem on Platform Equinix across India, contributing to the country’s growing digital economy.

10. Amazon Web Services

Amazon Web Services (AWS), a subsidiary of Amazon, is an IT service management company providing on-demand cloud computing platforms and APIs to individuals, companies, and governments. AWS continuously develops innovative data center designs and systems, implements controls, builds automated systems, and undergoes third-party audits to verify security and compliance. One prime reason many choose AWS is the flexibility of its services: users can opt for whatever development platforms or programming languages they prefer. Over the coming two years, AWS is set to open four mini data centers in India, one each in Bangalore, Chennai, Delhi, and Kolkata. AWS plans to open 32 Local Zones in 26 countries over the next two years, making networking easier and more secure. The Local Zones will provide cloud services for use cases such as video streaming, gaming, and applications requiring real-time feedback, extending AWS Regions by placing compute, storage, database, and other services near population and industry centers.


Who is the Father of Artificial Intelligence? 


The hype around artificial intelligence keeps skyrocketing with each recent advance in the field. Artificial intelligence is, almost self-explanatorily, the simulation of human intelligence processes by machines, especially computer systems. But have you ever asked who the father of artificial intelligence is?

For his foundational role in the creation of intelligent machines, John McCarthy, a renowned American computer scientist and inventor, was honored with the title Father of Artificial Intelligence. He coined the term Artificial Intelligence and is therefore credited as the founder of AI. This article sums up John McCarthy’s early life, the history of artificial intelligence, and his achievements.

Early Life of John McCarthy

John McCarthy, a renowned computer science giant and a pioneer in artificial intelligence, was born in Boston on September 4, 1927. However, things were not always smooth for this genius. McCarthy was born to a family of European immigrants during the Great Depression. Considering the humble circumstances he came from, little pointed toward McCarthy becoming a successor to Alan Turing and the father of AI. The poor health of McCarthy’s younger brother led the family to settle in Los Angeles after moving around the country for a while in search of work. It was there that McCarthy, already outstanding in mathematics, came to know about the California Institute of Technology (Caltech). Before entering college, the future father of artificial intelligence taught himself college-level mathematics from used textbooks borrowed from Caltech students.

McCarthy worked as a carpenter and a fisherman while studying at the same time. He was an inventor from a young age, devising a hydraulic orange-squeezer, among several other things, to help his family out. Despite health complications, he secured a place in the undergraduate mathematics program at Caltech. By the time he officially entered, he had studied so much independently that his professors allowed him to skip the first two courses. He graduated in 1948 and obtained his doctorate in mathematics from Princeton in 1951. It was already clear that McCarthy was an extraordinary mind with exceptional learning capabilities.

Birth of Artificial Intelligence

Early in his life, John McCarthy attended a symposium on ‘Cerebral Mechanisms in Behaviour’, and it was there that the idea of creating machines that could think like humans took root in his mind. He believed it was possible to develop machines that could embody the problem-solving nature and abstract thinking of the human brain. McCarthy coined the term in his formal proposal for the Dartmouth Conference, the first-ever artificial intelligence conference, held in 1956. His intention for the conference was to discuss whether a machine could be developed that could think abstractly, improve itself, and solve problems like a human. The father of AI claimed that every aspect of learning, or any other attribute of intelligence, could be described so precisely that a machine could, in principle, be made to simulate it.

The father of AI defined artificial intelligence as the science and engineering of making intelligent machines. At the conference, he established the objectives that he would pursue throughout his career. Ten computer scientists attended. McCarthy wrote the inaugural text for the conference together with two prestigious scientists, Claude Shannon and Marvin Minsky, who soon left the effort to focus on mathematical or computational theorizing. McCarthy, however, is recognized as the father of artificial intelligence for two reasons: pioneering AI as a new area of research, and continuing to drive its development for half a century.

Achievements of John McCarthy

As mentioned earlier, John McCarthy coined the term Artificial Intelligence, laying down the roadmap for it to become a field of study. He was later joined by Marvin Minsky in 1959 at MIT, where McCarthy was a research fellow. Continuing his quest to build machines that could imitate human logical thinking, McCarthy made his next breakthrough when he proposed the ‘Advice Taker’ in his 1958 research paper ‘Programs with Common Sense’. The Advice Taker was a hypothetical computer program that would use logic to represent information in a computer. It was the first step toward using logical reasoning to enhance artificial intelligence.

Around the same time, the founder of artificial intelligence invented a new programming language called LISP, which is still used in the field of artificial intelligence. John McCarthy gained much recognition in the 1950s and 1960s for his concept of time-sharing. Experts say his idea of using one central computer to store everything and share it across multiple systems laid the foundation for present-day cloud-based storage.

Three significant early time-sharing systems are credited to the father of artificial intelligence: the Compatible Time-Sharing System, the BBN Time-Sharing System, and the Dartmouth Time-Sharing System. Les Earnest, McCarthy’s colleague at the time, explained that the internet would have taken much longer to come into existence without McCarthy’s invention of time-sharing systems.

McCarthy also invented the ‘garbage collection’ method for the programming language LISP around 1959. The method reclaims memory occupied by objects that are no longer being used by the program, and it paved the way for simplifying manual memory management in LISP. Later on, John McCarthy became a distinguished member of the ACM Ad Hoc Committee on Languages at the International Federation for Information Processing (IFIP).
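
To make the idea concrete, here is a toy mark-and-sweep sketch in Python; the object names and heap layout are purely illustrative, and McCarthy’s original LISP collector naturally predates this formulation. Objects reachable from the program’s roots are marked live, and everything else can be reclaimed:

```python
def mark(roots, refs):
    """Mark every object reachable from the roots. refs maps each
    object to the list of objects it references."""
    live, stack = set(), list(roots)
    while stack:
        obj = stack.pop()
        if obj not in live:
            live.add(obj)
            stack.extend(refs.get(obj, []))
    return live

heap = {"a", "b", "c", "d"}          # all allocated objects
refs = {"a": ["b"], "c": ["d"]}      # "a" points to "b", "c" to "d"
live = mark(roots=["a"], refs=refs)  # only "a" is referenced by the program
print(heap - live)                   # {'c', 'd'}: garbage, safe to reclaim
```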

Between 1978 and 1986, McCarthy developed the circumscription method of non-monotonic reasoning. As complicated as the term sounds, circumscription is a way of formalizing the common-sense assumption that things are as expected unless specified otherwise. In 1982, news surfaced of McCarthy’s new idea of a Space Fountain: a tall tower extending into space, kept vertical by the outward force of a stream of pellets emitted from the earth, with a conveyor belt bringing the pellets back down.

Accolades 

The significance of McCarthy’s contributions to the field of artificial intelligence is evident throughout his journey. His pioneering work has been recognized globally and has earned him several accolades. Some of the notable accolades to the name of the founder of artificial intelligence are:

  • Turing Award by Association for Computing Machinery (1971).
  • Kyoto Prize (1988).
  • USA National Medal of Science in Mathematical, Statistical, and Computational Sciences (1990).
  • Benjamin Franklin Medal in Cognitive Science and Computers by the Franklin Institute (2003).
  • Stanford Engineering Hero (2012).

Last Days

Near the end of the research stage of his career, in 1978, McCarthy had to give up on his purist idea of artificial intelligence. He said artificial intelligence needs 5 Faradays, 2 Maxwells, 1.7 Einsteins, and the funding of 0.3 Manhattan Projects to succeed. During an interview, the founder of artificial intelligence said that if it takes 200 years to finally achieve artificial intelligence, and a textbook is at last written that explains how it is done, the hardest part of that textbook to write will be the part explaining why people did not think of it 200 years earlier. The father of artificial intelligence left the world on October 24, 2011, at the age of 84. A professor and an inventor, John McCarthy defined and dominated the field of artificial intelligence for more than five decades.

Quotes from John McCarthy

“He who refuses to do arithmetic is doomed to talk nonsense.” 

“All travel is, after all, a journey in time and mind. Physical landscapes are a mirror of, or perhaps a key into, our inner landscape.”

“Never abandon a theory that explains something until you have a theory that explains more.”

“With no more than six levels of misquotation, any statement can be made to say whatever you wish.”


Top AutoML tools in 2022


Machine learning is a booming industry: reports by Fortune Business Insights show that the global machine learning market grew 36.1% in 2020 over 2019. This trend has created high demand for automated machine learning (AutoML) tools, which make ML algorithms and models easier to use. The goal of AutoML tools is to efficiently automate the regular, manual, and tedious workloads of ML implementations; they provide techniques to automatically find the best-performing ML model for a given dataset. Today, several AutoML software packages and platforms are available online for aspiring AI & ML developers and professionals.

AutoML tools enable users to train and test their models with minimal domain knowledge of either machine learning or their data.

This article provides a list of the best AutoML tools. Note that the tools are not ranked.

1. Auto-Sklearn

Auto-Sklearn is an open-source automated machine learning tool built around sklearn. Sklearn (scikit-learn) is an open-source machine learning library built upon Scientific Python (SciPy), providing a range of supervised and unsupervised learning algorithms. Conventionally, extensions or modules for SciPy are named SciKits, hence the name scikit-learn. Auto-sklearn was first introduced in a 2015 paper named ‘Efficient and Robust Automated Machine Learning’. Later, in 2020, a second version was developed on GitHub and presented in the paper ‘Auto-Sklearn 2.0: The Next Generation’.

Auto-sklearn handles tasks like feature selection, data preprocessing, hyperparameter optimization, model selection, and evaluation. The toolkit is a drop-in replacement for sklearn estimators and classifiers. It frames ML problems as CASH (Combined Algorithm Selection and Hyperparameter optimization) problems: it automatically searches for the right algorithm for a dataset and applies optimization techniques to its hyperparameters. The automated process is based on Bayesian optimization with meta-learning, and the goal is to find the optimal model pipeline and an ensemble of the individual model pipelines. Auto-sklearn includes 15 classification algorithms and 14 feature preprocessors, and performs data scaling, encoding of categorical parameters, and handling of missing values.
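
A minimal usage sketch, assuming auto-sklearn is installed; the tiny time budgets are purely illustrative, and real searches usually run far longer:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
import autosklearn.classification

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Search candidate pipelines and build an ensemble within the time budget.
automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=120,  # seconds for the whole search
    per_run_time_limit=30,        # seconds per candidate pipeline
)
automl.fit(X_train, y_train)

# Behaves as a drop-in sklearn estimator afterwards.
print(automl.score(X_test, y_test))
```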

2. Auto-PyTorch

Auto-PyTorch is one of the best AutoML tools: a machine learning automation toolkit built on top of PyTorch. PyTorch is an open-source ML framework based on the Torch library, with applications in computer vision and natural language processing (NLP). Auto-PyTorch extends the AutoML approach by jointly optimizing traditional ML pipelines and neural architectures. It has an API similar to Auto-sklearn and thus requires only a few inputs to fit a DL pipeline, and it was designed to support tabular data and time series data. The Auto-PyTorch workflow combines multi-fidelity optimization with portfolio construction for meta-learning and ensembling of deep neural networks. Its features are described in the papers ‘Auto-PyTorch Tabular: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL’ and ‘Efficient Automated Deep Learning for Time Series Forecasting’.
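
A sketch of the tabular classification API along the lines of the project’s examples; treat the argument names and time limits as illustrative:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from autoPyTorch.api.tabular_classification import TabularClassificationTask

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Jointly search pipelines and neural architectures under a time budget.
api = TabularClassificationTask()
api.search(
    X_train=X_train, y_train=y_train,
    X_test=X_test, y_test=y_test,
    optimize_metric="accuracy",
    total_walltime_limit=300,      # seconds for the whole search
    func_eval_time_limit_secs=50,  # seconds per pipeline evaluation
)

y_pred = api.predict(X_test)
print(api.score(y_pred, y_test))
```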

3. Auto-Keras

Auto-Keras is an open-source software library that implements AutoML for deep learning models using the Keras API. Keras is a high-level API that runs on top of TensorFlow and is written in Python. Auto-Keras automatically searches for the architecture and hyperparameters of DL models, building Keras models via the TensorFlow tf.keras API. This search process is known as neural architecture search (NAS) and can itself be framed as a modeling task. DATA Lab developed Auto-Keras with the aim of making machine learning accessible to everyone.

Auto-Keras offers a simple and effective way of finding top-performing models for various predictive modeling tasks, making it one of the best AutoML tools. It supports several tasks, including classification and regression of images, text, and structured data. The current version of Auto-Keras is a pre-release, as the package is still evolving; tasks like time series forecasting, object detection, and image segmentation are under development and will arrive in a future version. It provides an easy-to-use interface where the user only specifies the location of the data and the number of models to try, and receives the model that achieves the best results. For now, Auto-Keras is provided ‘as available’ without warranties, so any loss of data or libraries is at the user’s own risk.
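
A minimal sketch with random toy data (max_trials and epochs are kept tiny here just for illustration):

```python
import numpy as np
import autokeras as ak

# Toy tabular data: 200 samples, 5 numeric features, binary labels.
x = np.random.rand(200, 5)
y = np.random.randint(0, 2, size=200)

# Search over candidate architectures and hyperparameters (NAS).
clf = ak.StructuredDataClassifier(max_trials=3, overwrite=True)
clf.fit(x, y, epochs=5)

print(clf.predict(x[:3]))   # predictions for new rows
model = clf.export_model()  # best model, exported as a plain Keras model
```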

4. Google Cloud AutoML

Google Cloud AutoML is automated machine learning software that lets you train custom ML models without coding. Google announced Cloud AutoML as a suite of machine learning products in 2018. It provides simple, secure, and flexible products behind an easy GUI (graphical user interface). One of the latest products in this line is Vertex AI, which helps you build, deploy, and scale ML models faster, combining pretrained and custom tooling within a unified AI platform. The advantages of Google Cloud AutoML are that models are built with groundbreaking ML tools powered by Google, deployed faster with 80% fewer lines of code, and managed with MLOps tools for easy handling of data and models.
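
A hedged sketch using the Vertex AI Python SDK (google-cloud-aiplatform); the project, bucket, and column names below are placeholders, not real resources:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholders

# Create a managed tabular dataset from a CSV in Cloud Storage.
dataset = aiplatform.TabularDataset.create(
    display_name="churn-data",
    gcs_source="gs://my-bucket/churn.csv",
)

# Launch an AutoML training job: no model code is written by hand.
job = aiplatform.AutoMLTabularTrainingJob(
    display_name="churn-automl",
    optimization_prediction_type="classification",
)
model = job.run(dataset=dataset, target_column="churned")

endpoint = model.deploy(machine_type="n1-standard-4")  # online predictions
```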

5. DataRobot

DataRobot stands out among the best AutoML tools as an AutoML platform that manages and simplifies complex enterprise workflows. You can use the platform to make predictions, perform what-if analyses, and automate and optimize model creation for virtually any function. It also helps executives track value by giving real-time insight into how many models are running in production. DataRobot was founded by Jeremy Achin and Tom de Godoy in 2012 to automate the tasks needed to develop artificial intelligence and machine learning applications.

The features of DataRobot include data formatting, feature engineering, model selection, hyperparameter tuning, and monitoring. Additionally, it helps users understand which variables matter for a prediction and evaluate the significance of different variables in determining the target variable. Deploying models in DataRobot is a simple process: once instructed, it stands up REST APIs, and its ML operations tooling lets you check the model’s accuracy and behavior against production data. DataRobot also offers pretrained models, a data catalog, and a user-friendly GUI to visualize the entire training and deployment process. Hence, DataRobot provides complete transparency into the workflow and monitors it from a single place.
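
A sketch with the DataRobot Python client; the endpoint, token, file, and target names are placeholders for your own instance:

```python
import datarobot as dr

# Placeholders: point these at your DataRobot instance and API token.
dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

# Upload data and run Autopilot against a chosen target column.
project = dr.Project.create(sourcedata="loans.csv", project_name="loan-default")
project.set_target(target="is_default", mode=dr.AUTOPILOT_MODE.QUICK)
project.wait_for_autopilot()          # block until model building finishes

best_model = project.get_models()[0]  # leaderboard is sorted best-first
print(best_model)
```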

6. BigML AutoML

BigML is an automated machine learning platform for building and sharing datasets and models. It can be considered software as a service (SaaS) that turns predictive patterns in data into usable, real-life intelligent applications, helping customers build complex ML-based solutions affordably. BigML’s functions include private deployments and a rich toolkit that lets customers create, experiment with, automate, and manage machine learning workflows. The BigML company, headquartered in Corvallis, Oregon, and in Valencia, Spain, was founded in 2011 to make machine learning easy and approachable for everyone. It serves around 169,000 users across the world and promotes machine learning in academia through an education program spanning over 700 universities.

BigML is one of the best AutoML tools: a consumable, programmable, and scalable ML platform that delivers easy solutions and automates classification, regression, time series forecasting, cluster analysis, anomaly detection, association discovery, and topic modeling tasks. BigML offers many ways to load raw data, including from cloud storage systems, along with clustering algorithms & visualization and flexible pricing. Its main modes of service are a web interface, a command line interface, and an API.
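
A sketch of the API mode using the BigML Python bindings (credentials are read from the BIGML_USERNAME and BIGML_API_KEY environment variables; the file and field names are placeholders):

```python
from bigml.api import BigML

api = BigML()  # picks up credentials from the environment

# The basic BigML workflow: source -> dataset -> model -> prediction.
source = api.create_source("iris.csv")
dataset = api.create_dataset(source)
model = api.create_model(dataset)

prediction = api.create_prediction(
    model, {"petal length": 4.2, "petal width": 1.3})
api.pprint(prediction)  # pretty-print the predicted class
```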

7. H2O AutoML

H2O AutoML, also known as H2O, is an open-source, distributed, in-memory machine learning platform with linear scalability that supports statistical and machine learning algorithms. It is designed to expose a minimum of parameters so that automated training and tuning of models is more accessible. All you need to do in H2O is point to the dataset, identify the response column, and specify a time constraint. H2O.ai developed the platform with cutting-edge, distributed implementations of many ML algorithms, supported in Java, Python, Spark, Scala, and R. It also has a web GUI that uses JSON, and trained models can be deployed on a Spark server, AWS, and elsewhere.

Among the best AutoML tools, H2O’s advantage is that it automates basic data preprocessing, model training and tuning, and ensembling and stacking of models to deliver the best-performing model, leaving users free to focus on other tasks like data collection, feature engineering, and deployment. H2O offers several impressive functionalities: the necessary data processing capabilities, training a random grid of algorithms, tuning individual models with cross-validation, training two stacked ensembles, returning a sorted ‘Leaderboard’ of all models applied, and easy export of any model to production.
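
A minimal sketch of that workflow (the file path and response column name are placeholders):

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()                            # start or connect to a local H2O cluster
train = h2o.import_file("train.csv")  # placeholder path

# Point at the data, name the response column, set a time budget.
aml = H2OAutoML(max_runtime_secs=300, seed=1)
aml.train(y="target", training_frame=train)  # x defaults to all other columns

print(aml.leaderboard.head())  # models sorted by cross-validated performance
```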

8. MLBox

MLBox is a dynamic, automated machine learning Python library focused on drift identification, entity embedding, and hyperparameter optimization. Developed and used by an active community, it is compatible with Linux, macOS, and Windows, and with 64-bit Python versions 3.5 to 3.7. MLBox’s main features are fast reading and distributed data processing/cleaning/formatting, robust feature selection and leak detection, accurate hyperparameter optimization, and state-of-the-art predictive models.

A fully automated MLBox pipeline has three components: initialization, validation, and application. In initialization, raw data goes through preprocessing, cleaning, and encoding. In validation, MLBox performs feature engineering and selection and model tuning. Lastly, in application, the whole pipeline is fitted to predict the target and support model interpretation.
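
A sketch of that three-step pipeline with the MLBox API; the file paths, target name, and the deliberately tiny search space are placeholders:

```python
from mlbox.preprocessing import Reader, Drift_thresholder
from mlbox.optimisation import Optimiser
from mlbox.prediction import Predictor

paths = ["train.csv", "test.csv"]  # placeholder file paths
target = "Survived"                # placeholder target column

# Initialization: read, clean, and encode the raw data.
data = Reader(sep=",").train_test_split(paths, target)
data = Drift_thresholder().fit_transform(data)  # drop drifting features

# Validation: tune the pipeline over a (tiny) search space.
space = {"est__strategy": {"search": "choice", "space": ["LightGBM"]}}
best = Optimiser().optimise(space, data, max_evals=5)

# Application: fit the whole pipeline and predict the target.
Predictor().fit_predict(best, data)
```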


Behind Ethereum Merge: Understanding the Need, the Implications, and Possibilities


In a research study, Citigroup warned that the Ethereum blockchain’s planned Merge, an update converting it from a proof-of-work (PoW) system to a more environmentally friendly proof-of-stake (PoS) mechanism, will have a variety of effects. The Merge is scheduled to usher the second-largest cryptocurrency, Ethereum, into its next phase (previously known as Ethereum 2.0), when the network’s whole blockchain migrates to a new structure that supporters claim will make it more efficient, sustainable, and scalable.

To verify transactions and protect the network, proof of work relies on participants known as miners, who use powerful computers to solve challenging mathematical puzzles. In exchange for their work, miners are paid in the network’s native cryptocurrency, such as BTC or ETH. Proof-of-stake networks instead rely on validators, who confirm transactions and maintain the network by staking, or locking up, a certain amount of the network’s native cryptocurrency. Instead of spending electricity to validate blocks, this approach requires users to put up capital. Whereas overthrowing a proof-of-work network requires 51% of its computing power, attacking a proof-of-stake system requires 51% of the total staked ether; the network grows safer as more ether is staked, since it costs more to reach 51% of its capital.
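
As a toy illustration of the stake-weighted idea (not Ethereum’s actual proposer-selection algorithm), a validator’s chance of being chosen can simply be made proportional to its stake:

```python
import random

# Hypothetical validators and the amount of ETH each has staked.
stakes = {"alice": 96, "bob": 64, "carol": 32}

def pick_proposer(stakes):
    """Pick the next block proposer with probability proportional to stake."""
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return random.choices(validators, weights=weights, k=1)[0]

# alice holds half the total stake, so she is chosen about half the time.
print(pick_proposer(stakes))
```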

Validators will also ensure network security by locking up the network’s native crypto, ETH. To become a full validator on Ethereum 2.0, a prospective block verifier must stake 32 ETH by depositing the funds into the official deposit contract created by the Ethereum Foundation.

It is important to note that for the owners of ETH, no special activity is required to “transfer” ETH from the 1.0 chain to the 2.0 chain. ETH holders can continue using (or hodling) their ETH just as they did before, but on a more secure and scalable chain than the one they were using earlier. Ethereum 1.0 will become a part of the 2.0 chain. 

The staking will probably slow down the rate at which ETH’s supply grows. And for Ethereum investors, if supply growth slows as demand picks up, it may be advantageous. Therefore, it should be no surprise that the price of ETH has increased as the upgrade draws closer. 

Over the past several days, the amount of Ethereum staked on the Beacon Chain, a PoS variant of Ethereum that debuted in December 2020, has been slowly rising. On December 1, 2020, 16,000 validators deposited the required 32 ETH each to begin staking. Since then, Ethereum’s proof-of-work consensus layer has coexisted with the proof-of-stake Beacon Chain.

The current mainnet and the PoS version are set to combine on September 19. Currently, more than 410,000 distinct validators have contributed to a total staked value of 13.28 million ETH. The Merge has also been largely viewed as a positive trigger for ETH: with the migration to proof of stake, the network will stop paying miners and only reward validators.

According to the bank’s research, the update, once implemented, might reduce energy consumption and pave the way for a more scalable future through sharding. Sharding is the process of dividing a large dataset into pieces that together make up the complete dataset. As a result of the Merge, the current Ethereum chain is set to split into 64 parallel shard chains.

Since each shard chain will carry a share of the existing Ethereum chain’s data storage, distributing the data processing load among the nodes, sharding will significantly increase the network’s scalability while relieving pressure on the core network. The multi-phased sharding rollout is expected in 2023-2024, though it was initially scheduled for 2022. Once available, sharding will support Layer 2 rollups like zkSync and Immutable X, which handle large volumes of Ethereum transactions off-chain more quickly and cheaply.

Thanks to sharding and proof of stake, Ethereum will be able to execute somewhere between 20,000 and 100,000 transactions per second. That is a speed improvement of up to 999,900% over the current rate of 10-20 transactions per second, even though it might take a few years to reach maximum capacity. These exponential speed increases will help reduce network congestion and gas prices. The so-called “triple halvening” following the Merge will see fresh block rewards decline from 12,000 ETH (Ether) per day to 1,280 ETH, increasing scarcity, curbing issuance inflation of the token, and probably supporting the value of ETH.
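
The quoted figures check out arithmetically:

```python
# Throughput gain: from ~10 TPS today to a projected 100,000 TPS.
current_tps, future_tps = 10, 100_000
gain = (future_tps - current_tps) / current_tps * 100
print(f"{gain:,.0f}%")  # 999,900%

# 'Triple halvening': daily issuance drops from 12,000 to 1,280 ETH.
before, after = 12_000, 1_280
print(f"issuance cut: {(1 - after / before) * 100:.1f}%")  # ~89.3%
```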


Citi estimates that the switch to proof of stake could also reduce the annual issuance of Ethereum by 4.2%. Following the upgrade, ETH may turn into a deflationary asset, since Ethereum also removes part of its supply through gas fees via EIP-1559. According to the study, Ethereum’s importance as a store of value will increase as it turns deflationary.

The bank predicts that, as a deflationary asset, Ethereum is less likely to be the blockchain with the maximum throughput. The report also stated that Ethereum’s block time will decrease from 13 to 12 seconds as a result of the Merge, which would mean lower fees and faster transaction times. Block time is the time crypto miners need to authenticate or validate a block; for Ethereum, it typically takes between 10 and 20 seconds.

Furthermore, Citi stated that as a PoS network, Ethereum would draw investment opportunities, enabling new valuation methodologies currently unavailable for the blockchain. Citi also added that, with energy usage falling by 99.95% after the Merge, Ethereum may be deemed an energy-efficient and environmentally friendly cryptocurrency.

According to the report, the Merge, the first of five planned network enhancements, may boost transaction speeds by only 10%, by cutting down block times. The Surge (sharding), the network’s next scheduled update, is expected to enable 100,000 transactions per second (TPS) on the blockchain, the study says. The Surge will be followed by The Verge (introduction of Verkle trees), The Purge (reducing the hard drive space needed by validators), and The Splurge (miscellaneous minor upgrades).

The sooner the Merge takes place, its proponents argue, the faster it can address the two main Web 3.0 growth constraints: accommodating the growing need for distributed/decentralized processing capacity in an environmentally friendly manner, and a shortage of raw native blockchain development talent.

In an effort to move closer to the Merge, the Ethereum developers tested a “shadow fork” on April 11. A shadow fork is a test that replicates data from the Ethereum blockchain onto a testnet. A “testnet”, or test environment network, is a separate blockchain network used for testing and evaluation. The engineers exercise newly added features on shadow forks before transferring their findings to the main network.

The developers report that the shadow fork test, conducted against both current testnets and a portion of the Ethereum blockchain, uncovered client implementation problems. Once clients run normally during shadow forks, the current Ethereum testnets are put through the Merge. A date for the Ethereum mainnet upgrade is then scheduled once the testnets have successfully upgraded and stabilized.

Before the Ethereum blockchain successfully transitioned from proof of work to proof of stake, it underwent its third and final test network merge on August 10 under the moniker Goerli (named after a Berlin railway station). In this instance, the Prater beacon chain and the Goerli testnet were combined. The experiment demonstrated the effectiveness of the merging process and showed how proof-of-stake validation significantly decreases the energy required to validate a block of transactions. The Sepolia testnet merge went well the previous month, while the Ropsten testnet successfully switched to proof of stake in June.

Several methods exist for determining whether a test was effective. The easiest indicator to monitor is the participation rate: how many validators are online and doing their jobs. If the numbers decline, those in charge of development must find out why.

Another metric is transactions, which in Ethereum are processed in blocks. If the blocks contain genuine transactions rather than being empty, that is a sure sign the test succeeded. The final significant check is whether the network is finalizing, meaning more than two-thirds of validators are online and agree on the chain history, which under typical network circumstances takes 15 minutes.

However, some pointed out that a few small bugs persisted from the first two testnet merges. Despite some “confusion on the network due to two different terminal blocks and plenty of non-updated nodes,” according to Ethereum developer Marius van der Wijden, things were still moving forward pretty well.

“Bellatrix” and “Paris” are the final two stages of Ethereum’s upgrade. Developers said during the conference call that “Bellatrix” will occur on September 6. When Ethereum’s hash rate (a metric of a network’s computational power) exceeds a specific threshold, the last phase of the upgrade, dubbed “Paris”, will take place; this is presently scheduled for September 15.

Ethereum detractors have compared the Merge to replacing an airplane’s engine in the middle of a passenger flight. At risk is not just the aircraft but also the $183 billion worth of ether in circulation.

On a technical level, the new blockchain may contain several unanticipated flaws; Solana, another proof-of-stake platform, has suffered several total outages this year. At the same time, experts believe the Merge, together with the layer-2 networks built on top of Ethereum to help layer-1 scalability, will strengthen Ethereum against other proof-of-stake competitors.

The security of proof of stake is another concern raised by critics. However, a feature called slashing allows validators to be punished for acting maliciously by having their staked ether destroyed and their network access terminated.

In conclusion, in spite of delays, and with a competitive lead already established, qualified and deliberate steps toward the Merge should keep Ethereum competitive and energy-efficient for the foreseeable future.


How will the US CHIPS bill affect the global chip industry? 


The Creating Helpful Incentives to Produce Semiconductors (CHIPS) bill was passed by the US House of Representatives on July 28. Under the bill, $52.7 billion will be set aside for the research, development, and domestic manufacturing of semiconductors: $39 billion will go toward incentivizing manufacturers, $2 billion toward existing/legacy chips for automotive and defense, and $13.2 billion toward workforce development and research.

The bill is meant to boost semiconductor production in the US, but what does it mean for the global chip industry? And how are international competitors responding to it? Let’s discuss.

Reason for the Bill

There are several reasons why this bill has been passed. A global shortage of semiconductors, or chips, last year led to the realization that the United States needed substantial domestic semiconductor manufacturing. The shortage left carmakers with unfinished vehicles, and chip manufacturers have been lobbying for such a bill ever since.

The growing use of electronic devices like laptops and smartphones in homes has further increased the demand for chips. According to Forbes, about 75% of the world’s semiconductor demand is met by East Asian manufacturers; Taiwan and South Korea’s Samsung, in particular, have been critical producers of chips. China, meanwhile, has been notably scaling up semiconductor production.

There is also a nationalistic argument here. The US has pointed to dependence on China for semiconductor supply as a cause for worry. According to the summary of the CHIPS bill, only 12% of chips are currently manufactured domestically in the US, compared with 37% in the 1990s. The bill also points out that many foreign competitors, including China, are investing heavily to dominate the semiconductor industry.

Joe Biden said the CHIPS bill would strengthen the country’s national security by making the US less dependent on foreign semiconductor sources, and spoke of maintaining US leadership in the chip industry. The bill has thus clearly been passed to give the US an edge in the global semiconductor race. The US president further said the bill would make cars, appliances, and computers cheaper and lower the costs of everyday goods.

China’s Response 

China’s state-run media organizations, such as the China Daily and Global Times, have criticized the provision of the CHIPS bill that would punish American companies for dealing with China, calling it the latest expression of Washington’s efforts to exclude China from global supply chains.

Chinese officials also warned the companies concerned that they risk losing market revenue or share in China if the CHIPS bill is implemented. China has said it strongly opposes the legislation, arguing that it stokes anti-China sentiment and is reminiscent of a Cold War mindset. Pro-bill Americans, however, say that China is simply upset about the advantage the US stands to gain.

Global Repercussions

The United States is good at researching and designing high-end chips, whereas China and some other countries excel at mass production, holding an advantage over the US in labor costs. According to Gao Lingyun, an expert at the Chinese Academy of Social Sciences (CASS) in Beijing, a short-term subsidy aimed at moving entire semiconductor sectors to the US will only add to manufacturing costs, making the products less competitive on the international market.

Furthermore, if the companies concerned accept subsidies from the US government and give up chip investment in the Chinese mainland, they will in effect abandon the massive Chinese market, which not only produces a large volume of chips for them at relatively low cost but also buys many of their semiconductor products. Seen in this light, what the bill brings to the global chip industry in the long run is loss rather than benefit.

Conclusion

The CHIPS bill is precisely what the US needs to get its economy back on track. By expanding domestic semiconductor manufacturing, the bill will lower costs for US residents, and it will strengthen national security by making the country less dependent on foreign sources of semiconductors. 

When it comes to the global semiconductor industry, however, things look less promising. Experts say the CHIPS bill reveals the US government’s intention to set hurdles for China and other countries and obstruct the development of their semiconductor industries. Yet, ambitious as it looks, the bill is questionable in many respects, leaving people to doubt both its effectiveness and its sustainability.

NVIDIA announces Omniverse Avatar Cloud Engine, a suite of cloud-native AI models and services

NVIDIA has announced a suite of cloud-native AI models and services, NVIDIA Omniverse Avatar Cloud Engine (ACE), that makes it easier to build and customize lifelike virtual assistants and digital humans.

ACE enables businesses of any size to instantly access the massive computing power needed to create and deploy assistants and avatars by bringing these models and services to the cloud. These avatars can respond to speech prompts, understand multiple languages, interact with the environment, and make intelligent recommendations.

Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA, said that ACE combines many sophisticated artificial intelligence technologies that allow developers to create digital assistants that are on a path to passing the Turing test.

Read More: UNCTAD Urges To Curb The Growth Of Crypto In Developing Countries

ACE is built on top of NVIDIA’s Unified Compute Framework. It provides access to the APIs and rich software tools needed to harness the wide range of skills required for highly interactive and realistic avatars; a brief usage sketch follows the list below.

These skills include: 

  • NVIDIA Metropolis for computer vision and intelligent video analytics
  • NVIDIA Riva for developing speech AI applications
  • NVIDIA Merlin™ for high-performing recommender systems
  • NVIDIA NeMo Megatron for large language models with natural language understanding
  • NVIDIA Omniverse for AI-enabled animation
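
To make the list above concrete, here is a minimal sketch of calling one of these services, Riva speech recognition, from Python via the nvidia-riva-client package. The server address, audio file name, and configuration values are illustrative assumptions, not details from NVIDIA’s announcement.

    import riva.client

    # Connect to a running Riva server (the address is an assumed example).
    auth = riva.client.Auth(uri="localhost:50051")
    asr = riva.client.ASRService(auth)

    # Configure offline (batch) recognition of an English voice query.
    config = riva.client.RecognitionConfig(
        language_code="en-US",
        max_alternatives=1,
        enable_automatic_punctuation=True,
    )

    # "query.wav" is a placeholder for the user's recorded speech prompt;
    # this helper fills in encoding and sample rate from the file header.
    riva.client.add_audio_file_specs_to_config(config, "query.wav")
    with open("query.wav", "rb") as f:
        audio = f.read()

    response = asr.offline_recognize(audio, config)
    print(response.results[0].alternatives[0].transcript)

In a full avatar pipeline, the transcript would then be passed to a language model (for example, one built with NeMo Megatron) and the generated reply rendered through Omniverse’s AI-enabled animation.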

The assistants and avatars enabled by ACE will transform interactions in gaming, banking, entertainment, transportation, and hospitality. Two applications built on ACE are Project Tokkio and NVIDIA’s Project Maxine. 

Project Maxine brings state-of-the-art audio and video features to virtual collaboration and content creation applications. Project Tokkio provides interactive avatars that intelligently see, perceive, converse, and provide recommendations to enhance customer service in places like restaurants.

Xiaomi launches humanoid robot CyberOne ahead of Tesla Bot Optimus’ debut

Xiaomi announced the launch of its CyberOne humanoid robot at an event in Beijing on August 11. The debut comes ahead of Tesla’s AI Day in September, where a working prototype of the Optimus Bot is expected to be unveiled, and the back-to-back launches are creating an air of competition. 

According to Xiaomi, CyberOne has arms and legs, reaches a peak joint torque of up to 300 Nm, and supports posture balancing during bipedal motion. The robot can also create a three-dimensional virtual reconstruction of the real world, detect human emotions, and has advanced vision capabilities.

Xiaomi said that the robot’s AI and mechanical capabilities were developed in-house by the Xiaomi Robotics Lab, and that the company has invested heavily in research and development spanning algorithm innovation, software, and hardware.

Read More: UNCTAD Urges To Curb The Growth Of Crypto In Developing Countries

Xiaomi’s CEO said that CyberOne is an exploration of the possibilities of Xiaomi’s future technological ecosystem, with AI at its core and a full-size humanoid frame as its vessel, and called it a new breakthrough for the company. 

Many have speculated that Xiaomi unveiled CyberOne to compete with Tesla’s Optimus Bot, and experts say this is most likely true given the timing of the launch. Although Tesla has yet to show a working prototype of Optimus, Elon Musk announced that Tesla AI Day was moved to September 30 in the hope that the prototype would be ready by then. 

UNCTAD urges to curb the growth of crypto in developing countries

The United Nations Conference on Trade and Development (UNCTAD) has urged action to curb the growth of cryptocurrencies in developing countries, publishing three policy briefs to that effect on Wednesday. The UN trade and development body warned that while private digital currencies have rewarded some holders and facilitated remittances, they remain unstable financial assets that carry social risks and costs. 

The newly released policy briefs examine the costs and risks of cryptocurrencies, including the threats they pose to the security of monetary systems, to financial stability, and to domestic resource mobilization. 

Global use of cryptocurrencies increased drastically during the pandemic, especially in developing countries. Reasons for the sudden uptake include easier remittances and crypto’s use as a hedge against inflation and currency risks.

Read More: Nagpur Scientists Develop AI Algorithm That Predicts Diabetes From ECG

Recent price dips in the crypto market show that holding cryptocurrencies carries private risks, but these become a public problem when a central bank must step in to protect financial stability. If crypto becomes a widespread means of payment, or even unofficially replaces domestic currencies, it could jeopardize countries’ monetary sovereignty.

So-called stablecoins, digital currencies pegged to the US dollar, pose particular risks in developing countries where demand for reserve currencies is unmet. The agency noted that the International Monetary Fund has expressed the view that, for some of these reasons, cryptocurrencies pose risks as legal tender.

UNCTAD called on authorities to halt the expansion of cryptocurrencies in developing countries and offered several recommendations, including restricting cryptocurrency-related advertising.

Nagpur scientists develop AI algorithm that predicts diabetes from ECG

A team of scientists at the Lata Medical Research Foundation, Nagpur, has developed an artificial intelligence (AI) algorithm that can accurately predict diabetes and pre-diabetes from an electrocardiogram (ECG). The algorithm works on features of the individual heartbeats recorded in the trace. 

The study drew on clinical data from 1,262 individuals, each of whom underwent a standard 10-second, 12-lead ECG. A predictive algorithm named DiaBeats was built by combining 100 unique structural and functional features for each of the 10,461 individual heartbeats recorded.

The DiaBeats algorithm quickly detected diabetes and pre-diabetes based on the size and shape of individual heartbeats, achieving an overall accuracy of 97% irrespective of factors such as gender, age, and co-existing metabolic disorders.
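
The article does not name the model family behind DiaBeats, but the setup it describes, a label for each heartbeat plus 100 structural and functional features per beat, maps naturally onto an off-the-shelf classifier. The sketch below is a hypothetical illustration using scikit-learn’s gradient boosting, with synthetic placeholder data standing in for the real ECG features.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Placeholder data: one row per heartbeat with 100 structural and
    # functional features, matching the 10,461 beats described in the study.
    rng = np.random.default_rng(0)
    X = rng.random((10461, 100))
    y = rng.integers(0, 3, size=10461)  # 0 = normal, 1 = pre-diabetes, 2 = diabetes

    # Hold out a stratified test set and fit a gradient-boosted classifier.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, random_state=0
    )
    clf = GradientBoostingClassifier().fit(X_train, y_train)
    print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))

On real data, reproducing the reported 97% accuracy would also require splitting by patient rather than by beat, so that heartbeats from the same individual never appear in both the training and test sets.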

Read More: Artificial Intelligence In Chronic Disease Management

The most important ECG features consistently matched the known biological triggers of the cardiac changes associated with diabetes and pre-diabetes. If validated in more extensive studies, the team said, the method could be used to screen for the disease in low-resource settings.

In theory, the study provides a relatively non-invasive, inexpensive, and accurate alternative to current diagnostic methods, one that can detect diabetes and pre-diabetes early in their course. Even so, adopting the algorithm in routine practice will require solid validation on external, independent datasets.

The researchers acknowledged that the study participants were all at high risk of diabetes and other metabolic disorders, so the sample is unlikely to represent the general population. DiaBeats was also slightly less accurate for patients taking prescription medications for high blood pressure, diabetes, or high cholesterol.
