
Elon Musk’s Neuralink Trials Kill 15 out of 23 Monkeys


Elon Musk’s neurotechnology company Neuralink held medical trials over the last few years that killed 15 out of 23 monkeys. The company was testing a chip designed to link to human brains. 

Neuralink chose 23 monkeys as test subjects and implanted its chip in them between 2017 and 2020 at the University of California, Davis. According to sources, at least 15 of the monkeys have since died because of the implantation. 

Various animal rights organizations claim the tests caused extreme suffering to the subjects. This week, a regulatory complaint was filed by the Physicians Committee for Responsible Medicine (PCRM), a non-profit organization that strives to save and improve human and animal lives. 


Over 700 pages of documentation, veterinary records, and necropsy reports were examined by the Physicians Committee through public records at the University. 

The complaint mentioned, “Many, if not all, of the monkeys, experienced extreme suffering as a result of inadequate animal care and the highly invasive experimental head implants during the experiments, which were performed in pursuit of developing what Neuralink and Elon Musk have publicly described as a brain-machine interface.” 

The brain-computer interface company’s technology records and decodes electrical signals from the brain using thousands of electrodes. The implants were placed in various parts of the monkeys’ motor cortex, which coordinates hand and arm movements. 

“Pretty much every single monkey that had implants put in their head suffered from pretty debilitating health effects,” said Jeremy Beckham, research advocacy director at the PCRM. He further added that the company was merely maiming and killing the animals. 

According to reports, one monkey was found missing several fingers, and some monkeys were vomiting, retching, and gasping after the chip implantation. Necropsy reports suggest that the dead monkeys suffered severe brain hemorrhages. 

The company earlier revealed its plan to start human trials of its brain chip in the latter half of 2022. However, this new report might hinder that aim and delay human testing.


IBM Watson-Powered AI Assistant Helps Visitors on the TD Precious Metals Digital Store


Technology Giant IBM partnered with TD Securities to launch an AI-based virtual assistant powered by IBM Watson Assistant. 

The assistant provides an enhanced experience to customers by answering inquiries on the TD Precious Metals digital store, including frequently asked questions. 

According to the company, the newly developed artificial intelligence-powered virtual assistant can drastically reduce the time required to complete a purchase while providing a smooth, hassle-free customer experience. 


Customers can buy actual gold, silver, and platinum bullion and coins online from anywhere in the world through the TD Precious Metals digital store. Now, the AI virtual assistant will be added as a new feature on the digital store to help customers in their shopping process around the clock. 

Financial Service Sector Leader at IBM Canada, Daniel Cascone, said, “With the rapid acceleration of digital transformation, businesses need to enhance their services using AI-powered intelligent workflows. The use of AI to automate tasks can drive greater efficiency and strengthen customer relationships.” 

He further added that they are collaborating with TD Securities to improve the overall customer experience by leveraging cutting-edge technology such as conversational AI through the IBM Watson virtual assistant. 

The newly launched AI assistant can effectively answer several questions of customers regarding pricing, delivery, shipment options, maximum and minimum order, and lots more. Customers type their inquiries into the virtual assistant and receive an immediate textual response. 

Additionally, the virtual AI assistant provides customers with links to resources related to their questions, helping them resolve their queries. 

Managing Director, Head of Retail & Wealth Distribution & Product Innovation at TD Securities, James Wolanski, said, “We know our customers are looking for an enhanced digital experience, and the new virtual assistant will provide quick responses to help customers feel confident in their purchasing decisions.” 

He also mentioned that their TD Precious Metals Support Desk would remain open for any inquiries that require further assistance.


Tricentis Acquires AI-powered SaaS Test Automation Platform Testim


United States-based enterprise software testing solutions provider Tricentis has announced the acquisition of Testim, an artificial intelligence-powered Software-as-a-Service (SaaS) test automation platform. 

The acquisition will allow Tricentis to further expand the capabilities of its AI-powered continuous testing platform using Testim’s expertise, helping the firm simplify test automation and enable organizations to rapidly build durable end-to-end tests. 

The current testing methods and platforms available in the market are considerably slow and require users to have impeccable coding skills, making it difficult for organizations to bring in innovations quickly. 


After this acquisition, Tricentis will be able to change that scenario and provide customers with an easier, faster testing platform. 

Chairman and CEO of Tricentis, Kevin Thompson, said, “Tricentis aims to help customers deliver better business outcomes by producing high-quality, high-performing, and highly secure applications, no matter where or what the app might be.” 

He further added that with the arrival of Testim, they plan to expand their already competitive testing product selection into SaaS and DevOps. Customers who want cloud-based testing capabilities with flexible consumption models will be able to use Testim alongside Tricentis’s existing SaaS products. 

United States-based machine learning solutions company Testim was founded by Oren Rubin in 2014. The firm specializes in providing solutions to help developers become experts in automation. 

“Tricentis has built a comprehensive offering to support the full testing lifecycle across the enterprise application landscape. The unique capabilities of both companies complement one another perfectly, and devs around the world will enjoy a more robust, comprehensive platform and higher productivity,” said CEO and Co-founder of Testim, Oren Rubin. 

He also mentioned that they are incredibly excited to join Tricentis and look forward to working with them. 


Top 15 Popular Computer Vision Datasets

Image Credits: Analytics Drift Design Team

Computer vision models allow digital systems to recognize and make sense of the information contained in images, similar to how humans view and interpret the world around them with their eyes and minds. To acquire the fundamental level of image/object identification, computer vision technologies must go through several training stages using machine learning, deep learning algorithms, and neural networks, unlike humans’ cognitive learning capacity to understand the visual world instantaneously. Therefore, to train computer vision-based visual perception models, you’ll need curated computer vision datasets that can assist these models in discovering or distinguishing things in images.

In a larger sense, computer vision algorithms can deconstruct and turn visual material into metadata, which can then be saved, classified, and analyzed much like any other dataset. The quality of a computer vision model is therefore bounded by the quality of its training data. 

While it is good to have a dataset comprising a vast range of images and video sequences for training, a lack of a sufficient number of carefully selected training examples, i.e., labeled images, can produce under-fitted models. It is also better to have datasets containing information relevant to the industry for which you are developing the solution. 

However, manually labeling data is a strenuous process. Today, organizations can quickly get the data they need to train computer vision models thanks to the advent of pre-labeled computer vision datasets. Instead of collecting data, developers and researchers can focus their resources on building and training a computer vision model with pre-labeled datasets. Furthermore, the greater the number of open-source datasets available, the higher the data quality will become.

In the following listicle, we have compiled the top computer vision datasets that are widely used.

1. CIFAR-10 and CIFAR-100 

The Canadian Institute For Advanced Research provides both CIFAR-10 and CIFAR-100. The CIFAR-10 dataset, developed by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton, contains 60,000 32×32 color images divided into ten classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck), with 6,000 images per class. It is split into five training batches and one test batch of 10,000 images each, for a total of 50,000 training and 10,000 test images, and is widely used for object recognition. CIFAR-100 is similar, with 60,000 images altogether, but has 100 classes of 600 images each.
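For readers who download the “python version” of CIFAR-10, each pickled batch stores images as flat 3,072-byte rows (the 1,024 red values first, then green, then blue, in row-major order), per the dataset’s documentation. A minimal sketch of turning those rows back into viewable images:

```python
import numpy as np

def cifar_rows_to_images(flat):
    """Reshape CIFAR-10 batch rows of shape (N, 3072) into (N, 32, 32, 3) images.

    Each 3072-byte row stores the 1024 red values first, then green, then blue,
    in row-major order -- the layout documented for the "python version" batches.
    """
    n = flat.shape[0]
    # (N, 3072) -> (N, 3, 32, 32), then move channels last for display/training.
    return flat.reshape(n, 3, 32, 32).transpose(0, 2, 3, 1)
```

The same reshape applies to CIFAR-100 batches, which use the identical row layout.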

2. MNIST

The Modified National Institute of Standards and Technology database of handwritten digits, compiled by Professor Yann LeCun, is among the most common datasets for computer vision. It comprises 70,000 images of handwritten digits (0–9) in 28×28 grayscale, pre-split into a training set of 60,000 and a test set of 10,000, with each digit centered in the image. It is employed in the classic introductory computer vision task of handwritten digit recognition.
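Because MNIST ships as 28×28 uint8 grayscale images with integer labels, a typical first step before training is scaling pixels to [0, 1] and one-hot encoding the ten digit classes. A minimal preprocessing sketch (the function name is ours; loaders such as `tf.keras.datasets.mnist.load_data()` return the raw arrays this expects):

```python
import numpy as np

def prepare_mnist(images, labels):
    """Scale 28x28 grayscale digits to [0, 1] floats and one-hot encode
    the labels 0-9, the usual first step before training a classifier."""
    x = images.astype(np.float32) / 255.0     # uint8 [0, 255] -> float [0, 1]
    y = np.eye(10, dtype=np.float32)[labels]  # one-hot over the ten digits
    return x, y
```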

3. Fashion-MNIST 

This computer vision dataset, based on article images from the fashion retailer Zalando, includes a training set of 60,000 instances and a test set of 10,000. Each instance is a 28×28 grayscale image with a label from one of ten fashion-related classes: T-shirt/top, trousers, pullover, dress, coat, sandal, shirt, sneaker, bag, and ankle boot. A Scikit-learn-based automated benchmarking system covering 129 classifiers with various parameters is also available.

4. Labeled Faces in the Wild

This open-source computer vision dataset comprises images of people’s faces and was created to study the challenge of unconstrained facial recognition. More than 13,000 photos of faces were gathered from the internet, and each face is labeled with the name of the person shown; 1,680 of the people featured have two or more distinct photographs. There are currently four separate sets of LFW photos: the original images and three types of “aligned” images (the “funneled” images, LFW-a, and the “deep funneled” images). Compared to the original and funneled images, LFW-a and the deep-funneled images yield improved results for most face verification methods.

5. ImageNet

This deep learning computer vision dataset was created jointly by researchers at Stanford University and Princeton University and powers the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), an annual computer vision competition whose participating teams were challenged with five main tasks: object classification, object localization, object detection, object detection from video, and scene recognition. The dataset is organized according to the WordNet hierarchy (a lexical database for English), using only nouns, and each node of the hierarchy has an average of over 500 images. In total, ImageNet contains over 14 million photos across more than 21,000 categories. It is one of the world’s largest labeled image collections, and it is free to use for non-commercial research.

6. ObjectNet

This image dataset for computer vision was developed by researchers at the MIT-IBM Watson AI Lab to eliminate the biases present in existing image datasets. The researchers used Mechanical Turk, Amazon’s micro-task platform, to crowdsource the photographs instead of curating them from existing online sources. The Turkers photographed each object individually and submitted the images for review, a procedure that ensured the background, lighting, rotation, and other aspects were varied enough. ObjectNet contains 50,000 photos spread across 313 object classes.

7. PatchCamelyon

This medical image classification dataset is available through TensorFlow Datasets. PatchCamelyon is a relatively new and challenging image classification dataset made up of 327,680 color images (96×96 px) extracted from histopathologic scans of lymph node sections. Each image carries a binary label indicating the presence of metastatic tissue. PCam is a machine learning benchmark that is larger than CIFAR-10, smaller than ImageNet, and trainable on a single GPU.


8. IMDB-Wiki Dataset

This is said to be the world’s biggest publicly available training dataset of face images with gender and age information. This collection comprises 460,723 face images from 20,284 IMDb celebrities and 62,328 Wikipedia celebrities, for a total of 523,051. The data includes important meta-information such as the location of the person’s face in the image, their name, date of birth, and gender. Typically, this dataset is used for gender and age prediction tasks.

9. DOTA

DOTA (Dataset for Object deTection in Aerial images) is a large-scale dataset for aerial object detection that can be used to design and test object detectors working with high-altitude imagery. The dataset features images amassed from a variety of sensors and platforms. Image resolutions range from 800×800 to 20,000×20,000 pixels, and the images include objects of various sizes, orientations, and shapes. Experts in aerial image interpretation annotate the instances in DOTA photos with an arbitrary (8 d.o.f.) quadrilateral.

The computer vision dataset comprises 15 common categories (e.g., ship, plane, car, swimming pool, etc.) that are annotated as bounding boxes defined by four pairs of points on the photos. The data is divided into three categories: training, validation, and testing.
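Since many detectors expect axis-aligned boxes rather than DOTA’s oriented quadrilaterals, a common preprocessing step is collapsing each four-point annotation into its bounding rectangle. A small sketch (the helper name is ours):

```python
def quad_to_aabb(quad):
    """Collapse DOTA's 8-d.o.f. quadrilateral annotation -- four (x, y)
    corner points -- into an axis-aligned box (xmin, ymin, xmax, ymax)."""
    xs = [p[0] for p in quad]
    ys = [p[1] for p in quad]
    return min(xs), min(ys), max(xs), max(ys)
```

Note that this discards the orientation information, which is exactly what oriented-box detectors are built to keep.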

10. MPII Human Pose

This dataset is used to evaluate articulated human pose estimation. It contains around 25,000 photos of over 40,000 people with annotated body joints. Each image is taken from a separate YouTube video and accompanied by a description. The collection covers around 410 human activities, and each image is labeled with the activity it depicts.

11. COCO

Microsoft’s COCO (Common Objects in Context) is a large-scale dataset for object detection, segmentation, and captioning. It includes images spanning 91 stuff categories and 80 object categories. The dataset holds over 120,000 photos with over 880,000 labels (each image can carry several). It also includes annotations for more advanced computer vision applications, including multi-object labeling, segmentation mask annotations, image captioning, panoptic segmentation, stuff image segmentation, dense human pose estimation, and key-point identification. It comes with an easy-to-use API that makes loading, parsing, and visualizing COCO annotations a breeze. 
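COCO’s annotation files follow a documented JSON layout with top-level “images”, “annotations”, and “categories” lists; the official pycocotools API builds indexes over these lists for you. As an illustration, here is a toy annotation file (all values invented) parsed with only the standard library:

```python
import json
from collections import defaultdict

# A toy annotation file in COCO's documented JSON layout: top-level
# "images", "annotations", and "categories" lists. All values are invented.
coco_json = json.loads("""
{
  "images": [{"id": 1, "file_name": "000001.jpg", "width": 640, "height": 480}],
  "annotations": [
    {"id": 10, "image_id": 1, "category_id": 18,
     "bbox": [10.0, 20.0, 100.0, 50.0], "area": 5000.0, "iscrowd": 0}
  ],
  "categories": [{"id": 18, "name": "dog", "supercategory": "animal"}]
}
""")

# Index annotations by image id, much as the pycocotools COCO class does.
anns_by_image = defaultdict(list)
for ann in coco_json["annotations"]:
    anns_by_image[ann["image_id"]].append(ann)

cat_names = {c["id"]: c["name"] for c in coco_json["categories"]}
for img in coco_json["images"]:
    for ann in anns_by_image[img["id"]]:
        print(img["file_name"], cat_names[ann["category_id"]], ann["bbox"])
```

With a real annotation file you would load it with `pycocotools.coco.COCO` instead, which exposes the same lookups through its API.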

12. Embrapa Wine Grape Instance Segmentation Dataset

This agricultural computer vision dataset is aimed at providing images and annotations for research into object recognition and instance segmentation in viticulture, supporting image-based monitoring and field robots. It includes examples from five distinct grape varieties harvested in the field, illustrating variation in grape position, lighting, focus, and genetic and phenological variables such as shape, color, and compactness. The dataset includes 300 images with 4,432 grape clusters identified using bounding boxes; a subset of 137 photos also includes binary masks identifying the pixels of each cluster.

13. Bosch Small Traffic Lights Dataset

When developing an automated driving vehicle for urban cityscapes, it is crucial that the computer vision model is efficient in vision-only based traffic light detection and tracking. This dataset comprises around 24000 annotated traffic lights and 13427 camera photos with a resolution of 1280×720 pixels. The annotations contain traffic light boundary boxes along with the current condition (active light) of each traffic signal.

The camera images are raw 12-bit HDR images shot with a red-clear-clear-blue filter, plus reconstructed 8-bit RGB color photographs. The RGB photographs are available for troubleshooting and training purposes. It is vital to remember that this dataset was produced to test traffic light detection methods; it is not meant to cover all eventualities and should not be used in production.

14. Google Open Images

This open-source computer vision dataset from Google is a collection of about 9 million image URLs annotated with labels spanning over 6,000 categories. It has 16 million bounding boxes for 600 object classes on 1.9 million images, making it the biggest collection with object location annotations currently available. The boxes were mostly drawn by hand by skilled annotators to guarantee accuracy and uniformity. The photographs are diverse and frequently include convoluted scenes with several objects (8.3 per image on average).

It also has visual relationship annotations, which show pairings of items in certain relationships (e.g., “woman playing guitar,” “beer on the table”), object qualities (e.g., “table is wooden”), and human behaviors (e.g., “woman is jumping”). It comprises 3.3 million annotations from 1,466 different relationship triplets in all.

15. Waymo Open Dataset

The Waymo Open Dataset is among the most extensive and diversified multimodal autonomous driving datasets to date. It includes images from a variety of high-resolution cameras and point clouds from a variety of high-quality LiDAR sensors, as well as 12 million LiDAR box annotations and around 12 million camera box annotations. It has 798 video sequences for training, 202 for validation, and 150 for testing, each lasting 20 seconds. Each video sequence has five camera views, and each camera captures 171–200 frames at a resolution of 1920×1280 or 1920×886 pixels.


Students develop Solar-powered AI Robot to clean Ocean


Five students from Abu Dhabi University have developed a new solar-powered artificial intelligence robot that can be used to clean ocean debris. The primary purpose of it is to protect the aquatic environment. 

The robot, built by five female students, uses artificial intelligence to clean the sea of plastic and numerous other floating and mineral wastes. Saba Barfis, Dana Al Manala, Tasneem Al Assaad, Lin Al Asadi, and Iman Al Naqbi developed it under the supervision of Dr. Mohammed Ghazal, Dr. Anas Al Tarabshah, and others. 

The robot can drive autonomously in a marine environment using only solar energy, and its battery is recharged by an off-grid photovoltaic system once it finishes its duties. Running on solar power considerably improves the robot’s reliability and sustainability. 


It considers various factors, including energy efficiency, the quantity of detected marine waste, and others, to choose its autonomous movement tracks. 

Co-developer of the robot, Dana Al Manala, said, “The amount of waste is increasing in the world every year, and human practices affect the land and water surfaces, threatening the marine environment with plastic and other types of floating waste. This endangers marine life and thus humans, as many fish, turtles, and aquatic species die due to eating plastic.” 

She further added that their technology intends to help the aquatic environment recover by removing all types of debris from the water’s surface. Furthermore, the robot will aid people who work in water-surface cleaning by drastically reducing their working hours. 

Saba Barfis, one of the developers, revealed that the robot cleans water bodies using an autonomous, self-driving, and sustainable mechanism specifically built to clean water surfaces safely and continuously. 

Recently, researchers from NOAA, NCCOS, Oregon State University, and their partners announced plans to develop a machine learning-powered drone system to tackle the growing concern of marine litter.


Skydio secures Contract of $20 million from the US Army to provide AI Drones


Skydio, a company developing artificial intelligence-enabled drones, has secured a contract worth $20.2 million per year with the United States Army. Skydio will now equip the U.S. Army with high-end artificial intelligence-powered drones to increase the Army’s battle strength and capabilities. 

The deal is a part of the U.S. Army’s Short Range Reconnaissance (SRR) program, in which the Army is looking to develop and deploy cost-effective and lightweight drones. Skydio plans to provide the U.S. Army with its Skydio X2D drones as a part of this contract. 

The Army’s Short Range Reconnaissance program aims to deliver a rucksack portable, vertical take-off and landing drone that provides the Soldier on the ground the capability to gain situational awareness. 


Col. Joseph Anderson said, “The selection of the U.S. Army’s short-range reconnaissance provider for tranche 1 is a significant milestone for the Army, our strategic partners, and the domestic industrial base. The future for our Soldiers is now.” He further added that their partnerships with industry reflect their willingness to compete as well as their ability to lead and innovate in unmanned systems technology. 

According to Skydio, their X2D drone has a dimension of 11.9″ x 5.5″ x 3.6″ when folded, and 26.1″ x 22.4″ x 8.3″ when unfolded, making it extremely portable. 

The drones are equipped with front-facing 4K cameras with 16x zoom and an infrared camera. The company claims its drone has a flight time of 35 minutes and can operate around the clock, regardless of lighting conditions. 

Skydio X2D drones are packed with artificial intelligence-powered tools that enable 360-degree obstacle avoidance. United States-based AI drones manufacturing firm Skydio was founded by Abraham Bachrach, Adam Bry, and Matt Donahoe in 2014. The company specializes in developing drones that recognize and avoid obstacles in real-time using a variety of cameras and unique computer vision technology. 

CEO and Co-founder of Skydio, Adam Bry, said, “This is an exciting milestone for Skydio, the Army, and most importantly the men and women who serve our country. For drones under 20 pounds, civilian drone technology has raced ahead of traditional defense systems.”


Indian Astronomers discover 60 Habitable planets using AI Technology


Indian astronomers have used a new artificial intelligence-powered technology to identify 60 potentially habitable, Earth-like planets. The technology helped the astronomers pick out these 60 planets from a total of about 5,000 identified planets. 

According to the astronomers, the artificial intelligence algorithm used to identify planets intensively scanned multiple planets to find out which planets have similar characteristics like Earth, making them potentially habitable. 

The AI solution, named the Multi-Stage Memetic Binary Tree Anomaly Identifier (MSMBTAI), is built on a novel multi-stage memetic algorithm (MSMA). It was developed by astronomers from the Indian Institute of Astrophysics, an autonomous institute of India’s Department of Science and Technology, Government of India, together with astronomers from the BITS Pilani, Goa campus. 


Prof. Snehanshu Saha from BITS Pilani Goa Campus and Dr. Margarita Safonova of the Indian Institute of Astrophysics led the research team in this new discovery. 

A statement from the Ministry of Science and Technology mentioned, “The method is based on the postulate that Earth is an anomaly, with the possibility of the existence of few other anomalies among thousands of data points…There are 60 potentially habitable planets out of about 5000 confirmed and nearly 8000 candidate planets proposed. The assessment is based on their close similarity to Earth.” 

It also stated that these recently discovered planets can be thought of as anomalous examples among a vast pool of ‘non-habitable’ exoplanets. This unique approach developed by astronomers from IIA and BITS Pilani is based on the assumption that Earth is an anomaly, with the possibility of a few other anomalies among thousands of data points. 

Nevertheless, accurately identifying such planets is difficult given the massive number of exoplanets, and analyzing them manually takes a long time. The artificial intelligence-powered solution spares astronomers from manually analyzing thousands of planets and quickly flags the potentially habitable ones.


WHO highlights Benefits and Dangers of AI for Older People


The World Health Organization (WHO) recently published a report highlighting the benefits and dangers of artificial intelligence for older people. Many sectors, including public health and medicine, are being transformed by artificial intelligence. 

The technology can aid in predicting health risks and events, in drug discovery, and in personalizing healthcare management. Though AI has innumerable benefits, it also brings multiple concerns and risks if precautions are not taken. 

Older people may find it challenging to contribute to the appropriate governance and oversight of AI technology for health. The design and reach of artificial intelligence-powered products and solutions can also be limited by false assumptions about how older people want to live or engage with technology in their daily lives. 


WHO proposed eight principles in the new document, including participatory design of AI technology by and with older people, age-diverse data science teams, age-inclusive data gathering, and many more. 

Ethical challenges for healthcare institutions, practitioners, and recipients of medical and public health services must be addressed in order to fully enjoy the benefits of artificial intelligence. WHO’s Unit Head of Demographics and Healthy Aging said, “To ensure that AI technologies play a beneficial role, ageism must be identified and eliminated from their design, development, use, and evaluation. This new policy brief shows how.” 

She further added that in this discipline, society’s implicit and explicit biases, especially those related to age, are frequently replicated. It is of prime importance that AI-powered solutions affecting the day-to-day lives of elderly people do not worsen or promote ageism. According to WHO’s document, ageism has far-reaching effects on all aspects of health, well-being, and the economy. 


Sony AI Unveils Gran Turismo Sophy


Technology and electronics company Sony’s subsidiary Sony AI has announced a breakthrough in artificial intelligence called Gran Turismo Sophy. Sony AI collaborated with Polyphony Digital Inc. (PDI) and Sony Interactive Entertainment (SIE) to develop the newly launched technology. 

Gran Turismo Sophy is the first superhuman AI agent to beat the world’s greatest drivers in the highly realistic racing simulation game Gran Turismo Sport. GT Sophy is an autonomous AI agent trained using a novel deep reinforcement learning platform built jointly by Sony AI, PDI, and SIE.

According to experts in video game racing and artificial intelligence, GT Sophy’s accomplishment is a significant milestone, with the agent demonstrating mastery of tactics and strategy. 


Chairman, President, and CEO of Sony, Kenichiro Yoshida, said, “This group collaboration in which we have built a game AI for gamers is truly unique to Sony as a creative entertainment company. It signals a significant leap in the advancement of AI while also offering enhanced experiences to GT fans around the world.” 

He further added that the company wants to fill the world with emotion through the power of creativity and technology. The newly developed artificial intelligence technology promises to provide players all across the world with unique AI-powered gaming experiences. 

Sony AI was founded on April 1, 2020, with the purpose of using artificial intelligence to liberate human imagination and creativity. Sony AI’s newly launched technology was able to reach extraordinary performance on three tracks after 45,000 hours of training. 

“Gran Turismo Sophy is a significant development in AI whose purpose is not simply to be better than human players, but to offer players a stimulating opponent that can accelerate and elevate the players’ techniques and creativity to the next level,” said the CEO of Sony AI, Hiroaki Kitano. 

He also mentioned that they hope that this breakthrough will open up new potential in fields such as autonomous racing, autonomous driving, high-speed robots, and control. 


India to use AI solutions to Tackle Power Distribution Losses


The government of India is planning to use artificial intelligence solutions to tackle the emerging challenge of power distribution losses across the country. 

The government is now reaching out to major information technology companies and startups to help them develop AI-enabled solutions to handle this issue. 

This new development falls under the government’s Rs 3.03 lakh crore reform-based and result-linked scheme, with a corpus of Rs 4 crore sanctioned for the first year by the electricity ministry. 


Distribution loss is one of the most prevalent issues that India’s energy sector has been facing in recent years, and this initiative of the government would considerably help solve the problem. According to reports, the country witnessed a high transmission and distribution loss of more than 20% in 2019, one of the world’s highest. 

Power Secretary Alok Kumar said, “Huge data will be thrown up when we implement smart meters in a time-bound manner. We are conscious that this data should be analyzed intelligently in a way that it leads to good actionable points for the utility managers and for the policymakers.” 

He further added that the average distribution loss in India is 20%. However, many utilities experience losses of 40–45%, with a few companies losing more than 50%. 

According to the government, the developer of the solution will leverage technologies including artificial intelligence, machine learning, and IoT to analyze data made available through consumer, transformer, and feeder metering. 

The government plans to select multiple technology providers, companies, and startups to tackle the power distribution issue in the country. A government official said, “Increased technology interventions will aid in facilitating operational and financial sustainability of the distribution companies.” 

The government will allow the selected companies to make judgments on loss reduction, demand forecasting, differential tariff in a day, and renewable energy integration using innovative technologies.
