
MIT Develops Dynamo, a Machine Learning Framework to Study Cell Trajectories

Image Source: Johns Hopkins University

Whether it is a projectile or a football, we can easily calculate its range, velocity, and trajectory using fundamental physics formulas. Approximating the path a cell takes, however, has long eluded scientists. MIT professor of biology Jonathan Weissman, postdoc Xiaojie Qiu, and collaborators at the University of Pittsburgh School of Medicine recently developed a method for understanding the path cells take by observing how they change over time rather than how they move through space.

They’ve developed a machine learning framework named Dynamo, which derives mathematical equations describing a cell’s progression from one state to another, e.g., its transformation from a stem cell into one of several types of mature cells. The framework can also be used to uncover the underlying mechanisms, i.e., the gene activity that drives those changes. Researchers can harness these findings to direct cells along one route rather than another, a central objective in biomedical research and regenerative medicine.

Dynamo builds its equations by combining data from many different cells. The most important information it requires is how the expression of several genes in a cell varies over time. Because RNA is a quantifiable product of gene expression, the researchers measured this by looking at changes in the quantity of RNA over time. To anticipate a cell’s course, they looked at the initial levels of RNAs and how those levels change. Estimating changes in RNA quantity from single-cell sequencing data is tricky, because sequencing measures RNA only once. The researchers must therefore estimate how RNA levels were changing from clues such as the amount of new RNA being generated at the time of sequencing and equations describing RNA turnover.
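
To give a flavor of the idea, here is a minimal sketch (not Dynamo’s actual code) of a first-order turnover model, dr/dt = α − γr, where α is the synthesis rate and γ the degradation rate; with labeling data that separates new from old RNA, both rates, and hence the instantaneous rate of change, can be estimated. The function names and counts below are purely illustrative.

```python
import numpy as np

# Toy first-order RNA turnover model (illustrative only, not Dynamo's method):
# dr/dt = alpha - gamma * r. With metabolic labeling over a window of length t,
# the "new" fraction of transcripts approaches 1 - exp(-gamma * t) at steady state.

def estimate_rates(new_rna, total_rna, label_time):
    """Estimate alpha and gamma for one gene from labeled/total RNA counts."""
    # Fraction of transcripts that are newly synthesized during the labeling window.
    frac_new = np.clip(new_rna.mean() / max(total_rna.mean(), 1e-9), 1e-6, 1 - 1e-6)
    # From new/total = 1 - exp(-gamma * t), solve for the degradation rate gamma.
    gamma = -np.log(1.0 - frac_new) / label_time
    # At quasi-steady state, total ~ alpha / gamma, so alpha ~ gamma * total.
    alpha = gamma * total_rna.mean()
    return alpha, gamma

def rna_velocity(total_rna, alpha, gamma):
    """Instantaneous rate of change dr/dt implied by the turnover model."""
    return alpha - gamma * total_rna

# Hypothetical per-cell counts for one gene (made-up example data).
new = np.array([4.0, 6.0, 5.0, 7.0])
total = np.array([20.0, 25.0, 22.0, 27.0])
a, g = estimate_rates(new, total, label_time=1.0)
print(a, g, rna_velocity(total, a, g))
```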

The researchers improved on earlier techniques to produce data clean enough for Dynamo to work with. They employed a recently developed experimental approach that tags newly made RNA to distinguish it from old RNA, combined with sophisticated mathematical modeling.

The researchers next had to move from viewing cells at discrete moments in time to a continuous picture of how cells change. Led by Qiu and Zhang, the team employed machine learning to uncover continuous functions that describe these spaces. This was crucial because, while technologies for profiling transcriptomes and other ‘-omic’ information with single-cell precision have improved exponentially, the analytical tools for interpreting the data have remained descriptive rather than predictive.
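
As a simplified illustration of what “uncovering continuous functions” can mean in practice (a sketch of the general technique, not Dynamo’s own algorithm), one can fit a smooth vector field f(x) ≈ dx/dt from scattered (state, velocity) samples, for example with kernel ridge regression, and then query it at any point in state space. The synthetic data below is invented for the example.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Learn a continuous vector field f(x) ~ dx/dt from discrete per-cell samples
# of state x and estimated velocity v (synthetic data, illustrative only).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(500, 2))          # cell states (e.g., a 2D embedding)
v = np.stack([-x[:, 1], x[:, 0]], axis=1)       # made-up "velocities" (a rotation)
v += 0.05 * rng.normal(size=v.shape)            # measurement noise

# One smooth regressor per velocity component gives a queryable vector field.
field = [KernelRidge(kernel="rbf", alpha=1e-2, gamma=2.0).fit(x, v[:, d]) for d in range(2)]

def vector_field(points):
    """Evaluate the learned, continuous vector field at arbitrary states."""
    return np.stack([m.predict(points) for m in field], axis=1)

print(vector_field(np.array([[0.5, 0.0], [0.0, 0.5]])))
```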

The MIT researchers validated Dynamo’s cell fate predictions by comparing them to cloned cells, which share the same genetic code and ancestry. In these experiments, one of two nearly identical clones was sequenced while the other clone was allowed to differentiate. Dynamo’s predictions about what would happen to each sequenced cell matched what actually happened to its clone.

Blood cells, which have a broad and branching differentiation tree, were also used to evaluate Dynamo’s performance. In collaboration with Boston Children’s Hospital, the Dana-Farber Cancer Institute, and the MIT Department of Biology, the researchers found that Dynamo accurately mapped blood cell differentiation and verified a recent observation that one kind of blood cell, the megakaryocyte, develops sooner than others.

Read More: Overinterpretation: MIT finds new bottleneck in Machine learning based Image Classification

The researchers envisage that this machine learning framework will not only help them better understand how cells move from one state to another but also help them control those transitions. To that end, Dynamo includes tools to model how cells would change in response to various manipulations and a method for determining the most efficient path from one cell state to another. These tools give researchers a robust foundation for predicting how best to convert one cell type into another, a major challenge in stem cell biology and regenerative medicine, and for generating hypotheses about how other genetic changes might alter a cell’s fate.



Israel reveals AI strategy for Armed Forces


Israel reveals its plan to use artificial intelligence solutions in multiple branches of its armed forces to increase its capabilities. 

A senior Israel Defense Forces official presented the new program during the three-day AI Week 2022 event at Tel Aviv University’s Blavatnik Interdisciplinary Cyber Research Center and Tel Aviv Center for AI and Data Science. 

The Israel Defense Forces held a session during the event on its new information and artificial intelligence strategy. The IDF’s new AI strategy is based on the assumption that data and AI will play a key role in modern and future warfare. Artificial intelligence can help armed forces in many ways, ultimately giving them strategic superiority in battle. 

Read More: UK Regulators Warn Banks on use of AI in Loans

An IDF official said, “It was cleared by the general staff, and it was very important for us to create an unclassified version; this allows us to have this kind of conversation with industry, academia, and partners overseas. This is important for us.” 

The IDF plans to transform digitally to keep pace with the latest technology, a development that would help it expand its capabilities at multiple levels. The initiative marks the first time the IDF has announced a multi-branch, multi-command plan for the use of artificial intelligence. 

The IDF will process the massive amounts of data provided by numerous sensors, converting it into usable information and delivering it to the departments that need it. 

“Data is the dimension that is most flexible and adaptable. We fight in many areas with new threats and challenges, and this allows us great flexibility,” said the unnamed official. 

In recent years, AI has proved highly advantageous in several conflicts, including the fighting with Hamas in Gaza in May 2021. Various defense equipment manufacturers in Israel are now adding AI capabilities to products such as rifles, bombs, and others.


Indian Cyber Security Researcher wins ₹65 Crore from Google


Technology giant Google announced in a report that an Indian cyber security researcher named Aman Pandey won ₹65 Crore for identifying top vulnerabilities in various Google products. 

Pandey submitted more than 200 vulnerability reports in 2021, earning him the whopping ₹65 Crore reward from Google. 

The company awarded the amount under its Vulnerability Reward Programs, in which Google partners with the security researcher community to identify bugs in its products, such as the Chrome web browser and the Android operating system. 

Read More: Intel Launches new Blockchain chip for Crypto Mining

Google said that nearly 115 Chrome VRP researchers were rewarded for submitting 333 unique Chrome security bug reports. According to the company, it paid out a record-breaking $8,700,000 in vulnerability rewards in 2021. Remarkably, winning cybersecurity researchers donated nearly $300,000 of their rewards to charities of their choice. 

Aman Pandey is the CEO and founder of Indore-based cybersecurity firm Bugsmirror. A graduate of NIT Bhopal, he established the company in 2021 and soon after became the highest-contributing cybersecurity researcher in Google’s VRP initiative. 

Apart from Aman, Google also recognized another researcher named Yu-Cheng Lin, who submitted 128 valid reports to the program in 2021. Google also paid out the highest ever reward, worth $157,000, for finding vulnerabilities in Android in 2021. 

Furthermore, last year Google also created bughunters.google.com, a public research platform committed to ensuring the safety and security of Google products and the internet. 

Google’s new platform would now unify the company’s VRPs, including Android, Abuse, Chrome, and Google Play, and provide a single intake form. This would considerably help researchers quickly identify and submit vulnerabilities in multiple Google products. 


UK Regulators Warn Banks on use of AI in Loans


Financial regulators in the United Kingdom (UK) have warned banks that use artificial intelligence (AI) solutions to approve loan requests. 

According to regulators, banks can only use AI technology provided they can demonstrate that it will not discriminate against minorities. 

Multiple people familiar with the discussions said that regulatory agencies are increasingly pressing the country’s major banks about the precautions they plan to put in place around the use of AI, the Financial Times reported. 

Read More: Elon Musk’s Neuralink Trials kills 15 out of 23 Monkeys

The European Union’s banking regulators urged lawmakers to look into the usage of data in AI/ML models and any bias that might lead to discrimination and exclusion. 

“The banks would quite like to get rid of the human decision-maker because they perceive, I think correctly, that is the potential source of bias,” said Simon Gleeson, a lawyer at Clifford Chance. 

Banks have been using machine learning techniques to inform lending decisions. They believe artificial intelligence will not make subjective or biased decisions and can help reduce racial prejudice. Regulators, however, take a different view and warn that AI could instead pose a greater risk of entrenching bias. 

Sara Williams said, “If somebody is in a group which is already discriminated against, they will tend to often live in a postcode where there are other (similar) people … but living in that postcode doesn’t actually make you any more or less likely to default on your loan.” 

She further added that the more big data is shared around, the more information that is not directly related to the individual gets drawn in, creating a serious danger of perpetuating existing prejudices. 
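
To illustrate the kind of fairness check regulators are asking for, here is a minimal sketch (not any bank’s or regulator’s actual methodology) that compares approval rates across groups on hypothetical loan decisions.

```python
import pandas as pd

# Minimal disparate-impact check on made-up loan decisions. "group" stands in
# for a protected or proxy attribute (e.g., a postcode cluster); "approved" is
# the model's decision. None of this reflects real data from the article.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

rates = decisions.groupby("group")["approved"].mean()
print(rates)
# The common "four-fifths" rule of thumb flags ratios below 0.8 between the
# lowest and highest group approval rates as a potential sign of disparate impact.
print("ratio:", rates.min() / rates.max())
```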

Earlier, a similar episode occurred in the United States where regulators were asked to ensure that artificial intelligence increased access to credit for low- and middle-income families and people of color. 


Intel Launches new Blockchain chip for Crypto Mining


The world’s leading semiconductor manufacturer, Intel, has launched a new blockchain chip to enter the crypto space. 

According to the company, the newly launched chip will be used for Bitcoin mining and NFT minting. Blockchains, which are distributed ledgers that keep track of transactions on a network of computers, have gained popularity in recent years. 

Blockchain is a technology with the potential to let people own much of the digital content and services they create. Intel has therefore developed the new chip to address the increasing usage of cryptocurrencies. 

Read More: Top 15 Popular Computer Vision Datasets

Intel claims that its circuit innovations deliver a blockchain accelerator with over 1000x better performance per watt than mainstream GPUs for SHA-256-based mining. The company will start shipping the chip later this year. Jack Dorsey-led Block and GRIID Infrastructure will be among the first customers of Intel’s new blockchain chip. 
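
For context on what “SHA-256 based mining” actually computes, here is a toy sketch, entirely unrelated to Intel’s hardware: Bitcoin-style proof of work repeatedly double-SHA-256 hashes a block header with different nonces until the digest falls below a target. The header bytes and difficulty below are made up.

```python
import hashlib

def mine(header: bytes, difficulty_bits: int, max_nonce: int = 1_000_000):
    """Toy proof-of-work loop: find a nonce whose double-SHA-256 digest starts
    with `difficulty_bits` zero bits. Real mining ASICs perform this search
    many orders of magnitude faster; this is purely illustrative."""
    target = 1 << (256 - difficulty_bits)
    for nonce in range(max_nonce):
        digest = hashlib.sha256(
            hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
    return None, None

print(mine(b"example block header", difficulty_bits=16))
```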

Raja M. Koduri, senior vice president and general manager of the Accelerated Computing Systems and Graphics Group at Intel, said, “Intel will engage and promote an open and secure blockchain ecosystem and will help advance this technology in a responsible and sustainable way.” 

He further added that Intel had announced its intention to help develop blockchain technologies by releasing a roadmap of energy-efficient accelerators. This new chip can also play a significant role in the evolving sector of the metaverse. The chip’s architecture is constructed on a small piece of silicon to have the least amount of impact on the current product supply. 

Last year, technology company NVIDIA also launched its chipset, CMP HX, dedicated to Ethereum mining. 

Additionally, Intel has created a new division named the Custom Compute Group within its Accelerated Computing Systems and Graphics business unit to expand its presence in the industry. 

“The objective of this team is to build custom silicon platforms optimized for customers’ workloads, including blockchain and other custom accelerated supercomputing opportunities at the edge,” said Koduri. 


Elon Musk’s Neuralink Trials Kill 15 out of 23 Monkeys


Over the last few years, Elon Musk’s neurotechnology company Neuralink held medical trials in which 15 out of 23 monkeys died. The company was testing a chip that could link to human brains. 

Neuralink chose 23 monkeys as test subjects and implanted its chip in them between 2017 and 2020 at the University of California, Davis. According to sources, at least 15 of the monkeys have since died because of the implantation. 

Various animal rights organizations claim the tests caused extreme suffering for the subjects. This week, a regulatory complaint was filed by the Physicians Committee for Responsible Medicine (PCRM), a non-profit organization that strives to save and improve human and animal lives. 

Read More: Tricentis Acquires AI-powered SaaS Test Automation Platform Testim

Over 700 pages of documentation, veterinary records, and necropsy reports were examined by the Physicians Committee through public records at the University. 

The complaint mentioned, “Many, if not all, of the monkeys, experienced extreme suffering as a result of inadequate animal care and the highly invasive experimental head implants during the experiments, which were performed in pursuit of developing what Neuralink and Elon Musk have publicly described as a brain-machine interface.” 

The brain-computer interface company’s technology records and decodes electrical signals from the brain using thousands of electrodes. The implantations were made in various parts of the monkey’s motor cortex that coordinate hand and arm movements. 

“Pretty much every single monkey that had implants put in their head suffered from pretty debilitating health effects,” said Jeremy Beckham, research advocacy director at the PCRM. He further added that the company was merely maiming and killing the animals. 

According to reports, one monkey was found missing a few fingers, and some monkeys were vomiting, retching, and gasping after the chip implantation. Autopsy reports suggest that the dead monkeys suffered severe brain hemorrhages. 

The company earlier revealed its plan to start human trials for its brain chip in the latter half of 2022. However, this new report might hinder their aim and might cause a delay in its plan for human testing.


IBM Watson-Powered AI Assistant Helps Visitors on the TD Precious Metals Digital Store


Technology Giant IBM partnered with TD Securities to launch an AI-based virtual assistant powered by IBM Watson Assistant. 

The assistant provides an enhanced experience to customers by resolving inquiries on the TD Precious Metals digital store, including frequently asked questions and more. 

According to the company, the newly developed artificial intelligence-powered virtual assistant can drastically reduce the time required to make a purchase and provide a smooth, hassle-free customer experience. 

Read More: WHO highlights Benefits and Dangers of AI for Older People

Customers can buy actual gold, silver, and platinum bullion and coins online from anywhere in the world through the TD Precious Metals digital store. Now, the AI virtual assistant will be added as a new feature on the digital store to help customers in their shopping process around the clock. 

Financial Service Sector Leader at IBM Canada, Daniel Cascone, said, “With the rapid acceleration of digital transformation, businesses need to enhance their services using AI-powered intelligent workflows. The use of AI to automate tasks can drive greater efficiency and strengthen customer relationships.” 

He further added that they are collaborating with TD Securities to improve the overall customer experience by leveraging cutting-edge technology such as conversational AI via the IBM Watson Assistant. 

The newly launched AI assistant can effectively answer several questions of customers regarding pricing, delivery, shipment options, maximum and minimum order, and lots more. Customers type their inquiries into the virtual assistant and receive an immediate textual response. 

Additionally, the virtual AI assistant provides customers with links to related resources based on their questions, helping them resolve their queries. 

Managing Director, Head of Retail & Wealth Distribution & Product Innovation at TD Securities, James Wolanski, said, “We know our customers are looking for an enhanced digital experience, and the new virtual assistant will provide quick responses to help customers feel confident in their purchasing decisions.” 

He also mentioned that their TD Precious Metals Support Desk would remain open for any inquiries that require further assistance.


Tricentis Acquires AI-powered SaaS Test Automation Platform Testim


United States-based Enterprise software testing solutions provider Tricentis announces that it has acquired artificial intelligence-powered Software as a Service (SaaS) test automation platform Testim. 

The acquisition will allow Tricentis to further expand the capabilities of its AI-powered continuous testing platform using Testim’s expertise, helping the firm simplify test automation and enabling organizations to rapidly construct durable end-to-end tests. 

The current testing methods and platforms available in the market are considerably slow and require users to have impeccable coding skills, making it difficult for organizations to bring in innovations quickly. 

Read More: Skydio secures Contract of $20 million from the US Army to provide AI Drones

With this acquisition, Tricentis will be able to change that scenario and provide customers with an easier, faster testing platform. 

Chairman and CEO of Tricentis, Kevin Thompson, said, “Tricentis aims to help customers deliver better business outcomes by producing high-quality, high-performing, and highly secure applications, no matter where or what the app might be.” 

He further added that with the arrival of Testim, they plan to expand their already competitive testing product selection into SaaS and DevOps. Customers who want cloud-based testing capabilities with flexible consumption models will be able to use Testim alongside Tricentis’s existing SaaS products. 

United States-based machine learning solutions company Testim was founded by Oren Rubin in 2014. The firm specializes in providing solutions to help developers become experts in automation. 

“Tricentis has built a comprehensive offering to support the full testing lifecycle across the enterprise application landscape. The unique capabilities of both companies complement one another perfectly, and devs around the world will enjoy a more robust, comprehensive platform and higher productivity,” said CEO and Co-founder of Testim, Oren Rubin. 

He also mentioned that they are incredibly excited to join Tricentis and look forward to working with them. 


Top 15 Popular Computer Vision Datasets

Image Credits: Analytics Drift Design Team

Computer vision models allow digital systems to recognize and make sense of the information contained in images, similar to how humans view and interpret the world around them with their eyes and minds. To acquire the fundamental level of image/object identification, computer vision technologies must go through several training stages using machine learning, deep learning algorithms, and neural networks, unlike humans’ cognitive learning capacity to understand the visual world instantaneously. Therefore, to train computer vision-based visual perception models, you’ll need curated computer vision datasets that can assist these models in discovering or distinguishing things in images.

In a larger sense, computer vision algorithms can deconstruct and turn visual material into metadata, which can then be saved, classified, and analyzed much like any other dataset. The quality of the training data ultimately determines the quality of the resulting computer vision models. 

While it is good to have a dataset comprising a vast range of images and video sequences for training, a lack of sufficient, carefully selected training examples, i.e., labeled images, can cause under-fitted models. It is also better to use datasets containing information relevant to the industry for which you are developing the solution. 

However, manually labeling data is a strenuous process. Today, organizations can quickly get the data they need to train computer vision models thanks to the advent of pre-labeled computer vision datasets. Instead of collecting data, developers and researchers can focus their resources on building and training a computer vision model with pre-labeled datasets. Furthermore, the greater the number of open-source datasets available, the higher the data quality will become.

In the following listicle, we have compiled the top computer vision datasets that are widely used.

1. CIFAR-10 and CIFAR-100 

The Canadian Institute For Advanced Research provides both CIFAR-10 and CIFAR-100. The CIFAR-10 dataset was developed by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. It contains 60,000 32×32 color images divided into ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck, with 6,000 images per class. CIFAR-100 is similar in that it also contains 60,000 images, but it has 100 classes of 600 images each. Used for object recognition, CIFAR-10 is split into five training batches and one test batch of 10,000 images each, for a total of 50,000 training and 10,000 test images.
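
For readers who want to try the dataset, here is a minimal loading sketch assuming the torchvision library (which is not part of the dataset release itself):

```python
import torchvision
import torchvision.transforms as T
from torch.utils.data import DataLoader

# Download CIFAR-10 and expose it as batches of normalized 32x32 RGB tensors.
transform = T.Compose([T.ToTensor(), T.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
train_set = torchvision.datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
test_set = torchvision.datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)

train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
images, labels = next(iter(train_loader))
print(images.shape, labels.shape)   # torch.Size([64, 3, 32, 32]) torch.Size([64])
print(train_set.classes)            # the ten class names
```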

2. MNIST

The Modified National Institute of Standards and Technology database of handwritten digits, compiled by Professor Yann LeCun, is among the most common datasets for computer vision. It comprises 70,000 28×28 grayscale images of handwritten digits, i.e., 0–9. In the official release, the data is pre-split into a training set of 60,000 images and a test set of 10,000, with every digit centered in the image. It is used in a classic introductory computer vision task: handwritten digit recognition.

3. Fashion-MNIST 

This computer vision dataset, based on article images from the fashion retailer Zalando, includes a training set of 60,000 instances and a test set of 10,000. Each instance is a 28×28 grayscale image with a label from one of ten fashion-related classes: T-shirt/top, trousers, pullover, dress, coat, sandal, shirt, sneaker, bag, and ankle boot. There is also a Scikit-learn-based automated benchmarking system covering 129 classifiers with various parameters.
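
A quick way to inspect the splits described above is via tf.keras, one of several distributions of the dataset (a sketch, not the only way to obtain it):

```python
import tensorflow as tf

# Fashion-MNIST ships with Keras: 60,000 training and 10,000 test
# 28x28 grayscale images, each labeled with one of ten clothing classes.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
print(x_train.shape, y_train.shape)   # (60000, 28, 28) (60000,)
print(x_test.shape, y_test.shape)     # (10000, 28, 28) (10000,)

class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
               "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
print(class_names[y_train[0]])        # class of the first training image
```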

4. Labeled Faces in the Wild

This open-source computer vision dataset of images of people’s faces was created to study the challenge of unconstrained facial recognition. More than 13,000 photos of faces were gathered from the internet, and each face has been labeled with the name of the person pictured. Of the people featured, 1,680 appear in two or more different photographs. There are currently four separate sets of LFW photos: the original and three types of “aligned” images, namely “funneled” images, LFW-a, and “deep funneled” images. Compared with the original and funneled images, LFW-a and the deep funneled images yield better results for most face verification methods.
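
One convenient way to fetch a filtered subset of LFW is through scikit-learn, which bundles a downloader for the funneled images (a minimal sketch; the filtering thresholds are just an example):

```python
from sklearn.datasets import fetch_lfw_people

# Download the funneled LFW images, keeping only people with at least 70 photos.
lfw = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
print(lfw.images.shape)        # (n_samples, height, width) grayscale face crops
print(lfw.target_names)        # the identities retained by the filter
print(len(lfw.target_names), "people,", lfw.images.shape[0], "images")
```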

5. ImageNet

This deep learning computer vision dataset was created jointly by researchers at Stanford University and Princeton University and underpins the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), an annual competition in which participating teams tackled five main tasks: object classification, object localization, object detection, object detection from video, and scene recognition. The dataset is organized around the WordNet (a lexical database for English) hierarchy, using only nouns, and each node of the hierarchy has an average of over 500 images. The full collection spans more than 14 million photos across over 20,000 categories, while the ILSVRC subset contains roughly 1.4 million images in 1,000 classes. It is the world’s largest categorized image collection, and it is free to use.

6. ObjectNet

This image dataset for computer vision was developed by researchers at the MIT-IBM Watson AI Lab with the goal of eliminating biases found in existing image datasets. The researchers used Mechanical Turk, Amazon’s micro-task platform, to crowdsource the photographs instead of curating them from existing online sources. The Turkers photographed each object individually and submitted the images for review, a procedure that ensured sufficient variation in background, lighting, rotation, and other aspects. ObjectNet contains 50,000 photos spread across 313 object classes.

7. PatchCamelyon

This medical image classification dataset is available through TensorFlow Datasets. PatchCamelyon (PCam) is a relatively new and challenging image classification dataset made up of 327,680 color images (96×96 px) extracted from histopathologic scans of lymph node sections. Each image carries a binary label indicating the presence of metastatic tissue. PCam is a machine learning benchmark that is larger than CIFAR-10, smaller than ImageNet, and trainable on a single GPU.
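
Since the dataset is distributed through TensorFlow Datasets, a minimal loading sketch (assuming the tensorflow-datasets package is installed) looks like this:

```python
import tensorflow_datasets as tfds

# Stream the PatchCamelyon (PCam) histopathology patches: 96x96 RGB images with
# a binary label indicating the presence of metastatic tissue.
ds, info = tfds.load("patch_camelyon", split="train", with_info=True, as_supervised=True)
print(info.splits)                      # train / validation / test sizes
for image, label in ds.take(1):
    print(image.shape, int(label))      # (96, 96, 3) and 0 or 1
```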

Read More: European Parliament votes for Ban on Facial Recognition: Why is it a Historic moment?

8. IMDB-Wiki Dataset

This is said to be the world’s biggest publicly available training dataset of face images with gender and age information. This collection comprises 460,723 face images from 20,284 IMDb celebrities and 62,328 Wikipedia celebrities, for a total of 523,051. The data includes important meta-information such as the location of the person’s face in the image, their name, date of birth, and gender. Typically, this dataset is used for gender and age prediction tasks.

9. DOTA

DOTA (Dataset of Object deTection in Aerial Images) is a large-scale dataset for aerial object detection which can be used to design and test object detectors using high-altitude cameras. The dataset features images amassed from a variety of sensors and platforms. Each image has a resolution of 800 x 800 pixels to 20,000 x 20,000 pixels and includes items of various sizes, orientations, and forms. Experts in aerial image interpretation mark the occurrences in DOTA photos using an arbitrary (8 d.o.f.) quadrilateral.

The computer vision dataset comprises 15 common categories (e.g., ship, plane, car, swimming pool, etc.) that are annotated as bounding boxes defined by four pairs of points on the photos. The data is divided into three categories: training, validation, and testing.

10. MPII Human Pose

This dataset is used to evaluate articulated human pose estimation. It contains around 25K photos of over 40K people with annotated body joints. Each image is taken from a separate YouTube video and accompanied by a description. The collection covers around 410 human activities, and each image is labeled with the activity it depicts.

11. COCO

Microsoft’s COCO stands for Common Objects in Context and is a large-scale dataset for object detection, segmentation, and captioning. The dataset includes images from 91 different stuff categories and 80 different object categories. It has over 120,000 photos with over 880,000 tags (each image may carry several tags). It also includes annotations for more advanced computer vision applications, including multi-object labeling, segmentation mask annotations, image captioning, panoptic segmentation, stuff image segmentation, dense human pose estimation, and key-point identification. It comes with an easy-to-use API that makes loading, parsing, and visualizing COCO annotations a breeze. 
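
The API mentioned above is the pycocotools package; a small usage sketch, assuming the 2017 validation annotation file has already been downloaded to the path shown:

```python
from pycocotools.coco import COCO

# Assumes the COCO 2017 validation annotations have been downloaded locally.
coco = COCO("annotations/instances_val2017.json")

cat_ids = coco.getCatIds(catNms=["person", "dog"])   # look up category ids by name
img_ids = coco.getImgIds(catIds=cat_ids)             # images containing both classes
img = coco.loadImgs(img_ids[0])[0]
ann_ids = coco.getAnnIds(imgIds=img["id"], catIds=cat_ids, iscrowd=None)
anns = coco.loadAnns(ann_ids)                        # bounding boxes + segmentation masks
print(img["file_name"], len(anns), "annotated instances")
```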

12. Embrapa Wine Grape Instance Segmentation Dataset

This computer vision dataset for agriculture aims to provide images and annotations for research into object recognition and instance segmentation in viticulture, for image-based monitoring and field robots. It includes examples from five distinct grape varieties harvested in the field, illustrating variation in grape position, lighting, focus, and genetic and phenological factors such as shape, color, and compactness. The dataset includes 300 images with 4,432 grape clusters identified using bounding boxes, and a subset of 137 photos also carries binary masks identifying the pixels of each cluster.

13. Bosch Small Traffic Lights Dataset

When developing an automated driving vehicle for urban cityscapes, it is crucial that the computer vision model can detect and track traffic lights using vision alone. This dataset comprises around 24,000 annotated traffic lights and 13,427 camera photos with a resolution of 1280×720 pixels. The annotations contain traffic light bounding boxes along with the current state (active light) of each traffic signal.

The camera images are provided both as raw 12-bit HDR images shot with a red-clear-clear-blue filter and as reconstructed 8-bit RGB color photographs. The RGB photographs are available for debugging and training purposes. It is important to remember that this dataset was produced to test traffic light detection methods; it is not meant to cover all eventualities and should not be used in production.

14. Google Open Images

This open-source computer vision dataset from Google is a collection of roughly 9 million image URLs annotated with labels spanning over 6,000 categories. It provides 16 million bounding boxes for 600 object classes on 1.9 million images, making it the biggest collection with object location annotations currently available. The boxes were mostly drawn by hand by skilled annotators to guarantee accuracy and consistency. The photographs are diverse and frequently contain complex scenes with several objects (8.3 per image on average).

It also has visual relationship annotations, which show pairings of items in certain relationships (e.g., “woman playing guitar,” “beer on the table”), object qualities (e.g., “table is wooden”), and human behaviors (e.g., “woman is jumping”). It comprises 3.3 million annotations from 1,466 different relationship triplets in all.

15. Waymo Open Dataset

The Waymo Open Dataset is the most extensive and diversified multimodal autonomous driving dataset to date. It includes images from a variety of high-resolution cameras and point clouds from a variety of high-quality LiDAR sensors, as well as 12 million LiDAR box annotations and around 12 million camera box annotations. It has 798 video sequences for training, 202 for validation, and 150 for testing, each lasting 20 seconds. Each video sequence has five camera views, and each camera captures 171–200 frames at a resolution of 1920×1280 or 1920×886 pixels.


Students develop Solar-powered AI Robot to clean Ocean


Five students from Abu Dhabi University have developed a new solar-powered artificial intelligence robot that can be used to clean up ocean debris. Its primary purpose is to protect the aquatic environment. 

The robot, built by five female students, uses artificial intelligence to clear the sea of plastic and numerous other floating and mineral wastes. Saba Barfis, Dana Al Manala, Tasneem Al Assaad, Lin Al Asadi, and Iman Al Naqbi developed the robot under the supervision of Dr. Mohammed Ghazal, Dr. Anas Al Tarabshah, and others. 

The robot can drive autonomously in a marine environment using only solar energy, and its battery is recharged by an off-grid photovoltaic system once it finishes its duties. Running on solar power makes the robot both reliable and sustainable. 

Read More: Skydio secures Contract of $20 million from the US Army to provide AI Drones

It considers various factors, including energy efficiency, the quantity of detected marine waste, and others, to choose its autonomous movement tracks. 

Co-developer of the robot, Dana Al Manala, said, “The amount of waste is increasing in the world every year, and human practices affect the land and water surfaces, threatening the marine environment with plastic and other types of floating waste. This endangers marine life and thus humans, as many fish, turtles, and aquatic species die due to eating plastic.” 

She further added that their technology aims to help the aquatic environment recover by removing all types of debris from the water’s surface. The robot will also aid people who work in water-surface cleaning by drastically reducing their working hours. 

Saba Barfis, one of the developers, revealed that the robot cleans water bodies using an autonomous, self-driving, and sustainable mechanism built specifically to clean water surfaces safely and continuously. 

Recently, researchers from NOAA, NCCOS, Oregon State University, and their partners announced plans to develop a machine learning-powered drone system to tackle the growing problem of marine litter.
