
Indian student’s machine learning software to be sent to space


A team of five students led by Indian student Archit Gupta at Singapore’s Nanyang Technological University (NTU) has built a machine learning software package, Cremer, which will be sent to the International Space Station (ISS) for testing. Gupta is a second-year student at NTU’s School of Computer Science and Engineering.

The opportunity to test the software aboard the ISS comes after the team won a competition, held at the start of the year, on developing innovative ways to use artificial intelligence (AI) for space applications.

Over the next three months, the team will install the software onto a tiny supercomputer called an artificial intelligence box, after which it will be physically transported to the ISS.

Read More: AI To Help Study Images From James Webb Space Telescope

Gupta said the purpose of the ISS is to collect experimental data. If single event upsets – disruptions that tend to afflict sensitive electronic components in space – occur, the integrity of the data is compromised and the experiment goes to waste.

The software, Cremer, will play a crucial role in predicting hardware disruptions on the ISS or on satellites, which in worst-case scenarios can cause these space vehicles to go off course or even crash. Cremer was christened after an existing software program called Creme, which also predicts single event upsets.
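The team’s code has not been published; as a toy illustration of the failure mode Cremer predicts (the checksum scheme and telemetry string below are invented here, not the team’s actual approach), a single event upset can be modelled as a one-bit flip in stored telemetry that a checksum then exposes:

```python
def checksum(record: bytes) -> int:
    """Simple additive checksum over the record's bytes (illustrative only)."""
    return sum(record) % 256

def flip_bit(record: bytes, byte_index: int, bit: int) -> bytes:
    """Simulate a single event upset: flip one bit of one byte."""
    corrupted = bytearray(record)
    corrupted[byte_index] ^= 1 << bit
    return bytes(corrupted)

telemetry = b"temp=21.4;rad=0.03"
stored_sum = checksum(telemetry)

# A cosmic-ray strike flips bit 5 of byte 3 ...
damaged = flip_bit(telemetry, 3, 5)

# ... and the checksum no longer matches, so the sample cannot be trusted.
print(checksum(damaged) == stored_sum)  # False
```

In a real system, detecting (or predicting) such upsets lets the experiment discard or re-collect the affected samples instead of silently using corrupted data.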

The other team members are third-year mechanical engineering student Deon Lim, third-year business student Sim See Min, and second-year electrical and electronic engineering student Rashna Ahmed.


Indigenous AI-powered Software to Prevent Trespassing on Defence Land


Directorate General Defence Estates (DGDE) has developed an indigenous AI-based software that detects illegal construction and trespassing on defence land using satellite imaging. The software was developed by the Centre of Excellence on Satellite and Unmanned Remote Vehicle Initiative (CoE-SURVEI), along with the Bhabha Atomic Research Centre (BARC), at Meerut Cantonment in Uttar Pradesh.

Currently, the technology employs trained software and Cartosat-3 imagery from the National Remote Sensing Centre (NRSC). The software can detect any alterations made to the land by comparing satellite images in a time series. It allows the Chief Executive Officers of Cantonment Boards to track changes made to the area, check whether those changes are authorized, and decide when to take action if they turn out to be unauthorized.
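DGDE has not published its pipeline, but the core idea of time-series change detection can be sketched in miniature (the pixel values, threshold, and helper names below are invented for illustration): two co-registered images of the same area are differenced pixel by pixel, and large changes are flagged for review.

```python
def changed_pixels(before, after, threshold=30):
    """Flag pixels whose intensity changed by more than `threshold`
    between two co-registered single-band images (lists of rows)."""
    flags = []
    for row_b, row_a in zip(before, after):
        flags.append([abs(a - b) > threshold for b, a in zip(row_b, row_a)])
    return flags

def change_fraction(flags):
    """Fraction of pixels flagged as changed."""
    total = sum(len(row) for row in flags)
    changed = sum(sum(row) for row in flags)
    return changed / total

# Two tiny 3x3 "satellite images": a structure appears at the centre pixel.
before = [[10, 12, 11], [9, 15, 10], [11, 10, 12]]
after = [[11, 13, 10], [10, 95, 11], [12, 11, 13]]

flags = changed_pixels(before, after)
print(change_fraction(flags))  # 1 of 9 pixels flagged
```

A production system would add georeferencing, radiometric normalization, and a trained classifier on top, but the time-series comparison the article describes reduces to this kind of differencing at its core.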

The software is currently being used in 62 cantonments in the region and has facilitated enhanced accountability of the field staff and aided in reducing malpractices. It has detected around 1,133 unauthorized alterations, out of which 570 cases have been penalized. In the remaining cases, the Cantonment Boards have taken legal action wherever possible.

Read More: Intel’s OpenVINO v2022.1 – The Biggest Update to AI Toolkit in 3 Years

A. Bharat Bhushan Babu, Principal Spokesperson of the Ministry of Defence, tweeted, “It facilitates better control on unauthorized activities, ensures accountability of field staff, and helps in reducing corrupt practices.” 

The CoE-SURVEI has also developed a means to analyze vacant land via 3D still imagery for topographies like hill cantonments. By investing more in AI-powered detection tools, the centre is trying to ensure that defence land is optimally used and protected via Geographic Information System (GIS)-based management.

The CoE has also collaborated with other organizations on enhanced AI interfaces and change detection tools. The investments are aimed at benefiting DGDE and the Services in managing defence land, especially in inhospitable regions.


Taiwan Hospital Adopts NVIDIA Jetson Real-Time AI Risk Prediction for Kidney Patients


Taipei Veterans General Hospital (TVGH) in Taiwan is working to enhance outcomes for dialysis patients by using an NVIDIA Jetson-based AI model that predicts heart failure risk for kidney patients in real time during dialysis. Taiwan has the world’s highest prevalence of kidney dialysis patients. Cardiovascular disease is a leading cause of death in dialysis patients, and TVGH hopes to mitigate that risk while improving the procedure’s outcomes.

It plans to do so via an AI-based risk assessment model that achieves 90% accuracy and evaluates up to 200 sets of dynamic physiological and dialysis-machine values while also processing medical records, blood test results, and medication data.
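TVGH’s actual model is far richer, but the shift from periodic manual notes to continuous scoring can be sketched with a toy risk score (the feature names, weights, and alarm threshold below are invented for illustration, not TVGH’s model): each machine reading is scored as it streams in, and the first high-risk moment triggers an alarm.

```python
# Hypothetical feature weights, invented for illustration (not TVGH's model).
WEIGHTS = {"bp_drop_mmhg": 0.01, "heart_rate_bpm": 0.002, "fluid_removed_l": 0.05}

def risk_score(reading):
    """Collapse one snapshot of machine readings into a 0..1 risk score."""
    return min(sum(WEIGHTS[k] * v for k, v in reading.items()), 1.0)

def monitor(stream, alarm_at=0.5):
    """Score every reading as it streams in; return the first high-risk moment."""
    for minute, reading in stream:
        score = risk_score(reading)
        if score >= alarm_at:
            return minute, score
    return None

stream = [
    (0,  {"bp_drop_mmhg": 5,  "heart_rate_bpm": 70,  "fluid_removed_l": 0.2}),
    (10, {"bp_drop_mmhg": 12, "heart_rate_bpm": 82,  "fluid_removed_l": 0.9}),
    (20, {"bp_drop_mmhg": 30, "heart_rate_bpm": 110, "fluid_removed_l": 1.8}),
]
print(monitor(stream))  # alarm raised at minute 20
```

The point of running such a model at the edge, next to each dialyzer, is that the alarm fires the moment a reading crosses the threshold rather than at the next 30-minute manual check.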

Adopting AI technologies like the NVIDIA Jetson platform will enable TVGH to analyze data in real time, combining dialysis-machine data with patients’ medical records and test results.

Read More: IIT-Mandi Announces MBA Program in Data Science and AI for the Upcoming Semester

Prof Der-Cherng Tarng, Chief of Department at TVGH, said, “By deploying NVIDIA Jetson next to each dialyzer to perform AI prediction during the procedure, we can achieve real-time insights in a way that’s affordable and effective, even for small-scale dialysis centers.”

The initial workflow required healthcare staff to note physiological changes every 30 minutes; the AI model now records and analyzes the data generated by the dialyzers automatically. To provide real-time inference, TVGH adopted the Aetna Edge AI Starter featuring Jetson Xavier NX, which can process up to 21 trillion operations per second. TVGH’s team also used NVIDIA TensorRT software to optimize predictions for the platform.

The hospital is also working on more AI projects with NVIDIA Parabricks genomics software, NVIDIA FLARE for federated learning workflows, and NeMo Megatron for natural language processing.


AWS Announces a Machine Learning Scholarship with Udacity


To invite learners interested in honing their machine learning skills and expertise, AWS has collaborated with Udacity on the AWS Machine Learning Engineer Scholarship Program.

This program aims to raise the level of machine learning expertise among all participants and to develop the next generation of ML leaders globally, emphasizing underrepresented groups. AWS works with professional groups spearheading efforts to broaden the skill and diversity of technical professions through its We Power Tech Program, including groups like Girls In Tech and the National Society of Black Engineers.

The principles of machine learning, the procedures involved in the process, reinforcement learning, generative AI, software engineering best practices for data science, and how to create your own Python package are all covered in this course.

Read More: Reddit Enters Spain as it Acquires NLP Company MeaningCloud

Udacity will offer incentives like badges for completing lessons so students can feel confident about their accomplishments. Developers of all experience levels are encouraged to take the foundations course to grasp the basics of machine learning.

After completing the AWS Machine Learning Foundations Course, students will take an exam; the best performers will be selected for a follow-up scholarship to the AWS Machine Learning Engineer Nanodegree program, one of Udacity’s most popular and freshly updated Nanodegree programs.

Sunil PP, Lead, Education, Space, and Nonprofits at Amazon Web Services, said, “At AWS, we believe machine learning is among the most disruptive technologies we will encounter in our generation. Building a workforce that is skilled in ML will be crucial for India to leverage the transformational opportunity that ML presents.”

This is not the only opportunity being provided by AWS. It has also launched the Summer Cohort of the AWS DeepRacer Student League 2022 with the support of the Ministry of Education and the All India Council for Technical Education (AICTE), with a similar aim of introducing ML to students.


Indian used cars start-up CarzSo launches showroom in Metaverse


CarzSo, a virtual reality-based auto tech firm, has launched India’s first used-car showroom in the metaverse. Users can choose from assorted makes and models meticulously picked by the company.

The company also offers search tools that help users to find the right car in just a few clicks. Users can browse through a vast range of vehicles, using various filters like make, body type, model, price range, etc. 

The company has also announced the development of an NFT-based parcel of land to launch an auto industry-focused metaverse. The company added that it is focusing on building a video gaming platform around this concept. 

Read More: Meta To Launch Metaverse Academy In France With Simplon 

The auto tech startup also aims to let buyers and owners create virtual assets of their vehicles, giving vehicles their own unique digital identity. Additionally, car owners will be able to create NFTs of their vehicles’ number plates, which can be traded later.

CarzSo already has a Web 2.0 presence that leverages virtual reality and virtual-showroom technology to buy and sell cars. The new offering will complement the company’s existing virtual showrooms, allowing customers to skip a visit to the dealership and buy cars virtually.


Intel’s OpenVINO v2022.1 – The Biggest Update to AI Toolkit in 3 Years


Intel announced its biggest update in 3.5 years, OpenVINO 2022.1, with functional bug fixes and enhanced capabilities over its predecessor, v2021.4.2 Long Term Support (LTS). It is a standard release for developers who want upcoming features, and such releases will be made available three to four times a year; the company also plans to continue providing LTS releases.

Recently, Intel expanded its OpenVINO toolkit to bring more intelligence to its Edge technologies. Now, this new update would include:

Updated and Cleaner API: The update introduces the new OpenVINO API 2.0. This API version aligns OpenVINO inputs/outputs with several frameworks using native layouts and element types. API 2.0 also enables developers to work with Dynamic Shapes, which is beneficial for Natural Language Processing (NLP) models and super-resolution models.

Read More: Zomato Releases a hyper-localized Deepfake Ad Starring Hrithik Roshan

Libraries merged into a common openvino library: inference_engine, ngraph, inference_engine_lp_transformations, and inference_engine_transformations were merged. A few more libraries were renamed.

Enhanced Portability and Performance: The update comes with a new AUTO plugin that self-discovers system inference capabilities, so applications do not need to know their compute environment in advance.

OpenVINO performance hints help configure performance with portability in mind. The hints “reverse” the direction of configuration: the device configures itself in response to a target scenario expressed through a single configuration key. Because the hints are supported by every OpenVINO device, this is a portable and future-proof solution.

Broader Model Support: OpenVINO will be able to adapt to multiple input dimensions in a single model with Dynamic Input Shapes. 

The new version provides more models with a focus on NLP, along with a new Anomaly Detection capability. It offers anomaly segmentation for pre-trained models and noise reduction + speech recognition + translation + text-to-speech for combined models.

OpenVINO 2022.1 is built with 12th Gen Intel Core ‘Alder Lake’ processors in mind. Hence, it supports hybrid architectures to deliver high-end performance on CPUs and integrated GPUs. For more details, you can check out the OpenVINO toolkit 2022.1 release.


AI Detects New Family of Genes in Intestinal Bacteria


UT Southwestern researchers have found a new family of sensing genes in enteric bacteria using artificial intelligence (AI). The genes are linked by structure and function but not by genetic sequence. 

Published in PNAS, the findings offer a novel approach to identifying genes’ role in unrelated species and could also lead to new ways to fight intestinal bacterial infections.

Professor of Molecular Biology and Biochemistry Kim Orth co-led the study with Lisa Kinch, a bioinformatics specialist in the Department of Molecular Biology. “We identified similarities in the proteins in reverse of how it is usually done. Instead of using sequence, we looked for matches in their structure,” said Orth.

Read More: Schrodinger Is Using Artificial Intelligence Solutions To Develop Medical Drugs

Dr. Orth’s lab studies how estuary and marine bacteria cause infections. In 2016, she and her colleagues used biophysics to characterize the structure of a complex of two proteins, VtrA and VtrC, that work together in the bacterial species Vibrio parahaemolyticus.

Orth and her team used the artificial intelligence software called AlphaFold for their research. The AI program can accurately predict the structure of some proteins based on the genetic sequence that codes for them. Previously, this information was only gleaned through laborious work in the laboratory.  

Orth said these advancements could eventually lead to the discovery of pharmaceuticals that treat conditions caused by different infectious organisms relying on similar pathogenic strategies.


IISc researchers build an ML algorithm to discover human brain connectivity


Researchers at IISc, Bengaluru, have built an ML algorithm that uses a graphics processing unit (GPU) to study human brain connectivity.

The researchers developed the ML algorithm to analyze brain data generated from diffusion Magnetic Resonance Imaging (dMRI) scans. They report that it can evaluate dMRI data over 150 times faster than existing algorithms.

Devarajan Sridharan, associate professor at IISc’s Center for Neuroscience (CNS) and corresponding author of the study, published in Nature Computational Science, said, “Tasks that took hours and days previously can be done within minutes and seconds now.”

Millions of neurons fire in the brain every second, generating electrical pulses that travel across neural networks from one point to another through connecting cables, or axons, which are essential for the computations the brain performs.

Read More: Modular closes $30M Seed Round to Provide a Unified Platform for AI-based System Development

The conventional approaches use animal models and are invasive. Varsha Sreenivasan, a Ph.D. student at CNS and the first author of the study, said, “dMRI scans can provide a non-invasive method to study brain connectivity in humans.”

Axons are the brain’s information highways. As IISc explains, bundles of axons are tube-shaped, and water molecules move through them in a directed manner along their length.

Sridharan said, “Imagine water molecules are cars. You can obtain information about the direction and speed of cars at each point in space and time without any details about the roads. Similarly, our task is to infer the networks of roads by observing these traffic patterns.”

To identify these networks, traditional algorithms closely match the dMRI signal predicted from an inferred connectome with the observed dMRI signal. Scientists previously developed such an algorithm, called LiFE (Linear Fascicle Evaluation), but it worked only on traditional CPUs and took a long time to process data.
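The matching step this describes can be caricatured as a small non-negative least-squares problem (the fiber signals and fitting loop below are invented for illustration, not the actual LiFE implementation): each candidate fiber contributes a signal pattern, and the fit assigns non-negative weights so that the weighted sum matches the observed signal, pruning fibers whose weight falls to zero.

```python
def predict(weights, fiber_signals):
    """Predicted dMRI signal: weighted sum of each candidate fiber's pattern."""
    n = len(fiber_signals[0])
    return [sum(w * f[i] for w, f in zip(weights, fiber_signals)) for i in range(n)]

def fit_connectome(observed, fiber_signals, steps=2000, lr=0.01):
    """Toy LiFE-style fit: non-negative weights minimizing squared error
    between predicted and observed signals (projected gradient descent)."""
    weights = [0.0] * len(fiber_signals)
    for _ in range(steps):
        pred = predict(weights, fiber_signals)
        residual = [p - o for p, o in zip(pred, observed)]
        for j, f in enumerate(fiber_signals):
            grad = 2 * sum(r * fi for r, fi in zip(residual, f))
            weights[j] = max(0.0, weights[j] - lr * grad)  # project onto >= 0
    return weights

# Two candidate fibers; the observed signal is 0.7 * fiber0 + 0.0 * fiber1.
fibers = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
observed = [0.7, 0.0, 0.7]
w = fit_connectome(observed, fibers)
print(w)  # fiber0 weight near 0.7, fiber1 pruned to 0
```

At real scale there are millions of candidate fibers and signal samples, which is why moving this kind of repeated matrix arithmetic onto a GPU pays off so dramatically.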

Therefore, researchers in Sridharan’s team at IISc developed the ML algorithm to improve LiFE’s performance. “To speed up LiFE, Sridharan’s team redesigned the algorithm to work on specialized electronic chips found in high-end gaming computers, called GPUs, helping analyze data 100-150 times faster,” IISc said.


WiseWorks AI Raises $1.2M to Build a One-stop AI Solution to Analyze Virtual Communications


WiseWorks AI, an intelligent communications company backed by AI veteran Dr. Rojon Nag, raised a $1.2M investment in a round led by Veridian Ventures, from funds including the Silicon Valley-based R42, SyndicateRoom Angel Fund, and Istcapital. Dr. Nag is the founder of R42 and has advised several speech recognition companies at brands like Motorola and BlackBerry.

Dr. Nag said, “Having worked in speech recognition for 30 years, we can see the potential of WiseWorks’ application to transform the way financial institutions monitor and analyze virtual communications and handle costly and time-intensive activities.”

The startup, WiseWorks, aims to help regulated firms, such as financial institutions accredited by the Financial Conduct Authority (FCA), record and store communications. Since the COVID-19 pandemic pushed many institutions to remote workflows, companies have been struggling to manage the explosion of communications. Hence, a robust system for tracking and handling virtual communications involving confidential company data is needed.

Read More: Siemens Launches Xcelerator, an AI-Enabled Open Business Platform, Unveils Building X With NVIDIA

Teoman Gonec, co-founder and CEO of WiseWorks, said their technology would help institutions achieve these objectives in a way that was not feasible before. WiseWorks has built a one-of-a-kind product that captures visual, linguistic, and acoustic communication cues, enabling previously out-of-reach use cases such as predicting misleading advice and ad hoc fact-finding for development purposes.

The startup is already gaining traction amongst several financial institutions with its Tier 1 trials. The company also plans to extend the product to other consumers who face similar challenges.


DeepNash by DeepMind beats AI by mastering Stratego


On Jul 1, 2022, DeepMind tweeted about its newly trained model-free multi-agent reinforcement learning agent, DeepNash, which has learned to play the board game Stratego. Stratego challenges players to command army pieces of various ranks to defeat opponents and capture their flags. The autonomous agent learned Stratego from scratch up to a human expert level.

Among the many board games available, Stratego is one of the few that artificial intelligence (AI) has not yet been able to master. This popular game has a vast game tree of the order of 10^535 nodes, i.e., 10^175 times bigger than that of Go, an abstract strategy board game.

There is additional complexity in its decision-making, similar to Texas hold’em poker; the latter, however, has a much smaller game tree, of the order of 10^164 nodes. Decisions in Stratego involve a large number of possible actions, and there is no evident correlation between an action and the outcome.
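These size comparisons are easiest to check on a log scale, since a ratio of powers of ten is just a difference of exponents (Go’s ~10^360 game-tree complexity is the commonly cited figure, assumed here):

```python
# Game-tree sizes quoted above, as orders of magnitude (powers of ten).
STRATEGO_EXP = 535   # Stratego, ~10^535 nodes
GO_EXP = 360         # Go's commonly cited game-tree complexity, ~10^360
HOLDEM_EXP = 164     # Texas hold'em, ~10^164

# "How many times bigger" is a difference of exponents on a log scale.
print(f"Stratego vs Go:       10^{STRATEGO_EXP - GO_EXP} times larger")
print(f"Stratego vs hold'em:  10^{STRATEGO_EXP - HOLDEM_EXP} times larger")
```

The first difference, 535 - 360 = 175, matches the "10^175 times bigger than Go" figure quoted above.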

Read more: Siemens Launches Xcelerator, an AI-enabled Open Business Platform, Unveils Building X with NVIDIA

Each game is long, consisting of many moves before a player wins, and Stratego cannot easily be broken down into manageable sub-problems. For these reasons, the game has been a massive challenge for artificial intelligence, with existing agents hardly clearing the amateur level.

DeepNash, by contrast, uses a game-theoretic, model-free deep reinforcement learning algorithm that has mastered Stratego through self-play. Its essential component, the Regularised Nash Dynamics (R-NaD) algorithm, converges to an approximate Nash equilibrium by directly modifying the underlying multi-agent learning dynamics.
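R-NaD itself is considerably more involved, but the basic idea of self-play dynamics whose average behaviour approaches a Nash equilibrium can be shown with a much simpler, standard method: regret matching on rock-paper-scissors (a textbook sketch, not DeepMind’s algorithm). Each player mixes actions in proportion to accumulated positive regret, and the average strategy converges to the game’s unique equilibrium of playing each move one third of the time.

```python
import random

# Rock-paper-scissors payoff for the row player. The game is zero-sum and
# antisymmetric, so the column player's payoff is PAYOFF[col][row].
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy_from_regrets(regrets):
    """Mix actions in proportion to positive regret; uniform if none."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def self_play(iterations=100_000, seed=0):
    """Two regret-matching players train against each other; the AVERAGE
    strategy (not the final one) converges to the Nash equilibrium."""
    rng = random.Random(seed)
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strat_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strats = [strategy_from_regrets(r) for r in regrets]
        acts = [rng.choices(range(ACTIONS), weights=s)[0] for s in strats]
        for p, opp in ((0, 1), (1, 0)):
            for a in range(ACTIONS):
                strat_sum[p][a] += strats[p][a]
                # Regret: how much better switching to `a` would have been.
                regrets[p][a] += PAYOFF[a][acts[opp]] - PAYOFF[acts[p]][acts[opp]]
    return [s / sum(strat_sum[0]) for s in strat_sum[0]]

avg = self_play()
print(avg)  # each probability settles near 1/3, the Nash equilibrium of RPS
```

Stratego’s equilibrium is astronomically harder to approach than this three-action game, which is where R-NaD’s regularisation of the learning dynamics comes in.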

DeepNash exceeds current artificial intelligence strategies at Stratego and ranks in the all-time top three on the Gravon games platform, where it contests human expert players.
