
Machine Learning-based MRI predicts Alzheimer’s disease


Newly published research used machine learning techniques to examine structural features inside the brain and identify Alzheimer’s disease at an early stage, when it can be difficult to diagnose. The technique also examined regions not previously associated with Alzheimer’s.

The research was published in a Nature Portfolio journal and funded through the NIHR Imperial Biomedical Research Centre.

The new technique requires only a magnetic resonance imaging (MRI) brain scan taken on a standard 1.5 Tesla machine, the kind commonly found in hospitals, to perform the diagnosis.

Read More: MRI And AI Can Detect Early Signs Of Tumor Cell Death After Novel Therapy

The team of researchers applied an algorithm originally developed for classifying cancer tumors in the brain. They divided the brain into 115 regions and assigned 660 features, such as shape, size, and texture, to assess each region. They then trained the algorithm to detect changes in these features that accurately predict the presence of Alzheimer’s disease.
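
For readers unfamiliar with this kind of pipeline, the sketch below shows in rough outline how a per-region feature matrix can be fed to an off-the-shelf classifier. It is an illustrative example only: the data is synthetic, and the study’s actual feature extraction and model are not reproduced here.

```python
# Illustrative sketch only: classify patients from a per-region MRI feature matrix.
# The 660 shape/size/texture features follow the article's description of the study;
# the data below is synthetic, not the study's MRI measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_features = 400, 660              # one row per patient, one column per region feature
X = rng.normal(size=(n_patients, n_features))  # stand-in for extracted MRI features
y = rng.integers(0, 2, size=n_patients)        # 1 = Alzheimer's, 0 = control (synthetic labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```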

The researchers tested the new AI-based technique on brain scans of over 400 patients from the Alzheimer’s Disease Neuroimaging Initiative. The scans came from patients with early- and later-stage Alzheimer’s as well as patients with other neurological conditions, including Parkinson’s disease and frontotemporal dementia. The team also tested it on data from over 80 patients undergoing diagnostic tests for Alzheimer’s at the Imperial College Healthcare NHS Trust.

In 98% of cases, the MRI-based machine learning system could accurately detect whether the patient had Alzheimer’s disease or not. The technique also differentiated between early and late-stage Alzheimer’s with 79% accuracy.

The system also spotted changes in areas of the brain not previously associated with Alzheimer’s disease, including the cerebellum and the ventral diencephalon, raising the possibility of new avenues for treating the disease.


Samsung Ventures Invests in NeuReality


Samsung Ventures has invested in the Israeli AI and semiconductor company NeuReality. Backed by Cardumen Capital, Varana Capital, and OurCrowd, NeuReality provides chips, hardware, and software tools that significantly accelerate AI deployment.

Founded in 2019, the startup focuses on AI-as-a-service infrastructure for cloud, edge, and fog systems. It takes a system-level approach that combines easy-to-use software with deep learning acceleration and data-handling hardware, a holistic approach that simplifies the mass deployment of inference technologies.

Ori Kirshner, Samsung Ventures head in Israel, said, “We see substantial and immediate need for higher efficiency and easy-to-deploy inference solutions for data centers and on-premises use cases, and this is why we are investing in NeuReality.”

NeuReality’s services concentrate on real-life applications in various sectors, such as e-commerce, medical, digital personal assistants, and more. Its broader solutions cater to cloud providers, telecom operators, and data centers. The company is building purpose-built AI platforms, drawing on experience in data center systems, AI, hardware design, and software development.

Read More: Dataiku Announces Dataiku 11 with Enhanced Tools to Scale AI

The investment from Samsung Ventures will bring the company closer to its goal of enabling customers, across various deployment topologies, to adopt AI-based services in their workflows.

“The investment from Samsung Ventures is a big vote of confidence in NeuReality’s technology. The funds will help us take the company to the next level and take our NR1 SoC to production. This will enable our customers to evolve their system architecture, and this evolution will make it easier for them to scale and maintain their AI infrastructure, whether it is in their data center, in a cloud or on-premises,” said Moshe Tanach, co-founder and CEO of NeuReality.

NeuReality is also working with IBM to deliver disruptive cost and power performance in AI platforms, and with AMD to deliver its first-generation FPGA-based platform.


UF/IFAS to use AI to assess livestock mobility


Scientists at the University of Florida Institute of Food and Agricultural Sciences (UF/IFAS) plan to assess livestock mobility faster and more accurately using artificial intelligence (AI). The technology will analyze high-definition videos of the animals as they move.

The team will use machine learning and gait analysis to speed up the assessment of livestock mobility.

Samantha Brooks, a UF/IFAS geneticist and associate professor of equine physiology, along with other UF researchers, received a $49,713 grant from the Agricultural Genome to Phenome Initiative.

Read More: University Of Florida Develop National AI Curriculum

The researchers will work primarily with horses, as they are an excellent model for locomotion. The laboratory is currently working with about 2,000 video clips of horses in motion, contributed by graduate student Madelyn Smythe and hundreds of central Florida horse owners.

The extensive library of videos will enable the construction of accurate models to track the animals’ movement in the video frame. Although working with horses now, the team will translate the findings to similar models for other four-legged farm animals. 

The team will also build AI models to analyze videos of cattle, swine, and small ruminants for the project. While reviewing the data, the researchers will look at horse traits such as stance time, limb extension, and stride length. For cattle and swine, scientists will focus on asymmetry and postures that indicate pain from abnormal function in one or more limbs.
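
As a rough illustration of what gait analysis on tracked video can look like, the sketch below derives stance time and stride length from the path of a single tracked hoof keypoint. The trajectory is synthetic and the thresholds are arbitrary; it is not the UF/IFAS team’s pipeline.

```python
# Illustrative sketch, not the UF/IFAS pipeline: derive simple gait metrics from a
# tracked hoof keypoint. Assumes a pose-estimation model has already produced a
# per-frame forward position for one hoof; the trajectory below is synthetic.
import numpy as np

fps = 60.0
t = np.arange(0, 4, 1 / fps)                    # 4 seconds of video at 60 fps
phase = (1.5 * t) % 1.0                         # 1.5 strides per second
swing = phase < 0.4                             # hoof moves during 40% of each stride cycle
x = np.cumsum(np.where(swing, 1.2 / fps, 0.0))  # hoof advances at 1.2 m/s while swinging

speed = np.abs(np.diff(x)) * fps                # per-frame hoof speed (m/s)
stance = speed < 0.1                            # stance phase: hoof nearly stationary

# Find contiguous stance runs and report mean stance time and stride length.
changes = np.diff(np.concatenate(([0], stance.astype(int), [0])))
starts, ends = np.flatnonzero(changes == 1), np.flatnonzero(changes == -1)
stance_times = (ends - starts) / fps
stride_length = (x[-1] - x[0]) / len(starts)    # average forward travel per stride
print(f"mean stance time: {stance_times.mean():.2f} s, stride length: {stride_length:.2f} m")
```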


PGIMER to use AI to predict mortality


The Postgraduate Institute of Medical Education and Research (PGIMER) in Chandigarh is currently researching the application of machine learning (ML) and artificial intelligence (AI) algorithms to predict critical events such as mortality and duration of ICU stay. 

Professor GD Puri, Head of the Department of Anesthesia and Intensive Care, and his team are making technological advances through AI/ML in medicine, in collaboration with the cardiothoracic vascular surgery team led by Dr. SK Singh Thingnam and with Dr. Nitin Auluck at IIT Ropar.

Moreover, an interdisciplinary team is working under the leaders’ guidance to develop a predictive platform using the current Anesthesia Information and Management System (AIMS) to help with the decision-making process. 
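
To give a sense of what such a predictive platform involves, the sketch below fits a simple logistic-regression mortality model to synthetic perioperative data. The feature names are hypothetical placeholders, not actual AIMS fields, and the model is illustrative rather than the PGIMER team’s approach.

```python
# Illustrative sketch only: predict in-hospital mortality from perioperative features.
# Feature names are hypothetical placeholders, not actual AIMS fields; the data and
# the outcome below are synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "age": rng.normal(62, 12, n),
    "bypass_minutes": rng.normal(110, 30, n),   # cardiopulmonary bypass time
    "lowest_map": rng.normal(65, 10, n),        # lowest intraoperative mean arterial pressure
    "postop_lactate": rng.gamma(2.0, 1.2, n),
})
# Synthetic outcome loosely tied to the features, for demonstration only.
risk = 0.03 * (df["age"] - 60) + 0.01 * (df["bypass_minutes"] - 100) + 0.4 * (df["postop_lactate"] - 2)
y = (rng.random(n) < 1 / (1 + np.exp(-(risk - 2)))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("ROC AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
```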

Read More: Artificial Intelligence Predicts Mortality Rate Using Socioeconomic And Clinical Data

AIMS is already functional in the four cardiothoracic vascular surgery operation theaters at the Institute’s Advanced Cardiac Centre. With the new AI and ML technology, AIMS could help utilize resources better and predict the probability of adverse outcomes, supporting advanced post-surgery cardiac care.

Recently, a copyright application was filed for a software program that automatically extracts relevant data from patient reports, a step that is indispensable for capturing vital patient data. The team is also laying the foundation to use supercomputing resources to address the computational needs of medical applications, with an eye on the future.


Dataiku Announces Dataiku 11 with Enhanced Tools to Scale AI


AI Conference London, 2022: New York-based Dataiku has announced a pivotal update to its AI and data science platform. Dataiku 11, the new release, delivers more value at scale and enables broader engagement with artificial intelligence. The latest version will be released in July and focuses on delivering the company’s promise of “everyday AI.”

Clément Stenac, cofounder and CTO of Dataiku, said, “Expert data scientists, data engineers and ML [machine learning] engineers are some of the most valuable and sought-after jobs today. Yet all too often, talented data scientists spend most of their time on low-value logistics like setting up and maintaining environments, preparing data and putting projects into production.”

He further explained that Dataiku’s extensive automation in the 11th version will aid in eliminating the busy work so companies can focus on making the most of their AI investments and not on setting up logistics. 

He added, “With extensive automation built into Dataiku 11, we’re helping companies eliminate the frustrating busywork so companies can make more of their AI investment quickly and ultimately create a culture of AI to transform industries.”

Read More: Google AI Launches LIMoE, a Large-Scale Architecture for Pathways

With Dataiku 11, developers get a Feature Store for sharing flows, improving organization and accelerating projects. The updated version also offers: 

Code Studio: A fully managed, isolated coding environment where developers can work with their preferred IDE/web app stack, removing the complex custom setup that was previously required. 

Seamless Computer Vision development: The update brings a visual ML interface and a data labeling framework. Before version 11, annotating data at scale required third-party platforms; Dataiku 11 now supports end-to-end computer vision workflows for detecting and annotating complex objects. 

Time-series forecasting: Dataiku 11 adds built-in no-code tools for analyzing historical company data and producing forecasts. 

Besides these updates, the company unveiled several mainstream features, including generated flow documentation and a central registry for data pipelines and project artifacts. The aim is to improve oversight and control over model development and deployment.


Anantnag school introduces AI-enabled teaching solutions


St. Luke’s Convent School, Anantnag, Jammu & Kashmir, has partnered with a leading education technology platform, EMBIBE, to train teachers with Artificial Intelligence (AI) enabled personalized teaching solutions.

This initiative aims to create an adaptive learning environment for students through their teachers. The training will catalyze high-quality digital education with syllabus-aligned 3D content designed to encourage imagination.

Using AI-based teaching, teachers can create impactful graphic lessons and easily switch between online and offline modes. The platform will also enable teachers to assign personalized homework to students and conduct tests easily. 

Read More: Deloitte Partners With The University Of Sydney Business School On AI

Moreover, the AI technology also tracks students’ grades, score progress, syllabus coverage, behavioral changes, and class engagement. Overall, teachers will be able to impart quality education and provide personalized attention to each student while delivering learning outcomes consistent with the New Education Policy.

According to Masood Ahmad, Chairman of St. Luke’s Convent School, the initiative will provide students with better access to high-quality content and detailed reports on their performance and learning patterns. Also, he added that teachers could understand every student’s strengths and weaknesses and take necessary action through this platform. 

EMBIBE is a leading educational technology platform that uses artificial intelligence to close learning gaps, personalize learning, and strengthen student-teacher interaction. The platform serves the education ecosystem with top-notch content aligned with every prescribed curriculum in every language.


Salesforce Open-Sources OmniXAI, an Explainable AI Machine Learning Library


OmniXAI, which stands for Omni eXplainable AI, is a Python-based machine learning library that provides deeper insights into AI model decisions. It is an open-source framework from Salesforce that offers a range of ‘model-agnostic’ and ‘model-specific’ methods for explaining AI decisions.

Modern AI models based on deep neural networks can be too complex for humans to understand, and as a consequence, many crucial applications avoid using them. This complexity has caused a surge in the development of XAI models, which offer more transparency and persuasiveness and help enhance model performance.

However, existing XAI libraries have limitations. They can handle only a limited range of data types and models, and each library has a different interface, making it difficult to switch from one to another. Furthermore, existing frameworks lack visualization and comparative explanations.

Read More: YogiFi launches 2nd Gen AI Yoga Mats: YogiFi Gen 2 and YogiFi Gen 2 Pro

OmniXAI is designed as a one-stop, comprehensive library that addresses these shortcomings. It makes XAI accessible across the machine learning workflow, from data exploration and feature engineering to model development and decision-making. With OmniXAI, users can take a ‘model-agnostic’ approach, in which the framework provides insights without any prior knowledge of the AI model, or a ‘model-specific’ approach, in which it generates explanations using some knowledge of the model’s internals.
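
To illustrate what a ‘model-agnostic’ explanation means in practice, the short example below uses scikit-learn’s permutation importance, which only queries a fitted model’s predictions. It is a generic sketch of the concept, not OmniXAI’s own API.

```python
# Illustrates the 'model-agnostic' idea described above using scikit-learn's
# permutation importance: the explainer treats the model as a black box and
# only needs its predictions, not its internals. Not OmniXAI's own API.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)   # any fitted model works here

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```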

Salesforce’s open-source library, OmniXAI, is also compatible with PyTorch, TensorFlow, and other frameworks. Users can choose from several explanation methods for various elements of the model, and OmniXAI facilitates this by providing a standard interface that generates explanations in a few lines of code. 

The primary design philosophy of OmniXAI is to allow users to apply many explanation methods simultaneously and visualize the resulting explanations. Salesforce continues to develop and improve OmniXAI, adding more algorithms and compatibility with more data types. 


OSRTC installs AI-based smart toilet in Puri district 


Odisha State Road Transport Corporation (OSRTC) has installed an ultra-modern, artificial intelligence-based toilet-cum-integrated commercial facility in the Puri district.

The AI-based facility has ambient lighting and an aesthetic design created with women’s safety in mind. UV lights will keep the toilet free of viruses and bacteria. The facility will operate on a pay-use-and-redeem model and includes a lounge.

The toilets with refreshment rooms would provide a superior sanitation facility to passengers and act as a relaxation center.

Read More: Odisha-based Startup’s Aircraft to be displayed in Paris

The AI-powered facility has been developed with a Madhya Pradesh-based startup, Freshrooms Hospitality Services. The modern facility, newly introduced in the eastern part of the country, will open to tourists and travelers from Monday. 

According to Diptesh Pattnayak, managing director of OSRTC, as the state’s only State Transport Undertaking (STU), the corporation has always had the mandate to provide better and more hygienic passenger facilities. He added that OSRTC employs advanced artificial intelligence, better enforcement, and visionary administration to fulfill that mandate. 

He added that the AI-based innovative system is expected to mitigate the issue faced by the tourism industry and benefit tourists from various parts of the world.


ML detects autism speech patterns in different languages


A team of researchers from Northwestern University (NU) has successfully used machine learning to identify speech patterns in children with autism that were consistent between languages like English and Cantonese. The research suggests that speech features may be a valuable tool for diagnosing the condition.

The study’s results could assist scientists in differentiating between the environmental and genetic factors that shape the communication abilities of people with autism, potentially helping them learn more about the condition’s origins and develop new therapies.

NU scientists Molly Losh and Joseph C.Y. Lau, in collaboration with Hong Kong-based Patrick Wong and his team, used supervised machine learning techniques to identify speech differences associated with autism.

Read More: Researchers Use Neural Network To Gain Insight Into Autism Spectrum Disorder

To train the algorithm, the researchers used recordings of English- and Cantonese-speaking young people, with and without autism, telling their own version of a story depicted in a wordless children’s picture book.
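
As a rough illustration of this kind of supervised pipeline, the sketch below summarizes each recording with MFCC statistics and fits a classifier. The feature set, labels, and audio are placeholders, not the Northwestern team’s actual data or feature definitions.

```python
# Illustrative sketch only: summarize each narration clip with MFCC statistics and
# fit a supervised classifier. The features, labels, and audio are placeholders,
# not the Northwestern team's data or pipeline.
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_summary(waveform: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Mean and standard deviation of 13 MFCCs over the whole recording."""
    mfcc = librosa.feature.mfcc(y=waveform, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# In practice each clip would come from librosa.load("narration.wav", sr=16000);
# two synthetic tones stand in for real recordings here.
sr = 16000
clips = [np.sin(2 * np.pi * 220 * np.arange(sr * 2) / sr).astype(np.float32),
         np.sin(2 * np.pi * 180 * np.arange(sr * 2) / sr).astype(np.float32)]
labels = np.array([1, 0])                       # 1 = autism, 0 = neurotypical (placeholder labels)

X = np.stack([mfcc_summary(c, sr) for c in clips])
clf = SVC(kernel="rbf").fit(X, labels)          # a real study needs many clips and cross-validation
print(clf.predict(X))
```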

According to Joseph Lau, using machine learning to recognize the critical elements of speech predictive of autism represented a crucial step forward for researchers. He added that research had been limited by English language bias in autism and humans’ subjectivity when categorizing speech differences between people with and without autism.

The researchers believe this work has the potential to contribute to a better understanding of autism. AI could make autism diagnosis easier by reducing the burden on healthcare professionals and making it more accessible to the public.


VideaHealth receives Regulatory License from Health Canada


Dental artificial intelligence (AI) company VideaHealth has received a regulatory Medical Device Establishment License (MDEL) from Health Canada for its AI-powered platform, Videa Caries Assist. 

The platform is an AI-driven dental caries (cavity) detection system. In addition to this license, the company had earlier received FDA 510(k) clearance for Videa Caries Assist and completed a clinical trial of the product. 

The trial showed that the company’s cavity-detection AI enhanced dentists’ diagnostic accuracy without introducing new workflow procedures. VideaHealth’s AI and software solutions help dentists better analyze patient X-rays, capture reimbursements faster, and provide greater transparency and accuracy in treatment recommendations. 

Read More: Microsoft and Museum of Art & Photography Bengaluru develops new AI platform, INTERWOVEN

Founder and CEO of VideaHealth, Florian Hillen, said, “We’re excited to partner with dentists and industry-leading DSOs within Canada’s innovative dental market as we work together to bring the power of AI to all.” 

Hillen further added that VideaHealth’s primary objective is to provide clinical access to its powerful AI so that millions of people can receive more effective preventive treatment.

United States-based VideaHealth is a leading dental artificial intelligence (AI) solution provider founded by Florian Hillen.

Videa Caries Assist is powered by VideaHealth’s Videa Factory, which holds over 100 million data points to help eliminate possible AI bias in dentistry and provide comprehensive benchmarks for data trends, pathology diagnosis, and treatment recommendations.
