IBM Researchers Make an AI-Assisted E-Tongue Called Hypertaste

The IBM research team behind Watson is now developing Hypertaste, an AI-assisted e-tongue for chemical sensing. The e-tongue is aimed at scientific and industrial applications that need to identify liquids without access to a high-end laboratory.

Patrick Ruch explained on behalf of IBM, “For the rapid and mobile fingerprinting of beverages and other liquids less fit for ingestion, our team at IBM Research is currently developing Hypertaste, an electronic, AI-assisted tongue that draws inspiration from the way humans taste things.”

With Hypertaste, the company aims to close the gap between powerful stationary lab instruments and portable sensors. Ruch added, “Closing this gap is crucial as most liquids of practical use are complex, meaning they comprise a rather large number of chemical compounds, none of which can serve as an identifier alone.”

Ruch explained that sending liquids back to a lab for routine analysis adds considerable cost and impracticality. The e-tongue would make such analysis far cheaper and more time-efficient.

Read More: AI-Based Voicemod makes you sound like Morgan Freeman in Real-Time

Hypertaste uses AI for combinational sensing. Ruch said, “In these liquids, it’s not so much the single components that matter but rather the properties that arise from combining them. Combinatorial sensing relies on the ability of individual sensors to respond simultaneously to different chemicals.”

The tongue consists of an array of electrochemical sensors: electrodes that measure voltage signals from the molecules present in the fluid and combine them into a “fingerprint” of the liquid. A mobile application then passes this data to a cloud server.

Ruch said, “A trained machine learning algorithm compares the digital fingerprint just recorded to a database of known liquids. The algorithm figures out which liquids in the database are most chemically similar to the liquid under investigation, and reports the result back to the mobile app.”
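
The matching step Ruch describes can be sketched as a nearest-neighbor search over sensor fingerprints. The snippet below is a minimal illustration, not IBM's actual algorithm: the liquid names, four-element voltage vectors, and the use of cosine similarity are all invented for the example.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two sensor-voltage fingerprints (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(fingerprint, database):
    # Rank known liquids by chemical similarity to the measured fingerprint.
    return sorted(database,
                  key=lambda name: cosine_similarity(fingerprint, database[name]),
                  reverse=True)

# Hypothetical fingerprint database (values are made up for illustration).
database = {
    "orange juice": [0.82, 0.10, 0.55, 0.31],
    "apple juice":  [0.78, 0.15, 0.60, 0.28],
    "cola":         [0.20, 0.90, 0.35, 0.70],
}

measured = [0.80, 0.12, 0.57, 0.30]   # voltages from the electrode array
print(identify(measured, database)[0])
```

In the real system the comparison runs on a cloud server against a trained model rather than a fixed lookup table, but the core idea is the same: the liquid is identified by which stored fingerprint it most resembles.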

NVIDIA Converts 2D Images into 3D with 3D MoMa AI Technology

Computer Vision and Pattern Recognition Conference: NVIDIA has made another attempt to transform still 2D images into 3D objects with AI. The GPU giant uses a technique dubbed 3D MoMa for the conversion, which relies on photo measurements taken via photogrammetry and speeds up the process.

The company has been researching neural radiance fields to create 3D scenes from 2D source images. However, the newly unveiled 3D MoMa technology is very different from it. 

3D MoMa: The technology uses AI to estimate physical attributes such as lighting and geometry from 2D images, then reconstructs them as realistic 3D objects. Objects made with 3D MoMa are triangle mesh models that can be imported into graphics engines, and the reconstruction can be completed within an hour on an NVIDIA Tensor Core GPU. The inverse rendering technique, which unifies computer graphics and computer vision, speeds up the process.
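
The core idea of inverse rendering is to treat scene parameters as unknowns and recover them by gradient descent through a differentiable forward renderer. The toy below is a schematic sketch, not NVIDIA's 3D MoMa pipeline: the one-dimensional Gaussian "renderer" and all parameter values are invented purely to show the optimization loop.

```python
import math

# "Pixel" coordinates of our toy 1D image.
xs = [i / 10 for i in range(-20, 21)]

def render(amplitude, width):
    # Differentiable forward model: scene parameters -> pixel intensities.
    return [amplitude * math.exp(-(x * x) / width) for x in xs]

target = render(2.0, 1.5)   # stand-in for the captured photographs

a, w = 1.0, 1.0             # initial guesses for the scene parameters
lr = 0.01
for _ in range(2000):
    pred = render(a, w)
    # Analytic gradients of the squared photometric loss w.r.t. each parameter.
    ga = sum(2 * (p - t) * math.exp(-(x * x) / w)
             for p, t, x in zip(pred, target, xs))
    gw = sum(2 * (p - t) * a * math.exp(-(x * x) / w) * (x * x) / (w * w)
             for p, t, x in zip(pred, target, xs))
    a -= lr * ga
    w -= lr * gw

print(round(a, 2), round(w, 2))   # recovered parameters approach (2.0, 1.5)
```

3D MoMa applies the same principle at scale: every stage (geometry, materials, lighting) is a GPU-accelerated differentiable component, so mesh vertices and material parameters can be fit to photographs the way `a` and `w` are fit here.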

Read More: Salesforce Open-Sources OmniXAI an Explainable AI Machine Learning Library

NVIDIA’s research and creative teams used 3D MoMa to reconstruct jazz instruments as an example. The team then imported the newly generated models into NVIDIA Omniverse and dynamically changed their characteristics.

David Luebke, Vice President of Graphics Research at NVIDIA, said, “inverse rendering is a holy grail unifying computer vision and computer graphics.”

He added, “By formulating every piece of the inverse rendering problem as a GPU-accelerated differentiable component, the NVIDIA 3D MoMa rendering pipeline uses the machinery of modern AI and the raw computational horsepower of NVIDIA GPUs to quickly produce 3D objects that creators can import, edit, and extend without limitation in existing tools.”

3D MoMa is still in the works, but Nvidia believes it will allow game developers and designers to swiftly edit 3D objects and integrate them into any virtual scenario.

Iktos and Zealand Pharma to develop AI for Peptide Drug Design

Iktos, a company specializing in AI solutions for chemical research, has announced a research collaboration with Zealand Pharma A/S, a biotechnology company specializing in innovative peptide-based medicines, to co-develop generative and predictive AI technologies for peptide drug design.

According to the agreement, Zealand Pharma will contribute its expertise in peptide drug discovery, while Iktos will bring its generative modeling technologies and its expertise in machine learning and AI.

In a statement, Iktos said it looks forward to working with Zealand Pharma’s experienced R&D team to build leading predictive and generative modeling technology for peptides.

Read More: Meta Pharmaceuticals Raises $15M To Make Autoimmune Drugs With AI, New Immuno-Metabolism Tech

Iktos’ artificial intelligence technology is based on a unique data-driven chemical structure creation technology that brings new perspectives into the drug discovery procedure. The technology automatically designs virtual novel molecules with all the essential characteristics of an ideal drug molecule. 

Iktos has recently diversified its R&D efforts into developing an AI technology for peptide-based therapeutics. The company has also developed superior predictive and generative models to assist the design of new peptide therapeutics with desired properties.

Zealand Pharma A/S has an excellent track record of inventing and developing novel peptide-based drugs. The company has extensive experience in improving the therapeutic characteristics of peptides.

Zealand Pharma is focusing on expanding its computational chemistry toolbox to integrate artificial intelligence and machine learning-based procedures to design novel therapeutic peptides. 

Yann Gaston-Mathé, President and CEO of Iktos, said that the company is pleased to have joined forces with Zealand Pharma. He added that the company expects to leverage Zealand Pharma’s extensive knowledge in peptide therapeutics with Iktos’ existing technology to initiate peptide drug discovery.

BlackSky Gets the Joint Artificial Intelligence Center Contract for 5 Years

BlackSky Technologies was awarded an order agreement by the Joint Artificial Intelligence Center (JAIC) to produce and optimize data sets used in DoD AI models and applications. The contract has a ceiling value of $241 million over the next 5 years.

The agreement will open doors for BlackSky to contribute with its expertise to the various national security challenges the extensive DoD community faces. BlackSky has demonstrated AI expertise in space-based dynamic monitoring.

According to Patrick O’Neil, BlackSky’s chief innovation officer, the company’s unique, data-rich platform brings a vital source of AI-enabled end-to-end capabilities to the DoD’s mission sets, from automatically tasking satellites to the low-latency delivery and analysis of high-frequency geospatial imagery and non-imagery data.

Read More: Artificial Intelligence Satellite From China Take Ultra High Pictures Of Earth

The JAIC, the Office of Advancing Analytics, and the Defense Digital Service have since been merged into a single organization, the Chief Digital and Artificial Intelligence Office.

BlackSky is a worldwide provider of real-time geospatial intelligence and delivers on-demand, high-frequency imagery, monitoring, and analytics of the most critical strategic locations, events, and economic assets on Earth.

BlackSky owns and operates one of the industry’s leading low Earth orbit small satellite constellations, optimized to capture imagery cost-efficiently, wherever and whenever customers need it. BlackSky’s Spectra AI software platform processes data from its constellation and third-party sensors to produce the critical analytics and insights that customers require.

AI-Based Voicemod makes you sound like Morgan Freeman in Real-Time

The voice changer app Voicemod is now using AI to make you sound like Morgan Freeman. The app has long altered voices using conventional sound design techniques; more recently, it has begun combining them with AI.

The app lets users take on the polished voice of an actor, specifically that of Morgan Freeman, in a preset it calls the ‘Morgan’ voice. Voicemod’s real-time AI-based pilot makes it possible to use the transformed voice to prank call your friends or stream live. The voice is recreated from recordings of English-speaking voice actors with similar vocal characteristics.

These voice actors read out scripts to generate input data, and sound designers then shape this curated data with sound design tools to turn the voices into characters. The AI voices include filters, background music, and dynamic effects.

Read More: Samsung Ventures Invests in NeuReality

These voices are processed in real time on your PC; however, the Morgan voice requires more CPU power than regular Voicemod effects. To start, Voicemod will open a beta version where users can sign up and test the effect on their computers to ensure there are no performance issues. In further developments, the main version will also be made available for Mac.

Voicemod also debuted its PowerPitch technology, allowing users to build a lasting online voice identity for gaming, role-play, work, education, or even regular calls. People can use this technology for amusement and pranks, but it can also help millions of people with vocal abnormalities improve their pitch, volume, and quality.

Machine Learning-based MRI predicts Alzheimer’s disease

Newly published research claims to have used machine learning to examine structural features within the brain and identify Alzheimer’s disease at an early stage, when it can be difficult to diagnose. The technique even flagged regions not previously associated with Alzheimer’s.

The research has been published in the Nature Portfolio Journal and was funded through the NIHR Imperial Biomedical Research Centre.

The new technique only requires a magnetic resonance imaging (MRI) brain scan taken on a standard 1.5 Tesla machine, which is commonly found in most hospitals, to perform the diagnosis.

Read More: MRI And AI Can Detect Early Signs Of Tumor Cell Death After Novel Therapy

The team adapted an algorithm originally developed for classifying cancer tumors in the brain. The researchers divided the brain into 115 regions and assigned 660 features, such as shape, size, and texture, to assess each region. They then trained the algorithm to detect where changes in these features could accurately predict the presence of Alzheimer’s disease.
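
The described pipeline (per-region features in, diagnosis out) can be sketched with a toy classifier. This is not the study's code: the six-dimensional "feature vectors", the labels, and the nearest-centroid rule below are all invented stand-ins for the real 115-region, 660-feature model.

```python
def centroid(vectors):
    # Mean feature vector of a class's training scans.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, centroids):
    # Assign a scan to the class whose mean feature vector is closest (Euclidean).
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Made-up per-scan feature vectors (e.g., region volume, texture scores).
train = {
    "alzheimers": [[0.9, 0.4, 0.7, 0.2, 0.8, 0.5],
                   [0.8, 0.5, 0.6, 0.3, 0.9, 0.4]],
    "control":    [[0.2, 0.8, 0.1, 0.7, 0.3, 0.9],
                   [0.3, 0.7, 0.2, 0.8, 0.2, 0.8]],
}
centroids = {label: centroid(vs) for label, vs in train.items()}

print(classify([0.85, 0.45, 0.65, 0.25, 0.85, 0.45], centroids))
```

The actual study used a far richer classifier trained on hundreds of scans, but the structure is the same: MRI volumes are reduced to fixed-length feature vectors, and a model learns a decision boundary between diagnostic classes.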

The researchers tested the new AI-based technique on brain scans of over 400 patients from the Alzheimer’s Disease Neuroimaging Initiative. The scans came from patients with early- and later-stage Alzheimer’s and from patients with other neurological conditions, including Parkinson’s disease and frontotemporal dementia. The team also tested the technique on data from over 80 patients undergoing diagnostic tests for Alzheimer’s at the Imperial College Healthcare NHS Trust.

In 98% of cases, the MRI-based machine learning system could accurately detect whether the patient had Alzheimer’s disease or not. The technique also differentiated between early and late-stage Alzheimer’s with 79% accuracy.

The system also spotted changes in areas of the brain previously not associated with Alzheimer’s disease, which has raised the possibility of potential new advances in treating Alzheimer’s disease. The areas include the cerebellum and the ventral diencephalon. 

Samsung Ventures Invests in NeuReality

Samsung Ventures has announced an investment in the Israeli AI and semiconductor company NeuReality. NeuReality, backed by Cardumen Capital, Varana Capital, and OurCrowd, provides chips, hardware, and software tools that significantly accelerate AI deployment.

Founded in 2019, the startup focuses on evolving AI-as-a-service infrastructure across cloud, edge, and fog systems. It takes a system-level approach that combines convenient software with deep learning capabilities and data-handling hardware; this holistic approach simplifies the mass deployment of inference technologies.

Ori Kirshner, Samsung Ventures head in Israel, said, “We see substantial and immediate need for higher efficiency and easy-to-deploy inference solutions for data centers and on-premises use cases, and this is why we are investing in NeuReality.”

NeuReality’s services concentrate on real-life applications in sectors such as e-commerce, medicine, and digital personal assistants, with more comprehensive solutions for clouds, telecom operators, and data centers. The company builds purpose-built AI platforms, drawing on experience in data center systems, AI, hardware design, and software development.

Read More: Dataiku Announces Dataiku 11 with Enhanced Tools to Scale AI

The investment from Samsung Ventures will bring the company closer to its goal of enabling customers, across various deployment topologies, to adopt AI-based services in their workflows.

“The investment from Samsung Ventures is a big vote of confidence in NeuReality’s technology. The funds will help us take the company to the next level and take our NR1 SoC to production. This will enable our customers to evolve their system architecture, and this evolution will make it easier for them to scale and maintain their AI infrastructure, whether it is in their data center, in a cloud or on-premises,” said Moshe Tanach, co-founder and CEO of NeuReality.

NeuReality is also working with IBM to deliver disruptive cost and power performance in AI platforms, and alongside AMD to deliver its first-gen FPGA platform.

UF/IFAS to use AI to assess livestock mobility

The University of Florida, Institute of Food and Agricultural Sciences (UF/IFAS) scientists plan to assess livestock mobility faster and more accurately using artificial intelligence (AI). The technology will analyze high-definition videos of the animals as they move.

The team will use machine learning and gait analysis to speed up the assessment of livestock mobility.

Samantha Brooks, a UF/IFAS geneticist and associate professor of equine physiology, along with other UF researchers, received a $49,713 grant from the Agricultural Genome to Phenome Initiative.

Read More: University Of Florida Develop National AI Curriculum

The researchers will work primarily with horses, as they are an excellent model for locomotion. The laboratory is currently working with about 2,000 video clips of horses in motion, contributed by graduate student Madelyn Smythe and hundreds of central Florida horse owners.

The extensive library of videos will enable the construction of accurate models to track the animals’ movement in the video frame. Although working with horses now, the team will translate the findings to similar models for other four-legged farm animals. 

The team will also build AI models to analyze videos of cattle, swine, and other small ruminants for the project. While reviewing the data, the researchers will look at horse traits such as stance time, limb extension, and stride length. In the case of cattle and swine, scientists will focus on asymmetry and postures that indicate pain from abnormal function in one or more limbs.
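
Traits like stance time and stride length fall out naturally once a limb is tracked frame by frame. The sketch below is a hypothetical illustration of that kind of gait metric, not the UF/IFAS models: the hoof track, frame rate, and movement threshold are all invented for the example.

```python
FPS = 30        # assumed video frame rate
STILL = 0.5     # per-frame movement (cm) below which the hoof counts as planted

# Invented horizontal hoof position (cm) per video frame.
hoof_x = [0, 0, 0, 0, 10, 25, 45, 70, 100, 100, 100, 100,
          110, 125, 145, 170, 200, 200, 200]

def stance_segments(track):
    """Return (start, end) frame indices of each stationary (stance) run."""
    segments, start = [], 0
    for i in range(1, len(track)):
        if abs(track[i] - track[i - 1]) >= STILL:  # hoof moved: swing frame
            if i - start > 1:
                segments.append((start, i - 1))
            start = i
    if len(track) - start > 1:
        segments.append((start, len(track) - 1))
    return segments

segs = stance_segments(hoof_x)
# Stance time: how long each planted phase lasts, in seconds.
stance_times = [(end - start + 1) / FPS for start, end in segs]
# Stride length: hoof travel between consecutive stance phases.
strides = [hoof_x[segs[k + 1][0]] - hoof_x[segs[k][0]] for k in range(len(segs) - 1)]
print(strides)
```

In practice the keypoint tracks would come from a pose-estimation model run on the video, and asymmetry could be measured by comparing these metrics across limbs.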

PGIMER to use AI to predict mortality

The Postgraduate Institute of Medical Education and Research (PGIMER) in Chandigarh is currently researching the application of machine learning (ML) and artificial intelligence (AI) algorithms to predict critical events such as mortality and duration of ICU stay. 

Professor GD Puri, Head of the Department of Anesthesia and Intensive Care, and his team are making technological advances through AI/ML in medicine, in collaboration with the cardiothoracic vascular surgery team led by Dr. SK Singh Thingnam and with Dr. Nitin Auluck’s section at IIT Ropar.

Moreover, an interdisciplinary team is working under the leaders’ guidance to develop a predictive platform using the current Anesthesia Information and Management System (AIMS) to help with the decision-making process. 

Read More: Artificial Intelligence Predicts Mortality Rate Using Socioeconomic And Clinical Data

AIMS is already functional in the four cardiothoracic vascular surgery operation theaters at the Advanced Cardiac Centre of the Institute. With the new AI and ML technology, AIMS could better utilize resources and predict the probability of adverse outcomes, improving advanced post-surgery cardiac care.

Recently, a copyright application was filed for an AIMS software program that can automatically extract relevant data from patient reports, which is indispensable for capturing vital patient data. AIMS is also laying a foundation to utilize supercomputing resources to address the computational needs of medical applications, with an eye on the future.

Dataiku Announces Dataiku 11 with Enhanced Tools to Scale AI

AI Conference London, 2022: New York-based Dataiku has announced a pivotal update to its AI and data science platform. Dataiku 11, the new release, provides more value at scale and enables broader engagement with artificial intelligence. The new version ships in July and aims to deliver on the company’s promise of “everyday AI.”

Clément Stenac, cofounder and CTO of Dataiku, said, “Expert data scientists, data engineers and ML [machine learning] engineers are some of the most valuable and sought-after jobs today. Yet all too often, talented data scientists spend most of their time on low-value logistics like setting up and maintaining environments, preparing data and putting projects into production.”

He further explained that Dataiku’s extensive automation in the 11th version will aid in eliminating the busy work so companies can focus on making the most of their AI investments and not on setting up logistics. 

He added, “With extensive automation built into Dataiku 11, we’re helping companies eliminate the frustrating busywork so companies can make more of their AI investment quickly and ultimately create a culture of AI to transform industries.”

Read More: Google AI Launches LIMoE, a Large-Scale Architecture for Pathways

With Dataiku 11, developers get a Feature Store for sharing flows to enhance organization and accelerate the process. The updated version also offers: 

Code Studio: It is a fully automated and isolated coding environment where developers can engage with their preferred IDE/web app stack. The coding environment addresses the issue of a complex custom setup that would have been required previously. 

Seamless Computer Vision development: The update brings a visual ML interface and a data labeling framework. Previously, annotating data at scale required third-party platforms; Dataiku 11 now supports end-to-end computer vision tasks such as detecting and annotating complex objects.

Time-series forecasting: Dataiku 11 addresses the historical analysis of company data using its no-code built-in tools for such research and forecasting. 

Besides these updates, the company unveiled several mainstream additions, including generated flow documentation and a central registry for data pipelines and project artifacts, aimed at improving oversight and control of model development and deployment.