
MicroSys partners with Hailo to develop High-Performance Artificial Intelligence platform


Embedded systems and device developer MicroSys has partnered with Hailo, one of the leading AI chip makers, to develop a high-performance embedded artificial intelligence platform.

The product, named Miriac AIP-LX2160A, is a platform designed by both companies that can host up to five integrated Hailo-8 artificial intelligence accelerator modules. The newly showcased product provides a high-bandwidth, power-efficient solution at the edge that benefits applications across multiple industries, including automotive and heavy machinery.

The Miriac platform combines NXP® QorIQ Layerscape® LX2160A high-throughput processor technology with the Hailo-8 accelerators, delivering deep learning performance of up to 130 tera-operations per second (TOPS).
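The headline figure is consistent with simple accelerator arithmetic: Hailo publicly rates each Hailo-8 module at roughly 26 TOPS, so five modules account for the quoted total. The snippet below is only a back-of-the-envelope check under that assumption, not an official spec breakdown:

```python
# Back-of-the-envelope check of the 130 TOPS figure (assumes the published
# per-module rating of ~26 TOPS for a Hailo-8; not an official breakdown).
TOPS_PER_HAILO8 = 26
MAX_MODULES = 5

total_tops = TOPS_PER_HAILO8 * MAX_MODULES
print(f"{MAX_MODULES} x {TOPS_PER_HAILO8} TOPS = {total_tops} TOPS")  # 5 x 26 TOPS = 130 TOPS
```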

Read More: Apollo launches Artificial Intelligence-powered Heart Disease Risk Prediction tool

Managing Director of MicroSys Electronics, Sophia Schindler, said, “Hailo’s AI processor allows edge devices to run full-scale deep learning applications more efficiently, effectively, and sustainably while significantly lowering costs. In combination with our NXP processor-based platforms, our customers get one of the most powerful AI solutions that can be developed for edge applications.” 

She further added that this strategic partnership between the two companies would allow their customers to enjoy the benefits of artificial intelligence and neural networks to the fullest. 

The newly developed platform has been tested on multiple standard neural network benchmarks, including ResNet-50, MobileNet V1 SSD, and YOLOv5m, and displayed exceptional results. The new application-ready bundle will be used mainly for collaborative robotics, predictive maintenance, video surveillance with distributed cameras, and more.

Co-founder and CEO of Hailo, Orr Danon, said, “This collaboration strengthens our position in the edge computing sector, enabling us to further address the rapidly growing market seeking embedded edge platforms with robust AI capabilities.” 

He also mentioned that Hailo would continue to work with MicroSys to develop industry-leading edge computing solutions for a wide range of automotive and industrial automation applications.


Schrodinger is using Artificial Intelligence solutions to develop Medical Drugs


Chemical simulation software developer Schrodinger has built a unique artificial intelligence-powered platform that helps pharmaceutical companies create new medicines. The company's AI platform integrates predictive physics-based tools with machine learning to speed up drug discovery.

Schrodinger's software enables faster lead discovery because it can quickly identify potent molecules to initiate the lead optimization process using scaffold hopping, a physics-based technique that replaces the central core of a molecule to identify novel, highly potent compounds.

The one-of-a-kind platform is also capable of accurately predicting drug properties, including selectivity, potency, and bioavailability, using artificial intelligence and machine learning technologies.

Read More: AMD Aims to Increase AI-HPC Server Energy Efficiency by 30x by 2025

Olivia Zitkus, an analyst at Motley Fool, said, “Their (Schrodinger’s) main idea is basically, if you can compute all relevant molecular properties with high accuracy, with the computer designing drugs and materials so not even just inside the pharma space but materials, too, would have a higher success rate, be faster and they’d be cheaper.”

She further added that the company's platform combines artificial intelligence with physics-based methods and effortlessly runs computer simulations on molecular designs. Schrodinger uses its internal and cloud computing resources to scale up its calculations of fundamental drug properties, accelerating the identification of high-quality drug candidates with high accuracy.
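As a rough illustration of the physics-plus-machine-learning pattern described above (not Schrodinger's actual platform or code), a property-prediction pipeline often computes physics-derived molecular descriptors and feeds them to a learned model. The sketch below assumes RDKit and scikit-learn are installed, and the molecules and "measured" values are invented placeholders:

```python
# Hypothetical sketch of descriptor-based property prediction; the training data
# below are invented placeholders, not real assay measurements.
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestRegressor

def featurize(smiles: str):
    """Compute a few physics-derived descriptors for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol), Descriptors.TPSA(mol)]

train_smiles = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"]
train_potency = [0.2, 0.5, 0.8, 0.3]   # placeholder "measured" property values

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit([featurize(s) for s in train_smiles], train_potency)

# Predict the property for a new candidate molecule (caffeine, as an example).
print(model.predict([featurize("Cn1cnc2c1c(=O)n(C)c(=O)n2C")]))
```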

New York-based biotechnology and pharmaceutical software firm Schrodinger was founded by Richard Friesner and William Goddard in 1990. The company is an industry leader in software that accelerates research for biopharmaceutical companies, academic institutions, government laboratories, and other organizations worldwide.

The company has raised total funding of over $562 million across eight funding rounds from investors including Pavilion Capital, the Bill & Melinda Gates Foundation, WuXi AppTec, Sixty Degree Capital, and Formic Ventures.


Apollo launches Artificial Intelligence-powered Heart Disease Risk Prediction tool


Apollo, one of India's largest hospital groups, has launched a new artificial intelligence-powered tool that predicts the risk of heart disease in patients. The company used data gathered over the past decade from more than 400,000 patients to develop the predictive tool.

Apollo claims that the tool can detect the risk of cardiovascular disease in patients. The tool was launched at an event organized on the occasion of World Heart Day. Doctors will benefit from it, as it will help them treat at-risk patients at an early stage.

Apollo's AI-powered cardiovascular disease risk tool analyzes various factors, such as a patient's lifestyle characteristics, diet, physical activity, smoking habits, blood pressure, anxiety, and stress, to assign a risk score. The score indicates the probability of the patient developing cardiovascular disease.
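Apollo has not published its model, but risk-scoring tools of this kind typically map such patient features to a probability using a supervised classifier. The sketch below is purely illustrative: the features, synthetic data, and logistic model are stand-in assumptions, not Apollo's proprietary tool, which was trained on a decade of real patient records:

```python
# Illustrative only: synthetic data and a simple logistic model standing in for
# a proprietary clinical risk model trained on real patient records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Features: [systolic BP, smoker (0/1), physical activity hrs/week, stress score]
X = rng.normal([130, 0.3, 3, 5], [20, 0.45, 2, 2], size=(1000, 4))
# Synthetic labels: higher BP, smoking, and stress raise the (made-up) risk.
logits = 0.03 * (X[:, 0] - 130) + 1.2 * (X[:, 1] > 0.5) - 0.2 * X[:, 2] + 0.3 * (X[:, 3] - 5)
y = (rng.random(1000) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
patient = [[150, 1, 1, 8]]                      # hypothetical patient record
print(f"Predicted risk score: {model.predict_proba(patient)[0, 1]:.2f}")
```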

Read More: Toyota’s Woven Planet acquires Renovo Motors.

Joint Managing Director at Apollo Hospitals Group, Sangita Reddy, said, “The AI tool to predict and prevent heart disease is the fruition of many years of research and development and is built on algorithms based on ten years of anonymized data collected by the team at Apollo Hospitals.” 

She further added that the tool has been validated internationally using federated learning on the Microsoft Azure platform. In addition, the tool has been validated by the Heart+Vascular Center at Maastricht University and King George's Medical University, Lucknow.

The tool also provides insight into risk contributors that can be modified to lower a patient's cardiovascular disease risk. In 2016, India accounted for one-fifth of all deaths caused by cardiovascular diseases worldwide. The new artificial intelligence tool will empower doctors to provide “proactive, pre-emptive and preventive” care to at-risk people.

“The Apollo AI-powered cardiovascular disease risk tool will change that and put the knowledge and the means to predict and prevent heart disease in the doctor’s hands,” said the Chairman of Apollo Hospitals Group, Dr. Prathap C. Reddy.


AMD Aims to Increase AI-HPC Server Energy Efficiency by 30x by 2025

(Image credit: AMD)

By 2025, AMD hopes to achieve a 30x boost in energy efficiency in Artificial Intelligence (AI) training and High-Performance Computing (HPC) applications running on accelerated compute nodes using AMD EPYC CPUs and AMD Instinct accelerators. To achieve this audacious objective, AMD will have to enhance the energy efficiency of a computing node at a rate over 2.5 times faster than the industry’s overall progress over the previous five years.

(Image credit: AMD)

AMD wants not only to deliver better performance but to do so without consuming excessive power. Over the next four years, AMD intends to increase the performance-per-watt efficiency of its future server processors. However, the company recognizes that this will be difficult because there is little room left for energy efficiency gains from process nodes alone. As a result, it will concentrate on improving its silicon architecture as the primary strategy for making its chips 30 times more efficient.

(Image credit: AMD)

Accelerated compute nodes (ACNs) are the world's most powerful and sophisticated computing devices, used for scientific research and large-scale supercomputer simulations. They give scientists the computational power they need to make discoveries in disciplines including materials science, climate prediction, genetics, drug development, and alternative energy. Accelerated nodes are also crucial for training AI neural networks, which are now used for speech recognition, language translation, and recommendation systems, with similarly promising applications expected over the coming decade. Achieving the 30x objective in 2025 would save billions of kilowatt-hours of electricity, lowering the load on the power grid.

AMD updates efficiency target: 30x targeted for AI and HPC by 2025
(Image credit: AMD)

According to AMD, if it succeeds, the energy required for AI and HPC systems to complete a single calculation will fall by 97% over five years.
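The 97% figure follows directly from the 30x goal: if a fixed workload becomes 30 times more energy efficient, each calculation needs only one-thirtieth of the energy, a reduction of roughly 96.7%. The snippet below is just a quick check of that arithmetic, not new data:

```python
# Sanity check of the quoted 97% reduction, derived from the 30x efficiency goal.
efficiency_gain = 30
energy_fraction_remaining = 1 / efficiency_gain          # ~0.033 of the baseline energy
reduction_pct = (1 - energy_fraction_remaining) * 100
print(f"Energy per calculation falls by {reduction_pct:.1f}%")  # ~96.7%, rounded to 97%
```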

To make the objective representative of global energy use, AMD combines compute node performance-per-watt metrics with segment-specific data center Power Usage Effectiveness (PUE), taking equipment utilization into account. The energy consumption baseline extrapolates the industry's 2015-2020 rate of energy-per-operation improvement out to 2025. To arrive at a relevant assessment of real-world energy savings, the energy-per-operation improvement of each segment from 2020 to 2025 is weighted by predicted global volumes multiplied by the Typical Energy Consumption (TEC) of each computing segment.
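In outline, that methodology weights each computing segment's energy-per-operation improvement by its projected share of worldwide energy use (shipment volume times TEC). The toy sketch below uses invented segment names and numbers solely to show the structure of such a weighted calculation; it is not AMD's actual data or formula:

```python
# Toy weighted energy-per-operation improvement; all numbers are invented placeholders.
segments = {
    # name: (projected 2025 unit volume, TEC in kWh/unit/year, energy-per-op improvement vs 2020)
    "ai_training":    (1_000_000, 9_000, 4.0),
    "hpc":            (  300_000, 7_000, 3.0),
    "general_server": (12_000_000, 1_500, 1.8),
}

# Weight = each segment's share of total energy consumption (volume x TEC).
total_energy = sum(vol * tec for vol, tec, _ in segments.values())
weighted_improvement = sum(
    (vol * tec / total_energy) * improvement
    for vol, tec, improvement in segments.values()
)
print(f"Volume- and TEC-weighted improvement: {weighted_improvement:.2f}x")
```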

AMD has previously experimented with a slew of power-efficiency advancements in its CPU and GPU architectures, to the point where AMD’s Zen CPUs outperform Intel’s finest in terms of performance per watt.

According to speculation, AMD's Zen 4 architecture will support AVX-512, suggesting another way AMD can improve AI speed and efficiency. AMD has a long history of adopting Intel instruction-set extensions a generation behind, or once those extensions have been exclusive to Intel devices for some time.

Intel should have Sapphire Rapids with built-in AMX (Advanced Matrix Extensions) support by the time Zen 4 arrives, probably in late 2022 with its reported AVX-512 support. By 2025, AMD may have implemented AMX support or be planning to add it. It is unclear how much efficiency AMD would gain from these new matrix and vector extensions.

AMD anticipates that these improvements will significantly slow the growth of data center power consumption, which is currently rising exponentially; if the objective is met, the trend through 2032 would be nearly linear with a minimal slope.

Read More: Inside Intel’s Loihi 2 Neuromorphic Chip: The Upgrades and Promises

AMD also released its 26th annual Corporate Responsibility Report, which highlights the company’s achievements from the previous year, including success against its 2014 to 2020 targets, as well as additional goals for 2025 and 2030.

The company’s purpose-driven approach to high-performance computing is guided by four main environmental, social, and governance (ESG) strategic emphasis areas in this year’s report: digital impact, environmental stewardship, supply chain accountability, and diversity, belonging, and inclusion.


Amazon Unveils New Home Robot Astro


Technology giant Amazon has unveiled its first home robot, named Astro. According to the company, Astro can help customers with various daily tasks such as home monitoring and connecting with family.

The robot can be controlled remotely when its users are away from home. Astro can be used for home security purposes, as it detects any movement and immediately sends a notification to users.

Amazon has launched Astro as a ‘Day 1 Edition’ product, which means customers need to sign up and Amazon will then send invitation links to buy the new robot. This strategy will help Amazon limit production of the robot and avoid being overwhelmed by a surge in demand.

Read More: Nanowear Receives FDA Clearance to Implement Artificial Intelligence-based Diagnostics

No information has been provided regarding the official launch of Astro, but sources claim that the product will go on sale by the end of this year. Amazon has decided to sell Astro for $999 per unit. 

Astro is equipped with various safety sensors that enable it to detect any obstacle and make appropriate decisions regarding movements for avoiding damage. Chief Analyst at CCS Insights, Ben Wood, said, “Astro is a bold move by Amazon, but a logical step given its expertise in robots and desire to become more integrated into consumers’ daily lives.” 

He further added that offering products resembling characters from sci-fi novels proves that Amazon is indeed an innovative company. Astro can process data, including images and raw sensor data, which enables it to respond quickly to its environment. Astro can also recognize its users using visual ID technology.

“I believe the Astro robot will sell out in minutes when it becomes available in the US market,” said Wood.


Deep learning framework will enable material design in unseen domains


A new study has proposed a deep neural network-based forward design approach that enables an efficient search for superior materials lying far beyond the domain of the initial training set. This novel approach, funded by the National Research Foundation of Korea and the KAIST Global Singularity Research Project, compensates for the weak predictive power of neural networks in unseen domains by gradually updating the networks through active transfer learning and data augmentation techniques.

Professor Seungwha Ryu from the Department of Mechanical Engineering believes that this study will help address various optimization problems with enormous design spaces. The proposed framework proved efficient for a grid composite optimization problem, providing excellent designs close to the global optimum. The study was reported in npj Computational Materials last month.

“As neural networks have weak predictive power, the primary intent was to mitigate underlying limitations for the training set of material or structure design,” said Professor Ryu. For a vast design space, neural network-based generative models have been investigated as an inverse design method. However, conventional generative models have limited application because they cannot access data outside the range of their training sets. Even advanced generative models devised to overcome this limitation suffer from weak predictive power in unseen domains.

Read More: IIT Mandi Is Hosting Workshop On Deep Learning For Executives And Working Professionals

To overcome these issues, Professor Ryu's team, in collaboration with researchers from Professor Grace Gu's group at UC Berkeley, devised a method that simultaneously expands the domain over which a deep neural network makes reliable predictions and identifies optimal designs by iterating three key steps (a toy sketch follows the list):

  1. A genetic algorithm finds a few candidate designs close to the training set by mixing superior designs with the training set.
  2. Verify whether the candidates actually have improved properties, and expand the training set by duplicating the verified designs via a data augmentation method.
  3. Expand the reliable prediction domain by updating the neural network with the new superior designs via transfer learning.

Because the expansion proceeds along a relatively narrow but correct path toward the optimal design, the framework enables an efficient search.
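As referenced above, here is a minimal, self-contained sketch of that loop, with a toy analytic function standing in for the expensive property simulation and a small scikit-learn network standing in for the paper's deep model. All names, parameters, and data here are illustrative assumptions, not the authors' code:

```python
# Toy active-transfer-learning loop: genetic search near the data, verification,
# data augmentation, then warm-start retraining. Not the authors' implementation.
import numpy as np
from sklearn.neural_network import MLPRegressor

def true_property(x):
    """Stand-in for the expensive physics simulation (peak at x = 0.8 everywhere)."""
    return -np.sum((x - 0.8) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 0.5, size=(200, 8))          # initial training set, far from the optimum
y = np.array([true_property(x) for x in X])
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X, y)

for step in range(5):
    # 1. Genetic-style search: recombine elite designs and mutate slightly, staying near the data.
    elite = X[np.argsort(y)[-20:]]
    parents = elite[rng.integers(0, len(elite), size=(200, 2))]
    candidates = np.clip(parents.mean(axis=1) + rng.normal(0, 0.05, size=(200, 8)), 0.0, 1.0)
    shortlist = candidates[np.argsort(model.predict(candidates))[-10:]]
    # 2. Verify the shortlisted designs with the ground-truth evaluation and augment the data set.
    verified = np.array([true_property(x) for x in shortlist])
    X, y = np.vstack([X, shortlist]), np.concatenate([y, verified])
    # 3. Transfer learning: warm-start the network on the expanded, gradually shifting training set.
    model.set_params(warm_start=True, max_iter=300)
    model.fit(X, y)
    print(f"iteration {step}: best verified property so far = {y.max():.4f}")
```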

Neural network-based techniques are data-hungry models. They become inefficient and suffer from weak predictive power when the optimal configuration of materials and structures lies far from the initial training set. However, if data points lie within or near the domain of the training set, deep neural network models tend to have reliable predictive power. Because this method provides an efficient way of gradually expanding the reliable prediction domain, it is especially beneficial for design problems in which generating data is time-consuming and expensive.

Researchers expect this framework to be applicable to a wide range of optimization problems in other science and engineering disciplines with ample design space. The method avoids the risk of getting stuck in local minima while providing an effective way of progressively expanding the reliable prediction domain toward the target design. Research teams are currently applying the framework to design tasks for metamaterial structures, segmented thermoelectric generators, and optimal sensor distributions. “From these sets of continual studies, we expect to recognize the potential factors of the suggested algorithm. Ultimately, we want to formulate efficient machine learning-based design approaches,” explained Professor Ryu.


Eagle Eye Networks acquires Surveillance AI firm Uncanny Vision


Texas-based cloud video surveillance company Eagle Eye Networks has acquired the award-winning artificial intelligence-based computer vision startup Uncanny Vision. Officials have not disclosed the valuation of the acquisition deal.

With this new development, Eagle Eye Networks wants to further improve its surveillance products with artificial intelligence and increase its customer base globally. Eagle Eye will use Uncanny Vision’s expertise in developing artificial intelligence-enabled analytics tools to strengthen the capabilities of its surveillance system. 

The acquisition will accelerate the product improvement process of Eagle Eye Networks that started last year when venture capital firm Accel funded Eagle Eye. “After evaluating more than a dozen AI companies, we began working with Uncanny Vision in 2020. It didn’t take long for us to conclude that Uncanny Vision is the clear leader in surveillance AI,” said the CEO of Eagle Eye Networks, Dean Drako. 

Read More: Hugging Face releases 900 unique Datasets to standardize NLP

He further added that Uncanny Vision's market share would help them increase their customer base, as the computer vision company's artificial intelligence platform is deployed at numerous locations, including those of Fortune 500 customers.

Uncanny Vision’s artificial intelligence technology is currently leveraged in multiple sectors, including retail analytics, smart parking, gate security, ATM monitoring, worker safety, perimeter security, and many more. 

Bangalore-based startup Uncanny Vision was founded by Navaneethan Sundaramoorthy in 2012. The company specializes in AI-based computer vision solutions that improve security camera capabilities by integrating them with real-time, edge-based intelligent tools. The company had earlier received funding from Target Accelerator and Microsoft Accelerator over two funding rounds.

Co-founder of Uncanny Vision, Navaneethan Sundaramoorthy, said, “We share the Eagle Eye team’s vision to deliver advanced, cyber-secure AI cloud video surveillance offerings that transform video surveillance for businesses around the globe.”


OpenAI ML Model can Summarize Entire Books

OpenAI has developed a new model to study the alignment problem in AI. Described in the paper ‘Recursively Summarizing Books with Human Feedback’, the model can summarize entire books: it summarizes each chapter or small portion of the book and then summarizes those summaries, ultimately producing a high-level overview of the entire book.

To build safe general-purpose artificial intelligence in the future, researchers have to ensure that their models work as intended. The challenge of designing AI systems that do the right thing is called the alignment problem. The challenge is not about an AI figuring out what the right thing is; rather, it is about the AI system choosing to do the right thing.

OpenAI developed the book-summarization model as a step toward a scalable solution to the alignment problem. The current model was fine-tuned from the GPT-3 language model on books, primarily fiction averaging over 100K words. Researchers skipped non-narrative books and chose only narrative texts, whose low-level, scene-by-scene descriptions make them harder to summarize.

Read more: Facebook Launches Dynatask to Boost better Usability of Dynabench and customize NLP tasks

OpenAI’s new ML model builds on the company’s previous research. It used reinforcement learning from human feedback while training a model that helped align summaries with people’s preferences on short posts and articles. However, this method isn’t successful for larger pieces of text, like books. To build a scalable version, OpenAI’s team combined recursive task decomposition with reinforcement learning, which breaks up a complex task into smaller ones. This breakdown of a laborious task allows humans to evaluate the model’s summaries faster since they will check the summaries of smaller parts of a book. Moreover, recursive task decomposition allows the AI system to summarize a book of any length, from hundreds of pages to thousands. 
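A bare-bones sketch of recursive task decomposition for summarization looks like the following. The `summarize_fn` used here is a deliberately trivial placeholder; OpenAI's system instead calls a fine-tuned GPT-3 policy trained with human feedback at each step, and splits books into sections rather than fixed-size character chunks:

```python
# Minimal sketch of recursive summarization via task decomposition.
# `summarize_fn` is a placeholder; OpenAI uses a GPT-3 policy trained with human feedback.
from typing import Callable, List

def chunk(text: str, max_chars: int) -> List[str]:
    """Split text into fixed-size pieces; a real system would split on chapters or sections."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def recursive_summarize(text: str, summarize_fn: Callable[[str], str],
                        max_chars: int = 2000) -> str:
    """Summarize each chunk, then recursively summarize the concatenated summaries."""
    if len(text) <= max_chars:
        return summarize_fn(text)
    partial_summaries = [summarize_fn(piece) for piece in chunk(text, max_chars)]
    return recursive_summarize(" ".join(partial_summaries), summarize_fn, max_chars)

# Toy stand-in summarizer: keep the first 200 characters of whatever it is given.
toy_summarizer = lambda passage: passage[:200]
book = "A very long narrative text. " * 2000
print(recursive_summarize(book, toy_summarizer)[:120])
```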

To compare human-written and model summaries, OpenAI assigned two labelers to read and summarize 40 of the most popular books of 2020, according to Goodreads. Next, the labelers rated one another's summaries alongside those of the AI models. On average, human-written summaries received a 6/7 rating. The model's summaries received a 6/7 rating 5% of the time and a 5/7 rating 15% of the time. Some of the model's summaries even matched human-written ones, and OpenAI has published the full set of samples.

OpenAI's team evaluated two model sizes, 175B-parameter and 6B-parameter versions of GPT-3, trained with standard cross-entropy behavioral cloning (BC) and reinforcement learning (RL). For each size, they evaluated three modes of training: RL on the whole tree, RL on the first subtree, and BC on the whole tree. For each policy, they also generated three summaries to reduce error bars.

OpenAI also tested the model on the BookSum dataset and the NarrativeQA Reading Comprehension Challenge. On BookSum, the 175B models beat all non-oracle baselines on ROUGE by 3-4 points, while the 6B models were comparable to the baselines on ROUGE. Both significantly outperformed all baselines on BERTScore, including an 11B T5 model.

A good way to check the accuracy of book summaries is to test whether they can answer questions about the original text. To do this, the model was applied to the NarrativeQA question-answering dataset, which comprises question/answer pairs, written from Wikipedia summaries, about full book texts and movie transcripts. Researchers checked whether the model's summary could be used as input to a question-answering (QA) model instead of the full book or movie text. The depth-1 summaries performed best, even though the model was not trained explicitly for question answering.

OpenAI's primary interest in this work is to empower humans to give feedback to models on tasks that are very difficult to evaluate. The researchers see the lack of scalable human feedback as a critical obstacle to solving the alignment problem: unless humans can communicate their values to AI systems, those systems cannot safely take on societally relevant tasks. The research shows that models for abstractive book summarization can be trained with human feedback by leveraging task decomposition. It also found that the RL models produced better summaries than the supervised learning and behavioral cloning models.


Maruti Suzuki launches S-Assist – an Artificial Intelligence-powered Virtual Car Assistance


India's largest car manufacturer, Maruti Suzuki, has launched its all-new S-Assist tool, a one-of-a-kind artificial intelligence-powered 24/7 virtual car assistance platform that will be available to the company's NEXA customers.

The smartphone application uses artificial intelligence and machine learning technologies to provide its customers with an optimized and immersive online buying experience. S-Assist is equipped with a voice-enabled car assistance system that makes it easier for users to understand various products offered by Maruti Suzuki and make informed buying decisions. 

The application also provides a personalized post-purchase service experience. It will be available at no extra cost to customers who purchase from Maruti Suzuki's premium NEXA outlet chain. Customers can also access multimedia content, including DIY videos on maintaining their NEXA cars.

Read More: US to offer grants to Small Businesses and help them participate in setting Global AI Standards.

Senior executive director of service at Maruti Suzuki India, Partho Banerjee, said, “We are proud to announce the launch of India’s first voice-enabled virtual car assistant, S-Assist, to strengthen the digital experience of our customers. S-Assist is a complimentary service which offers quick access to vehicle features, troubleshooting, and driving tips on customers’ smartphones.”

He further added that the company's goal is to digitize the car service experience and ease car ownership for customers. S-Assist will also display real-time information to customers about the queries or issues they raise.

The artificial intelligence-powered application has been developed under the company’s innovation program Mobility and Automobile Innovation Lab (MAIL), which was launched in January 2019. MAIL collaborated with New Delhi-based artificial intelligence chatbot developing startup Xane.AI to create S-Assist. 

Currently, the application is only available in English; however, Partho confirmed that the company would soon launch the product for its mass segment retail outlet chain with multiple language options.


Inside Intel’s Loihi 2 Neuromorphic Chip: The Upgrades and Promises

(Image Credit: Analytics Drift Team)

The second-generation “Loihi” processor from Intel has been made available to advance research into neuromorphic computing approaches that more closely mimic the behavior of biological cognitive processes. Loihi 2 outperforms the previous chip version in terms of density, energy efficiency, and other factors. This is part of an effort to create semiconductors that are more like a biological brain, which might lead to significant improvements in computer performance and efficiency.


The first generation of artificial intelligence was built on defining rules and emulating classical logic to reach rational conclusions within a narrowly defined problem domain, and it was well suited to monitoring and optimizing operations. The second generation is dominated by deep learning networks that analyze content and data, mostly concerned with sensing and perception. The third generation of AI focuses on drawing parallels to human cognitive processes, such as interpretation and autonomous adaptation.

This is achieved by simulating neurons firing in the same way as humans’ nervous systems do, a method known as neuromorphic computing.

Neuromorphic computing is not a new concept. It was first proposed in the 1980s by Carver Mead, who coined the phrase “neuromorphic engineering.” Mead spent more than four decades building analog systems that simulated human senses and processing mechanisms, including sensing, seeing, hearing, and thinking. Neuromorphic computing is a subset of neuromorphic engineering that focuses on the “thinking” and “processing” capabilities of such human-like systems. Today, neuromorphic computing is gaining traction as the next milestone in artificial intelligence technology.


In 2017, Intel released the first-generation Loihi chip, a 14-nanometer chipset with a 60 mm² die. It has more than 2 billion transistors, three orchestration Lakemont cores, and 128 neuromorphic cores, along with a configurable microcode engine for on-chip training of asynchronous spiking neural networks. Spiking neural networks allow Loihi to be entirely asynchronous and event-driven, rather than being active and updating on a synchronized clock signal. When charge builds up in a neuron, “spikes” are sent along active synapses. These spikes are largely time-based, with time recorded as part of the data. Once the spikes accumulating in a neuron over a given period reach a certain threshold, the core fires its own spikes to the neurons linked to it.
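The accumulate-threshold-fire behavior described above can be illustrated with a minimal integrate-and-fire neuron. This toy sketch, with made-up weight, threshold, and leak parameters, only captures the basic idea; it does not reflect Loihi's actual microcode or neuron models:

```python
# Toy leaky integrate-and-fire neuron: charge accumulates with each input spike,
# leaks over time, and an output spike fires once a threshold is crossed.
import numpy as np

def simulate_neuron(input_spikes, weight=0.3, threshold=1.0, leak=0.9):
    potential, output_spike_times = 0.0, []
    for t, spike in enumerate(input_spikes):
        potential = potential * leak + weight * spike   # leak, then integrate the input
        if potential >= threshold:                      # threshold reached: fire downstream
            output_spike_times.append(t)
            potential = 0.0                             # reset after firing
    return output_spike_times

rng = np.random.default_rng(1)
print(simulate_neuron(rng.integers(0, 2, size=50)))     # time steps at which the neuron fired
```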

Loihi 2 still has 128 neuromorphic cores, but each core now supports eight times as many neurons and synapses. Each of the 128 cores has 192 KB of flexible memory, and each neuron may be assigned up to 4,096 states depending on the model, compared to the previous limit of 24. The neuron models are now fully programmable, similar to an FPGA, which gives the chip more versatility and allows for new sorts of neuromorphic applications.

One of the drawbacks of Loihi was that spike signals were not programmable and carried no payload or range of values. Loihi 2 addresses this while also providing 2-10x faster circuits (2x for neuron state updates, up to 10x for spike generation), eight times more neurons, and four times more inter-chip link bandwidth for increased scalability.

Loihi 2 was created using the pre-production Intel 4 process and benefited from that node's use of EUV technology. The Intel 4 process allowed Intel to halve the die size from 60 mm² to 31 mm², while the transistor count rose to 2.3 billion. Compared with previous process technologies, the extreme ultraviolet (EUV) lithography used in Intel 4 simplifies layout design rules, which allowed Loihi 2 to be developed quickly.

Support for three-factor learning rules has been added to the Loihi 2 architecture, as well as improved synaptic (internal interconnections) compression for quicker internal data transmission. Loihi 2 also features parallel off-chip connections (that enable the same types of compression as internal synapses) that may be utilized to extend an on-chip mesh network across many physical chips to create a very powerful neuromorphic computer system. Loihi 2 also features new approaches for continual and associative learning. Furthermore, the chip features 10GbE, GPIO, and SPI interfaces to make it easier to integrate Loihi 2 with traditional systems.

Loihi 2 further improves flexibility by integrating faster, standardized I/O interfaces that support Ethernet connections, vision sensors, and bigger mesh networks. These improvements are intended to improve the chip’s compatibility with robots and sensors, which have long been a part of Loihi’s use cases.

Another significant change is in the portion of the processor that assesses a neuron's state before deciding whether to transmit a spike. In the original processor, such decisions could only be made with simple arithmetic. In Loihi 2, a programmable pipeline also lets users perform comparisons and control the flow of instructions.

Read More: Apple Event 2021: Everything about the new A15 Bionic chip Explained!

Intel claims Loihi 2's enhanced architecture allows it to carry out back-propagation, a key component of many AI models, which may help accelerate the commercialization of neuromorphic chips. Loihi 2 has also been shown to execute inference calculations, which AI models use to interpret given data, with up to 60 times fewer operations per inference than Loihi, without any loss in accuracy.

The Neuromorphic Research Cloud is presently offering two Loihi 2-based neuromorphic devices to researchers. These are:

  1. Oheo Gulch, a single-chip add-in card that comes with an Intel Arria 10 FPGA for interfacing with Loihi 2, which will be used for early evaluation.
  2. Kapoho Point, an 8-chip system board that mounts eight Loihi 2 chips in a 4×4-inch form factor, which will be available shortly. It will have GPIO pins along with “standard synchronous and asynchronous interfaces” that allow it to be used with sensors and actuators for embedded robotics applications.

These systems will be available via a cloud service to members of the Intel Neuromorphic Research Community (INRC), while the Lava framework is freely available via GitHub.

Intel has also created Lava to address the need for software convergence, benchmarking, and cross-platform collaboration in the field of neuromorphic computing. As an open, modular, and extensible framework, it will enable academics and application developers to build on one another's efforts and eventually converge on a common set of tools, techniques, and libraries.


Lava operates on a range of conventional and neuromorphic processor architectures, allowing for cross-platform execution and compatibility with a variety of artificial intelligence, neuromorphic, and robotics frameworks. Users can get the Lava Software Framework for free on GitHub.

Edy Liongosari, chief research scientist and managing director at Accenture Labs, believes that advances like the new Loihi 2 chip and the Lava API will be crucial to the future of neuromorphic computing. “Next-generation neuromorphic architecture will be crucial for Accenture Labs’ research on brain-inspired computer vision algorithms for intelligent edge computing that could power future extended-reality headsets or intelligent mobile robots,” says Liongosari.

For now, Loihi 2 has piqued the interest of the Queensland University of Technology, which is looking to work on more sophisticated neural models to aid in implementing biologically inspired navigation and map formation algorithms. The first-generation Loihi is already being used at Los Alamos National Laboratory to study tradeoffs between quantum and neuromorphic computing, and to explore the backpropagation algorithm used to train neural networks.
