Reliance New Energy announces that it has acquired the assets of Lithium Werks, a premier lithium iron phosphate (LFP) energy systems group, for $61 million through definitive agreements.
The acquired assets include Lithium Werks’ entire patent portfolio and key business contracts, along with the hiring of its existing employees. The acquisition will allow Reliance to further strengthen its capabilities and offerings in the lithium-ion industry.
Earlier, Reliance also acquired Faradion Ltd, a global leader in sodium-ion cell chemistry, and the new deal underscores its plans to expand in this industry.
As a result of the acquisition, Reliance will gain access to one of the world’s leading portfolios of LFP patents, along with a management team with extensive experience in cell chemistry, custom modules and packs, and the construction of large-scale battery manufacturing facilities.
Chairman of Reliance Industries, Mukesh Ambani, said, “LFP is fast gaining ground as one of the leading cell chemistries due to its cobalt- and nickel-free batteries, low cost, and longer life compared to Nickel Manganese Cobalt (NMC) and other chemistries. Lithium Werks is one of the leading LFP cell manufacturing companies globally and has a vast patent portfolio and a management team that brings tremendous experience of innovation across the LFP value chain.”
He further added that they are eager to collaborate with the Lithium Werks team and are encouraged by the rapid progress they are making toward establishing an end-to-end battery manufacturing and supply ecosystem for Indian markets.
Lithium Werks, a United States-based energy systems developer, was founded in 2017. The firm specializes in multiple domains of the energy industry, including Valence battery modules, battery management systems, and more.
CEO of Lithium Werks Joe Fisher said, “This deal means increased resources and expanded global reach while leveraging our experienced team and IP portfolio and providing scale and momentum to help drive our product innovation, capacity expansion and accelerate our clean energy strategy.”
He also mentioned that they are delighted to join the Reliance New Energy initiative.
In any scientific research, the trial-and-error methodology helps in finding optimal solutions to a research problem. However, it can be an expensive and time-consuming affair, especially when used to train models to deliver desired outcomes. Recently, in the paper titled ‘Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer,’ co-authored by Microsoft Research and OpenAI, researchers discuss a new technique called µTransfer. This technique has been shown to reduce the amount of trial and error required in the costly process of training large neural networks.
Across different tuning budgets, µTransfer dominates the baseline method of directly tuning the target model. Source: Microsoft
AI models (machine learning or deep learning) are mathematical functions that express the relationship between various data points. Training such models to handle a specific task, such as image classification, object detection, image segmentation, or any NLP application, takes time, and a major reason is the effort needed to obtain the best model by properly tuning hyperparameters. In other words, training a model entails selecting the best hyperparameters for the learning algorithm so that it learns the parameters that accurately map input data (independent variables) to labels or targets (dependent variables), resulting in ‘artificial intelligence.’
Every machine learning or deep learning model is defined using model parameters: the variables that the chosen machine learning algorithm adjusts to fit the data. They are specific to each model and are what separate it from other analogous models operating on similar data. Hyperparameters, by contrast, are variables whose values influence the training process and thereby the model parameters the learning algorithm arrives at. They are not linked to the training data in any way; they are configuration variables that remain constant during a task, as opposed to parameters, which change throughout training.
The process of discovering the best combination of hyperparameters to improve model performance is known as hyperparameter tuning (or hyperparameter optimization). To perform it, multiple trials are run within a single training job; each trial is a complete execution of the training application with the selected hyperparameters set to values within the limits you define. A training service tracks the outcome of each trial and uses it to improve subsequent trials, and when the job completes, you receive a summary of all the trials along with the most effective configuration according to the criteria you specified. Given how crucial hyperparameter tuning is, researchers spend a huge amount of time on it.
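As a concrete illustration of the trial loop just described, here is a minimal sketch of hyperparameter tuning by random search. The search space, the synthetic scoring function, and all names are hypothetical stand-ins for a real training run, not anything from the Microsoft/OpenAI work.

```python
import math
import random

# Hypothetical search space: each hyperparameter is sampled within user-defined limits.
SEARCH_SPACE = {
    "learning_rate": lambda: 10 ** random.uniform(-5, -1),
    "batch_size": lambda: random.choice([32, 64, 128, 256]),
    "dropout": lambda: random.uniform(0.0, 0.5),
}

def run_trial(hparams):
    """Stand-in for one complete training run: in practice this would train a
    model with `hparams` and return a validation metric. Here a synthetic score
    simply rewards learning rates near 1e-3 and moderate dropout."""
    lr_term = -abs(math.log10(hparams["learning_rate"]) + 3)
    dropout_term = -abs(hparams["dropout"] - 0.2)
    return lr_term + dropout_term + random.gauss(0, 0.05)

def tune(num_trials=20):
    best_score, best_hparams = float("-inf"), None
    for _ in range(num_trials):
        hparams = {name: sample() for name, sample in SEARCH_SPACE.items()}
        score = run_trial(hparams)            # each trial is a full training run
        if score > best_score:                # keep the most effective configuration
            best_score, best_hparams = score, hparams
    return best_hparams, best_score

if __name__ == "__main__":
    print(tune())
```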
Tuning hyperparameters for large neural networks consumes enormous resources, since every candidate setting must be evaluated by training the network again. The Microsoft research demonstrates that there is a very specific parameterization that ensures good hyperparameters across a wide range of model sizes. Known as µ-Parametrization (µP, pronounced “myu-P”) or Maximal Update Parametrization, this technique exploits the fact that, under certain conditions, neural networks of various sizes share the same optimal hyperparameters, allowing for substantially lower costs and improved efficiency when tuning large-scale models. Essentially, this means that instead of directly tuning an entire multi-billion-parameter model, a small-scale tuning process can be extrapolated outwards and mapped onto a much larger model.
Studies show that large neural networks are difficult to train partly because their behavior changes as they grow in size. Networks of different widths may behave similarly early on, but this consistency falls off across model widths as training develops.
Furthermore, analyzing training behavior analytically is significantly more challenging. To reduce numerical overflow and underflow, the team worked to achieve a comparable consistency, so that as the model width grows, the change in activation scales throughout training remains consistent and comparable to that at initialization.
As a result, their parameterization was founded on two fundamental ideas about behavior during training:
Gradient updates in neural networks behave differently than random weights when the width is large. This is because gradient updates are data-driven and take into account correlations, whereas random initializations do not. They must, as a result, be scaled differently.
The parameters of different shapes also behave differently when the width is large. While weights and biases are often used to partition parameters, with the former being matrices and the latter being vectors, some weights behave like vectors in the large-width scenario.
Researchers used these fundamental ideas to develop µ-Parametrization, which ensures that neural networks of various widths behave consistently during training. This enables them to converge to a desirable limit (the feature learning limit) in addition to maintaining a constant activation scale throughout training.
The Microsoft team’s scaling theory paves the way for a mechanism for transferring training hyperparameters across model sizes. If networks of various widths under µ-Parametrization exhibit similar training dynamics, their optimal hyperparameters will most likely be similar as well. As a consequence, the best hyperparameters found on a small model can simply be applied to a bigger one. Conversely, the findings suggest that no other initialization and learning-rate scaling rule can achieve the same effect. This phenomenon was termed µTransfer.
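To show the transfer pattern in ordinary PyTorch, here is a hedged sketch: a learning rate is tuned on a narrow proxy model and then reused on a much wider target. The widths, candidate learning rates, and random data are illustrative assumptions, and the snippet deliberately omits the actual µP scaling rules, which are what make this reuse reliable in the paper.

```python
import torch
import torch.nn as nn

def make_mlp(width, in_dim=784, out_dim=10):
    # Same architecture at every width; only the hidden size changes.
    return nn.Sequential(nn.Linear(in_dim, width), nn.ReLU(), nn.Linear(width, out_dim))

def train_and_score(model, lr, steps=50):
    """Placeholder training loop on random data; returns the final loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x = torch.randn(64, 784)
        y = torch.randint(0, 10, (64,))
        loss = loss_fn(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

# 1) Tune on a small proxy model (cheap).
candidate_lrs = [3e-4, 1e-3, 3e-3, 1e-2]
best_lr = min(candidate_lrs, key=lambda lr: train_and_score(make_mlp(width=128), lr))

# 2) Reuse the tuned hyperparameter on the much wider target model, trained once.
#    Under standard parameterization the optimum would usually shift with width;
#    µP is what keeps it stable in the research described above.
big_model = make_mlp(width=4096)
final_loss = train_and_score(big_model, best_lr)
print(best_lr, final_loss)
```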
The Microsoft researchers collaborated with OpenAI to evaluate the efficacy of µTransfer on GPT-3. They first experimented with a smaller model to find the best hyperparameters, then transferred them to a bigger, scaled-up system: hyperparameters were transferred from a 40-million-parameter model to a 6.7-billion-parameter version of GPT-3. The researchers calculated that, by eliminating the need to repeatedly tune the bigger GPT-3’s hyperparameters, the tuning cost with µTransfer was just 7% of what it would have cost to pre-train the model. By transferring pretraining hyperparameters from a 13-million-parameter model, the technique also produced remarkable results on BERT-large (350 million parameters).
Microsoft applied µTransfer to a 6.7-billion-parameter GPT-3 model with relative attention and obtained better results than the baseline with absolute attention used in the original GPT-3 paper, all while spending only 7 percent of the pretraining compute budget on tuning. The performance of this µTransferred 6.7-billion-parameter model is comparable to that of the 13-billion-parameter model (with absolute attention) in the original GPT-3 paper. Source: Microsoft
Microsoft has released a PyTorch package so that other practitioners can benefit from these insights by integrating µ-Parametrization into their existing models, which could otherwise be difficult to implement.
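The package in question is, to the best of my knowledge, the open-source `mup` library. The sketch below follows my recollection of its documented usage, so treat the identifiers (`MuReadout`, `set_base_shapes`, `MuAdam`) and the overall wiring as assumptions to verify against the package’s README rather than an official example.

```python
import torch.nn as nn
# Assumed interface of Microsoft's `mup` package; names are from memory and
# should be checked against the package's README.
from mup import MuReadout, set_base_shapes, MuAdam

class WidthScalableNet(nn.Module):
    def __init__(self, width, in_dim=784, out_dim=10):
        super().__init__()
        self.hidden = nn.Linear(in_dim, width)
        # The output layer is swapped for MuReadout so µP can scale it correctly.
        self.readout = MuReadout(width, out_dim)

    def forward(self, x):
        return self.readout(self.hidden(x).relu())

# Base and delta models tell the package how each dimension scales with width.
base = WidthScalableNet(width=64)
delta = WidthScalableNet(width=128)
target = WidthScalableNet(width=4096)
set_base_shapes(target, base, delta=delta)

# Hyperparameters tuned on a small µP model can now be reused on the wide one.
optimizer = MuAdam(target.parameters(), lr=1e-3)
```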
Microsoft Research first launched Tensor Programs in 2020. The research underpins µ-Parametrization, which allows for maximal feature learning in the infinite-width limit. µTransfer works automatically for intricate neural networks such as Transformers and ResNets and is based on the theoretical underpinning of Tensor Programs.
However, Microsoft admits that there is still much to learn about the scalability of AI models and promises to continue its efforts to derive more principled approaches to large-scale machine learning.
India’s first Artificial Intelligence and Robotics Technology Park (ARTPARK) has been established on the Indian Institute of Science (IISc) campus in Bengaluru. The technology park was launched with seed money of Rs 230 crore.
According to Karnataka Minister for IT-BT CN Ashwath Narayan, the Centre will bear Rs 170 crore of the Rs 230 crore, and the rest will be provided by the Karnataka government.
ARTPARK aims to use futuristic technologies to connect the unconnected, with a focus on developing India’s Artificial Intelligence and Robotics Innovation ecosystem.
According to the plan, the lab’s primary focus will be on promoting new-age technologies such as 5G, artificial intelligence for inclusive learning, enhancing healthcare services, and more.
Chief Executive Officer of ARTPARK, Umakant Soni, said, “In the age of AI, knowledge will be everywhere. Students won’t have to cram information anymore and can focus on applying it to create things. Similarly, healthcare should be available everywhere and not just in hospitals.”
The artificial intelligence industry is expected to reach over $15 trillion by 2030, and this new technology park will help the country harness the potential of artificial intelligence.
ARTPARK aims to channel innovations into societal impact by executing ambitious mission-mode R&D projects in healthcare, education, mobility, infrastructure, agriculture, retail, and cybersecurity, aimed at solving problems unique to India.
Narayan said, “This initiative to push the narrative for ‘Connecting the unconnected’ by ARTPARK will help the youth outside urban India not only access the next generation of digital work but also acquire the skills they need to thrive in an AI-driven future.”
He further added that Karnataka would take the lead in developing a new economic growth model for Atmanirbhar Bharat.
Predictive intelligence company Windward announces the launch of its new “Russia” sanctions solution as part of its Maritime AI platform.
Windward is launching the new “Russia” sanctions solution to assist customers in minimizing their risk exposure.
Windward’s AI-powered maritime platform will enable every organization to configure, monitor, and adjust its practices in response to changing trade restrictions and business preferences.
According to the company, its solution allows stakeholders to understand the full scope of Russian-related trade, including cargo destinations and sources, allowing them to conduct business confidently and comply with new, rapidly evolving restrictions.
The development comes as an aftereffect of the ongoing Russia-Ukraine war, which has caused global powers, including the United States and the United Kingdom, to impose heavy economic sanctions on Russia to pressure it into stopping the war. Companies and vessels identified in the Windward database as being associated with Russia will be labeled as Moderate Risk in the platform.
CEO and Co-founder of Windward Ami Daniel said, “As the fog of the conflict and increased sanctions make conducting trade even more complex, we will continuously update our platform so our customers can continue to conduct business with confidence.”
He further added that during these uncertain times, Windward is committed to providing its customers with the best visibility possible. Windward says that as crude oil and other potentially illegal or sanctioned commodities are transported out of Russia and into destination nations, its platform’s analytical tools will assist organizations in assessing and identifying them.
Israel-based predictive intelligence firm Windward was founded by Ami Daniel and Matan Peled in 2010. The company specializes in providing AI and big data to digitalize the global maritime industry, enabling organizations to achieve business and operational readiness.
To date, Windward has raised total funding of more than $32 million over three funding rounds from multiple investors, including XL Innovate, Aleph, Horizon Ventures, and BizTec.
Interested individuals can enjoy the benefits of the newly launched Sanction Compliance Solution of Windward at no extra cost for a period of two weeks.
Artificial intelligence-powered chatbot provider Ivy.ai launches its new self-building chatbot technology named Genie.
It is a highly sophisticated chatbot and live chat platform that allows organizations to build pre-trained, conversational chatbots that can understand unique content.
Genie by Ivy.ai uses a business’s website, knowledge base, and other documents to automatically create bot knowledge, making it a competent technology.
The company has packed Genie with preloaded training data, eliminating the need for time-consuming and long questionnaires or templates.
CEO of Ivy.ai Mark McNasby, said, “Genie is a response to the market’s evolution in its demand for chatbots. Most companies have accepted that chatbots are the future but don’t want to wait months for implementation or constantly tweak templates that don’t help them differentiate from the competition.”
He further added that Genie gives every company the ability to own a sophisticated, high-functioning chatbot that produces immediate results and stays up-to-date over time with minimal effort. Ivy.ai leverages its proprietary technology to get users started, and from there, customers can use Genie to modify and configure their bots.
The key features of Genie by Ivy.ai are listed below:
The software is easy to install as it comes with a built-in, intuitive setup assistant, which allows companies to get started quickly without spending much time on training.
Genie has been trained on over five million inquiries and counting, allowing it to understand natural language questions from users within an organization right away.
It is very user-friendly, as users do not need any prior coding knowledge to use the chatbot.
Genie has an extremely high accuracy rate of nearly 90% for inbound inquiries.
Genie examines websites for compliance issues with the Web Content Accessibility Guidelines (WCAG) and notifies users of any discrepancies that need to be addressed.
Genie keeps bot knowledge up to date by scanning websites on a daily basis, ensuring that the website and bot are always in sync.
Ivy.ai, a United States-based AI chatbot provider, was founded by Mark McNasby in 2016. The firm specializes in providing chatbots for healthcare, education, and government institutions.
Global semiconductor giant Intel recognizes three Malaysian students and an instructor for their innovations in the field of artificial intelligence as part of the company’s AI Global Impact Festival program.
It is a one-of-a-kind program launched in 2021 as a step towards Intel’s worldwide digital readiness initiative.
The widely popular Intel AI Global Impact Festival event witnessed the tremendous participation of more than 10,000 students, next-generation engineers, and future developers from 135 countries around the world.
According to Intel, over 230 AI innovations were submitted by event participants, with 14 of them originating from Malaysian students and professors.
Managing Director of Intel Malaysia, AK Chong, said, “We are pleased to be working with CREST, MIDA and MPC to prepare and develop the next generation of Malaysians for the future, and are pleased to see our investments in digital readiness programs already bearing fruit in Malaysia.”
He further added that the submitted ideas were amazing and serve as a terrific example of how well-thought-out AI can improve people’s lives in a variety of ways. In the ‘AI Enthusiasts’ over-18s category, Mohd Farith Ibrahim was among three winners recognized globally as a ‘Grand Prize Winner.’
His novel idea is to create an artificial intelligence system, powered by Intel software and libraries, that, once installed in a car, would monitor CO emissions and issue an alert message if they increased.
Farith received a $5,000 prize, an Intel-powered laptop computer, along with mentorship and internship opportunities with Intel. Additionally, students who participate in Intel’s AI for Youth programs in Malaysia can now earn points that the country’s Ministry of Education will officially recognize.
Researchers at the National Robotarium, hosted by Heriot-Watt University and the University of Edinburgh, are working on a new artificial intelligence-powered companion that can tell stories to help memory recollection in dementia patients while boosting their confidence.
People with dementia frequently have difficulty speaking with others and suffer a loss of confidence, which can lead to them becoming reclusive or depressed.
Along with memory aid, the artificial intelligence companion can also be used to counter depression in people suffering from dementia and Alzheimer’s.
Dr. Mei Yii Lim, a co-investigator on this project, presented the idea behind this one-of-a-kind research project, named AMPER (Agent-Based Memory Prosthesis to Encourage Reminiscing). The Engineering and Physical Sciences Research Council has awarded £450,000 to the team from Heriot-Watt University, in collaboration with the University of Strathclyde.
Dr. Lim said, “AMPER will explore the potential for AI to help access an individual’s personal memories residing in the still viable regions of the brain by creating natural, relatable stories.”
Lim further added that these would be adapted to their specific life experiences, age, social environment, and changing requirements to encourage reminiscence.
Most traditionally used rehabilitative care approaches focus on physical assistance and repetitive reminder strategies, whereas AMPER’s artificial intelligence-powered, user-centered approach will focus on individualized storytelling to help patients reclaim their memories.
Professor Ruth Aylett from the National Robotarium said that artificial intelligence has the potential to drastically impact the lives of those individuals suffering from cognitive illnesses.
“Our ambition is to develop an AI-driven companion that offers patients and their caregivers a flexible solution to help give an individual a sustained sense of self-worth, social acceptance, and independence,” she added.
Reuters reported that Ukraine has started using US-based company Clearview AI’s facial recognition technology after the company offered to uncover Russian assailants, combat misinformation, and identify the dead.
The information was revealed by Clearview AI’s chief executive, who said that Ukraine’s defense ministry began using the company’s facial recognition technology on Saturday.
A United States diplomat mentioned that Ukraine has free access to Clearview AI’s strong face search engine, which might be used to screen people of interest at checkpoints, among other things.
In addition, Clearview stated plainly that it has not provided its technology to Russia, which calls its invasion of Ukraine a ‘special operation.’
According to Reuters, following Russia’s invasion of Ukraine, Clearview Chief Executive Hoan Ton-That wrote a letter to Kyiv offering assistance. Clearview AI has access to over 2 billion images from the Russian social media service VKontakte, out of a total database of over 10 billion photos.
Therefore, it will be much easier for Ukraine to identify the dead using the company’s technology than with traditional approaches such as matching fingerprints.
Additionally, according to Hoan Ton-That, Chief Executive of Clearview AI, the facial recognition company’s technology might be used to reunite refugees with their families, identify Russian spies, and assist the government in debunking bogus war-related social media posts.
Clearview said that its technology should never be used as the sole means of identification. The organization does not want the technology to be used to violate the Geneva Conventions.
The company has, however, faced criticism for its practice of collecting user data from public sources without consent, which violates the European Union’s privacy norms.
Alphabet subsidiary DeepMind has unveiled Ithaca, a new AI model that can help restore and reconstruct historical inscriptions, manuscripts, and other materials. Ithaca is a neural network developed in collaboration with the University of Venice, the University of Oxford, and the Athens University of Economics and Business. The network takes its name from Ithaca, the Greek island described in Homer’s Odyssey.
Artificial intelligence has revolutionized the way archaeologists excavate the past in recent years. Though it cannot fight cursed mummies or crack a whip like Indiana Jones, it has proved itself a valuable asset in unearthing the past. Archaeologists, for example, are examining manuscripts and tablets using computer vision techniques, and in many parts of the world machine learning is used to assess satellite data and other aerial imagery to find potential archaeological sites.
According to a report published in Nature by DeepMind, Ithaca was trained using natural language processing to restore lost ancient literature that has degraded over time, identify the original location of a text, and determine the date when it was produced. The objectives behind this research were to find a solution for decoding ancient yet damaged Greek inscriptions and to come up with an advanced, modern dating technique.
These objectives were crucial because such manuscripts are frequently damaged owing to their antiquity, making restoration a challenging task. In addition, because they are frequently etched on inorganic materials like stone or metal, contemporary dating methods such as radiocarbon dating cannot be used to determine when they were written.
Pythia, Ithaca’s precursor, which draws its name from the priestess of Delphi, was DeepMind’s first text restoration system launched in 2019. The initial stage for the researchers was to convert the Packard Humanities Institute (PHI) dataset, which is the world’s largest digitized collection of ancient Greek inscriptions, into PHI-ML, a machine-actionable text format. The Packard Humanities Institute dataset includes transcribed texts of 178,551 inscriptions. The researchers then taught Pythia to predict the missing letters of words in those inscriptions using both words and individual characters as inputs.
When presented with an incomplete inscription, Pythia generated as many as 20 alternative probable letters or phrases, as well as the level of confidence for each suggestion. It was up to the historians (also known as “domain experts”) to sort through all of the choices and make a final decision based on their subject matter expertise.
Ithaca’s neural network architecture is built on the transformer, which employs an attention mechanism to weigh the impact of various input elements on the model’s decision-making process. By concatenating the input character and word representations with their sequential positional information, the attention mechanism is aware of the position of each part of the input text. Each Ithaca transformer block produces a sequence of processed representations equal in length to the number of input characters, and each block’s output becomes the input of the next. The final output is sent to three separate task heads, which handle restoration, geographical attribution, and chronological attribution, respectively, each using a shallow feedforward neural network trained for its function.
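A hedged PyTorch sketch of that layout, purely to illustrate the torso-plus-three-heads design described above; the dimensions, vocabulary sizes, number of output classes, and the mean-pooling step are assumptions of mine, not details taken from the paper.

```python
import torch
import torch.nn as nn

class IthacaStyleModel(nn.Module):
    """Illustrative sketch: a transformer torso over concatenated character and
    word embeddings, feeding three shallow feedforward task heads."""

    def __init__(self, char_vocab=128, word_vocab=10_000, d_model=256,
                 n_layers=4, n_heads=8, max_len=512,
                 n_regions=80, n_date_bins=100):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab, d_model // 2)
        self.word_emb = nn.Embedding(word_vocab, d_model // 2)
        self.pos_emb = nn.Embedding(max_len, d_model)   # sequential positional information
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model, batch_first=True)
        self.torso = nn.TransformerEncoder(layer, n_layers)
        # Three shallow feedforward heads, one per task.
        self.restoration_head = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                              nn.Linear(d_model, char_vocab))
        self.region_head = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                         nn.Linear(d_model, n_regions))
        self.date_head = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                       nn.Linear(d_model, n_date_bins))

    def forward(self, chars, words):
        # Concatenate character and word representations, then add positions.
        x = torch.cat([self.char_emb(chars), self.word_emb(words)], dim=-1)
        pos = torch.arange(chars.size(1), device=chars.device)
        h = self.torso(x + self.pos_emb(pos))      # one representation per input character
        pooled = h.mean(dim=1)                     # sequence summary for the attribution heads
        return (self.restoration_head(h),          # per-character restoration logits
                self.region_head(pooled),          # geographical attribution
                self.date_head(pooled))            # chronological attribution

chars = torch.randint(0, 128, (2, 64))
words = torch.randint(0, 10_000, (2, 64))
restoration, region, date = IthacaStyleModel()(chars, words)
```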
During testing, the team observed that Ithaca is 62% accurate at restoring damaged texts and 71% accurate at identifying the original location of a text. It was also demonstrated that it could place the date of writing to within 30 years of the actual date, on average. Further, this research is unique because, unlike existing NLP systems used for text generation and analysis such as GPT-3, Ithaca does not rely solely on word sequences for textual context. However, it is important to note that it is a research tool that still depends on humans.
If you have any ancient Greek text on hand, you may try out a pared-down version of Ithaca here, or use one of the offered samples to see how it fills in the gaps. Try it out in the accompanying Colab notebook if you have longer passages or more than 10 missing letters.
DeepMind also collaborated with Google Cloud and Google Arts & Culture on an interactive version of Ithaca. It has open-sourced the code as well as the pre-trained model to encourage additional study, and stated on its blog that it is already working on further versions of Ithaca based on other ancient texts. Versions covering other ancient writing systems, including Akkadian, Demotic, Hebrew, and Mayan, could likewise be used by historians in their research. Ithaca is available on this GitHub page.
The United Nations (UN) recently condemned Facebook’s parent company, Meta, for allowing users to post hate speech against Russians.
Meta did not restrict its users from making malevolent remarks against Russians or calling for violent action against the Russian armed forces amid the Russia-Ukraine crisis.
The United Nations and the Russian government are on the same page regarding the threats and risks Meta has instigated with its policy change. According to UN Secretary-General Antonio Guterres’ spokesperson Stephane Dujarric, such calls from either side in the Russia-Ukraine conflict are not tolerated by the international body.
“I can tell you, from our standpoint, we stand clearly against all hate speech, all calls for violence. That kind of language is just unacceptable, from whichever quarter it comes from,” said Dujarric.
Facebook allowed its users to celebrate Ukraine’s openly neo-Nazi military unit while calling for violence against “Russians and Russian soldiers,” which led to this criticism of Mark Zuckerberg’s company at the UN.
The change in the company’s policy allowed users from Latvia, Lithuania, Estonia, Poland, Slovakia, Hungary, Romania, Ukraine, and Russia to post otherwise unacceptable remarks about Russian President Vladimir Putin and the country’s army.
Russia went a step further and opened a criminal investigation into Meta over these actions. Following this episode, prosecutors in Russia have asked a court to label Meta an extremist organization.
“The Instagram social network distributes information and materials that contain calls for implementing violent actions against citizens of the Russian Federation, including military personnel,” said Russia’s Media Regulator, Roskomnadzor.
The regulator further added that it aims to block Instagram access across Russia at the request of the Prosecutor-General’s Office with effect from 14th March.
Meta’s President for Global Affairs and former British Deputy Prime Minister Nick Clegg gave the company’s take on the incident, saying that Facebook’s prime purpose was to let Ukrainians express their anger over Russia’s military actions.