Global technology giant Microsoft has announced a partnership with Synaptiq, a provider of actionable artificial intelligence services and solutions, to launch a new machine vision tool aimed at reducing infections in hospitals.
The collaboration addresses a challenging medical issue: infections caused by central lines, which are inserted into the body to deliver chemotherapy or other drugs.
Microsoft identified central line infections as a major issue, which prompted the new project. The problem is long-standing in the United States, where such infections cause many deaths every year.
The primary cause of these infections is inefficient and improper care of the lines, which allows pathogens to enter the body.
The newly launched machine vision tool should help reduce those numbers considerably. It analyzes cell phone photographs of line dressings to evaluate their condition and identify infection risks.
When the tool identifies an issue with a line, it sends an alert to designated users through the Microsoft Teams application, enabling workers to fix the problem quickly and minimize the chance of infection spreading.
The tool also uses Microsoft Power BI for the dashboards that collate its data. Synaptiq is now seeking healthcare partners for a pilot to train and evaluate the tool, which could eventually be used in hospitals.
United States-based actionable AI firm Synaptiq was founded by Sklarew and Tim Oates in 2015. The company specializes in building AI-powered apps in partnership with its customers.
Sklarew said, “We especially look for clients and partners that are interested in more strategic relationships where we have an opportunity to co-develop, co-market, and/or co-sell solutions or products. This includes creating spin-offs to bring new, leading-edge innovations to market, faster, as new products.”
In separate news, Tesla has fired John Bernal, an employee who posted YouTube videos critical of the company's Full Self-Driving (FSD) Beta software. According to Bernal, senior officials told him that posting such videos violated company policy as a conflict of interest, and he lost his job in the second week of February 2022.
A copy of the company's social media policy provided by a Tesla employee does not mention anything about publicly criticizing the company's products. The policy notes, "Tesla relies on the common sense and good judgment of its employees to engage in responsible social media activity."
It names social media sites such as Facebook, Twitter, Instagram, Reddit, Snapchat, LinkedIn, WeChat, and personal blogs, but not YouTube, making the policy's actual implications hard to judge.
John Bernal did not discuss the reason for his termination at the time of his dismissal, but the matter came to light once his videos began surfacing on social media.
“A manager from my Autopilot team tried to dissuade me from posting any negative or critical content in the future that involved FSD Beta. They held a video conference with me but never put anything in writing,” said Bernal.
He began his career with Tesla in August 2020 as a data annotation specialist in a San Mateo, California office and had recently moved into the role of advanced driver assistance systems test operator before being fired.
Bernal claims he has always been transparent about his work, noting that his LinkedIn profile mentions both his YouTube channel and his association with Tesla.
Snowflake, a fully managed data-warehouse-as-a-service platform that helps customers manage data in the cloud without cloud vendor lock-in, has acquired Streamlit for $800 million. Streamlit developed a popular open-source project for building data-based apps.
Benoît Dageville, co-founder and president of products at Snowflake, said that Snowflake became familiar with Streamlit because their customers and in-house people were using it. “We have both the same vision — Streamlit and Snowflake — which is all about democratizing access to data. I would describe it very simply as making it super easy to interact with data,” Dageville said.
He also said that Streamlit fills a big missing piece in the Snowflake platform by allowing data scientists to build apps that bring data to life for non-technical users. Snowflake has all the technical elements for accessing and managing data in the cloud, and Streamlit supplies the native data-visualization capability it lacked.
"It will make it easier for Snowflake customers to put their data-driven applications into production, which has been a consistent challenge in the data science and machine learning space," said Adam Ronthal, a research VP in Gartner's ITL data and analytics group.
Streamlit was the brainchild of former Google X and Zoox employees who wanted to build an open-source project that made it easier for users to build custom applications that interact with data.
The co-founder and CEO of Streamlit, Adrien Treuille, started talking to Snowflake last fall and found the two companies a great fit, both technologically and culturally. Streamlit had been working on a commercial cloud service, which has now become part of the Snowflake platform. Treuille said that thousands of people use the platform to build applications, and millions use apps built on top of Streamlit.
Impactsure Technologies, a strategic consulting and technology services company, has announced that it has joined the Google Cloud Partner Advantage program.
Impactsure, which is built on Google Cloud, will enable financial institutions and corporate customers to introduce AI and ML solutions to drastically increase operational efficiency while reducing compliance risks.
The company was recently awarded the Best Trade Finance Implementation Award, in the best adoption of tools and governance category, at the IBS Intelligence Global FinTech Innovation Awards 2021.
Impactsure’s Trade Finance solution would enable banks, financial institutions, and corporations to manage critical processing and compliance-related requirements through advanced intelligent data analytics and document processing.
CEO and Founder of Impactsure, Dharmarajan, said, “Being a technology partner to Google Cloud is a significant development for Impactsure. It will enable us to expand our market presence through advanced data security tools that support compliance and data privacy.”
The Secure Unified Responsive Engine (SURE) of Impactsure scans, processes, classifies, and extracts structured and unstructured data from documents in various file formats, which streamlines complex corporate banking and trade finance operations that are labor-intensive, error-prone, and subject to stringent compliance regulations.
Mumbai-based technology consulting organization Impactsure was founded by Ashish Mohan Jha, Dharmarajan S, and Subramaniyan Neelakandan in 2019. The company specializes in artificial intelligence solutions that let users examine supporting documents for bank guarantees, letters of credit, credit operations, remittances, collections, mortgages, discounting, treasury documents, and many others.
Its customer base includes clients from various industries such as legal, education, BFSI, pharma, shipping, healthcare, manufacturing, and more. Impactsure’s products have helped its clients save more than 80% on human labor by allowing them to process complex documents in less than 10 minutes.
“Our products help human operators reduce the time taken to inspect complex documents by identifying different types of discrepancies, highlighting exceptions, and uncovering hidden patterns,” said Founder Director and CTO of Impactsure, Subramaniyan Neelakandan.
Cowbell Cyber, a leading provider of cyber insurance, has announced that it raised $100 million in a Series B funding round led by Anthemis Group.
Many investors, including Permira Funds, PruVen Capital, NYCA Partners, Viola Fintech, and other existing investors, also participated in the company’s latest funding round.
Cowbell Cyber plans to use the fresh funds to increase investment in data science, underwriting, risk engineering, and claims management, and to expand support for Cowbell Re, its captive reinsurance unit.
The COVID-19 pandemic ignited demand for cybersecurity as businesses across the world adopted a digital approach. The ongoing crisis between Russia and Ukraine has further fueled that need.
According to Cowbell, one in five small and medium-sized businesses remains uninsured. The company plans to change that and responsibly grow its cyber insurance business.
Co-founder and CEO of Cowbell Cyber, Jack Kudale, said, “Cowbell is known for closing insurability gaps, bringing transparency in risk selection and pricing, and offering relevant and customizable coverage, all while extending the value of cyber insurance with risk management solutions that enable organizations to strengthen their resilience to cyber attacks.”
He further added that they intend to be at the forefront of this second wave by revolutionizing cyber risk underwriting. Cowbell Cyber will primarily focus on three critical areas: funding risk-bearing capabilities to augment reinsurance, investing in closed-loop risk management for policyholders, and improving broker efficiencies in the distribution of cyber insurance.
San Francisco-based cyber insurance firm Cowbell Cyber was founded by Jack Kudale, Prab Reddy, Rajeev Gupta, and Trent Cooksley in 2019. The company specializes in offering standalone, individualized, and easy-to-understand cyber insurance for small and medium-sized businesses. To date, Cowbell Cyber has raised more than $123 million over three funding rounds.
Reliance New Energy has announced that it has signed definitive agreements to acquire the assets of Lithium Werks, a leading lithium iron phosphate energy systems group, for $61 million.
The acquired assets include Lithium Werks’ entire patent portfolio, key business contracts, and the hiring of existing employees. This new acquisition will allow Reliance to further strengthen its capabilities and offering in the lithium-ion industry.
Reliance earlier acquired Faradion Ltd, a global leader in sodium-ion cell chemistry; the new deal underscores its plans to expand in this industry.
As a result of the acquisition, Reliance will gain access to one of the world's leading portfolios of Lithium Iron Phosphate (LFP) patents, along with a management team with extensive experience in cell chemistry, custom modules and packs, and the construction of large-scale battery manufacturing facilities.
Chairman of Reliance Industries, Mukesh Ambani, said, “LFP is fast gaining as one of the leading cell chemistries due to its cobalt and nickel free batteries, low cost, and longer life compared to Nickel, Manganese, and Cobalt (NMC) and other chemistries. Lithium Werks is one of the leading LFP cell manufacturing companies globally and has a vast patent portfolio and a management team that brings the tremendous experience of innovation across the LFP value chain.”
He further added that they are eager to collaborate with the Lithium Werks team and are encouraged by the rapid progress they are making toward establishing an end-to-end battery manufacturing and supply ecosystem for Indian markets.
United States-based energy systems developer Lithium Werks was founded in 2017. The firm works across multiple domains in the energy industry, including battery modules, battery management systems, and more.
CEO of Lithium Werks Joe Fisher said, “This deal means increased resources and expanded global reach while leveraging our experienced team and IP portfolio and providing scale and momentum to help drive our product innovation, capacity expansion and accelerate our clean energy strategy.”
He also mentioned that they are delighted to join the Reliance New Energy initiative.
In scientific research, the trial-and-error method helps find optimal solutions to a research problem. However, it can be expensive and time-consuming, especially when used to train models to deliver desired outcomes. Recently, in the paper titled 'Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer,' co-authored by Microsoft Research and OpenAI, researchers discuss a new technique called µTransfer, which has been shown to reduce the amount of trial and error required in the costly process of training large neural networks.
Across different tuning budgets, µTransfer dominates the baseline method of directly tuning the target model. Source: Microsoft
AI models (machine learning or deep learning) are mathematical functions that express relationships between data points. Training such models for a specific task, such as image classification, object detection, image segmentation, or an NLP application, takes time, in large part because obtaining the best model requires properly tuning hyperparameters. In other words, training a model entails selecting the best hyperparameters for the learning algorithm so that it learns the parameters that accurately map input data (independent variables) to labels or targets (dependent variables), resulting in 'artificial intelligence.'
Every machine learning/deep learning model is defined using model parameters. Model parameters can be defined as the variables that your chosen machine learning algorithm utilizes to adjust to your data. They are specific to each model and are what separates it from other analogous models that are operating on similar data. Hyperparameters are variables whose values influence the training process and affect the learning algorithm’s model parameters. These variables aren’t linked to the training data in any way. They are configuration variables that remain constant during a task, as opposed to parameters that vary throughout training.
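The distinction can be made concrete with a toy example. In the sketch below (illustrative, not from the paper), `w` and `b` are model parameters learned from the data, while `learning_rate` and `epochs` are hyperparameters fixed before training begins:

```python
# Toy linear model y = w*x + b fit by gradient descent.
# w, b                  -> model PARAMETERS: updated every step, learned from data.
# learning_rate, epochs -> HYPERPARAMETERS: chosen before training, held fixed.

def train(xs, ys, learning_rate=0.01, epochs=5000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Mean-squared-error gradients with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= learning_rate * grad_w
        b -= learning_rate * grad_b
    return w, b

# Data generated by y = 2x + 1; training should recover w close to 2, b close to 1.
w, b = train([1, 2, 3, 4], [3, 5, 7, 9])
```

Changing `learning_rate` or `epochs` changes how well the same parameters are learned, which is exactly why those configuration values need tuning.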
The process of discovering the best combination of hyperparameters to improve model performance is known as hyperparameter tuning (or hyperparameter optimization). To perform it, multiple trials are run within a single training job, each a complete execution of the training application with the selected hyperparameters set to values within limits you define. A training service tracks the outcome of each trial and uses it to improve future trials. When the job completes, you receive a summary of all trials and the most effective configuration according to the criteria provided. Given how crucial hyperparameter tuning is, researchers spend a great deal of time on it.
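The trial loop described above can be sketched with a simple random search. This is a generic illustration, not the paper's method: `run_trial` is a hypothetical stand-in for a full training job that returns a validation score (lower is better).

```python
import random

def run_trial(learning_rate, epochs):
    # Stand-in for a real training job: a synthetic score with a known
    # optimum near learning_rate=0.1 (purely illustrative).
    return (learning_rate - 0.1) ** 2 + 1.0 / epochs

def random_search(n_trials, seed=0):
    """Run n_trials with randomly sampled hyperparameters; keep the best."""
    rng = random.Random(seed)
    best_config, best_score = None, float("inf")
    for _ in range(n_trials):
        config = {
            "learning_rate": rng.uniform(0.001, 1.0),
            "epochs": rng.randint(50, 500),
        }
        score = run_trial(**config)
        if score < best_score:   # track the most effective configuration
            best_config, best_score = config, score
    return best_config, best_score

best, score = random_search(50)
```

Each trial is a complete execution with one sampled configuration, and the summary returned at the end is the best configuration found, mirroring how managed tuning services report results.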
Tuning hyperparameters for a large neural network consumes enormous resources, since each candidate configuration requires a costly training run. The Microsoft research demonstrates that a very particular parameterization yields good hyperparameters across a wide range of model sizes. Known as µ-Parametrization (µP, pronounced "myu-P") or Maximal Update Parametrization, the technique exploits the fact that, under certain conditions, neural networks of various sizes share the same optimal hyperparameters, allowing large-scale models to be tuned at substantially lower cost. In essence, instead of directly tuning an entire multi-billion-parameter model, a small-scale tuning process can be extrapolated outwards and mapped onto a much larger model.
Studies show that large neural networks are difficult to train partly because their behavior changes as they grow in size. Networks of different widths may behave consistently early on, but that consistency falls off as training progresses.
Furthermore, analyzing training behavior analytically is significantly more challenging. To reduce numerical overflow and underflow, the team worked to achieve comparable consistency, so that as model width grows, the change in activation scales during training stays consistent with initialization.
Their parameterization is therefore founded on two fundamental observations about training:
Gradient updates in neural networks behave differently than random weights when the width is large. This is because gradient updates are data-driven and take into account correlations, whereas random initializations do not. They must, as a result, be scaled differently.
The parameters of different shapes also behave differently when the width is large. While weights and biases are often used to partition parameters, with the former being matrices and the latter being vectors, some weights behave like vectors in the large-width scenario.
Researchers used these fundamental ideas to develop µ-Parametrization, which assures that neural networks of various widths act consistently during training. This enables them to converge to a desirable limit (feature learning limit) in addition to maintaining a constant activation scale during training.
The Microsoft team's scaling theory paves the way for a mechanism that transfers training hyperparameters across model sizes. If networks of various widths under µ-Parametrization exhibit identical training dynamics, their optimal hyperparameters will likely be similar as well. Consequently, the best hyperparameters found on a small model can simply be copied to a bigger one, with no separate initialization and learning-rate scaling algorithm required. The researchers termed this phenomenon µTransfer.
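The transfer rule can be sketched with a deliberately simplified function. This is a rough illustration of the idea, not the authors' implementation: it assumes an Adam-style optimizer and the common µP prescription that the learning rate of matrix-like hidden weights scales inversely with width while vector-like parameters keep the base rate; the function name and hyperparameter keys are hypothetical.

```python
# Hedged sketch of the µTransfer idea: tune on a narrow "proxy" model,
# then map the hyperparameters onto a much wider target model.

def mu_transfer(base_hparams, base_width, target_width):
    scale = base_width / target_width
    return {
        # Matrix-like hidden weights: learning rate shrinks with width.
        "hidden_lr": base_hparams["hidden_lr"] * scale,
        # Vector-like parameters (biases, norm gains): rate carries over as-is.
        "vector_lr": base_hparams["vector_lr"],
        # Standard 1/sqrt(fan_in)-style initialization scaling.
        "init_std": base_hparams["init_std"] * scale ** 0.5,
    }

# Tune once on a narrow model, reuse on a wide one:
base = {"hidden_lr": 1e-2, "vector_lr": 1e-2, "init_std": 0.02}
wide = mu_transfer(base, base_width=256, target_width=8192)
```

The point of the sketch is the shape of the workflow: the expensive search happens only at `base_width`, and widening the model changes the hyperparameters by deterministic rules rather than by re-tuning.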
The Microsoft researchers collaborated with OpenAI to evaluate the efficacy of µTransfer on GPT-3. They first experimented with a smaller model to find the best hyperparameters, then transferred them to a larger, scaled-up system, using µTransfer to carry hyperparameters from a 40-million-parameter model to a 6.7-billion-parameter model. The researchers calculated that tuning with µTransfer cost just 7% of what pre-training the model would cost, because it eliminated the need to repeatedly tune the larger GPT-3's hyperparameters. Transferring pretraining hyperparameters from a 13-million-parameter model also produced remarkable results on BERT-large (350 million parameters).
Microsoft applied µTransfer to GPT-3 6.7-billion parameter model with relative attention and obtained better results than the baseline with absolute attention used in the original GPT-3 paper, all while only spending 7 percent of the pretraining compute budget on tuning. The performance of this µTransfer 6.7-billion model is comparable to that of the 13-billion model (with absolute attention) in the original GPT-3 paper. Source: Microsoft
Microsoft has released a PyTorch package to enable other practitioners to profit from these insights by integrating µ-Parametrization into their current models, which otherwise could be difficult to implement.
Microsoft Research first launched Tensor Programs in 2020. The research was based on µ-Parametrization, which allowed for maximal feature learning in the infinite-width limit. µTransfer works automatically for intricate neural networks such as Transformer and ResNet and is based on the theoretical underpinning of Tensor Programs.
However, Microsoft admits that there is still much to learn about the scalability of AI models and promises to continue its efforts to derive more principled approaches to large-scale machine learning.
India's first Artificial Intelligence and Robotics Technology Park (ARTPARK) has been established on the Indian Institute of Science (IISc) Bengaluru campus. The technology park was launched with seed money of Rs 230 crore.
According to Karnataka Minister for IT-BT CN Ashwath Narayan, the Centre will bear Rs 170 crore of the Rs 230 crore, with the Karnataka government covering the remaining Rs 60 crore.
ARTPARK aims to use futuristic technologies to connect the unconnected, with a focus on developing India’s Artificial Intelligence and Robotics Innovation ecosystem.
According to the plan, the lab’s primary focus will be on promoting new-age technologies such as 5G, artificial intelligence for inclusive learning, enhancing healthcare services, and more.
Chief Executive Officer of ARTPARK, Umakant Soni, said, “In the age of AI, knowledge will be everywhere. Students won’t have to cram information anymore and can focus on applying it to create things. Similarly, healthcare should be available everywhere and not just in hospitals.”
The artificial intelligence industry is expected to reach over $15 trillion by 2030, and this new technology park will help the country harness the potential of artificial intelligence.
ARTPARK aims to channel innovations into societal impact by executing ambitious mission-mode R&D projects in healthcare, education, mobility, infrastructure, agriculture, retail, and cybersecurity, aimed at solving problems unique to India.
Narayan said, “This initiative to push the narrative for ‘Connecting the unconnected’ by ARTPARK will help the youth outside urban India not only access the next generation of digital work but also acquire the skills they need to thrive in an AI-driven future.”
He further added that Karnataka would take the lead in developing a new economic growth model for Atmanirbhar Bharat.
Predictive intelligence company Windward has announced the launch of a new "Russia" sanctions solution as part of its Maritime AI platform, designed to help customers minimize their risk exposure.
Windward's AI-powered maritime platform enables organizations to configure, monitor, and adjust their practices in response to changing trade restrictions and business preferences.
According to the company, its solution allows stakeholders to understand the full scope of Russian-related trade, including cargo destinations and sources, allowing them to conduct business confidently and comply with new, rapidly evolving restrictions.
The development comes in the wake of the ongoing Russia-Ukraine war, which has prompted global powers, including the United States and the United Kingdom, to impose heavy economic sanctions on Russia to pressure it to stop the war. Companies and vessels identified in the Windward database as being associated with Russia will be labeled as Moderate Risk in the platform.
CEO and Co-founder of Windward Ami Daniel said, “As the fog of the conflict and increased sanctions make conducting trade even more complex, we will continuously update our platform so our customers can continue to conduct business with confidence.”
He further added that during these uncertain times, Windward is committed to providing its customers with the best visibility possible. Windward says that as crude oil and other potentially illegal or sanctioned commodities are transported out of Russia and into destination nations, its platform's analytical tools will help organizations assess and identify them.
Israel-based predictive intelligence firm Windward was founded by Ami Daniel and Matan Peled in 2010. The company specializes in providing AI and big data to digitalize the global maritime industry, enabling organizations to achieve business and operational readiness.
To date, Windward has raised a total funding of more than $32 million from multiple investors, including XL Innovate, Aleph, Horizon Ventures, BizTec, and many more over three funding rounds.
Interested organizations can use Windward's newly launched sanctions compliance solution at no extra cost for two weeks.
AI-powered chatbot provider Ivy.ai has launched Genie, its new self-building chatbot technology.
Genie is a sophisticated chatbot and live chat platform that allows organizations to build pre-trained, conversational chatbots that understand their unique content.
Genie automatically builds bot knowledge from a business's website, knowledge base, and other documents.
The company has packed Genie with preloaded training data, eliminating the need for time-consuming questionnaires or templates.
CEO of Ivy.ai Mark McNasby, said, “Genie is a response to the market’s evolution in its demand for chatbots. Most companies have accepted that chatbots are the future but don’t want to wait months for implementation or constantly tweak templates that don’t help them differentiate from the competition.”
He further added that Genie gives every company the ability to own a sophisticated, high-functioning chatbot that produces immediate results and stays up-to-date over time with minimal effort. Ivy.ai leverages its proprietary technology to get users started, and from there, customers can use Genie to modify and configure their bots.
The key features of Genie by Ivy.ai include the following:
The software is easy to install, with a built-in intuitive setup assistant that lets companies get started quickly without lengthy training.
Genie has been trained on over five million inquiries and counting, allowing it to understand users' natural language questions right away.
It is user-friendly, requiring no prior coding knowledge to use the chatbot.
Genie has an extremely high accuracy rate of nearly 90% for inbound inquiries.
Genie examines websites for compliance issues against the Web Content Accessibility Guidelines (WCAG) and notifies users of any discrepancies that need to be addressed.
Genie keeps bot knowledge up to date by scanning websites on a daily basis, ensuring that the website and bot are always in sync.
United States-based AI chatbot provider Ivy.ai was founded by Mark McNasby in 2016. The firm specializes in providing chatbots for healthcare, education, and government institutions.