
AI Can Now Predict Viral Infections In Humans Using Gene Expression


In New York, a large team of researchers has used Artificial Intelligence (AI)-based algorithms to find similarities in the gene expression data of individuals infected during past pandemics, including Swine Flu and SARS. Pradipta Ghosh, one of the researchers from the University of California, pointed to two telltale signatures. The first is a set of 166 genes that reflects how the human immune system reacts to a viral infection. The second, a set of 20 genes, indicates how severe a patient's infection is likely to become. The AI algorithm can predict whether a patient will need ventilator support or hospitalization.
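
As a rough illustration of the approach, here is a minimal, hypothetical sketch in Python: it trains simple classifiers on a synthetic gene expression matrix, mirroring the idea of a 166-gene infection signature and a 20-gene severity signature. The data, gene indices, and scikit-learn models are placeholders, not the study's actual algorithm.

```python
# Illustrative sketch only: simple classifiers on synthetic gene expression data,
# echoing the idea of a 166-gene "infection" signature and a 20-gene "severity"
# signature. All data, gene lists, and models here are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical gene expression matrix: samples x genes
n_samples, n_genes = 500, 20000
X = rng.normal(size=(n_samples, n_genes))
is_infected = rng.integers(0, 2, size=n_samples)   # viral infection label
is_severe = rng.integers(0, 2, size=n_samples)     # e.g. needed ventilator support

# Hypothetical signature gene indices (stand-ins for the 166- and 20-gene sets)
infection_signature = rng.choice(n_genes, size=166, replace=False)
severity_signature = rng.choice(n_genes, size=20, replace=False)

# Classifier 1: does the expression pattern look like a viral infection?
X_tr, X_te, y_tr, y_te = train_test_split(X[:, infection_signature], is_infected)
infection_clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("infection accuracy:", infection_clf.score(X_te, y_te))

# Classifier 2: how severe is the disease likely to become?
X_tr, X_te, y_tr, y_te = train_test_split(X[:, severity_signature], is_severe)
severity_clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("severity accuracy:", severity_clf.score(X_te, y_te))
```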

Pradipta Ghosh said, “These viral pandemic-associated signatures tell us how a person’s immune system responds to a viral infection and how severe it might get, and that gives us a map for this and future pandemics.” She added that the AI algorithm was developed using publicly available gene expression data.

The study states that our body releases proteins called cytokines into the blood when infected with a virus. These proteins help immune cells find the infection, but sometimes the body releases an excess amount, which leads the immune system to attack its own healthy tissues.

Read More: Microsoft’s Free AI Classroom Series With Certification 14-19 December

This condition, known as a cytokine storm, is one of the main reasons why some, but not all, people become severely ill from a viral infection, including the common flu. The research team found a similar gene expression pattern in every set of Covid-19 patient data they tested. After a thorough examination of the data, the researchers said that the cells lining the lung airways and macrophages, a type of white blood cell, initiate cytokine storms in the human body.

The researchers claim that the AI algorithm can help doctors treat patients by providing cellular-level information about a patient’s condition and by offering benchmarks to measure improvement in the infected.


Microsoft Announces The End Of Support For Windows 10 In 2025


The updated Windows lifecycle fact sheet mentions that Microsoft will end support for several editions of Windows 10, including Home, Pro, Pro for Workstations, and Pro Education, on October 14, 2025. The US-based tech giant will no longer push updates and security patches beyond that date.

Microsoft released Windows 10 in July 2015, describing it as an “operating system as a service” that would regularly receive updates and security fixes. At launch, Microsoft also claimed that Windows 10 would be the last version of its operating system. Recently, however, the company’s website teased the launch of an all-new Windows 11, expected to be unveiled by the end of June 2021.

A new event, to be held on June 24, has been posted on the company’s website. At the event, Microsoft will shed light on this development and highlight the key features of the new Windows. The company confirmed the event through its Twitter handle and mentioned that it will begin at 8:30 PM IST.

Read More: MIT Releases A Free Machine Learning Course

Satya Nadella, CEO of Microsoft, said at the Microsoft Build 2021 event that the next Windows update will be one of the most significant of the past decade. Windows 11 is rumored to bring a major UI overhaul.

Nadella said, “Soon we will share one of the most significant updates to Windows of the past decade to unlock greater economic opportunity for developers and creators. I’ve been self-hosting it over the past several months, and I’m incredibly excited about the next generation of Windows.” Given how long it took users to shift from Windows 7 to Windows 10, the complete retirement of Windows 10 can be expected to take well beyond 2025.


Apple Hires EX-BMW Employee To Boost Its Electric Car Project


Apple has hired Ulrich Kranz, a former senior executive in BMW AG’s electric car division, to head its automobile project, an Apple spokesperson confirmed. Kranz spent roughly 30 years at BMW, where he led the team that developed cars like the i3 and i8. He then co-founded Canoo Inc., which develops electric vehicles.

Apple Inc. approached him about a month before he stepped down as CEO of Canoo. The move shows that the smartphone giant wants to get a firm grip on the electric car segment and aims to rival Tesla and others.

Kranz will be working with Doug Field, a former Tesla employee. By selling iPhones and Macs, Apple has become the most valuable company in the world, with a market value of over $2 trillion.

Read More: NVIDIA Will Acquire DeepMap For Advancing The Autonomous Vehicle Industry

Now, to diversify its product lineup, Apple is targeting AR headsets and self-driving electric cars. Apple entered the automobile space in 2014 and has since hired several Tesla employees, who now work on self-driving software and other engineering tasks. Though Apple is working rigorously on a self-driving electric car, development is still at an early stage, and a launch is expected no earlier than 2026.

Apple recently lost a few key auto employees, including Dave Scott, Benjamin Lyon, and Jaime Waydo, who worked in several important departments. The hiring of Kranz is suspected to be an attempt to make up for this loss.

In 2008, Kranz and his team began developing all-battery-powered vehicles, an effort that yielded the i8 sports car and the all-electric i3 compact. Kranz parted ways with BMW in 2016, after which he joined an electric vehicle startup called Faraday Future.

A few months later, he founded his own startup, Canoo. The company has struggled to bring a vehicle into production and was at one point planning to sell itself to Apple. Apple has a long history with BMW: from integrating BMW’s infotainment system with Apple’s iPod back in 2004 to recently making the iPhone work as a digital key for BMW cars, the two companies have collaborated many times.

In 2014, Tim Cook, CEO of Apple, was spotted checking out a BMW i8 in front of Apple’s office in California, USA.


NVIDIA Will Acquire DeepMap For Advancing The Autonomous Vehicle Industry


NVIDIA has agreed to acquire DeepMap, a five-year-old startup founded by the former employees of Google, Apple, and Baidu, for an undisclosed amount. DeepMap focuses on high-definition mapping to support self-driving cars’ requirements.

Accurate guidance is crucial for democratizing self-driving cars. However, a shortage of labeled datasets of precise real-world maps is slowing their development. What makes it worse is that localization is critical for safer self-driving cars: a dataset built on American roads would not work effectively on Indian roads.

In addition to covering diverse roads, such datasets must be continuously updated to capture recent changes on the roads. To address this, DeepMap strives to map roads and keep the information current so that autonomous cars can perform better in the real world.

And since NVIDIA is going big on autonomous vehicles with offerings like NVIDIA DRIVE, a software-defined, end-to-end platform for high-speed in-vehicle compute, and Omniverse, which was announced at NVIDIA GTC 2021, the acquisition will advance NVIDIA’s capabilities to better serve developers of self-driving vehicles.

“The acquisition is an endorsement of DeepMap’s unique vision, technology and people,” said Ali Kani, vice president and general manager of Automotive at NVIDIA. “DeepMap is expected to extend our mapping products, help us scale worldwide map operations and expand our full self-driving expertise.”

“NVIDIA is an amazing, world-changing company that shares our vision to accelerate safe autonomy,” said James Wu, co-founder and CEO of DeepMap. “Joining forces with NVIDIA will allow our technology to scale more quickly and benefit more people sooner. We look forward to continuing our journey as part of the NVIDIA team.”


IIT Kanpur To Set Up Center Of Excellence For Artificial Intelligence In Noida


As a part of the UP Startup Policy, 2020, the Uttar Pradesh government has approved the setting up of a center of excellence in Noida. IIT Kanpur and FICCI (Federation of Indian Chambers of Commerce and Industry) will set up the center of excellence for artificial intelligence and entrepreneurship.

The center of excellence will support startups and provide the necessary exposure to secure funding from investors. The UP government wants to create an ecosystem for innovative startups, as it has set a target of being among the top three states in the “States’ Startup Ranking” introduced by the Government of India.

Also Read: ISRO Is Offering A Free Machine Learning Course On Remote Sensing

In the coming years, the state government wants to achieve its target of creating an ecosystem for at least 10,000 startups in the state. For the next five years, the center of excellence will support 250 startups — 50 in a year — mainly from cybersecurity, artificial intelligence, the internet of things, and augmented reality.

“While IIT-Kanpur will give the technological expertise and incubation support, FICCI India will provide industry tie-ups and business connect. Also under the Start-up Policy 2020, financial assistance will be given to startups,” said Abhay Karandikar, Director of IIT Kanpur.


ISRO Is Offering A Free Machine Learning Course On Remote Sensing With Certification


ISRO will offer a free course — Machine Learning to Deep Learning: A journey for remote sensing data classification — from July 5 to July 9, 2021. As part of ISRO’s outreach program, ISRO and IIRS facilitate online learning under two modes of content delivery — EDUSAT and e-Learning mode. 

The Indian Institute of Remote Sensing (IIRS) is a research institute that has been offering education and training in Geoinformatics and GPS technology for natural resources, environmental, and disaster management under the Indian Department of Space since 1966.

Machine Learning to Deep Learning: A journey for remote sensing data classification is a short, five-day course that requires only 1.5 hours a day. Over the five days, you will learn basic machine learning methods such as supervised, unsupervised, and reinforcement learning, applications of machine learning in remote sensing, and neural network-based learning algorithms like ANNs and CNNs/RNNs.
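
For a flavor of what supervised remote sensing data classification looks like in code, here is a minimal, hypothetical sketch with synthetic spectral band values and land-cover labels; it is not course material, and the features and classes are invented for illustration.

```python
# Illustrative only: supervised land-cover classification from spectral bands,
# the kind of task the course title refers to. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic "pixels": 4 spectral bands (e.g. red, green, blue, near-infrared)
n_pixels = 2000
bands = rng.uniform(0, 1, size=(n_pixels, 4))
# Synthetic land-cover labels: 0 = water, 1 = vegetation, 2 = built-up
labels = rng.integers(0, 3, size=n_pixels)

# Train a simple supervised classifier and report held-out accuracy
X_train, X_test, y_train, y_test = train_test_split(bands, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```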

On completion of the course, a certificate of participation will be offered to those who attend at least 70% of the live sessions. After the course, the material will be provided via an IIRS FTP link, and video lectures will be uploaded to its YouTube channel. People who attend the sessions live on YouTube will have to mark their attendance offline, which becomes available after 24 hours, to get the certificate.

To register for the course, visit the IIRS e-learning platform; the live classes can be attended through this link. You can also check the course brochure here.


Mythic Launches Analog AI Processor That Delivers Superior Performance With 10x Less Power


Mythic, an analog AI processor company, has launched the M1076 Analog Matrix Processor (Mythic AMP™), which delivers exceptional performance with low power consumption. According to the company, the M1076 AMP can support up to 25 TOPS of AI compute in a 3W power envelope, consuming up to 10x less power than a traditional SoC or GPU.

The chip comes in a range of form factors — a standalone processor, an ultra-compact PCIe M.2 card, and a PCIe card — to support diverse applications. Its 16-chip configuration can offer up to 400 TOPS of AI compute at 75W.
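
For a quick back-of-the-envelope check of the efficiency these figures imply, using only the numbers quoted in this article:

```python
# Back-of-the-envelope efficiency from the figures quoted above.
single_chip_tops, single_chip_watts = 25, 3      # M1076 AMP
sixteen_chip_tops, sixteen_chip_watts = 400, 75  # 16-chip PCIe card

print(f"single chip : {single_chip_tops / single_chip_watts:.1f} TOPS/W")    # ~8.3
print(f"16-chip card: {sixteen_chip_tops / sixteen_chip_watts:.1f} TOPS/W")  # ~5.3
```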

The M1076 AMP is also optimized to scale from edge endpoints to server applications, catering to multiple markets such as smart cities, enterprise applications, consumer devices, and more.

“Mythic’s groundbreaking inference solution takes AI processing and energy-efficiency to new heights,” said Tim Vehling, senior vice president, product & business development at Mythic. “With its ability to scale from a compact single-chip to a powerful 16-chip PCIe card solution, the M1076 AMP makes it easier for developers to integrate powerful AI applications in a wider range of edge devices that are constrained by size, power, and thermal management challenges.”

The M1076 AMP supports a wide range of AI workloads, including object detection, depth estimation, and classification, for applications such as autonomous drones and surveillance cameras. The processor also goes beyond AI, offering strong performance in AR/VR applications.

Mythic’s technology is backed by several investors, who have poured in $165.2 million to date. Recently, Mythic raised $70 million in Series C funding, which the company will use to scale production and expand its footprint across the world.


Google Releases 1.4 Petabyte Datasets and Visualizations Of Human Brain Tissue


Google and the Lichtman laboratory at Harvard University have released the H01 dataset — 1.4 petabytes of imaging data from a small sample of human brain tissue. According to Google, the H01 sample was imaged at 4nm resolution by serial section electron microscopy, reconstructed and annotated by automated computational techniques, and analyzed for preliminary insights into the structure of the human cortex.

The dataset consists of one cubic millimeter of imaging data, including millions of neuron fragments, tens of thousands of reconstructed neurons, 100 proofread cells, and 183 million annotated synapses.

To access the dataset, you can use the Neuroglancer browser interface, which gives you cross-sectional views of volumetric data, 3D meshes, and line segment-based models. You can also download the datasets directly from Google Cloud Storage with the gsutil tool. In addition, Google has released a Colaboratory notebook to get started with accessing the datasets via TensorStore, a library for reading and writing large multi-dimensional arrays.
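
As a hedged sketch of the TensorStore route, the snippet below opens a Neuroglancer-precomputed volume and reads a small sub-volume. The bucket name, dataset path, and index ranges are assumptions made for illustration; check the official H01 release page and Colab notebook for the exact locations.

```python
# Illustrative sketch: the gs:// location and index ranges below are assumptions,
# not confirmed by this article; consult the official H01 release for exact paths.
import tensorstore as ts

dataset = ts.open({
    "driver": "neuroglancer_precomputed",
    "kvstore": "gs://h01-release/data/20210601/4nm_raw",  # assumed location
}, read=True).result()

print(dataset.domain)   # overall bounds of the volume
print(dataset.dtype)    # voxel data type

# Read a small sub-volume into memory (index ranges are arbitrary examples)
chunk = dataset[20000:20128, 20000:20128, 3000:3001].read().result()
print(chunk.shape)
```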

Google and the Lichtman lab have also hosted a live demo, and for specific examples you can open links like the FlyEM Hemibrain, which comes preloaded with datasets for visualization.

Along with the datasets, both teams have released a companion preprint that uses H01 to study interesting patterns in the human cortex. Several new discoveries were made, including new cell types. “While these findings are a promising start, the vastness of the H01 dataset will provide a basis for many years of further study by researchers interested in the human cortex,” Google believes.


NVIDIA Releases Jarvis & NeMo 1.0 For Speech Recognition and Conversational AI Tasks


NVIDIA has released Jarvis and NeMo 1.0 for developers and researchers looking to build state-of-the-art machine learning solutions from pre-trained models. At NVIDIA GTC 2020 and NVIDIA GTC 2021, NeMo and Jarvis were among the top announcements that drew interest from natural language processing enthusiasts.

While NVIDIA Jarvis is a speech recognition model with 90 percent accuracy out of the box, NVIDIA NeMo 1.0 is an open-source toolkit for developing Automatic Speech Recognition (ASR) models.

Jarvis can be fine-tuned with the NVIDIA Transfer Learning Toolkit (TLT), a tool for building production-ready machine learning models without coding, to support a wide range of domain-specific conversations in healthcare, telecommunications, finance, automobiles, and neuroscience. According to NVIDIA, Jarvis was trained on noisy data, multiple sampling rates including 8kHz for call centers, a variety of accents, and dialogue, all of which contribute to the model’s accuracy.

Today, Jarvis supports English, Japanese, German, and more, allowing the development of products that can transcribe different languages in real time. NVIDIA is committed to adding support for other languages in Jarvis. While you can download Jarvis from the catalog, you need prerequisites such as access to an NVIDIA Ampere architecture-based GPU and an NVIDIA GPU Cloud account.

To further support conversational AI, NLP, and Text-to-Speech (TTS) workflows, NVIDIA has released NeMo 1.0, which enables building new NLP-based models on top of existing ones. Built on PyTorch, PyTorch Lightning, and Hydra, NeMo 1.0 lets researchers effectively leverage some of the most widely used deep learning frameworks.

NeMo 1.0 includes support for bidirectional machine translation between English and five other languages — Spanish, Russian, Mandarin, German, and French. It also comes with new CitriNet and Conformer-CTC ASR models. With NeMo, developers can export most models to NVIDIA Jarvis for production deployment.
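
As a minimal sketch of using one of these ASR models, the snippet below loads a pre-trained CitriNet checkpoint and transcribes an audio file. The model name and file path are illustrative, and the transcribe() call follows the NeMo 1.0 API, which may differ in later releases.

```python
# Minimal NeMo 1.0 ASR sketch. The model name and audio path are examples;
# the transcribe() signature follows the NeMo 1.0 API and may change later.
import nemo.collections.asr as nemo_asr

# Load a pre-trained CitriNet model (downloads weights from NVIDIA's catalog)
asr_model = nemo_asr.models.ASRModel.from_pretrained(model_name="stt_en_citrinet_512")

# Transcribe a local 16 kHz mono WAV file (path is a placeholder)
transcripts = asr_model.transcribe(paths2audio_files=["sample_audio.wav"])
print(transcripts[0])
```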

Check out NeMo 1.0 on GitHub.


China Now Has The Largest Language Model With WuDao 2.0


The Beijing Academy of Artificial Intelligence (BAAI) has developed the largest language model to date, WuDao 2.0, with 1.75 trillion parameters and support for both English and Chinese. Google had earlier claimed the top spot with its Switch Transformer, a 1.6-trillion-parameter model introduced in January 2021.

Since the release of NVIDIA’s Megatron, an 8.2-billion-parameter model, in 2019, the race to build the largest language model has accelerated. Microsoft released Turing-NLG (T-NLG), a 17-billion-parameter model, in early 2020, and OpenAI pushed the bar further with GPT-3, a 175-billion-parameter model, in June 2020.

WuDao 2.0 has been trained with 4.9 terabytes of texts and images, including 1.2 terabytes of Chinese and English texts.

According to the researchers, WuDao 2.0 can perform tasks like writing poems, comprehending conversations, and understanding images. Just like GPT-3, WuDao 2.0 has the potential to change how humans work, especially in the Chinese language. Developed by a team of 100 researchers from different organizations, WuDao 2.0 demonstrates how close China is to the U.S. in terms of AI research.

While some researchers do not consider building ever-larger language models a breakthrough, many believe such advancements are crucial because they open new doors for other research. Gary Marcus and Yann LeCun have been critical of GPT-3 and consider it overhyped.

However, OpenAI’s model is making its way into the business world through the integration of GPT-3 in Microsoft’s Power Apps. Microsoft recently announced that it is exploring ways to equip more products with GPT-3 to enhance productivity.

WuDao 2.0 already has over 22 partners, including Xiaomi and Kuaishou, that can integrate the model into their products and services in the coming months.
