
ISRO Is Offering A Free Machine Learning Course On Remote Sensing With Certification


ISRO will offer a free course — Machine Learning to Deep Learning: A journey for remote sensing data classification — from July 5 to July 9, 2021. As part of ISRO’s outreach program, ISRO and IIRS facilitate online learning under two modes of content delivery — EDUSAT and e-Learning mode. 

The Indian Institute of Remote Sensing (IIRS) is a research institute that has offered education and training in Geoinformatics and GPS technology for natural resources, environmental, and disaster management under the Indian Department of Space since 1966.

Machine Learning to Deep Learning: A journey for remote sensing data classification is a short, 5-day course that requires only 1.5 hours every day. Over the five days, you will learn basic machine learning methods — supervised, unsupervised, and reinforcement learning — applications of machine learning in remote sensing, and network-based learning algorithms such as ANN, CNN, and RNN.

On completion of the course, a certificate of participation will be offered to those who attend 70% of the live sessions. After the course, the material will be provided via an IIRS FTP link, and video lectures will be uploaded on its YouTube channel. However, participants who attend the sessions live on YouTube will have to mark their attendance offline — the attendance form becomes available after 24 hours — to get the certificate.

To register for the course, visit the IIRS e-learning platform; live classes can be attended through this link. You can also check the course brochure here.


Mythic Launches Analog AI Processor That Delivers Superior Performance With 10x Less Power


Mythic, an analog AI processor company, has launched the M1076 Analog Matrix Processor (Mythic AMP™), which delivers exceptional performance with low power consumption. According to the company, the M1076 AMP can support up to 25 TOPS of AI compute in a 3W power envelope, consuming up to 10x less power than a traditional SoC or GPU.

The chip comes in a range of form factors — a standalone processor, an ultra-compact PCIe M.2 card, and a PCIe card — to support diverse applications. A 16-chip configuration can offer up to 400 TOPS of AI compute at 75W.

The M1076 AMP is also optimized to scale from edge endpoints to server applications, catering to markets like smart cities, enterprise applications, consumer devices, and more.

“Mythic’s groundbreaking inference solution takes AI processing and energy-efficiency to new heights,” said Tim Vehling, senior vice president, product & business development at Mythic. “With its ability to scale from a compact single-chip to a powerful 16-chip PCIe card solution, the M1076 AMP makes it easier for developers to integrate powerful AI applications in a wider range of edge devices that are constrained by size, power, and thermal management challenges.”

The M1076 AMP supports a wide range of AI workloads, such as object detection, depth estimation, and classification, for applications including autonomous drones and surveillance cameras. The processor also goes beyond AI, offering strong performance for AR/VR applications.

Mythic's technology is backed by several investors who have poured in $165.2 million to date. Recently, Mythic raised $70 million in Series C funding, which the company will use to scale production and expand its footprint worldwide.


Google Releases 1.4 Petabyte Datasets and Visualizations Of Human Brain Tissue


Google and the Lichtman laboratory at Harvard University have released the H01 dataset — 1.4 petabytes of data from a small sample of human brain tissue. According to Google, the H01 sample was imaged at 4nm resolution by serial section electron microscopy, reconstructed and annotated by automated computational techniques, and analyzed for preliminary insights into the structure of the human cortex.

The dataset covers one cubic millimeter of imaged tissue and includes millions of neuron fragments, tens of thousands of reconstructed neurons, 100 proofread cells, and 183 million annotated synapses.

To access the dataset, you can use the Neuroglancer browser interface, which provides cross-sectional views of volumetric data, 3D meshes, and line-segment-based models. You can also download the datasets directly from Google Cloud Storage using the gsutil tool. Google has additionally released a Colaboratory notebook for getting started with the datasets via TensorStore, a library for reading and writing large multi-dimensional arrays.

Google and the Lichtman laboratory also host live demos; for specific datasets, you can open preloaded visualizations such as the FlyEM Hemibrain.

Along with the dataset, the two groups have released a companion preprint that uses H01 to study interesting patterns in the human cortex. Several new discoveries were made, including previously unseen cell types. “While these findings are a promising start, the vastness of the H01 dataset will provide a basis for many years of further study by researchers interested in the human cortex,” Google believes.


NVIDIA Releases Jarvis & NeMo 1.0 For Speech Recognition and Conversational AI Tasks


NVIDIA has released Jarvis and NeMo 1.0 for developers and researchers looking to build state-of-the-art machine learning solutions from pre-trained models. At NVIDIA GTC 2020 and GTC 2021, NeMo and Jarvis were among the top announcements, drawing interest from natural language processing enthusiasts.

While NVIDIA Jarvis is a speech recognition model with an out-of-the-box accuracy of 90 percent, NVIDIA NeMo 1.0 is an open-source toolkit for developing Automatic Speech Recognition (ASR) models.

Jarvis can be fine-tuned with the NVIDIA Transfer Learning Toolkit (TLT), a tool for building production-ready machine learning models without coding, to support a wide range of domain-specific conversations in healthcare, telecommunications, finance, automobile, and neuroscience. According to NVIDIA, Jarvis was trained on noisy data, multiple sampling rates including 8kHz for call centers, a variety of accents, and dialogue, all of which contribute to the model’s accuracy.

Today, Jarvis supports English, Japanese, German, and more, enabling products that can transcribe different languages in real time, and NVIDIA is committed to adding support for other languages. While you can download Jarvis from the catalog, you need prerequisites such as access to an NVIDIA Ampere architecture-based GPU and an NVIDIA GPU Cloud account.

To further support conversational AI, NLP, and text-to-speech (TTS) workflows, NVIDIA has released NeMo 1.0, which enables building new NLP models on top of existing ones. Built on PyTorch, PyTorch Lightning, and Hydra, NeMo 1.0 lets researchers effectively leverage some of the most widely used deep learning frameworks.

NeMo 1.0 includes support for bidirectional machine translation between English and five languages — Spanish, Russian, Mandarin, German, and French. It also comes with new CitriNet and Conformer-CTC ASR models. With NeMo, developers can export most models to NVIDIA Jarvis for production deployment.

Check out NeMo 1.0 on GitHub.


China Now Has The Largest Language Model With WuDao 2.0


The Beijing Academy of Artificial Intelligence (BAAI) has developed the largest language model to date, WuDao 2.0, with 1.75 trillion parameters, supporting both English and Chinese. Google had earlier claimed the top spot with its Switch Transformer, a 1.6-trillion-parameter model introduced in January 2021.

Since the release of NVIDIA’s Megatron in 2019 (an 8.3-billion-parameter model), the race to build the largest language model has accelerated. Microsoft released Turing-NLG, or T-NLG (17 billion parameters), in early 2020, and OpenAI pushed the bar further with GPT-3, a 175-billion-parameter model, in June 2020.

WuDao 2.0 has been trained with 4.9 terabytes of texts and images, including 1.2 terabytes of Chinese and English texts.

According to the researchers, WuDao 2.0 can perform tasks like writing poems, comprehending conversations, and understanding images. Just like GPT-3, WuDao 2.0 has the potential to revolutionize how humans work, especially with the Chinese language. Developed by a team of 100 researchers from different organizations, WuDao 2.0 demonstrates how close China is to the U.S. in terms of AI research.

While some researchers do not consider building ever-larger language models a breakthrough, many believe such advancements are crucial because they open new doors for other research. Gary Marcus and Yann LeCun have been critical of GPT-3 and consider it overhyped.

However, OpenAI’s model is on its way into the business world with the integration of GPT-3 in Microsoft’s Power App. Microsoft recently announced that the company is exploring ways to equip more products with GPT-3 to enhance productivity.

WuDao 2.0 has over 22 partners, including Xiaomi and Kuaishou, that can integrate the model into their products and services in the coming months.


Stack Overflow Will Be Acquired By Prosus For $1.8 Billion


Prosus, Europe’s largest consumer internet company, wants to acquire Stack Overflow for $1.8 billion. The company is also the largest shareholder of the Chinese conglomerate Tencent. With the potential acquisition of Stack Overflow, Prosus would add another online learning platform to its portfolio after investing in BYJU’s, Udemy, and Codecademy. Prosus’s other prominent investments span sectors like food delivery and fintech.

The acquisition comes after Prosus sold 2% of its Tencent equity for about $15 billion. The company still holds a Tencent stake worth more than $200 billion, originally acquired for $34 million in 2001.

According to Stack Overflow CEO Prashanth Chandrasekar, the deal will allow Stack Overflow to operate as an independent company with its current team. With Prosus’s backing, Stack Overflow is looking to expedite its global expansion.

“Prosus is a long-term investor and loves what our company and community have built over these last 13+ years. They are impressed by the SaaS transformation the company has been on since the launch of Stack Overflow for Teams and especially over the last two years. Prosus recognizes our platform’s tremendous potential for impact and they are excited to launch and accelerate our next phase of growth,” mentions Prashanth.

Stack Overflow has been revamping its business model since 2019 and shared impressive results for the previous financial year. The company witnessed its strongest quarter ever as the SaaS-based model drives revenue at scale.


NVIDIA Releases Base Command Platform For Optimizing AI Workflows


At Computex 2021, NVIDIA unveiled Base Command™ Platform to help developers take artificial intelligence projects from prototypes to production. Base Command is a cloud-hosted development hub designed for optimizing workflows in enterprises to expedite the development process of AI-based projects.

According to NVIDIA, the software is designed for large-scale, multi-user and multi-team AI development workflows hosted either on premises or in the cloud. It enables numerous researchers and data scientists to simultaneously work on accelerated computing resources, helping enterprises maximize the productivity of both their expert developers and their valuable AI infrastructure.

Originally built for NVIDIA’s internal research team to integrate several AI workflows, the Base Command Platform simplifies sharing resources through graphical user interfaces and command-line APIs.

NVIDIA Base Command Platform is currently available on a monthly subscription basis starting at $90,000. Later this year, Google Cloud will allow users to leverage the Base Command Platform in its marketplace.

“World-class AI development requires powerful computing infrastructure, and making these resources accessible and attainable is essential to bringing AI to every company and their customers,” said Manuvir Das, head of Enterprise Computing at NVIDIA. “Deployed as a cloud-hosted solution with NVIDIA-accelerated computing, NVIDIA Base Command Platform reduces the complexity of managing AI workflows, so that data scientists and researchers can spend more time developing their AI projects and less time managing their machines.”


Udacity Launches AWS Machine Learning Scholarship Program

Udacity invites applications for its AWS machine learning scholarship program for enthusiasts looking to become machine learning engineers. As with Udacity’s other scholarship programs, learners must complete a foundational course to stand a chance of winning the scholarship — the AWS Machine Learning Engineer Nanodegree program. After completing the foundational course, learners will take an assessment, which allows Udacity to identify the top performers for the AWS machine learning scholarship.

Based on the results of the assessment, a total of 425 learners will be selected for the Nanodegree program, where they will study advanced machine learning and earn a certificate on completion.

According to Udacity, all learners applying for the foundational course will be accepted, giving everyone an opportunity to showcase their ability and earn the scholarship.

The foundational course covers topics like object-oriented programming, fundamentals of machine learning, and AWS technologies such as AWS DeepRacer, AWS DeepLens, and AWS DeepComposer. Learners will also receive a certificate for completing the foundational course. There are additional perks for early participation: the first 150 learners to successfully complete the free course will receive AWS DeepLens devices, and the first 2,500 applicants will be offered $35 in AWS credits.

Both the foundational course and the assessment must be completed by October 11, 2021. The foundational course will be available to scholarship participants in the Udacity Classroom on June 28, 2021, and the winners will be announced on October 21, 2021. The second phase will start on October 25, 2021, and end on January 25, 2022.

Register for the scholarship here.


OpenAI Invites Applications For $100M Startup Fund With Microsoft


At Microsoft Build, Sam Altman, CEO of OpenAI, announced a $100 million fund for startups leveraging OpenAI’s technology for innovation. In a pre-recorded video, Altman said that the company, along with Microsoft and other partners, is willing to make big early bets on roughly 10 companies through the OpenAI Startup Fund.

OpenAI is looking for ambitious developers and entrepreneurs working primarily in health care, climate change, and education. These are not the only sectors the fund would invest in, but they are its top priorities.

The OpenAI Startup Fund is not limited to providing capital; it also offers Microsoft Azure credits, support from the OpenAI team, and early access to OpenAI’s future systems.

Microsoft’s partnership with OpenAI goes back to 2019, when the company invested $1 billion in OpenAI’s research and development in artificial intelligence. As part of the investment, Microsoft also gained access to some of OpenAI’s intellectual property. Case in point: a year later, Microsoft obtained an exclusive license to GPT-3, a 175-billion-parameter natural language model.

Since the early private beta, GPT-3 has been used by thousands of developers to create innovative solutions. Earlier this week, Microsoft also announced the integration of GPT-3 for application development and is committed to equipping other products and services with it to enhance productivity across numerous tasks.

With the OpenAI Startup Fund, the company would push solutions developed with GPT-3 into the market to showcase the advancement of its research. This would be notable, as much research never makes it into the commercial market. OpenAI wants to capitalize on the hype and push solutions built with GPT-3.

Many researchers, including Yann LeCun, have been critical of the GPT-3 hype. LeCun, for instance, has said that people have unrealistic expectations of GPT-3 and other large-scale NLP models.

It would be interesting to see how the OpenAI Startup Fund helps GPT-3 go mainstream.

You can apply for OpenAI Startup Fund here.


Microsoft’s Power Apps Will Allow You To Generate Code With GPT-3


At Microsoft Build, the company announced that it is integrating GPT-3 into a wide range of applications. Microsoft has fine-tuned GPT-3, one of the most popular natural language processing models, to generate Power Fx formulas. Power Apps Studio is a low-code platform that allows people without coding knowledge to build business applications on the fly. Applications built with Power Apps work seamlessly on browsers and mobile devices, making the platform popular among a wide range of developers.

The new feature will be available in North America by the end of June, allowing users to generate code and formulas from natural language. Given a natural language input, the system suggests a few formula options, and developers select the one that fits their requirements. Such features will expedite application development and move Power Apps closer to a true low-code experience.

“Using an advanced AI model like this can help our low-code tools become even more widely available to an even bigger audience by truly becoming what we call no code,” said Charles Lamanna, corporate vice president for Microsoft’s low code application platform.

Late last year, Microsoft obtained an exclusive license to GPT-3 from OpenAI, after investing $1 billion in the company in 2019. Since the release of the Azure-powered GPT-3 API at reasonable pricing, several developers have showcased GPT-3’s capabilities in writing code, summarizing email, generating synthetic text for articles, and more. Microsoft believes GPT-3’s ability to write code will help the developer community simplify the application development process.

“We think there are a whole bunch more things that GPT-3 is capable of doing. It’s a foundational new technology that lights up a ton of new possibilities, and this is sort of that first light coming into production,” said Eric Boyd, corporate vice president for Azure AI at Microsoft.

In the future, Microsoft will be integrating GPT-3 into its wide range of products to revolutionize the way developers or non-experts work with natural language processing.
