
Data Science Is Not Only About Technical Skills – Arihant Jain, Lead Data Scientist Of ZestMoney


Analytics Drift interacted with Arihant Jain, Lead Data Scientist at ZestMoney, to get his perspective on various data science topics and help freshers make effective career decisions. Arihant is a mechanical engineer by degree and a data scientist by choice. Over the years, he has been actively mentoring students who want to get into the data science field. Arihant has more than six years of data science experience, gained while working with prominent organizations like Vodafone, RBL Bank, and Genpact.

AD: According to you, what are some of the mistakes made by freshers?

Arihant Jain: Beginners often consider themselves data scientists after learning Python, statistics, SQL, and mathematics. But the real world is different: one has to have business understanding to deliver returns on investment for companies. Data scientists are expected to assimilate business challenges and analyze whether they are solvable using machine learning techniques. If a problem can be solved, the next step involves collecting the right data from different sources and eventually applying those technical learnings. Along with technical skills, freshers should focus on structured and critical thinking, understanding the domain, converting business problems into analytical problems, and more.

AD: Why do most data science projects fail to go into production?

Arihant Jain: There are multiple reasons why data science projects fail to go into production. One reason projects remain proofs of concept forever is that professionals fail to accurately define the problem statement; this is where business understanding plays a significant role. In addition, failing to communicate the results of a proof of concept to decision-makers dooms many data science projects, which is why effective storytelling skills are vital for data scientists who want to bring ideas into reality within organizations. Critical thinking, business understanding, and storytelling, although often overlooked, are essential for any data scientist to thrive in their career, as these skills help deliver value to organizations.

Also Read: Lab To Product: Solving Problems Like A Real Data Scientist With Chiranjiv Roy

AD: Do you think data scientists need to know about product development?

Arihant Jain: In the last couple of years, there was demand for talent that could build models, but the curve is shifting, and it is going to move rapidly. Companies now realize that there is no value in hiring people who can only create models in a Jupyter Notebook. Relying on software engineers to develop the products might not be the best way forward, since they may not implement data science projects in the desired way. This is why MLOps is a prominent trend: data scientists need to write production-level code and understand the deployment of models.

It might be too much to ask beginners to learn MLOps, but even a basic understanding will differentiate them from the rest. They can become proficient as they move ahead in their careers; to start with, fundamentals like Docker and wrapping up a model to deploy it locally and on the cloud will take them a long way.
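As a concrete first step, the "wrap a model and deploy it locally" exercise can be as small as the sketch below, which exposes a pickled model over HTTP. Flask and the model.pkl file name are illustrative assumptions, not a prescribed stack; once something like this runs locally, containerizing it with Docker is the natural next step.

```python
# Minimal sketch: serving a pre-trained model behind an HTTP endpoint.
# Flask and "model.pkl" are assumptions for illustration, not a required stack.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

with open("model.pkl", "rb") as f:  # hypothetical pre-trained model
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [5.1, 3.5, 1.4, 0.2]}
    features = request.get_json()["features"]
    prediction = model.predict([features]).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # listen on all interfaces for containers
```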

Today, data scientists do not need to know any secrets to build models; anyone can refer to articles and GitHub to create ML models. The core skills that remain are how to design a model, how to frame a problem statement, how to sell it through better storytelling, and how to deploy it and measure its effects. Organizations already consider these skills while hiring, and in the coming years, the demand for them will grow dramatically.

AD: How will AutoML impact data science jobs?

Arihant Jain: AutoML is another technology that has been hyped over the last couple of years, but it surely will not eat up data science jobs. I am optimistic about the role of AutoML solutions in automating redundant tasks in data science workflows; identifying and defining a problem, however, is something only professionals can do. Automating trivial tasks is a win-win for data scientists and AutoML providers, as professionals can focus on creative work, thereby bringing value to their organizations. Data scientists still spend a lot of time on cleaning and other redundant tasks, and eliminating such drudgery with AutoML is what everyone wants.
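To make "automating redundant tasks" concrete, here is a minimal stand-in using scikit-learn's GridSearchCV, which automates hyperparameter search, one of the chores AutoML products take off a data scientist's plate. Dedicated AutoML suites automate far more (feature engineering, model selection, ensembling), so treat this only as a sketch of the idea.

```python
# Hyperparameter search automated away: a tiny taste of what AutoML does.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, 5, None]},
    cv=5,
)
search.fit(X, y)  # tries every combination with 5-fold cross-validation
print(search.best_params_, round(search.best_score_, 3))
```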

AD: Is there a talent supply and demand gap in the industry?

Arihant Jain: In a way, there is no supply and demand gap, because we see many new data scientists entering the field every month. But there is a difference between a data scientist who picks up a nine-month certification and one with real skills, and that is where the gap exists. Data science would not have become as popular as it is if it were only about importing a library and running code; rather, it is about the impact data science can bring when implemented with due diligence. Organizations need professionals who can help generate revenue, not just write algorithms that do not deliver value.

Although institutes keep producing data science practitioners trained on Python and other tools, when it comes to the value they can bring, the industry is struggling to find the right talent. Unfortunately, I think the talent gap will persist until aspirants start thinking independently, because that is what data scientists do: learn the necessary skills on top of a strong foundation. Too often, aspirants get lost in the ocean of information and learn several techniques instead of honing the basics.

AD: Many aspirants pursue data science because of the hype. What would you advise them?

Arihant Jain: If learners want to identify whether data science is for them or whether they are just here because of the hype, they should work on various projects for three months; if they feel like learning more, then data science is for them. One of the best ways to assess oneself is by participating in Kaggle competitions that run for three to six months. After a contest ends, evaluate your leaderboard position and reflect on whether you loved the process. If the answer to the latter is no, you will eventually get frustrated within a few months. Do not enter the field because of the high paycheck; it requires dedication, commitment, and hard work to solve problems and create impact.


Facebook Acquires Kustomer, A CRM Provider


Facebook has announced that it will acquire Kustomer, a CRM solution provider, for an undisclosed sum. According to various sources, however, the deal is valued at a little over $1 billion. Kustomer is an omnichannel CRM tool that brings interactions from different sources onto one platform, enabling organizations to interact with customers effortlessly.

Once the transaction closes, Brad Birnbaum and Jeremy Suriel, co-founders of Kustomer, and the rest of the team will join Facebook. Founded in 2015, the New York-based Kustomer has quickly gained traction among organizations looking to provide a superior customer experience through chatbots, in-house forums, and social platforms.

Adoption of chatbots to automate customer service and support has increased rapidly during the pandemic. Organizations can process such data to target users with ads, improve services, and more. With the acquisition of Kustomer, Facebook will try to streamline customer service management by integrating CRM with Facebook products. While data from Kustomer will not automatically be used for Facebook Ads, organizations will be able to use it for their own marketing campaigns.

Source: Kustomer

Also Read: Google Verse By Verse Will Help You Become A Poet Using AI

Over 175 million people use WhatsApp to interact with organizations for queries and services. Facebook sees this as a massive opportunity to further enhance its offerings and empower organizations to harness the power of unstructured data from web chat, email, and messaging.

“Facebook plans to support Kustomer’s operations by providing the resources it needs to scale its business, improve and innovate its product offering, and delight its customers,” wrote Dan Levy, VP of Ads and Business Products, and Matt Idema, COO of WhatsApp.


Simplilearn’s SkillUp Platform Is Offering Free Data Science Courses


Simplilearn, an edtech platform that offers a wide range of tech courses, has launched the SkillUp platform to teach over 300 in-demand skills. Delivered by experts from the technology space, the platform offers on-demand video lessons not only on data science and machine learning but also on DevOps, IT service & architecture, and more. With over 1,000 hours of video lessons, it makes a good starting point for beginners.

The idea behind Simplilearn’s SkillUp initiative is to let aspirants get started with technologies of their choice. If you are interested in exploring new fields of technology, you can also enroll in courses on other in-demand skills, such as cybersecurity and software development, on the platform for free.

In addition, to help beginners make effective career decisions, Simplilearn offers free guides on career paths, interview tips, and salaries on the SkillUp platform. The courses and guides are available on both web and mobile, eliminating the form-factor barrier for learners.

Also Read: Google Verse By Verse Will Help You Become A Poet Using AI

“We have launched this initiative to benefit millions of learners across the globe who may find it difficult to afford or access quality learning programs. It is our humble effort to democratize online skilling and help our learners to boost their careers and stay ahead of the curve. We will continue to add more topics and content to SkillUp over time, making it the premier destination for free digital-skills education,” said Krishna Kumar, founder and CEO of Simplilearn.

Although the courses are not meant for advanced learners, they can help enthusiasts get started or explore new technologies. Some of the key courses beginners can enroll in are Data Science with Python, Basics of Data Science with R, Introduction to Data Analytics, Business Analytics with Excel, Basics of MongoDB, and Big Data Hadoop and Spark Developer.

Explore all the courses on Simplilearn’s SkillUp platform here.


Artificial Intelligence Hub–ARTPARK–In IISc, Bengaluru, Raises ₹170 Cr Seed Fund


The Indian Institute of Science (IISc) has set up an Artificial Intelligence and Robotics Technology Park (ARTPARK) in Bengaluru to innovate and solve problems specific to the Indian ecosystem. The non-profit ARTPARK is a public-private partnership in which the Department of Science and Technology (DST) invested ₹170 crore as seed funding under the National Mission on Interdisciplinary Cyber-Physical Systems (NM-ICPS). The idea behind NM-ICPS is to establish 25 Technology Innovation Hubs (TIHs) at top academic institutions across India, with a mission not only to develop new technologies but also to integrate them into real-world applications in sectors like agriculture, retail, automobiles, and more.

“While Silicon Valley might be innovating for the first billion, in India, we have the data and the talent to take on the problems of the developing world–the so-called six billion people,” said Umakant Soni, ARTPARK Co-Founder.

Being a public-private partnership, ARTPARK will also receive ₹60 crore over a span of five years from the state government of Karnataka. Such centre-state collaborations are set to gain further steam as the country prepares to introduce the Science, Technology and Innovation Policy, 2020.

Also Read: Google Launches Professional ML Engineer Certification

“Indian academia has been carrying out cutting-edge technology research in various domains. However, we have had systemic issues in moving the results of this research from university labs into the outside world. ARTPARK would go a long way in establishing a template for addressing this need,” said Govindan Rangarajan, director of IISc.

ARTPARK will bring together researchers, entrepreneurs, and global industry partners, injecting diverse perspectives into innovation aimed at mitigating some of the toughest challenges India faces.

“Google was born out of Stanford University and it is a trillion-dollar company now. We need to build such companies out of India. We work with industry, academia and entrepreneurs from across the world to make this happen, and I invite them to come and collaborate with us,” said Subhashis Banerjee, Chief Investment Officer, ARTPARK.

“We have also started a venture studio to create such deep tech companies and are also raising a $100M international venture fund along with the support of DST to support these and other AI and robotics companies.”


Google Verse By Verse Will Help You Become A Poet Using AI


Google Verse by Verse will assist you in composing poems by predicting verses based on your inputs. In an attempt to integrate creativity with artificial intelligence, Google researchers released Verse by Verse as an experimental product that applies machine learning to highly creative tasks. Writing poems requires creativity to make them thought-provoking as well as impactful, and to help your creative juices flow, Verse by Verse suggests a pool of possible verses.

The researchers leverage two machine learning models to deliver relevant verses: a generative model trained on classical poetry, and a second model trained to understand which verses best follow a given line.

With Verse by Verse, you can compose poems in the style of classical poets like William Cullen Bryant, Emily Dickinson, Sidney Lanier, and Henry Wadsworth Longfellow. Select any three poets from the list on the website to get predictions in their styles; they act as your muses. You will then be asked to choose the poetic form, syllable count, and rhyme (if you choose quatrain as the poetic form). You are then ready to write the first line, based on which the machine learning model provides predictions.

Also Read: Google Launches Professional ML Engineer Certification

You can either use the verses Google Verse by Verse predicts or take inspiration from the generated suggestions to write your own. Below is an example of a poem composed after providing the input “Poetry flowing through my thoughts.”

“… to be able to suggest relevant verses, the system was trained to have a general semantic understanding of what lines of verse would best follow a previous line of verse. So even if you write on topics not commonly seen in classic poetry, the system will try its best to make suggestions that are relevant,” wrote Dave Uthus, software engineer of Google AI.


Google Launches Professional Machine Learning Engineer Certification


Google has introduced the Professional Machine Learning Engineer certification to allow data scientists to differentiate themselves from the rest. Professionals can get certified by taking the two-hour multiple-choice and multiple-select exam, either online or onsite-proctored. For years, organizations have struggled to hire data scientists who can work end-to-end on data-driven projects and generate business value. According to the IT Skills and Salary 2019 report, 52% of IT decision-makers are in need of professionals who can meet their organizational goals and close the skills gap.

Due to the current hype around artificial intelligence, many professionals pursue data science through various online courses. And since these courses are designed to be completed within a few months, learners all acquire similar knowledge. This, however, solves only part of the talent-gap problem in an ever-changing field. Data science is not limited to the few libraries and predictive-modeling techniques taught in online courses; it is about framing ML problems with critical thinking, creating effective ML pipelines, optimizing solutions, managing infrastructure, monitoring models’ performance, and more.

Organizations do not need data scientists who can only write certain ML algorithms; today, recruiters look for a wide range of skills in applicants, skills that can help companies develop AI-based products and generate revenue.

Also Read: Top Python Image Processing Libraries

Consequently, there was a need for a way to differentiate the best from the rest. The Google Professional Machine Learning Engineer certification mitigates the challenge by evaluating professionals on a wide range of skills, such as statistics, ML models, preparing data processes, GCP, framing ML problems, and more.

Although there are no prerequisites for the exam, Google recommends 3+ years of industry experience, including 1+ years of designing and managing solutions using GCP. To help you prepare, Google is also conducting a webinar and has created a learning path of courses to ensure you are well versed before attempting the exam, which costs $200.


CAST AI Launches Multi-Cloud Platform To Connect AWS, Azure, & GCP


CAST AI, a multi-cloud company that helps developers deploy, manage, and cost-optimize applications in multiple clouds simultaneously, today announced its multi-cloud platform launch and closing of its $7.7M Series Seed round. 

“We built CAST AI with a simple idea in mind – developers need tools to take advantage of everything that AWS, Azure, and other public cloud vendors have to offer. Until now, developers had to pick their preferred cloud partner. With CAST AI, our clients can deploy their infrastructure across all public cloud providers simultaneously,” said Yuri Frayman, Co-founder and CEO of CAST AI.

CAST AI enables businesses to deploy an application in a unified infrastructure that spans multiple public clouds. “We connected the clouds via a secure network mesh, allowing any cloud services of one cloud provider to be available on any other cloud. With autoscaling, we adjust compute resources in real-time, and, with the power of Kubernetes, we add workload replicas when the application needs it,” said Laurent Gil, Co-founder and CPO of CAST AI. “This is our breakthrough, one application that spans many clouds on an infrastructure that is constantly resized to fit the application needs,” continued Laurent Gil.

The Covid-19 pandemic has massively accelerated cloud adoption. But what do cloud users find once they start scaling? Ballooning bills, growing complexity, and vendor lock-in. This means thousands of wasted DevOps hours and skyrocketing company budgets. 

To combat these problems, many companies are now diversifying their infrastructure to multi-cloud. According to Gartner, more than 75% of mid-sized and large businesses will adopt a multi-cloud or hybrid cloud approach by 2021. Using multiple cloud services solves some problems, like vendor lock-in and cost optimization. But up to now, migrating to multi-cloud and managing such complicated infrastructure has been challenging and expensive.

“We went through all these struggles ourselves while trying to solve problems like stopping cloud bills from growing and saving the time of our teams. But it was only getting more complex and challenging. That was enough!” said Leon Kuperman, Co-founder and CTO of CAST AI.

“We decided to solve this problem and finally let developers easily move their cloud infrastructures across all major providers and take advantage of everything that AWS, Azure, and Google Cloud have to offer. No more vendor lock-in, no more cloud waste. Giving the power back to developers is our mission, and we’re proud to share our platform with the world,” continued Leon Kuperman.

Also Read: How Scanta Is Fortifying ML-Based Chatbots From Cyberattacks

CAST AI recently raised $7.7 million from VC and angel investors for its Seed round, including a contribution from TA Ventures. “I believe that multi-cloud is the next big thing in the tech world. That’s why we invested in CAST AI. We support their mission of democratizing the cloud market and helping all developers to avoid vendor lock-in,” said Oleg Malenkov, Partner at TA Ventures.

In its beta version, CAST AI attracted over 30 business clients. Launching out of stealth, the firm intends to use its Seed funding to expand sales efforts and continue investing in product development.

“As enterprises embrace digital transformation and move applications to the cloud, the centralized management of multi-cloud environments becomes critical,” said Alan Dumas, CEO of Red River, a partner of CAST AI. “CAST AI has the potential to be a game-changer for our customers by giving negotiating power back to cloud users, helping them avoid vendor lock-ins, reduce costs, and benefit from provider diversity.”

“We launched CAST AI because we believe that managing a multi-cloud setup should be as easy as buying anything online. With the backing of our investors and early clients, we are here to make the cloud a better place,” said Yuri Frayman.


Top Python Image Processing Libraries


Image processing in Python is comparatively easier than in other programming languages because of the numerous libraries available. Python image processing libraries simplify the work: anyone can import one and, with a few lines of code, quickly mould images to their requirements. Today, a colossal amount of image data is generated due to the rapid spread of smartphones and CCTV cameras. This abundance of image data has led many companies to build data-driven products that streamline business processes. Consequently, being proficient with image processing libraries can differentiate you in the market.

Here are the top 9 image processing libraries in Python:

1. OpenCV

OpenCV is arguably the best image processing library in the world due to its wide range of use cases in computer vision. Written in C and C++, OpenCV delivers the necessary speed for real-time computer vision. Originally developed by Intel and later supported by Willow Garage and Itseez, the library has been helping machine learning practitioners since 2000.

On GitHub, the library has over 49k stars and 40.5k forks, indicating its popularity among developers. Such demand has also led the project’s maintainers to devise a course to help learners master the library. Although the library is free and comes under the Apache 2 license, the course comes at a reasonable cost. However, you can also learn from several tutorials and the documentation to quickly get started with the library.
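A few lines are enough to get a feel for the library. The sketch below, with a hypothetical input file name, loads an image, converts it to grayscale, and runs Canny edge detection.

```python
import cv2

img = cv2.imread("photo.jpg")                 # hypothetical input file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # OpenCV loads images as BGR
edges = cv2.Canny(gray, threshold1=100, threshold2=200)  # edge detection
cv2.imwrite("edges.jpg", edges)
```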

GitHub | OpenCV

2. Scikit-image

Scikit-image is another widely used Python library that fits almost every image processing workflow. It is a collection of algorithms for tasks like feature detection, color space manipulation, segmentation, transformations, and more. Created in 2009, scikit-image has gained traction in the developer community for simplifying image processing workflows. On GitHub, it has more than 1.7k forks and 4k stars.

Written in Cython, the library is Pythonic and therefore easy to leverage for processing images without clutter. For any machine learning enthusiast, scikit-image is a must-know library.

Since scikit-image represents images as NumPy arrays, you should be familiar with the NumPy library. If you are exceptional with NumPy, you can implement several image processing tasks without any other library. But since you do not want to write custom code for complex image processing, scikit-image lets you write your code quickly.
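Because images are plain NumPy arrays, typical scikit-image code stays short. This sketch uses one of the library's bundled sample images, so it runs with no external files.

```python
from skimage import color, data, filters

img = data.astronaut()                        # bundled sample image (NumPy array)
gray = color.rgb2gray(img)                    # color space manipulation
edges = filters.sobel(gray)                   # feature/edge detection
binary = gray > filters.threshold_otsu(gray)  # quick Otsu segmentation
```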

GitHub | Documentation

3. Pillow

Built on top of the Python Imaging Library (PIL), Pillow is among the top three libraries for image processing. Commonly used within organizations, it is especially handy for batch processing. Another advantage of Pillow is its support for a wide range of file formats, making it a one-stop shop for all your image processing needs. Created in 2009, Pillow has gained over 1.5k forks and 7.9k stars on GitHub.

Written in C and Python, its essential functions can be learned effortlessly within a few days. The Pillow documentation includes tutorials to help learners get a hold of the library.
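Batch processing, the use case mentioned above, is where Pillow shines. The sketch below resizes every JPEG in a folder into thumbnails; the photos/ and thumbs/ paths are hypothetical.

```python
from pathlib import Path

from PIL import Image

src, dst = Path("photos"), Path("thumbs")  # hypothetical folders
dst.mkdir(exist_ok=True)

for path in src.glob("*.jpg"):
    with Image.open(path) as im:
        im.thumbnail((256, 256))           # resize in place, keeping aspect ratio
        im.save(dst / path.name, "JPEG")
```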

GitHub | Documentation

Also Read: Top Data Science Podcasts You Should Follow

4. Mahotas

Mahotas is an open-source library for computer vision in Python that handles all the common image data types. Similar to scikit-image, Mahotas also represents images as NumPy array structures. Because it is implemented in C++, you can expect speed from Mahotas, which offers over 100 functions for image processing and computer vision.

Another advantage of the library is that code written against it retains its functionality over many iterations; in other words, old code remains relevant for many years, although some interfaces do get deprecated over time. To understand the implementation of Mahotas, read the preprint paper here.
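Since Mahotas operates on NumPy arrays, a synthetic image is enough to try it out. Here is a minimal sketch exercising three of its functions: Otsu thresholding, Gaussian filtering, and Haralick texture features.

```python
import mahotas as mh
import numpy as np

# Synthetic 8-bit image so the example needs no input files.
img = (np.random.rand(128, 128) * 255).astype(np.uint8)

t = mh.thresholding.otsu(img)                     # Otsu threshold value
binary = img > t                                  # segmentation mask
smooth = mh.gaussian_filter(img, 2.0)             # C++-backed Gaussian blur
texture = mh.features.haralick(img).mean(axis=0)  # texture descriptors
```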

GitHub | Documentation

5. SciPy

Although popular for scientific computation, SciPy can also be used for image processing through its scipy.ndimage submodule. Similar to scikit-image, SciPy works in tandem with NumPy to process images effortlessly. Thanks to the speed it offers, you can build several moderate-level workflows like feature extraction, face detection, image sharpening, denoising, geometric transformations, and more. However, SciPy will not give you the flexibility to develop complex projects that require intensive workflows.
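Here is a short scipy.ndimage sketch covering some of the workflows named above: denoising, sharpening via unsharp masking, and a geometric transform, with a random array standing in for a real grayscale image.

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(256, 256)  # stand-in for a real grayscale image

blurred = ndimage.gaussian_filter(img, sigma=3)         # denoising
sharpened = img + 0.5 * (img - blurred)                 # unsharp masking
rotated = ndimage.rotate(img, angle=45, reshape=False)  # geometric transform
```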

GitHub | Documentation

6. Matplotlib

Matplotlib, besides visualization, can be used for manipulating images. The library uses Pillow to load image data and can handle float32 and uint8 arrays, though reading and writing formats other than PNG is limited to uint8 data. While working with Matplotlib, you can use plt.imshow() to display the NumPy array representation of an image. Matplotlib allows you to apply pseudocolor, display a color scale reference, perform interpolation, and more. If you want to do basic image processing, Matplotlib can come in handy while getting started with image analysis.
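In practice that looks like the sketch below: load an image (the file name is hypothetical), display one channel with a pseudocolor map, and add a color scale reference.

```python
import matplotlib.pyplot as plt

img = plt.imread("photo.png")             # PNGs load as float32 arrays
plt.imshow(img[:, :, 0], cmap="viridis")  # pseudocolor on a single channel
plt.colorbar()                            # color scale reference
plt.show()
```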

GitHub | Documentation

7. SimpleITK

SimpleITK is a simplified interface to the Insight Segmentation and Registration Toolkit (ITK), a widely used image processing library. ITK is powerful but very large and complex; however, you can use the detailed guide to understand the library's most important features for image processing and segmentation. Built to handle advanced projects, the library keeps evolving with the help of contributors on GitHub, where it has 756 stars and 441 forks.
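Here is a minimal SimpleITK pipeline with a hypothetical input file: smooth an image, then segment it with Otsu thresholding, a common first step in ITK workflows.

```python
import SimpleITK as sitk

# Read and cast to float so the Gaussian filter accepts the pixel type.
image = sitk.Cast(sitk.ReadImage("scan.png"), sitk.sitkFloat32)  # hypothetical file
smoothed = sitk.SmoothingRecursiveGaussian(image, sigma=2.0)
mask = sitk.OtsuThreshold(smoothed, 0, 1)  # pixels above threshold labeled 1
sitk.WriteImage(mask, "mask.png")
```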

GitHub | Documentation

8. SimpleCV

SimpleCV is a very easy-to-use computer vision and image processing library, though it is not suited to intensive projects. If you are new, you can leverage SimpleCV for computer vision tasks, but you will eventually have to move to OpenCV. Although it has 2.4k stars and 769 forks on GitHub, the open-source project is no longer under active development. Nevertheless, as a beginner you can build projects with it in just a few lines of code.
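For flavor, here is SimpleCV's famously terse style. Since the project targets Python 2 and is unmaintained, treat this as a historical sketch rather than code to run today; the bundled sample image name is an assumption from its classic API.

```python
# SimpleCV classic API: Python 2 era, no longer maintained.
from SimpleCV import Image

img = Image("lenna")   # one of SimpleCV's bundled sample images
edges = img.edges()    # Canny-style edge detection in a single call
edges.show()
```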

GitHub | Documentation

9. Pgmagick

Officially supported only on macOS and Linux, Pgmagick is another image processing library that is popular among enthusiasts. Windows users, however, can rely on unofficial binary packages to play with images. Pgmagick is a very simple library that works with over 88 image formats and handles basic manipulations like resizing, sharpening, blur filtering, rotation, and more.
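A small Pgmagick sketch of the basic manipulations mentioned above, with hypothetical file names; the API closely mirrors GraphicsMagick's Magick++ interface, so treat the exact method names as assumptions.

```python
from pgmagick import Image

img = Image("input.jpg")  # hypothetical input file
img.scale("800x600")      # resize using a geometry string
img.sharpen(2)            # sharpening
img.rotate(90)            # rotation
img.write("output.jpg")
```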

GitHub | Documentation

Feature Image Credit: Wallup


Top AI News Of The Week [November 15, 2020]


This week, we witnessed a few collaborations among top AI organizations and universities to simplify the adoption of AI. While PyTorch, OpenMined, and the University of Oxford partnered to offer free courses on privacy-preserving AI, IBM and AMD came together to build frameworks and AI solutions for the hybrid cloud. The week also brought several other announcements, including 3D image datasets from Google and the self-driving firm Nuro raising money in a Series C round.

Here are the Top AI News of The Week (November 15, 2020):

PyTorch Provided $600,000 To OpenMined For 4 Free Courses

OpenMined, along with PyTorch, Facebook AI, the University of Oxford, and others, will devise four free courses. The idea behind the initiative is to raise awareness and educate people about privacy-preserving AI technology. Starting from January 2, 2021, the courses will be open for anyone to enroll in for free.

All the courses will be a part of The Private AI Series, which will be based on PyTorch. “New paradigms and skills are spread most effectively through education, so we’re building an entirely new learning platform starting with a series of courses on privacy-preserving machine learning,” notes OpenMined.

IBM And AMD Joined Forces For AI and Confidential Computing

As per the joint press release, IBM and AMD will develop open-source software, open standards, and open system architectures to drive confidential computing in hybrid cloud environments. The idea is to enable organizations to deploy critical applications that require high-performance computing on hybrid infrastructure.

“The commitment of AMD to technological innovation aligns with our mission to develop and accelerate the adoption of the hybrid cloud to help connect, secure and power our digital world,” said Dario Gil, Director of IBM Research. “IBM is focused on giving our clients choice, agility and security in our hybrid cloud offerings through advanced research, development and scaling of new technologies.”

Vatican City Will Use AI To Protect Its Library


The Vatican Apostolic Library in Vatican City has been digitizing its collection of more than 80,000 manuscripts, amounting to some 40 million images, and the digitized archive has made it a target for hackers. Founded by Pope Nicholas V in 1451, the library now witnesses more than 100 cyberattacks every month. Consequently, it needs a robust AI-based defense system to protect its collections from malicious actors.

“In the era of fake news, these collections play an important role in the fight against misinformation and so defending them against ‘trust attacks’ is critical,” said Manlio Miceli, chief information officer of the library, to The Guardian.

Nuro Raised $500M In Series C Round

Led by T. Rowe Price Associates, with participation from new investors Fidelity Management & Research Company and Baillie Gifford as well as existing investors, Nuro has raised $500M in a Series C round. Founded in 2016, the company has over 500 people working to make self-driving vehicles a reality. Nuro has so far developed two lightweight autonomous delivery vehicles; its second-generation vehicle, R2, was the only fully autonomous vehicle (without a safety driver) to drive on public roads in California, Texas, and Arizona.

Google Announced Objectron Dataset To Better Understand 3D Objects

Google has released the Objectron dataset, a collection of short video clips of common objects captured from different angles, to further improve computer vision capabilities. With over 15k annotated video clips and 4M annotated images collected from a geo-diverse sample, the dataset will serve as a foundation for further research. “Each video clip is accompanied by AR session metadata that includes camera poses and sparse point-clouds. The data also contain manually annotated 3D bounding boxes for each object, which describe the object’s position, orientation, and dimensions,” noted the authors.


PyTorch Joins Hands With OpenMined To Offer 4 Free Courses


OpenMined, an open-source community that builds privacy-preserving artificial intelligence workflows, has joined forces with PyTorch and others to create four free courses. PyTorch provided $600,000 to OpenMined to create and deliver the courses, which start on January 2, 2021. The new courses will run under the banner The Private AI Series and will be based on PyTorch.

“New paradigms and skills are spread most effectively through education, so we’re building an entirely new learning platform starting with a series of courses on privacy-preserving machine learning,” notes OpenMined.

The Private AI Series

The four courses (Privacy and Society, Foundations of Private Computation, Federated Learning Across Enterprises, and Federated Learning on Mobile) will total more than 146 hours, giving you a complete understanding of privacy and security while working with sensitive data.

Since the courses are being created in association with Facebook AI, the University of Oxford, PyTorch, the Future of Humanity Institute (University of Oxford), the UN Global Working Group on Big Data, and OpenMined, they will be delivered by experts, including guest lecturers from MIT, Harvard, and more.

Core features of the courses include real-world projects, technical mentorship, and more. The Private AI Series will not only offer free learning but also prepare you for the upcoming private data analysis certification from the United Nations Global Working Group (GWG) on Big Data.

Also Read: IBM Is Offering Special Free Certification On Coursera

Privacy and bias have become the most significant barriers to the proliferation of artificial intelligence. Addressing one of these issues, privacy, OpenMined is working to push the technology's development for the greater good. Federated learning, a technique for training models without sharing the underlying data, is a groundbreaking way to preserve privacy, and the upcoming four free courses focus heavily on teaching this technique to both beginners and practitioners.
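As a toy illustration of the core idea, the sketch below implements federated averaging for a linear model in plain NumPy: each simulated client computes an update on its own private data, and the server only ever sees and averages model weights, never raw data. This is an illustrative simplification, not OpenMined's actual API.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a client's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
# Four simulated clients, each holding private (X, y) data.
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
weights = np.zeros(3)

for _ in range(50):  # communication rounds
    updates = [local_update(weights, X, y) for X, y in clients]
    weights = np.mean(updates, axis=0)  # server averages weights, never sees data
```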

You can sign up for the courses today and start when they become available early next year.
