Thursday, November 28, 2024

OpenMined Releases The Second Free Course Of The Private AI Series

OpenMined Foundations of Private Computation

OpenMined, in collaboration with PyTorch, Facebook AI, and Oxford, has released the second free course of the Private AI Series. The second course — Foundations of Private Computation — focuses on teaching techniques such as federated learning, split neural networks, cryptography, homomorphic encryption, differential privacy, and more.
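To give a flavor of one of the listed techniques, here is a minimal, hypothetical sketch of differential privacy. It is not material from the course, and the function names are illustrative: a counting query has sensitivity 1, so adding Laplace noise with scale 1/epsilon to the true count yields an epsilon-differentially-private answer.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float = 1.0) -> float:
    """Differentially private count of records matching predicate.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1 / epsilon gives epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 35, 41, 29, 52, 61, 30]
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5)
print(round(noisy, 2))  # true count is 3; the output varies with the noise
```

Smaller values of epsilon add more noise, trading accuracy for stronger privacy.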

The Private AI Series includes four courses, the first of which was released earlier this year. Unlike the first course, which sets the foundation for the entire series, the second course is technical and requires learners to have knowledge of NumPy and/or PyTorch.

The first lesson of the second course was released on 17 March, and more lessons will be released over the coming weeks. The second course is 60 hours long and includes a project at the end. Although it is a free course, OpenMined has ensured that learners can get their queries resolved by mentors. You can join the course discussion board or Slack community to engage with other learners and mentors.

What makes the OpenMined course stand out from the rest in the market is its instructors, who either developed the privacy tools and algorithms themselves or are experts in the domain.

Artificial intelligence has enormous potential to revamp the way we carry out personal and professional work. But privacy concerns are impeding the adoption of artificial intelligence in highly regulated sectors like healthcare and finance.

One of the many ways to address these privacy challenges is to educate learners and develop a workforce that can build trust among users. To ensure users trust placing their data in others’ hands, the artificial intelligence industry has to process data without exposing personal information.


Waymo Claims Its Self-Driving Cars Can Avoid Most Of The Fatal Crashes

Waymo Self-Driving Car Avoids Fatal Crashes

In a first, Waymo, one of the world’s leading self-driving car companies, has demonstrated how its autonomous technology could avoid fatal crashes. While other self-driving companies sell a future of driving without accidents, none of them has a proven track record against real-world crash scenarios. Waymo, however, has published an evaluation of its technology that could bring confidence to critics of autonomous cars.

Waymo used data on crashes in Arizona from 2008 to 2017, obtained from the Chandler Police Department, the Arizona Department of Transportation (ADOT), and the Arizona Department of Public Safety. However, Waymo excluded crashes that fell beyond its current operating domain. The idea was to understand how Waymo’s technology would have performed in each real-world crash scenario.

Using the crash data, Waymo reconstructed each crash and simulated it with the Waymo Driver. In each crash, the Waymo Driver played both the initiator and the responder role: the initiator is the vehicle that caused the crash, while the responder is the vehicle that reacts to the other vehicle’s actions.

Also Read: Baidu Receives Permit To Test Self-Driving Cars In California Without Safety Drivers

In the responder role, the Waymo Driver avoided 82% of the simulated crashes, and in the initiator role, it avoided every crash. Even in responder scenarios where the vehicle can do very little, such as being hit from behind, the Waymo Driver reduced the severity of the impact, showcasing the effectiveness of Waymo’s self-driving technology.

According to the WHO, 1.3 million people die every year in road accidents. Over the years, Waymo has worked steadily to show that self-driving car technology is not a fad. However, much more of this kind of assurance is required to build trust with the general public. Accordingly, Waymo encourages other autonomous vehicle providers to make their safety reports publicly available as well.


Facebook AI’s SEER, A 1 Billion-Parameter Self-Supervised Computer Vision Model

Facebook AI SEER
Image Credit: Facebook AI

Facebook AI has announced SEER (SElf-supERvised), a billion-parameter self-supervised model that can deliver superior results without labeled image data. Facebook’s head of AI and its chief AI scientist have been vocal about the potential of self-supervised learning as the way forward for the artificial intelligence industry. As a result, Facebook has been actively working to further the development of self-supervised learning techniques.

Over the years, self-supervised learning has been the driving force behind several natural language processing tasks like machine translation, natural language inference, and question answering. With the release of Facebook AI’s SEER, self-supervised learning is now making inroads into computer vision.

Although there is no shortage of images in the world, the research community has struggled to curate labeled image data. Manually labeling images takes a lot of effort and increases the cost of research, development, and applications that rely heavily on labeled datasets to deliver state-of-the-art results.

Also Read: Bring Your Picture To Life With Deep Nostalgia

Facebook AI built this billion-parameter model using the open-source VISSL library, a PyTorch-based library that lets developers apply self-supervised learning to image datasets. According to the researchers, however, scaling SEER is not like scaling large language models, which grow easily to billions or trillions of parameters. In natural language processing, sentences can be broken into words, but in computer vision, segmenting pixels into comparable units is not straightforward. Other challenges include variations of the same image captured from different angles.

To learn from new images, it is important to have large convolutional networks, along with algorithms that can learn from images without metadata. Facebook AI’s researchers used the in-house SwAV algorithm to cluster images based on similarity. And to ensure a model architecture that could handle the workload, the researchers leveraged RegNets, which have the potential to scale even to trillions of parameters.
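SwAV itself learns cluster "prototypes" jointly with the network, which is beyond a short snippet. As a rough, hypothetical illustration of the underlying idea (grouping unlabeled feature vectors by similarity, with no labels involved), here is a tiny k-means in pure Python; it is not Facebook’s implementation.

```python
def kmeans(points, k, iters=20):
    """Toy k-means: cluster unlabeled feature vectors by similarity.
    Returns (centroids, assignments)."""
    centroids = [list(p) for p in points[:k]]  # deterministic init
    assignments = [0] * len(points)
    for _ in range(iters):
        # assign each point to its nearest centroid (squared distance)
        for i, p in enumerate(points):
            assignments[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # move each centroid to the mean of its assigned points
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assignments[i] == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return centroids, assignments

# two obvious groups of 2-D "image embeddings"
feats = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
_, labels = kmeans(feats, k=2)
print(labels)  # → [0, 0, 1, 1]
```

In SwAV the "centroids" are learnable prototypes, and enforcing consistent cluster assignments across augmented views of the same image is what provides the training signal.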

Read the full research paper here.


Bring Your Picture To Life With Deep Nostalgia

Deep Nostalgia
Image: Deep Nostalgia

MyHeritage, a website that analyzes your pictures to find your relatives and create family trees, has released Deep Nostalgia, which animates the faces of people in photos. If you want to see your ancestors in a realistic video, Deep Nostalgia can convert a still image you provide into a deceptively lifelike video.

Founded by Gil Perry, Sella Blondheim, and Eliran Kuta, the D-ID platform is the driving force behind Deep Nostalgia. Powered by deep learning, the platform turns old still photos into high-quality video footage.

“This feature is intended for nostalgic use, that is, to bring beloved ancestors back to life. Our driver videos don’t include speech in order to prevent abuse of this, such as the creation of “deep fake” videos of living people,” according to the company. However, the company earlier used its technology to create a video of Abraham Lincoln, added audio, and published it on YouTube.

Although a default animation is assigned based on the person’s image, one can also choose from multiple options for different gestures, such as blinking and smiling. For now, a few animations are free, but you will have to subscribe to get unlimited animations.

Released in February, Deep Nostalgia immediately gained traction, with millions of images uploaded on the very first day. People shared their nostalgia across the internet, and it soon became popular all over the world.

The technology could potentially be misused for cyberbullying, though there are no signs of exploitation so far. Privacy is another concern, but the company has stated that it does not share your images with any third party and claims no rights to the animated videos.

In the future, MyHeritage is committed to further enhancing the capabilities of Deep Nostalgia, including animating multiple faces in a single photo.

Animate your images here.


China Overtakes The US In AI Journal Citation – Stanford AI Index Report

Stanford AI Index Report

Stanford has published its AI Index Report, which tracks developments in the complex artificial intelligence landscape since 2017. The latest report — for 2021 — sheds light on the impact of COVID-19 on AI research, the countries leading the research race, and more. Across seven chapters, the Stanford AI Index Report also covers aspects like AI education, research and development, diversity in AI, and AI policy.

One of the most surprising revelations, for many, is that China has overtaken the US in journal citations, pointing to the advancement of its research. This comes after China surpassed the US in the number of artificial intelligence research publications in 2017, having briefly overtaken it in 2004. However, the US still has significantly more cited AI conference papers than China.

While experts and AI enthusiasts may be at a crossroads over research breakthroughs from China, everyone can be pleased about the expanding research in artificial intelligence. According to the Stanford AI Index Report, artificial intelligence journal publications grew by 34.5 percent from 2019 to 2020. AI-related publications on arXiv also grew sixfold since 2015, reaching 34,736 in 2020.

Also Read: MIT Task Force: No Self-Driving Cars For At Least 10 Years

The COVID-19 pandemic also gave machine learning enthusiasts and practitioners the opportunity to attend more virtual conferences; attendance across nine AI-related conferences almost doubled in 2020. The interest among aspirants in building real-world applications with artificial intelligence is also evident: in 2019, 65 percent of AI PhDs from North America moved into industry, up from 44.4 percent in 2010.

While there was a lot to cheer about in this report, some of the challenges the AI industry is witnessing are startling. One of the major issues impeding AI development is the lack of diversity. The Stanford AI Index Report notes that, in 2019, 45 percent of new U.S.-resident AI PhD graduates were white, while only 2.4 percent were African American and 3.2 percent Hispanic.

You can read the entire report here; it also covers issues such as the lack of proper benchmarks for measuring artificial intelligence’s real-world impact, and more.


NVIDIA Rolls Out Transfer Learning Toolkit 3.0: Build Production-Ready ML Models Without Coding

NVIDIA Transfer Learning Toolkit

The largest GPU provider has released the NVIDIA Transfer Learning Toolkit 3.0 to help professionals build production-quality computer vision and conversational AI models without coding. As the name suggests, the toolkit leverages transfer learning — a technique in which a deep learning model transfers its learned knowledge to another model to improve the newer model’s capability.
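As a rough, hypothetical sketch of the transfer learning idea — reuse a frozen, pre-trained feature extractor and train only a small new head on the target task — independent of the TLT itself (all names below are illustrative):

```python
# A toy sketch of transfer learning: a "pre-trained" backbone is frozen,
# and only a small new head is trained on the target task.

def frozen_backbone(x):
    """Stand-in for a pre-trained feature extractor whose weights are
    never updated during fine-tuning."""
    return [x, abs(x)]

def finetune_head(data, lr=0.1, epochs=200):
    """Train only the linear head w . frozen_backbone(x) with SGD;
    the backbone itself receives no gradient updates."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            feats = frozen_backbone(x)
            err = sum(wi * fi for wi, fi in zip(w, feats)) - y
            w = [wi - lr * err * fi for wi, fi in zip(w, feats)]
    return w

# target task y = 3x, expressible in the frozen feature space
data = [(i / 5, 3 * i / 5) for i in range(-5, 6)]
w = finetune_head(data)
print([round(wi, 2) for wi in w])  # head weights approach [3.0, 0.0]
```

Because only the tiny head is trained, far less data and compute are needed than when training the whole model from scratch — the same economics the TLT exploits at much larger scale.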

To develop a deep learning model, data scientists need significant computation, large-scale data collection and labeling, and expertise in statistics, mathematics, and model development. This impedes practitioners from quickly developing machine learning models and bringing them to market. Consequently, NVIDIA has rolled out the Transfer Learning Toolkit 3.0 to eliminate the need for such a wide range of expertise.

NVIDIA Transfer Learning Toolkit

Data scientists can now develop models by simply fine-tuning the pre-trained models from the NVIDIA Transfer Learning Toolkit, building models even without knowledge of artificial intelligence frameworks. According to the company, the NVIDIA Transfer Learning Toolkit (TLT) can cut engineering effort by 10x: deep learning models that usually take 80 weeks to build from the ground up can be developed in 8 weeks with NVIDIA TLT.

These pre-built models are available for free and can be accessed from NGC to develop common computer vision and conversational AI models for tasks like people detection, text recognition, image classification, license plate detection and recognition, vehicle detection and classification, facial landmark detection, heart rate estimation, and more.

All you have to do to get started is pull the NVIDIA Transfer Learning Toolkit container from NGC, which comes pre-packaged with Jupyter Notebooks, and build state-of-the-art models. Although the toolkit is available for commercial use, you should check the specific license terms of each model.

Read more here and get started with tutorials here.


NVIDIA Deep Learning Institute Introduces Free Accelerated Data Science Teaching Kit

NVIDIA accelerated data science teaching kit

The NVIDIA Deep Learning Institute, which offers free hands-on training in artificial intelligence, has released a new Data Science Teaching Kit. The idea behind the NVIDIA accelerated data science teaching kit is to help learners gain access to high-performance computing, exceptional libraries, and other machine learning techniques.

The accelerated data science teaching kit is packed with resources on fundamental and advanced topics, including data collection and processing, accelerated data science with RAPIDS, GPU-accelerated machine learning, data visualization, and data ethics and bias in datasets.

The NVIDIA accelerated data science teaching kit was devised in collaboration with two experts — Polo Chau of the Georgia Institute of Technology and Xishuang Dong of Prairie View A&M University. The teaching kit includes lecture slides, notes, quizzes, and hands-on labs. And according to NVIDIA, videos will be released for the modules in the future to allow an omnichannel learning experience.

Also Read: Infosys Cobalt Announces Applied AI Cloud Built On NVIDIA DGX A100s

Over time, the NVIDIA Deep Learning Institute has released teaching kits that come with free GPU access via AWS credits for educators and their students, self-paced courses with certificate opportunities, live instructor-led workshops, and more. With the release of the accelerated data science teaching kit, the Deep Learning Institute has now released a total of four teaching kits for educators and their learners.

Currently, teaching kits are available in categories like Accelerated Computing, Data Science, Deep Learning, and Robotics. By gaining free access to these kits, learners can build knowledge of applying machine learning techniques and complete end-to-end projects with ease. However, the kits are only available to qualified educators, and you will have to apply for access. A wide range of people, including students, professors, and researchers, can join by applying for permission through the Deep Learning Institute.

Click here to apply for the NVIDIA teaching kit and learn the latest technologies for free.


Google Introduces Model Search, An Open-Source Platform To Find The Best ML Model

Google Model Search

Data scientists struggle to find the best model for their projects, as many factors can influence the performance of machine learning models. To mitigate this challenge, Google has introduced Model Search — a framework for implementing AutoML algorithms for model architecture search at scale. If, like other machine learning practitioners, you run into questions like ‘Which neural network architecture should I use?’, ‘LSTMs or Transformers?’, or ‘Ensembling or distillation for better performance?’, this library can help you find the right answers.

The Google Model Search library allows data scientists to run AutoML algorithms on their data to find the best model with the right layers for a project. What makes Google Model Search stand out is that it considers the project’s domain when searching for the best model architecture.

Developers can get started with Model Search in only a few lines of code, and the library can evaluate hundreds of machine learning models. After trying several models, data scientists can check each model’s performance results in the root directory. Default specifications are used when evaluating models on data, but developers can create their own specifications as well. Besides, Google Model Search also lets developers test their own model components — called blocks.
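Model Search’s actual API differs (it trains real TensorFlow models), but the core loop of any architecture search can be sketched hypothetically: enumerate candidate configurations, evaluate each, and keep the best. The search space, scores, and names below are all invented for illustration.

```python
import itertools

# Hypothetical search space: each candidate "architecture" is one
# combination of these choices.
SEARCH_SPACE = {
    "block": ["lstm", "transformer", "cnn"],
    "layers": [2, 4],
    "hidden": [64, 128],
}

def evaluate(arch):
    """Stand-in for training + validation: a real searcher would fit a
    model and return its validation accuracy. Here we fake a score."""
    score = {"lstm": 0.80, "transformer": 0.85, "cnn": 0.78}[arch["block"]]
    score += 0.01 * arch["layers"] + 0.0001 * arch["hidden"]
    return score

def model_search(space):
    """Exhaustively try every candidate and return the best one."""
    keys = list(space)
    best_arch, best_score = None, float("-inf")
    for values in itertools.product(*(space[k] for k in keys)):
        arch = dict(zip(keys, values))
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best, score = model_search(SEARCH_SPACE)
print(best)  # → {'block': 'transformer', 'layers': 4, 'hidden': 128}
```

Real AutoML systems replace the exhaustive loop with smarter strategies (evolutionary search, reinforcement learning, or reusing weights across candidates), since each evaluation is an expensive training run.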

Google has also ensured that the search can run in parallel across multiple machines to reduce turnaround time. However, the current version of the framework supports classification problems only; in the future, it will also support regression problems.

According to the researchers, Google Model Search has shown exceptional results, demonstrated in a recent paper in the speech domain. “Over fewer than 200 iterations, the resulting model slightly improved upon internal state-of-the-art production models designed by experts in accuracy using ~130K fewer trainable parameters (184K compared to 315K parameters),” the researchers mention in a blog post.

Find the Google Model Search library on GitHub.


AWS To Host A Free AI/ML Edition Conference Of AWS Innovate

aws innovate ai ml

AWS will host a free AI/ML edition of AWS Innovate for both machine learning beginners and practitioners. AWS Innovate is packed with more than 50 sessions covering model deployment best practices, business use cases, hands-on guides, technical demos, and more. The event will be hosted on 24 February 2021 to help machine learning enthusiasts learn from AWS experts.

Learners will also receive a certificate of attendance if they watch five or more sessions of the AWS Innovate AI/ML edition in full.

The sessions are organized into four levels of attendee expertise across various AWS and AI/ML topics. While the basic levels cover getting started with AWS for AI and ML, the advanced levels include sessions on optimizing recommendations, choosing the right ML algorithm for different use cases, and more.

The AWS Innovate AI/ML edition also caters to startups, with separate sessions on building AI-powered applications without machine learning expertise, scaling your startup, and others.

Another interesting event at the AWS Innovate AI/ML edition is AWS DeepRacer — a beginner-friendly way of getting started with reinforcement learning by training an agent for autonomous driving. Users can participate in the AWS DeepRacer event and compete with learners from across the world.

Register for the AWS Innovate AI/ML virtual event here.


Trello Adds Visualisation To Bring Insights Into Projects

Trello Update
Image credit: Trello

As remote working becomes the new normal, Trello has released a major update to its project management tool, bringing several features like a dashboard, a timeline, a table view, and more.

The idea behind the new release is to give context to content, helping teams work effortlessly from different places. In addition to the new features, Trello’s logo and brand have received a once-in-a-decade makeover.

Of the many features, the visualization tools have drawn the most interest, offering insight into project progress, assigned cards, deadlines, and more. With this update, Trello is focused on bringing new visibility into projects, helping generate reports that can be absorbed quickly through visuals.

Built on top of AWS, Trello has over 50 million users and plans to double down on acquiring new users with functionality that streamlines organizations’ workflows. To this end, the new Trello adds tables, which, according to the company, can replace spreadsheets for project management.

To transform teams’ capabilities, Trello has also added new functionality to views, cards, and more. Calendar and due-date views are new additions, where you can manage tasks and create commands that trigger actions on cards based on events — for instance, the moment a card is due, move it to the top of the “To Do” list and join the card.

Note: Several features are only available for business class and enterprise customers.

Read more here.
