The LinkedIn Fairness Toolkit (LiFT) has been released by the professional networking giant to advance explainable AI efforts by practitioners and organizations. The release comes at a time of heated debate about the fairness of computer vision technology. Today, artificial intelligence is used in a wide range of solutions, from identity authentication to detecting defects in products and finding people with facial recognition. However, the lack of explainability in machine learning models has become a major roadblock hampering the proliferation of these technologies.
Although a wide range of libraries, services, and methodologies have been released in the last few years, such as LIME, IBM AI Fairness 360, FAT-Forensics, DeepLIFT, Skater, SHAP, Rulex, and Google Explainable AI, these solutions have failed to gain wide adoption because they either fall short on large-scale problems or are tied to a specific cloud or use case. Consequently, there was a need for a toolkit that can be leveraged across different platforms and problems.
This is where the LinkedIn Fairness Toolkit bridges the gap, enabling practitioners to deliver machine learning models to users without bias.
LinkedIn Fairness Toolkit (LiFT)
The LinkedIn Fairness Toolkit is a Scala/Spark library that can evaluate biases across the entire model development lifecycle, even in large-scale machine learning projects. According to LinkedIn's release note, the library has broad utility for organizations that wish to conduct regular analyses of the fairness of their own models and data.
Since machine learning models are now actively used in healthcare and criminal justice, an explainability toolkit must be able to surface correlations between model behaviour and sensitive attributes effectively. The LinkedIn Fairness Toolkit is a near-perfect fit for such use cases, as it can be deployed to measure biases in training data, detect statistically significant differences in a model's performance across different subgroups, and evaluate fairness in ad hoc analyses.
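One such check, measuring whether positive outcomes in the training data are distributed evenly across subgroups, can be sketched in plain Python (an illustration of the concept only; the names here are hypothetical and not LiFT's Scala API):

```python
def positive_rate(labels):
    """Fraction of examples with a positive outcome (label == 1)."""
    return sum(labels) / len(labels)

def demographic_parity_gap(labels_by_group):
    """Largest pairwise difference in positive rates across subgroups.
    A gap near 0 suggests the training labels are balanced; a large gap
    flags a potential bias worth investigating."""
    rates = [positive_rate(labels) for labels in labels_by_group.values()]
    return max(rates) - min(rates)

training_labels = {
    "group_a": [1, 0, 1, 1, 0, 1],  # positive rate 4/6
    "group_b": [1, 0, 0, 1, 0, 0],  # positive rate 2/6
}
gap = demographic_parity_gap(training_labels)  # 4/6 - 2/6 = 1/3
```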
In addition, the LinkedIn Fairness Toolkit comes with a metric-agnostic permutation testing framework that identifies statistically significant differences in model performance, as measured by any given assessment metric, across different subgroups. The methodology, described in the paper "Evaluating Fairness Using Permutation Tests," will appear at KDD 2020, the authors noted.
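LiFT itself exposes this through its Scala/Spark APIs; as a language-agnostic sketch of the underlying idea (not LiFT's actual interface), a metric-agnostic permutation test can be written in a few lines of Python:

```python
import random

def permutation_test(metric, group_a, group_b, n_permutations=2000, seed=0):
    """Estimate the p-value for the observed difference in an arbitrary
    metric between two subgroups by randomly reassigning group labels."""
    rng = random.Random(seed)
    observed = abs(metric(group_a) - metric(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(metric(pooled[:n_a]) - metric(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    # add-one smoothing keeps the estimate strictly between 0 and 1
    return (extreme + 1) / (n_permutations + 1)

mean = lambda xs: sum(xs) / len(xs)
# Similar score distributions for two subgroups: a high p-value means no
# statistically significant performance difference was detected.
p = permutation_test(mean, [0.71, 0.64, 0.69, 0.73], [0.70, 0.66, 0.72, 0.68])
```

Because the test only ever calls `metric`, any assessment metric (AUC, precision, false positive rate) can be plugged in, which is what makes such a framework metric-agnostic.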
What’s Behind LiFT?
Built to work effortlessly in large-scale machine learning workflows, the library can be used at any stage of model development, so you can carry out ad hoc analysis and still interact with the library for explainability. The library is flexible, as it can be used in production and even in Jupyter notebooks, and for scalability while measuring bias it leverages Apache Spark, enabling you to process colossal amounts of data distributed over numerous nodes.
The design of the LinkedIn Fairness Toolkit also offers multiple interfaces based on the use case, with high-level and low-level APIs for assessing fairness in models. While the high-level APIs can be used to compute all available metrics, with parameterization handled by appropriate configuration classes, the low-level APIs enable users to integrate just a few metrics into their applications, or to extend the provided classes to compute custom metrics.
LinkedIn has been using the toolkit on several of its own machine learning models for explainability. For one, its job recommendation model, when checked with LiFT, did not discriminate by gender: the model in production showed no significant difference between men and women in the probability of ranking a positive, relevant result above an irrelevant one.
By making the library open-source, the social media giant is committed to further enhancing its functionality. The AI team at LinkedIn is also working toward bringing fairness while eliminating bias for recommendation systems and ranking problems.
To learn more about how the library was optimized for different types of tests and analysis, click here.
Qlik® today announced the acquisition of the assets and IP of Knarr Analytics, an innovative start-up that provides real-time collaboration, sophisticated data exploration, and insight capture capabilities, to complement Qlik’s cloud data and analytics platform. Acquiring Knarr Analytics advances Qlik’s vision of Active Intelligence, where technology and processes trigger immediate action from real-time, up-to-date data to accelerate business value across the entire data and analytics supply chain.
“Every process and decision can be informed and enhanced by real-time data to trigger action and augment decision making when it matters most – what we call Active Intelligence,” said James Fisher, Chief Product Officer at Qlik. “Acquiring Knarr Analytics will help us further advance customers’ Active Intelligence, enabling tighter collaboration between data stewards and business users that will increase data use and value throughout the organization.”
The need for real-time data-driven decision making, which requires merging and using new data sources on-demand, has exposed a gap in data and analytics supply chains. This gap requires a new way of thinking that centers on collaboration between all data and analytics personas, from data integrators and data stewards, to BI developers and analytics consumers.
Knarr can help create unique data and insight fabric by engaging more users throughout the analytical process, surfacing greater business context for both underlying data and resulting insights. This level of collaboration and sharing is essential to the creation of continuous intelligence at the core of Active Intelligence that drives action and value from data.
Knarr IP will enhance the Qlik Sense® analytics cloud platform Insight Advisor experience, as well as the data exploration experience in the catalog. Qlik’s customers will realize increased value and benefits through:
1. Sophisticated visual exploration of underlying data models before building analytics
2. A glossary in the catalog for added business context, helping data consumers understand what specific data will best help answer their questions
3. Real-time, multi-player collaboration to generate insights interactively with a team, helping organizations remove barriers between data and analytics users
4. Ability to capture and share these insights with notes and snapshots, while automatically capturing the exploration state and context, enriching understanding and driving action
5. Increased effectiveness through machine learning of Qlik’s unique approach to Augmented Intelligence, helping drive more complex analysis and better outcomes for users of all levels
The terms of the deal are not being disclosed. Knarr Analytics co-founder and CTO Speros Kokenes, a Qlik Luminary, will be joining Qlik as a member of the Applied Research and Emerging Technology Team. As of today, Knarr Analytics products will no longer be for sale, and Qlik will support existing prospects and partners while bringing Knarr IP into Qlik’s cloud platform throughout 2021.
fast.ai, an independent research group that builds tools to make deep learning more accessible, has released a free course along with its book. The course, Practical Deep Learning for Coders, introduces machine learning and deep learning, as well as the production and deployment of models.
The Practical Deep Learning for Coders course covers the material from the book Deep Learning for Coders with fastai and PyTorch: AI Applications Without a PhD. Every video lesson covers one chapter of the book, which is also freely available if you do not want to purchase it from Amazon: it is hosted on GitHub, where you can read it as interactive Jupyter notebooks.
However, the entire book is not covered in this Practical Deep Learning for Coders course. In the future, fast.ai will release the second part of the course that will complete the book’s remaining lessons.
Unlike most free and short-term courses, this deep learning course by fast.ai covers the end-to-end data science workflow, including lessons on model deployment and data ethics. This makes it a must for aspirants who want to learn advanced techniques and stay relevant to the industry.
What you will learn:
How to optimize models to get exceptional results in computer vision, NLP, recommenders, and more
How to turn your models into web applications and deploy them
How to enhance your models' accuracy, speed, and reliability
How to implement models ethically
Other techniques such as random forests and gradient boosting, parameters and activations, and random initialization and transfer learning
Over the years, fast.ai has released some of the best deep learning courses online for aspirants to learn for free. Last week, it released a course on ethics, Practical Data Ethics.
Note: To respect the provider's generosity, please do not convert the Jupyter Notebook version of the book into a PDF and distribute it. Use it only for your own education if you cannot buy it on Amazon.
Dataiku announced that it has raised $100 million in Series D funding to enhance its platform, which empowers firms to leverage data effectively. The round was led by Stripes with Tiger Global Management, joined by existing investors Battery Ventures, CapitalG, Dawn Capital, FirstMark Capital, and ICONIQ. In December 2019, Dataiku was valued at $1.4 billion when Alphabet's investment firm CapitalG poured money into the company.
In its attempt to democratize enterprise AI, Dataiku is committed to exploring new opportunities and adding functionality to its end-to-end data science platform. A sudden change in people's behaviour has forced organizations to transform how they deliver services and products.
Dataiku wants to capitalize on this opportunity by helping companies build resilience during the difficult times caused by COVID-19 with its robust platform. The platform already enables users to collaborate on data science projects, from code to click, across model development, prediction, and deployment.
Founded in 2013, today, the company has more than 450 employees, 300 customers, and partners around the globe. Due to its feature-rich Dataiku platform, the company has quickly gained traction in the data science domain.
Currently, Dataiku's most significant competitors are Alteryx, Databricks, and others that have gained reasonable ground in the self-service analytics domain. All these platforms, including Dataiku, are working towards democratizing data by allowing users to gain insight from information without programming skills.
According to Dataiku, with this Series D funding, the company will continue to simplify its platform for organizations to have hundreds, thousands, or hundreds of thousands of machine learning models in production.
Friends App is an Indian social platform where you can post short videos. You can share and post videos not only within India but also across the world. One of its best features is that it connects you to the wider world, allowing you to stay in touch with your friends and family; with global access, you can interact with and learn more about other cultures. Another notable feature is that it introduces a trusted and reliable platform, not only for India but for the rest of the world as well.
It is currently available on the Google Play store, listed under the Social category, and has secured an impressive rating of 4.8 stars. The app is aimed at Indian users, with a focus on entertainment and movies. You can post your thoughts, feed, and blog on your account, and tag the language a video is in.
Besides that, you can select the language of the videos you prefer, and follow the pages and people you like in order to stay updated with their activities. The app not only provides a platform for short videos and posts but also keeps you up to date with world affairs. A feature named CineTV, visible on the home screen, displays short videos from across the world.
It has been downloaded by hundreds of people so far and has gained positive feedback from those who have used it. Founded by Manju Patil and Mrityunjay Patil, Friends App is based in Bangalore. The app is currently 15 MB in size, which is quite light, and is compatible with Android 5.1 and above.
We live in a world where many people struggle to achieve what they are aiming for. The app's creative approach might be the best chance for people to do what they love: it tries to bring out creativity that could be recognized somewhere around the world. You can get noticed by well-known creators and gain the opportunity to achieve something great on this platform.
As mentioned above, the app's potential is still unknown, but there is little chance it will affect society negatively. Since it is a platform open to anyone and everyone, the content is more likely to be authentic rather than copied from elsewhere. It is designed for everyone to enjoy as well as to present their talent.
With Chinese apps on the decline, people all over the world are looking for alternatives. The best part is that Friends App is not an alternative to any other app but rather one of a kind, providing a unique platform to video creators focused on movies, entertainment, and much more. It is a unique opportunity for Indians to come together, show their creativity, and grow while staying connected with the world through a platform introduced at a global level.
To summarize, it is a platform for users all over the world to present their ideas, thoughts, and creativity in a unique way. It is a trustworthy and reliable app that opens up opportunities, serving as a source of entertainment and better interaction for everyone around the world, and a distinctive choice for everybody.
Qlik® announced today that it was named a Challenger by Gartner, Inc. in the 2020 Magic Quadrant for Data Integration Tools.* This designation marks the fifth consecutive year that Qlik has been recognized in the quadrant. A complimentary copy of the full report is available for download at this link.
“Qlik’s data integration platform can help any enterprise improve their data-to-insights capabilities, which has been proven to increase overall data usage and value to the organization,” said James Fisher, Chief Product Officer at Qlik. “Our modern data integration platform automates the flow of real-time and continuous data that powers modern analytics, cloud migration, data lake management, and data integration strategies.”
Qlik’s data integration platform, when combined with the company’s analytics platform and its data literacy as a service offering, delivers the industry’s only end-to-end approach to Active Intelligence. Unlike traditional approaches, Active Intelligence realizes the potential in data pipelines by bringing together data at rest with data in motion for continuous intelligence derived from real-time, up-to-date information, and is specifically designed to take or trigger immediate actions. This approach closes the gaps from relevant to actionable data (Qlik Data Integration), actionable data to actionable insights (Qlik Analytics) and from investment to value (Data Literacy as a Service).
Indian Institute of Technology Madras has once again been adjudged as the Top Innovative Educational Institute in India for the second consecutive year by the Government of India.
The Institute has been ranked #1 in the Atal Ranking of Institutions on Innovation Achievements (ARIIA), launched last year by the Innovation Cell of the Ministry of Education (formerly the Ministry of Human Resource Development), in the 'Institutions of National Importance, Central Universities and Centrally Funded Technical Institutions' category. Around 674 institutions participated in the ARIIA rankings this year, compared with 496 last year.
Shri. M. Venkaiah Naidu, Hon’ble Vice President of India, announced the results today (18th August 2020) in a virtual event in the presence of Shri. Ramesh Pokhriyal Nishank, Hon’ble Union Minister for Education, Government of India and other officials.
Addressing the event, Shri. M. Venkaiah Naidu, Hon’ble Vice President of India, said, “We have to be self-reliant. Self-reliance requires us to innovate, to seek implementable solutions for developmental challenges. To create a favourable environment for experimentation, India’s higher education system should play the role of an enabler and force multiplier to drive Indian innovation and start-up ecosystem.”
Further, Shri. M. Venkaiah Naidu said, "Innovation must become the heartbeat for education. The quest for excellence should become the norm. I congratulate all institutes which have secured the top honours. I would like to compliment not only the Heads but also the other faculty members and youngsters who are assisting them. The other Institutes should re-double their effort as this is an annual effort…Learn from the best in the world and aim to be better than the best. I hope this ranking, named after one of India's illustrious Prime Ministers, will inspire all higher education institutes to be 'Atal' (steadfast) in their commitment to creating world-class Institutions."
ARIIA endeavours to systematically rank education institutions and universities primarily on innovation related indicators. It aims to inspire Indian institutions to reorient their mind-set and build ecosystems to encourage high quality research, innovation and entrepreneurship. More than quantity, ARIIA focuses on quality of innovations and measures the real impact created by these innovations nationally and internationally.
Congratulating all the institutions, Shri. Ramesh Pokhriyal Nishank, Hon’ble Union Minister for Education, Government of India, said, “Innovation and hence the ARIIA will be the foundation stone for the New India which will be self-reliant. The rankings aim to acknowledge the efforts of institutes which are coming up with new innovations and bridging the gap between industry and innovation.”
IIT Madras excelled in ARIIA owing to its strong entrepreneurial ecosystem that encourages students to become job generators. The IIT Madras Incubation Cell (IITMIC) in the IITM Research Park is India's leading deep-technology startup hub, with innovation and impact as its key differentiators.
Thanking the Education Ministry for according this recognition to the Institute, Prof Bhaskar Ramamurthi, Director, IIT Madras, said, “IIT Madras is known for its world-class innovation ecosystem consisting of the Research Park and several other centres, that has already produced world-class companies and disruptive technologies for India. We are proud to be recognized a second time in succession as India’s top innovative educational institution under ARIIA.”
IITMIC houses incubators such as the Healthcare Technology Innovation Centre (HTIC), the Rural Technology and Business Incubator (RTBI), and bio-tech incubators. IITMIC has incubated over 200 deep-tech startups (till June 2020) that have attracted VC/angel investor funding to the tune of US$235 million and have a combined valuation of US$948 million. They had a cumulative revenue of US$61 million in the 2019-20 financial year, creating over 4,000 jobs and generating over 100 patents. The startups are in sectors such as manufacturing, IoT, energy/renewables, healthcare/medical devices, water, edu-tech/skill development, agri-tech, robotics, AI, ML, and data analytics, among others.
Center For Innovation has augmented the ‘tech’ space in the institute wherein their projects have gone on to produce high-profile start-ups and internationally acclaimed competition teams. ‘Nirmaan’ seeks out students with ideas that have commercial potential and puts them through a one-year programme.
The GDC puts cohorts of entrepreneurs-to-be through the paces and helps them position their idea properly to meet customer needs and build a company that can deliver the product.
Other dignitaries who addressed the event include Shri. Sanjay Dhotre, Hon’ble Minister of State for Education, Government of India, Shri Amit Khare, Higher Education Secretary, Ministry of Education, Government of India, Prof. Anil Sahasrabudhe, Chairman, All India Council for Technical Education (AICTE), Dr. BVR Mohan Reddy, Chairman, ARIIA Evaluation Committee and Dr. Abhay Jere, Chief Innovation Officer, Ministry of Education, Government of India.
ARIIA 2020 has rankings for six categories of institutions, namely:
Institutions of National Importance, Central Universities and Centrally Funded Technical Institutions,
Private or Self-Financed Colleges/Institutions,
Private or Self-Financed Universities,
Government and Government Aided Universities,
Government and Government aided College/Institutes,
Higher Education Institutions Exclusively for Women.
Advances in computer vision technology can not only empower self-driving cars but also assist developers in building unbiased facial recognition solutions. However, the dearth of 3D images is impeding the development of computer vision. Today, most existing image data are 2D, but 3D data are equally essential for researchers and developers to understand the real world effectively. 3D images of varied objects can allow them to create superior computer vision solutions that could revolutionize the way we live. But gathering 3D images is not as straightforward as collecting 2D images; it is near impossible to manually collect 3D images of every object in the world. Consequently, to mitigate one of the longest-standing predicaments around 3D images, researchers from the University of Oxford proposed a novel approach for creating 3D images from raw single-view 2D images.
The researchers of the Visual Geometry Group, University of Oxford (Shangzhe Wu, Christian Rupprecht, and Andrea Vedaldi) introduced unsupervised learning of probably symmetric deformable 3D objects from images in the wild. The idea behind the technique is to create 3D images of symmetric objects from single-view 2D images without supervising the process. The method is based on an autoencoder, a machine learning technique that encodes input images and then reconstructs them with a decoder. In this work, the autoencoder factors each input image into depth, albedo, viewpoint, and illumination. Depth captures the distance of object surfaces from the viewpoint, while albedo is the reflectivity of a surface. Viewpoint and illumination are crucial for creating 3D images as well: the variation in light and shadow is obtained from the illumination as the angle of the observer changes. All four factors play an important role in image processing for obtaining 3D images; they are further processed and combined back together to generate the 3D output.
To create 3D images with depth, albedo, viewpoint, and illumination, the researchers leveraged the fact that numerous objects in the world have a symmetric structure. Therefore, the research was mostly limited to symmetric objects such as human faces, cat faces, and cars from single-view images.
Challenges
Motivated to understand images more naturally for 3D modelling, the researchers worked under two challenging conditions. Firstly, they ensured that no 2D or 3D ground-truth information, such as key points, segmentation, depth maps, or prior knowledge of a 3D model, was utilized while training the model. Working without such details is essential, as it eliminates the bottleneck of gathering image annotations for computer vision solutions. Secondly, the researchers had to create 3D images using only single-view images, eliminating the need to collect multiple views of the same object. Addressing these two constraints was vital because it would make it possible to build computer vision applications with far fewer annotated images.
In this work, the internal decomposition of images into albedo, depth, viewpoint, and illumination was carried out without supervision. Following this, the researchers used symmetry as a geometric cue to construct 3D images from the four generated factors of the input images. Objects are rarely completely symmetric, in either shape or appearance, so the researchers compensated for the symmetry assumption later in their work. For instance, one cannot always take the left half of a face and mirror it to construct the other half, as hairstyle or expression may vary. Similarly, albedo can be non-symmetric, since the texture on the two sides of a cat's face may differ. To make matters worse, even if both shape and albedo are symmetric, the appearance may still differ due to illumination. To mitigate these challenges, the researchers devised two approaches: they explicitly modelled illumination to recover the shape of the input objects, and they augmented the model to reason about the absence of symmetry in objects while generating 3D images.
By combining the two methods, the researchers built an end-to-end learning formulation that delivered exceptional results compared to previous research that used unsupervised approaches for reconstructing 3D images. The researchers were even able to outperform a recent state-of-the-art method that leveraged supervision for creating 3D images from 2D images.
Techniques Of Creating 3D Images from 2D Images
The four factors decomposed from the input image (albedo, depth, viewpoint, and illumination) are supposed to regenerate the image when combined. But due to the asymmetric nature of objects, even human and cat faces, combining the albedo, depth, viewpoint, and illumination might not replicate the input image in 3D. To compensate for the earlier assumption that objects are symmetric, even though they are not perfectly so, the researchers devised two measures. According to the researchers, asymmetries arise from shape deformation, asymmetric albedo, and asymmetric illumination. Therefore, they explicitly modelled asymmetric illumination. They also had the model estimate a confidence score for each pixel of the input image, expressing the probability that the pixel has a symmetric counterpart in the image.
The technique can be explained broadly in two different steps as follows: –
Photo-geometric autoencoder for image formation
How symmetries are modelled
Photo-geometric autoencoder: After decomposing the input image into albedo, depth, viewpoint, and illumination, reconstruction takes place in two steps: lighting and reprojection. The lighting function creates a version of the object based on the depth map, light direction, and albedo, as seen from a canonical viewpoint, while the reprojection function simulates the effect of a viewpoint change and generates the image given the canonical depth and the shaded canonical image.
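As a rough sketch of the lighting step, assuming simple Lambertian shading for a single surface point (the names and constants here are illustrative, not the authors' code), the shaded intensity could be computed as:

```python
import math

def shade(albedo, normal, light_dir, ambient=0.3, diffuse=0.7):
    """Lambertian shading: intensity is the albedo modulated by ambient
    light plus diffuse light proportional to the cosine of the angle
    between the surface normal and the light direction."""
    def unit(v):
        norm = math.sqrt(sum(x * x for x in v))
        return [x / norm for x in v]
    n, l = unit(normal), unit(light_dir)
    cos_theta = max(0.0, sum(a * b for a, b in zip(n, l)))
    return albedo * (ambient + diffuse * cos_theta)

# A surface facing the light is brighter than one tilted away from it.
facing = shade(0.8, normal=[0, 0, 1], light_dir=[0, 0, 1])   # 0.8
tilted = shade(0.8, normal=[1, 0, 1], light_dir=[0, 0, 1])
```

In the paper this shading is applied over the whole depth map, with the light direction itself predicted by the network; the sketch above only illustrates the per-point arithmetic.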
Probably symmetric objects: Creating 3D images from 2D images requires determining which object points in the image are symmetric. By combining depth and albedo, the researchers obtained a canonical frame. The process was also carried out with the depth and albedo flipped along a horizontal axis, yielding a second image reconstructed from the flipped depth and albedo. The reconstruction losses of both images were then used to reason about symmetry probabilistically while generating the 3D images. Imperfections in the created images caused blurry reconstructions, which was addressed by adding a perceptual loss, a function that compares discrepancies between images.
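A minimal sketch of this flip-and-reconstruct idea with a per-pixel confidence map might look like the following (illustrative Python only; the authors' implementation operates on learned depth and albedo maps, not raw pixels):

```python
def horizontal_flip(image):
    """image: 2D list of pixel values; mirror it about the vertical centerline."""
    return [list(reversed(row)) for row in image]

def weighted_l1_loss(image, reconstruction, confidence):
    """L1 reconstruction loss weighted by the per-pixel confidence that a
    symmetric counterpart exists (1.0 = symmetric, near 0.0 = ignore)."""
    total, weight = 0.0, 0.0
    for img_row, rec_row, conf_row in zip(image, reconstruction, confidence):
        for p, q, c in zip(img_row, rec_row, conf_row):
            total += c * abs(p - q)
            weight += c
    return total / max(weight, 1e-8)

# A nearly symmetric "face" whose bottom row breaks symmetry; the
# confidence map down-weights exactly those pixels.
face = [[0.2, 0.5, 0.2],
        [0.9, 0.5, 0.1]]
conf = [[1.0, 1.0, 1.0],
        [0.1, 1.0, 0.1]]
loss_weighted = weighted_l1_loss(face, horizontal_flip(face), conf)
loss_uniform = weighted_l1_loss(face, horizontal_flip(face), [[1.0] * 3] * 2)
# loss_weighted < loss_uniform: asymmetric pixels no longer dominate the loss.
```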
Benchmarking
The model created by the researchers was put to the test on three human face datasets — CelebA, 3DFAW, and BFM. While CelebA consists of images of real human faces in the wild annotated with bounding boxes, the 3DFAW contains 23k images with 66 3D keypoint annotations, which was used for assessing the resulting 3D images. The BFM (Basel Face Model) is a synthetic face model considered for evaluating the quality of 3D reconstructions.
The researchers implemented the protocol of this paper to obtain a dataset, sampling shapes, poses, textures, and illumination randomly. They also used SUN Database for background and saved ground truth depth maps to evaluate their generated 3D images.
“Since the scale of 3D reconstruction from projective cameras is inherently ambiguous, we discount it in the evaluation. Specifically, given the depth map predicted by our model in the canonical view, we wrap it to a depth map in the actual view using the predicted viewpoint and compare the latter to the ground-truth depth map using the scale-invariant depth error (SIDE),” noted the authors in the preprint paper.
In addition, the researchers reported the mean angle deviation (MAD), which estimates how well the surfaces of the generated images are captured.
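Under the simplifying assumption of flat per-pixel depth lists and precomputed unit normals (the paper computes these quantities over full depth maps), the two metrics can be sketched as:

```python
import math

def side(pred_depth, gt_depth):
    """Scale-invariant depth error: standard deviation of the per-pixel
    log-depth difference, so a global rescaling of the prediction is free."""
    deltas = [math.log(p) - math.log(g) for p, g in zip(pred_depth, gt_depth)]
    mean = sum(deltas) / len(deltas)
    return math.sqrt(sum((d - mean) ** 2 for d in deltas) / len(deltas))

def mad_degrees(pred_normals, gt_normals):
    """Mean angle deviation (in degrees) between predicted and
    ground-truth unit surface normals."""
    angles = []
    for n_pred, n_gt in zip(pred_normals, gt_normals):
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n_pred, n_gt))))
        angles.append(math.degrees(math.acos(dot)))
    return sum(angles) / len(angles)

# A prediction equal to the ground truth times a constant has SIDE == 0,
# which is exactly the scale ambiguity the authors discount.
err = side([2.0, 4.0, 8.0], [1.0, 2.0, 4.0])
# Orthogonal normals deviate by 90 degrees.
angle = mad_degrees([[0.0, 0.0, 1.0]], [[0.0, 1.0, 0.0]])
```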
These predictions are produced by individual neural networks for the depth, albedo, viewpoint, lighting, and confidence maps: encoder-decoder networks generate the depth and albedo, while viewpoint and lighting are regressed using simple encoder networks.
The above table compares the depth reconstruction quality achieved by the photo-geometric autoencoder with other baselines: a fully-supervised baseline, constant null depth, and average ground-truth depth (the BFM dataset was used to create all the models). The unsupervised model outperforms the other baselines by a wide margin; a lower MAD means the deviation between the generated and original surfaces was minimal.
To understand how significant every step of the model construction was, the researchers evaluated the individual parts of the model by making several modifications.
The following approaches were tweaked:
Assessment by avoiding the albedo flip
Evaluation by ignoring the depth flip
Predicting shading map instead of computing it from depth and light direction
Avoiding the implementation of perceptual loss
Replacing the ImageNet pretrained image encoder used in the perceptual loss with one trained through a self-supervised task
Switching off the confidence maps
The result of such modification can be seen below:
“Since our model predicts a canonical view of the objects that is symmetric about the vertical centerline of the image, we can easily visualize the symmetry plane, which is otherwise non-trivial to detect from in-the-wild images. In [the below figure], we warp the centerline of the canonical image to the predicted input viewpoint. Our method can detect symmetry planes accurately despite the presence of asymmetric texture and lighting effects,” mentioned the researchers.
Outlook
Constructing 3D images from 2D images could transform the entire computer vision landscape, opening the door to groundbreaking image-processing applications. Although the researchers devised a novel approach with their model, there are still shortcomings, especially with images of extreme facial expressions, that need to be fixed in order to democratize the approach. Nevertheless, the model is able to generate 3D images from a single-view image of an object without supervision. In future, more research will be needed to extend the model to 3D representations of objects more complex than faces and other symmetric objects.
Amazon has made its machine learning courses, offered through Amazon Machine Learning University, free for everyone to access and learn the most in-demand skills. Previously, the courses were available only to Amazon employees; they have now been made public so that anyone can learn machine learning skills such as computer vision, natural language processing, and tabular data analysis.
The tech giant started Amazon Machine Learning University in 2016 to upskill its employees. Over the years, the courses have been used by teams such as Alexa science and Amazon Go to apply the latest machine learning techniques in their work. The courses are taught by Amazon ML experts, providing the skills an individual needs to blaze a trail and deliver promising AI-based solutions.
Currently, Amazon has hosted only three courses on YouTube, but nine more will be added by the end of the year. From 2021, all classes will be available via on-demand video, along with associated coding materials.
The first three courses, covering natural language processing, computer vision, and tabular data, span a wide range of topics. As of now, the three courses combined contain more than 67 videos. Amazon will also keep updating the free courses as the technology advances.
The courses are structured in a way that they will help solve business problems in various domains. In addition, the courses incorporate the book Dive into Deep Learning authored by Amazon scientists.
“We want to make sure we’re teaching the important things up front and that we’re making good use of students’ time,” said Brent Werness, an AWS research scientist. “With the transition to working from home, it’s even harder now for class participants to set aside multiple hours of time. We want to be flexible in how people can take these classes.”
Qlik® debuted the “History of the Fortune Global 500” interactive data analytics site in partnership with Fortune Magazine, timed with the 30th-anniversary publication of the Fortune Global 500 list. As the official analytics partner of the Fortune 500, Qlik built the visual experience on data storytelling and interactive visualizations to showcase market-leading countries, the status of industry sectors, economic trends across geographies, and the historical events that shaped those changes. Qlik and Fortune delivered a similar visual experience for the Fortune 500 earlier this year as part of a multi-year partnership.
“The History of the Fortune Global 500 is a unique visual guide through the global economic shifts that have shaped the world’s largest corporations over the last three decades,” said Rick Jackson, CMO of Qlik. “Similar to the Fortune 500 site from earlier this year that leveraged Qlik’s unique analytics platform, the Fortune Global 500 site provides a rich and detailed exploration of the industry-standard chronicle of market leaders and industry sectors. The site brings to life the data behind the best performing international corporations, along with detailing the global economic trends that shaped their performance.”
The site brings users through a variety of interactive data-driven sections including:
1. From 1995 through today, users can experience a multi-chapter visual story detailing the evolving revenue rankings and performance of various countries.
2. The story shows the rising importance of China, the decline in the UK’s position, and the rise and fall of Japan’s global economic status.
3. The site provides an interactive experience that dynamically shows the evolution of global economic rankings, resulting in the current dual leadership of China and the United States.
4. There’s also a page that provides an expansive sector-by-sector interactive comparison.
5. Visitors can then explore, through an interactive map, a country-level focus on current revenues and the number of companies per country, as well as explore the data on individual countries to see company, revenue, and profit metrics for any year from 1995 to 2020.
“The companies of the Fortune Global 500 produced a record $33.3 trillion in revenue last year,” says Clifton Leaf, Editor-in-Chief of Fortune. “Understanding the size and scope of this group is a task that challenges us to explore data visualization in innovative ways – a challenge that, I’m happy to say, Qlik has embraced.”