
Qlik Named As A Challenger In Gartner’s 2020 Magic Quadrant For Data Integration Tools

Qlik Named As A Challenger

Qlik® announced today that it has been named a Challenger by Gartner, Inc. in the 2020 Magic Quadrant for Data Integration Tools.* This designation marks the fifth consecutive year that Qlik has been recognized in the quadrant. A complimentary copy of the full report is available for download at this link.

“Qlik’s data integration platform can help any enterprise improve their data-to-insights capabilities, which has been proven to increase overall data usage and value to the organization,” said James Fisher, Chief Product Officer at Qlik. “Our modern data integration platform automates the flow of real-time and continuous data that powers modern analytics, cloud migration, data lake management, and data integration strategies.”

Also Read: Amazon Makes Its Machine Learning Course Free For All

Qlik’s data integration platform, when combined with the company’s analytics platform and its data literacy as a service offering, delivers the industry’s only end-to-end approach to Active Intelligence. Unlike traditional approaches, Active Intelligence realizes the potential in data pipelines by bringing together data at rest with data in motion for continuous intelligence derived from real-time, up-to-date information, and is specifically designed to take or trigger immediate actions. This approach closes the gaps from relevant to actionable data (Qlik Data Integration), actionable data to actionable insights (Qlik Analytics) and from investment to value (Data Literacy as a Service).

Qlik partners with global cloud and platform providers such as AWS, Microsoft Azure, and Google Cloud, as well as organizations like Snowflake, Databricks and Confluent, to deliver data warehouse automation, data lake automation and Kafka/streaming integration, alongside continued expansion with global systems integrators like Accenture and Cognizant.

*Gartner, Magic Quadrant for Data Integration Tools, Ehtisham Zaidi, Eric Thoo, Nick Heudecker, Sharat Menon, Robert Thanaraj, 18 August, 2020.


IIT Madras Ranked ‘Top Innovative Educational Institution’ In India

Top Innovative Educational Institution in India

Indian Institute of Technology Madras has been adjudged the Top Innovative Educational Institution in India for the second consecutive year by the Government of India.

The Institute has been ranked #1 in the Atal Ranking of Institutions on Innovation Achievements (ARIIA), launched last year by the Innovation Cell of the Ministry of Education (formerly Ministry of Human Resource Development), in the ‘Institutions of National Importance, Central Universities and Centrally Funded Technical Institutions’ category. Around 674 institutions participated in the ARIIA rankings this year, up from 496 last year.

Also Read: Amazon Makes Its Machine Learning Course Free For All

Shri. M. Venkaiah Naidu, Hon’ble Vice President of India, announced the results today (18th August 2020) in a virtual event in the presence of Shri. Ramesh Pokhriyal Nishank, Hon’ble Union Minister for Education, Government of India and other officials.

Addressing the event, Shri. M. Venkaiah Naidu, Hon’ble Vice President of India, said, “We have to be self-reliant. Self-reliance requires us to innovate, to seek implementable solutions for developmental challenges. To create a favourable environment for experimentation, India’s higher education system should play the role of an enabler and force multiplier to drive Indian innovation and start-up ecosystem.”

Further, Shri. M. Venkaiah Naidu said, “Innovation must become the heartbeat for education. The quest for excellence should become the norm. I congratulate all institutes which have secured the top honours. I would like to compliment not only the Heads but also the other faculty members and youngsters who are assisting them. The other Institutes should re-double their effort as this is an annual effort…Learn from the best in the world and aim to be better than the best. I hope this ranking, named after one of India’s illustrious Prime Ministers, will inspire all higher education institutes to be ‘Atal’ (steadfast) in their commitment to creating world-class Institutions.”

Also Read: OpenAI Invites Applications For Its Scholars Program, Will Pay $10K Per Month

ARIIA endeavours to systematically rank education institutions and universities primarily on innovation related indicators. It aims to inspire Indian institutions to reorient their mind-set and build ecosystems to encourage high quality research, innovation and entrepreneurship. More than quantity, ARIIA focuses on quality of innovations and measures the real impact created by these innovations nationally and internationally.

Congratulating all the institutions, Shri. Ramesh Pokhriyal Nishank, Hon’ble Union Minister for Education, Government of India, said, “Innovation and hence the ARIIA will be the foundation stone for the New India which will be self-reliant. The rankings aim to acknowledge the efforts of institutes which are coming up with new innovations and bridging the gap between industry and innovation.”

IIT Madras excelled in ARIIA owing to its strong entrepreneurial eco-system that encourages students to become job-generators. The IIT Madras Incubation Cell (IITMIC) in the IITM Research Park is India’s leading deep-technology startup hub, with innovation and impact as its key drivers.

While IITMIC handles the startups, there are other innovation platforms for students even before they graduate. They include Nirmaan, a pre-incubator; the Center for Innovation (CFI), which provides a platform for students to ‘walk in with an idea and walk out with a product’; and the Gopalakrishnan Deshpande Centre for Innovation and Entrepreneurship, which grooms aspiring entrepreneurs.

Thanking the Education Ministry for according this recognition to the Institute, Prof Bhaskar Ramamurthi, Director, IIT Madras, said, “IIT Madras is known for its world-class innovation ecosystem consisting of the Research Park and several other centres, that has already produced world-class companies and disruptive technologies for India. We are proud to be recognized a second time in succession as India’s top innovative educational institution under ARIIA.”

IITMIC houses incubators such as the Healthcare Technology Innovation Centre (HTIC), the Rural Technology and Business Incubator (RTBI) and bio-tech incubators. IITMIC has incubated over 200 deep-tech startups (as of June 2020) that have attracted VC/angel investor funding to the tune of US$ 235 million and have a combined valuation of US$ 948 million. They had a cumulative revenue of US$ 61 million in the 2019-20 financial year, creating over 4,000 jobs and generating over 100 patents. The startups are in sectors such as manufacturing, IoT, energy/renewables, healthcare/medical devices, water, edu-tech/skill development, agri-tech, robotics, AI, ML and data analytics, among others.

Center For Innovation has augmented the ‘tech’ space in the institute wherein their projects have gone on to produce high-profile start-ups and internationally acclaimed competition teams. ‘Nirmaan’ seeks out students with ideas that have commercial potential and puts them through a one-year programme.

The GDC puts cohorts of entrepreneurs-to-be through the paces and helps them position their idea properly to meet customer needs and build a company that can deliver the product.

Other dignitaries who addressed the event include Shri. Sanjay Dhotre, Hon’ble Minister of State for Education, Government of India, Shri Amit Khare, Higher Education Secretary, Ministry of Education, Government of India, Prof. Anil Sahasrabudhe, Chairman, All India Council for Technical Education (AICTE), Dr. BVR Mohan Reddy, Chairman, ARIIA Evaluation Committee and Dr. Abhay Jere, Chief Innovation Officer, Ministry of Education, Government of India.

ARIIA 2020 has rankings for six categories of institutions, namely:

  1. Institutions of National Importance, Central Universities and Centrally Funded Technical Institutions,
  2. Private or Self-Financed Colleges/Institutions,
  3. Private or Self-Financed Universities,
  4. Government and Government Aided Universities,
  5. Government and Government aided College/Institutes,
  6. Higher Education Institutions Exclusively for Women.

Creating 3D Images From 2D Images Using Autoencoder

creating 3D images from 2D images

Advancement in computer vision technology can not only empower self-driving cars but also assist developers in building unbiased facial recognition solutions. However, the dearth of 3D images is impeding the development of computer vision technology. Today, most existing image data are 2D, but 3D data are equally essential for researchers and developers to understand the real world effectively. 3D images of varied objects can allow them to create superior computer vision solutions that could revolutionize the way we live. But gathering 3D images is not as straightforward as collecting 2D images; it is nearly impossible to manually collect 3D images of every object in the world. Consequently, to mitigate one of the longest-standing predicaments around 3D images, researchers from the University of Oxford proposed a novel approach of creating 3D images from raw single-view 2D images.

The effectiveness of the approach was recognized with the Best Paper Award at the Conference on Computer Vision and Pattern Recognition (CVPR), one of the largest computer vision conferences.

Let us understand how researchers blazed a trail to accomplish this strenuous task of generating 3D images using single-view 2D images. 

Also Read: Amazon Makes Its Machine Learning Course Free For All

Creating 3D Images 

The researchers of the Visual Geometry Group, University of Oxford, namely Shangzhe Wu, Christian Rupprecht, and Andrea Vedaldi, introduced unsupervised learning of probably symmetric deformable 3D objects from images in the wild. The idea behind the technique is to create 3D images of symmetric objects from single-view 2D images without supervising the process. The method is based on an autoencoder, a machine learning technique that encodes input images and then reconstructs them with decoders. In this work, the autoencoder factors each input image into depth, albedo, viewpoint, and illumination. While depth captures the distance of object surfaces from the viewpoint, albedo is the reflectivity of a surface. Along with depth and albedo, viewpoint and illumination are also crucial for creating 3D images; the variation in light and shadow is obtained from the illumination as the angle/viewpoint of the observer changes. All of these factors play an important role in image processing to obtain 3D images, and they are further processed and recombined to generate the 3D image.

To create 3D images with depth, albedo, viewpoint, and illumination, the researchers leveraged the fact that numerous objects in the world have a symmetric structure. Therefore, the research was mostly limited to symmetric objects such as human faces, cat faces, and cars from single-view images. 
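Conceptually, the decompose-and-recombine loop can be sketched in a few lines. The NumPy sketch below is purely illustrative and is not the authors' code: the `decompose` stand-in fabricates the four factors that the real networks would predict, and a simple Lambertian shading model recombines albedo, depth, and light direction into an image.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8  # a tiny image, just to keep the shapes concrete

def decompose(image):
    """Stand-in for the four learned networks.

    In the real model each factor is predicted by a neural network;
    here we simply fabricate tensors of the right shapes."""
    depth = 1.0 + 0.1 * rng.random((H, W))  # per-pixel distance from camera
    albedo = np.clip(image, 0.0, 1.0)       # per-pixel reflectivity
    light = np.array([0.3, 0.2, 0.93])
    light = light / np.linalg.norm(light)   # light direction (unit vector)
    viewpoint = np.zeros(6)                 # rotation + translation parameters
    return depth, albedo, light, viewpoint

def normals_from_depth(depth):
    """Approximate surface normals from depth-map gradients."""
    dz_dy, dz_dx = np.gradient(depth)
    n = np.stack([-dz_dx, -dz_dy, np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def shade(albedo, depth, light, ambient=0.4, diffuse=0.6):
    """Lambertian shading: albedo * (ambient + diffuse * max(n.l, 0))."""
    lambert = np.clip(normals_from_depth(depth) @ light, 0.0, None)
    return albedo * (ambient + diffuse * lambert)

image = rng.random((H, W))
depth, albedo, light, viewpoint = decompose(image)
recon = shade(albedo, depth, light)
print(recon.shape)  # same spatial size as the input
```

The real model learns the decomposition end to end by penalizing the difference between `recon` (after reprojection to the input viewpoint) and the input image.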

Challenges 

Motivated to understand images more naturally for 3D image modelling, the researchers worked under two challenging conditions. Firstly, they ensured that no 2D or 3D ground-truth information, such as key points, segmentation, depth maps, or prior knowledge of a 3D model, was utilized while training the model. Working without such annotations is essential because it eliminates the bottleneck of gathering image annotations for computer vision solutions. Secondly, the researchers had to create 3D images using only single-view images; in other words, the method eliminates the need to collect multiple views of the same object. Addressing these two problems was vital because it enables computer vision applications to be built with fewer, unannotated images.

Also Read: OpenAI Invites Applications For Its Scholars Program, Will Pay $10K Per Month

Approach 

In this work, the internal decomposition of images into albedo, depth, viewpoint, and illumination was carried out without supervision. Following this, the researchers used symmetry as a geometric cue to construct 3D images from the four generated factors of the input images. Although objects are rarely completely symmetric, in either shape or appearance, the researchers compensated for this assumption later in their work. For instance, one cannot always mirror the left half of a face to construct the other half, as hairstyle or expression may vary. Similarly, albedo can be non-symmetric, since the texture of the two sides of a cat’s face may differ. To make matters worse, even if both shape and albedo are symmetric, the appearance may still differ due to illumination. To mitigate these challenges, the researchers devised two approaches. They explicitly modelled illumination to recover the shape of the input objects’ images, and they augmented the model to reason about the absence of symmetry in objects while generating 3D images.

By combining the two methods, the researchers built an end-to-end learning formulation that delivered exceptional results compared to previous research that used an unsupervised approach for morphing 3D images. Besides, the researchers were able to outperform a recent state-of-the-art method that leveraged supervision for creating 3D images from 2D images. 

Techniques Of Creating 3D Images from 2D Images

The four factors, albedo, depth, viewpoint, and illumination, that are decomposed from the input image are supposed to regenerate the image when combined. But due to the asymmetric nature of objects, even for the faces of humans and cats, combining albedo, depth, viewpoint, and illumination might not replicate the input image in 3D. To compensate for the earlier assumption of symmetry, even though objects are not perfectly symmetric, the researchers devised two measures. According to the researchers, asymmetries arise from shape deformation, asymmetric albedo, and illumination. Therefore, they explicitly modelled asymmetric illumination. They also ensured that the model estimates a confidence score for each pixel in the input image, expressing the probability that the pixel has a symmetric counterpart in the image.

The technique can be explained broadly in two steps:

  1. Photo-geometric autoencoder for image formation 
  2. How symmetries are modelled 
Creating 3D images from 2D images

Photo-geometric autoencoder: After decomposing the input images into albedo, depth, viewpoint, and illumination, the reconstruction takes place in two steps: lighting and reprojection. The lighting function creates a version of the object based on the depth map, light direction, and albedo, as seen from a canonical viewpoint, while the reprojection function simulates the effect of a viewpoint change and generates the image given the canonical depth and the shaded canonical image.

Probably symmetric objects: Creating 3D images from 2D images requires determining which object points in images are symmetric. By combining depth and albedo, the researchers obtained a canonical frame. The same process was also carried out with the depth and albedo flipped along a horizontal axis, yielding a second image reconstructed from the flipped factors. The reconstruction losses of both images were then used to reason about symmetry probabilistically while generating the 3D images. Imperfections in the created images caused blurry reconstructions; this was addressed by adding a perceptual loss, a function that compares discrepancies among images.
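A minimal NumPy sketch of this flip-and-compare idea follows. It is illustrative only: `reconstruct` here is a placeholder for the full lighting and reprojection pipeline, and the paper's actual loss is a confidence-weighted likelihood rather than this simple weighted L1.

```python
import numpy as np

rng = np.random.default_rng(1)
H = W = 8

image = rng.random((H, W))
depth = 1.0 + 0.1 * rng.random((H, W))
albedo = rng.random((H, W))
conf = 0.05 + 0.9 * rng.random((H, W))   # per-pixel symmetry confidence

def reconstruct(depth, albedo):
    """Placeholder for the full lighting-plus-reprojection pipeline."""
    return albedo * (2.0 - depth)

def weighted_l1(recon, target, conf):
    """Reconstruction loss in which low-confidence pixels
    (no symmetric counterpart expected) contribute less."""
    return float(np.mean(conf * np.abs(recon - target)))

# Branch 1: reconstruct from the predicted factors directly.
loss_plain = weighted_l1(reconstruct(depth, albedo), image, np.ones((H, W)))

# Branch 2: flip depth and albedo about the vertical axis; a symmetric
# object should reconstruct the same input from the flipped factors.
loss_flip = weighted_l1(reconstruct(depth[:, ::-1], albedo[:, ::-1]), image, conf)

total_loss = loss_plain + loss_flip
print(total_loss >= 0.0)  # True
```

Training both branches together is what pushes the predicted factors toward a symmetric canonical object while letting the confidence map excuse genuinely asymmetric pixels.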

Benchmarking 

The model created by the researchers was put to the test on three human face datasets — CelebA, 3DFAW, and BFM. While CelebA consists of images of real human faces in the wild annotated with bounding boxes, the 3DFAW contains 23k images with 66 3D keypoint annotations, which was used for assessing the resulting 3D images. The BFM (Basel Face Model) is a synthetic face model considered for evaluating the quality of 3D reconstructions. 

The researchers implemented the protocol of this paper to obtain a dataset, sampling shapes, poses, textures, and illumination randomly. They also used SUN Database for background and saved ground truth depth maps to evaluate their generated 3D images.

“Since the scale of 3D reconstruction from projective cameras is inherently ambiguous, we discount it in the evaluation. Specifically, given the depth map predicted by our model in the canonical view, we warp it to a depth map in the actual view using the predicted viewpoint and compare the latter to the ground-truth depth map using the scale-invariant depth error (SIDE),” noted the authors in the preprint paper.

Besides, the researchers reported the mean angle deviation (MAD), which estimates how well the surfaces of generated images are captured.
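One common formulation of these two metrics can be sketched as follows (illustrative; the exact definitions are in the paper):

```python
import numpy as np

def side(depth_pred, depth_gt):
    """Scale-invariant depth error (one common formulation):
    sqrt(E[d^2] - E[d]^2) with d = log(depth_pred) - log(depth_gt).
    Rescaling depth_pred by any constant leaves the score unchanged."""
    d = np.log(depth_pred) - np.log(depth_gt)
    var = np.mean(d ** 2) - np.mean(d) ** 2
    return float(np.sqrt(max(var, 0.0)))  # clamp tiny negative round-off

def mad_degrees(normals_pred, normals_gt):
    """Mean angle deviation between two unit-normal maps, in degrees."""
    cos = np.clip(np.sum(normals_pred * normals_gt, axis=-1), -1.0, 1.0)
    return float(np.degrees(np.mean(np.arccos(cos))))

gt = 1.0 + np.random.default_rng(2).random((8, 8))
print(side(2.0 * gt, gt) < 1e-6)  # True: a uniform rescaling is discounted
```

The log-difference form is what makes SIDE insensitive to the global scale ambiguity the authors mention, while MAD compares surface orientation independently of depth scale altogether.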

The depth, albedo, viewpoint, lighting, and confidence maps are each predicted by individual neural networks. While encoder-decoder networks generate the depth and albedo, viewpoint and lighting are regressed using simple encoder networks.

The above table compares the depth reconstruction quality achieved by the photo-geometric autoencoder with other baselines: a fully-supervised baseline, constant null depth, and average ground-truth depth (the BFM dataset was used to create all the models). The unsupervised model outperforms the other baselines by a wide margin; a lower MAD means the deviation between generated and original images was minimal.

To understand how significant every step of the model construction was, the researchers evaluated the individual parts of the model by making several modifications.

The following components were tweaked:

  1. Assessment by avoiding the albedo flip
  2. Evaluation by ignoring the depth flip
  3. Predicting shading map instead of computing it from depth and light direction
  4. Avoiding the implementation of perceptual loss
  5. Replacing the ImageNet pretrained image encoder used in the perceptual loss with one trained through a self-supervised task
  6. Switching off the confidence maps 

The result of such modification can be seen below: 

Creating 3D images from 2D images

“Since our model predicts a canonical view of the objects that is symmetric about the vertical centerline of the image, we can easily visualize the symmetry plane, which is otherwise non-trivial to detect from in-the-wild images. In [the below figure], we warp the centerline of the canonical image to the predicted input viewpoint. Our method can detect symmetry planes accurately despite the presence of asymmetric texture and lighting effects,” mentioned the researchers. 

Outlook 

Construction of 3D images from 2D images can transform the entire computer vision landscape, as it opens the door to groundbreaking image processing-based applications. Although the researchers devised a novel approach with their model, there are shortcomings, especially with images of extreme facial expressions, that need to be addressed in order to democratize the approach. Nevertheless, the model is able to generate 3D images from a single-view image of an object without supervision. In future, more research will be needed to extend the model to create 3D representations of objects more complex than faces and other symmetric objects.


Amazon Makes Its Machine Learning Course Free For All

Amazon Machine Learning University

Amazon has made its machine learning course, Amazon Machine Learning University, free for everyone to access and learn the most in-demand skills. Earlier, the course was only available to Amazon employees, but it has now been made public for anyone to learn machine learning skills such as computer vision, natural language processing, and analyzing tabular data.

Initially, the tech giant started the Amazon Machine Learning University in 2016 to upskill its employees. Over the years, the course was used by various teams such as Alexa science, Amazon Go, and more, to implement the latest machine learning techniques in their work. The courses are being taught by Amazon ML experts, thereby providing all the necessary skills required for an individual to blaze a trail and deliver promising AI-based solutions.

Also Read: OpenAI Invites Applications For Its Scholars Program, Will Pay $10K Per Month

Currently, Amazon has hosted only three courses on YouTube, but nine more will be added by the end of the year. And from 2021, all the classes will be available via on-demand video, along with associated coding materials.

The first three courses, on NLP, computer vision, and tabular data, cover a wide range of topics. As of now, there are more than 67 videos across the three courses combined. Amazon will also keep updating its free machine learning courses as technology advances.

The courses are structured in a way that they will help solve business problems in various domains. In addition, the courses incorporate the book Dive into Deep Learning authored by Amazon scientists. 

“We want to make sure we’re teaching the important things up front and that we’re making good use of students’ time,” said Brent Werness, AWS research scientist. “With the transition to working from home, it’s even harder now for class participants to set aside multiple hours of time. We want to be flexible in how people can take these classes.”


Qlik and Fortune Magazine Launch “History of the Fortune Global 500” Data Analytics Site

History of the Fortune Global 500

Qlik® debuted the “History of the Fortune Global 500” interactive data analytics site in partnership with Fortune Magazine and timed with the publication of the 30th anniversary of the Fortune Global 500 list. As the official analytics partner of the Fortune 500, Qlik developed the visual experience leveraging data storytelling and interactive visualizations to showcase global market-leading countries and industry sector status, the economic trends across different geographies, and the historical events that shaped those changes. Qlik and Fortune delivered a similar visual experience for the Fortune 500 earlier this year as part of a multi-year partnership. 

Visualization from History of the Global 500

“The History of the Fortune Global 500 is a unique visual guide through the global economic shifts that have shaped the world’s largest corporations over the last three decades,” said Rick Jackson, CMO of Qlik. “Similar to the Fortune 500 site from earlier this year that leveraged Qlik’s unique analytics platform, the Fortune Global 500 site provides a rich and detailed exploration of the industry-standard chronicle of market leaders and industry sectors. The site brings to life the data behind the best performing international corporations, along with detailing the global economic trends that shaped their performance.”

The site brings users through a variety of interactive data-driven sections including: 

1. Starting in 1995 through today, users can experience a multi-chapter visual story detailing the evolving revenue positions and performance of various countries.

2. The story shows the rising importance of China, the decline in the UK’s position, and the rise and fall of Japan’s global economic status. 

3. The site provides an interactive experience that dynamically shows the evolution of global economic rankings, resulting in the current dual leadership of China and the United States. 

4. There’s also a page that provides an expansive sector-by-sector interactive comparison. 

5. Visitors can then explore, through an interactive map, a country-level focus on current revenues and the number of companies per country, as well as explore the data on individual countries to see company, revenue, and profit metrics for any year from 1995 to 2020. 

“The companies of the Fortune Global 500 produced a record $33.3 trillion in revenue last year,” says Clifton Leaf, Editor-in-Chief of Fortune. “Understanding the size and scope of this group is a task that challenges us to explore data visualization in innovative ways – a challenge that, I’m happy to say, Qlik has embraced.”


Julia Programming 1.5 Released – What’s New

Julia Programming Language 1.5

The Julia programming language has quickly gained traction in the data science community due to advantages such as speed and usability over Python and other prominent programming languages. In a continuing effort, Julia Computing has further improved the language and released its latest version, 1.5, earlier this week.

The release of Julia 1.5 has a lot of significant advancements like struct layout and allocation optimizations, multithreading API stabilization and improvements, per-module optimization levels, other latency improvements, implicit keyword argument values, the return of “soft scope” in the REPL, faster random numbers, automated rr-based bug reports, among others.

Julia updates don’t usually aim at specific features; the language follows timed releases. This time around, however, there are major feature updates worth knowing.

Also Read: OpenAI Invites For It’s Scholars Program, Will Pay $10K Per Month

Allocation Optimizations

One of the most relevant updates is the ‘struct layout and allocation optimizations.’ According to Julia Computing, this was a long-desired optimization that significantly reduces heap allocations. While Julia has both mutable and immutable objects, giving programmers the flexibility to choose according to their requirements, handling immutable objects previously had significant limitations.

Previously, when an immutable object pointed to a heap-allocated mutable object, the immutable object itself also needed to be heap-allocated. To address this, Jameson Nash enhanced the compiler so that it can now track multiple roots inside object fields. This simplifies the use of immutable objects irrespective of the type of object they point to.

Multithreading API

Threading was introduced in v0.5 to enable parallelism for better performance. Over the last couple of releases, threading gained safety guarantees and new features. One of the major use cases is multithreaded CSV parsing, which gives Julia an edge over other data-science-friendly languages.

After being labeled as ‘experimental’ for years, it has been moved to a stable category with newer updates: improved safety for top-level expressions (type definition, global assignments, modules), improvements to task switch performance, and more.
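The chunking idea behind a multithreaded CSV reader can be illustrated in Python. This mimics the general approach only, not Julia's CSV.jl implementation, and a real parser must also handle newlines inside quoted fields before splitting:

```python
import csv
from concurrent.futures import ThreadPoolExecutor

def parse_chunk(lines):
    """Parse one chunk of raw lines into rows of fields."""
    return [row for row in csv.reader(lines)]

def parallel_parse(text, n_workers=4):
    """Split rows into chunks, parse each chunk on a worker thread,
    then stitch the results back together in their original order."""
    lines = text.splitlines()
    size = max(1, len(lines) // n_workers)
    chunks = [lines[i:i + size] for i in range(0, len(lines), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parsed = pool.map(parse_chunk, chunks)  # map preserves chunk order
    return [row for chunk in parsed for row in chunk]

data = "a,b\n1,2\n3,4\n5,6\n7,8\n"
rows = parallel_parse(data)
print(rows[0])  # ['a', 'b']
```

Because each chunk is independent, the work scales with the number of threads; Julia's stabilized threading API makes this pattern safe to use for such data-loading workloads.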

Per-Module Optimization

The Julia programming language uses -O2 optimization by default, which is similar to -O2 in GCC or Clang. Although this level is effective for benchmarks, for some plotting and non-inner-loop tasks -O2 is not ideal, as it delays compilation. Consequently, Julia 1.5 introduced a per-module optimization level: an -O1 hint can now be given in each module.

While there are other updates, such as changes to the default package manager and faster random number generation, we have listed the most important enhancements. You can read more here.


OpenAI Invites Applications For Its Scholars Program

OpenAI, one of the leading AI research firms, invites people from groups underrepresented in science and engineering (S&E) to learn and innovate in deep learning for six months. The underrepresented groups include Black, Hispanic, and American Indian or Alaska Native communities. While you can check whether you belong to an underrepresented group here, you can still apply for the OpenAI Scholars Program if your group is not on the list but is genuinely underrepresented in S&E; however, you will have to mention this while applying for the position.

The six-month-long OpenAI Scholars Program aims to bring diversity to developments in the AI landscape. OpenAI will offer a stipend of $10k per month for the entire tenure.

This is the fourth edition of the program, which started in 2018; the 2020 cohort concluded earlier this year. Applications are open from 28 July and will close on 8 August. A total of 10 people will be selected and notified on 21 August. The program will run from 12 October to 9 April 2021.

Read Also: Google Releases MCT Library For Model Explainability

Here are the prerequisites that are expected from applicants:

  1. Should have US work authorization and are physically located in the US
  2. Should have 2+ years of experience in software engineering
  3. Programming experience in Python (other languages are helpful too)
  4. Strong mathematics background
  5. Interest in the machine learning field
  6. Should have completed, or be able to finish before the start of the program, Practical Deep Learning for Coders (v3), the Deep Learning Specialization, or an equivalent

If selected, scholars receive benefits such as access to computing resources, weekly one-on-one video calls with mentors, recruitment support, a stipend for AI-related conferences during the OpenAI Scholars Program, and access to a Slack workspace.

In a few cases, past scholars have gone on to work for OpenAI. This makes it an exceptional opportunity to give your machine learning journey the wings it needs.

Click here to know more and apply for the OpenAI Scholars Program: Fall 2020.


Google Releases MCT Library For Model Explainability

Google Explainability

On Wednesday, Google released the Model Card Toolkit (MCT) to bring explainability to machine learning models. The information provided by the library will assist developers in making informed decisions while evaluating models for effectiveness and bias.

MCT provides a structured framework for reporting on ML models, usage, and ethics-informed evaluation. It gives a detailed overview of models’ uses and shortcomings that can benefit developers, users, and regulators.

To demonstrate the use of MCT, Google has also released a Colab tutorial that has leveraged a simple classification model trained on the UCI Census Income dataset.

You can use the information stored in ML Metadata (MLMD) to automatically populate the JSON schema with class distributions and model performance statistics. “We also provide a ModelCard data API to represent an instance of the JSON schema and visualize it as a Model Card,” note the authors of the blog post. You can further customize the report by selecting and displaying the metrics, graphs, and performance deviations of models in the Model Card.
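The model-card idea, a structured, serializable report of a model's purpose, metrics, and limitations, can be sketched without the library itself. The field names below are purely illustrative; they are not the MCT schema:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """A toy, library-free model card: structured facts about a model
    that can be serialized to JSON and rendered into a report."""
    name: str
    overview: str
    metrics: dict = field(default_factory=dict)
    limitations: list = field(default_factory=list)

    def to_json(self):
        return json.dumps(asdict(self), indent=2)

# A card for a classifier like the census-income example in the tutorial.
card = ModelCard(
    name="census-income-classifier",
    overview="Predicts whether income exceeds $50K from census features.",
    metrics={"accuracy": 0.84, "auc": 0.89},
    limitations=["Trained on 1994 US census data; may not generalize."],
)
print(json.loads(card.to_json())["metrics"]["accuracy"])  # 0.84
```

MCT works on the same principle: a schema-backed data object is populated (partly automatically, from MLMD) and then rendered through an HTML template into the final Model Card.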

Read Also: Microsoft Will Simplify PyTorch For Windows Users

The detailed reports, covering limitations, trade-offs, and other information, from Google’s MCT can enhance explainability for users and developers. Currently, there is only one template for representing this critical information, but you can create additional HTML templates according to your requirements.

Anyone using TensorFlow Extended (TFX) can avail of this open-source library to get started with explainable machine learning. Users who do not utilize TFX can still leverage it through the JSON schema and custom HTML templates.

Over the years, explainable AI has become one of the most discussed topics in technology, as artificial intelligence has penetrated various aspects of our lives. Explainability is essential for organizations to build trust in AI models among stakeholders. Notably, in finance and healthcare, the importance of explainability is immense, as any deviation in a prediction can harm users. Google’s MCT can be a game-changer in the way it simplifies model explainability for all.



Microsoft Will Simplify PyTorch For Windows Users


Installing and using PyTorch will become easier for Windows users as Microsoft becomes the maintainer of the Windows version of PyTorch. Facebook, on Tuesday, announced that Microsoft will take ownership of the development and maintenance of the PyTorch build for Windows.

According to the Stack Overflow developer survey 2020, Windows is still the preferred choice for most developers: 45.8% of them use Windows, ahead of the 27.5% who prefer macOS and the 26.6% of programmers who use Linux-based operating systems.

Also Read: Intel’s Miseries: From Losing $42 Billion To Changing Leadership

However, installing PyTorch on Windows has been a tedious task, and on top of that, the Windows build has not supported features like distributed training and the TorchAudio domain library. Microsoft has committed to improving the overall functionality and addressing these gaps.

Earlier, Jiachen Pu attempted to bring PyTorch’s compatibility on Windows up to par with other platforms such as Linux and macOS. But, due to the absence of necessary resources, the initiative did not last. “Lack of test coverage resulted in unexpected issues popping up every now and then. Some of the core tutorials, meant for new users to learn and adopt PyTorch, would fail to run. The installation experience was also not as smooth, with the lack of official PyPI support for PyTorch on Windows,” noted the authors of the announcement.

Alongside the announcement of Microsoft’s extended support, PyTorch 1.6 was released. The updated version enhanced overall core quality by introducing test coverage for core PyTorch and its domain libraries, along with automatic tutorial testing.

It also added support for TorchAudio, and test coverage for all three domain libraries: TorchVision, TorchText, and TorchAudio. In addition, Microsoft added GPU compute support to Windows Subsystem for Linux (WSL) 2 distros, which allows the use of NVIDIA CUDA for AI and ML workloads. You can now run native Linux-based PyTorch applications on Windows without a virtual machine or a dual-boot setup.


Intel’s Miseries: From Losing $42 Billion To Changing Leadership


Intel’s stock plunged around 18% after the company announced that it is considering outsourcing chip production due to delays in its manufacturing processes. The drop wiped out $42 billion in market value as the stock traded at a low of $49.50 on Friday. Intel’s misery with production is not new: its 10-nanometer chips were supposed to ship in 2017, but Intel failed to produce them at high volume. The company has since ramped up production of its popular 10-nanometer chips.

Intel’s Misery In Chips Manufacturing

Everyone was expecting Intel’s 7-nanometer chips, as its competitor AMD is already offering processors at that node. But, according to an announcement by Intel CEO Bob Swan, manufacturing of the chips will be delayed by another year.

While warning about the production delay, Swan said the company would be ready to outsource chip manufacturing rather than wait to fix its production problems.

“To the extent that we need to use somebody else’s process technology and we call those contingency plans, we will be prepared to do that. That gives us much more optionality and flexibility. So in the event there is a process slip, we can try something rather than make it all ourselves,” said Swan.

This caused tremors among shareholders, as such a move is highly unusual for the 50-plus-year-old company, the world’s largest semiconductor maker. In-house manufacturing has given Intel an edge over its competitors; AMD’s 7nm processors are manufactured by Taiwan Semiconductor Manufacturing Company (TSMC). If Intel outsources manufacturing, TSMC would likely win the contract, since it is among the best at producing chips.

But tapping TSMC would not be straightforward, as long-term rivals such as AMD, Apple, MediaTek, NVIDIA, and Qualcomm would oppose the deal. TSMC will also be well aware that Intel would end the deal once it fixes the problems currently causing the delay. Irrespective of the complexities of a potential deal, TSMC, the world’s largest contract chipmaker, saw its stock rally 10% to an all-time high, adding $33.8 billion in market value.

Intel is head and shoulders above other chip providers in terms of market share in almost all categories. For instance, it holds 64.9% of the x86 CPU market (2020), and Xeon has a 96.10% share in server chips (2019). Consequently, Intel’s misery hands a considerable advantage to its competitors. Intel has been losing market share to AMD year over year (2018–2019): 0.90% in x86 chips overall, 2% in server, 4.50% in mobile, and 4.20% in desktop processors. Besides, NVIDIA eclipsed Intel for the first time earlier this month by becoming the most valuable chipmaker.

Also Read: MIT Task Force: No Self-Driving Cars For At Least 10 Years

Intel’s Misery In The Leadership

Undoubtedly, Intel is facing the heat from its competitors and is having a difficult time maneuvering in the competitive chip market. But the company is striving to make the changes necessary to clean up its act.

On Monday, Intel’s CEO announced changes to the company’s technology organization and executive team to improve process execution. As mentioned earlier, the delay did not sit well with the company, leading to a leadership revamp, including the ouster of hardware chief Murthy Renduchintala, who will leave on August 3.

Intel poached Renduchintala from Qualcomm in February 2016 and gave him a prominent role managing the Technology, Systems Architecture and Client Group (TSCG).

The press release noted that TSCG will be separated into five teams, whose leaders will report directly to the CEO. 

List of the teams:

Technology Development will be led by Dr. Ann Kelleher, who will also lead the development of 7nm and 5nm processors

Manufacturing and Operations will be led by Keyvan Esfarjani, who will oversee global manufacturing operations, product ramp, and the build-out of new fab capacity

Design Engineering will be led by an interim leader, Josh Walden, who will supervise design-related initiatives, along with his earlier role of leading Intel Product Assurance and Security Group (IPAS)

Architecture, Software, and Graphics will continue to be led by Raja Koduri, who will focus on architectures, software strategy, and the dedicated graphics product portfolio

Supply Chain will continue to be led by Dr. Randhir Thakur, who will be responsible for an efficient supply chain as well as relationships with key players in the ecosystem

Also Read: Top 5 Quotes On Artificial Intelligence

Outlook

With this, Intel has made significant changes to ensure it meets the timelines it sets. Besides, Intel will have to innovate and deliver on 7nm before AMD creates a monopoly in the market with the microarchitectures powering Ryzen for mainstream desktops and Threadripper for high-end desktop systems.

Although the chipmaker has revamped its leadership, Intel’s misery might not end soon; unlike software initiatives, veering in a different direction and innovating in the hardware business takes more time. Intel therefore has a challenging year ahead.
