GitHub CLI has been made generally available after being in beta since its official announcement on 12 February 2020. With GitHub CLI, you can use GitHub from the command line to simplify your version control workflow. Developers usually switch between terminals and browsers while handling pull requests, merges, and changes. But, with GitHub CLI, they can now operate GitHub entirely from the terminal.
Is GitHub CLI Really Effective?
Since the announcement, developers have used GitHub CLI to create 250,000 pull requests, perform 350,000 merges, and generate 20,000 issues. At launch, it was available only for GitHub Team and Enterprise Cloud, not for GitHub Enterprise Server. However, one can now use the solution on-premise with GitHub Enterprise Server, which should further increase GitHub CLI's user base.
To make things easier for users, GitHub CLI also lets you customize commands using gh alias set. Therefore, you will not be required to adapt to an entirely new workflow. “And with the powerful gh api allowing you to access the GitHub API directly, there’s no limit to what you can do with gh,” notes GitHub’s blog.
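Here is a minimal sketch of both features; the alias name and the repository below are illustrative, while the command names follow the gh documentation:

```
# create a shorthand "pv" for viewing pull requests
gh alias set pv 'pr view'

# call the GitHub REST API directly, e.g., fetch the latest release of the cli/cli repository
gh api repos/cli/cli/releases/latest
```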
Besides, you can list pull requests, view details of requested changes, and create pull requests, among other tasks. By bringing this functionality right into the terminal, GitHub has mitigated a significant pain point for developers, as they no longer need to switch between windows to manage projects.
Start cloning repositories and controlling your projects without leaving the terminal:
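A few representative commands (the repository, pull request number, and titles are placeholders):

```
# clone a repository
gh repo clone owner/repo

# list open pull requests in the current repository
gh pr list

# inspect a specific pull request, e.g., #123
gh pr view 123

# open a new pull request from the current branch
gh pr create --title "Fix typo" --body "Corrects a typo in the README"
```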
What’s best about GitHub CLI is that you do not need to configure anything in the terminal. Just download it and start using the commands to bring GitHub into your terminal.
Over the past several months, this open-source initiative has been iterated on with the help of developers to bring advanced features and make users' lives easier. This culminated in yesterday's stable release, which can enhance developers' productivity without forcing them to revamp how they manage projects.
Data science internship opportunities are plentiful, but aspirants struggle to get internship offers that could open the doors to their careers. This is because beginners do not follow the right approach while applying for internships, which would increase their chances of standing out among other applicants. A well-devised strategy while hunting for internships is essential not only to reduce friction but also to land internships at organizations you want to work with.
In this article, we provide a step-by-step guide that can help you get an internship in the data science domain.
Programming And Courses
A strong foundation is what you should focus on in the beginning. Learn any programming language, be it Python or R, and become proficient by practising on platforms like HackerRank. Do not get drawn into debates about the best programming language for data science; they lead nowhere. Just get good at whatever language you choose and use it to solve problems. “A data scientist is a problem solver not only a Python programmer,” said Chiranjiv Roy.
You can enrol in free courses on platforms like Udacity, Coursera, and edX to learn the programming language. Following this, you can take courses in statistics and mathematics. Further, you can also learn data analysis and machine learning techniques on any edtech platform.
Certifications Are Not Necessary To Get Data Science Internships
Another misconception among data science aspirants is that they think certifications are necessary to get internships. This misconception is usually spread by influencers on social media to promote paid data science courses on their pages. Organizations want skills, not certifications; even blue-chip companies do not seek certifications while hiring for data science jobs, let alone internships. If you prefer guided learning, you can enrol in paid courses, but do not pay for the sake of obtaining certificates.
Build Portfolio
Instead of certificates, showcase your worth with data science projects. However, doing the usual projects will not demonstrate your skills. You can start with common projects but should gradually progress toward solving real-world problems. Do not always rely on Kaggle or similar platforms to prepare data for you. In organizations, you are given a business problem and asked to solve it using data science skills. Often, you will have to wrangle data from different sources before you can apply machine learning techniques. Doing a project that requires effort from the ground up will strengthen your position when seeking internship offers.
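As a minimal sketch, you could gather raw data yourself with standard command-line tools; the endpoint below is purely hypothetical:

```
# fetch raw data from a public API instead of a prepared Kaggle CSV (URL is illustrative)
curl -s "https://api.example.org/v1/readings?city=pune" -o raw.json

# keep only the fields you need and store them as a JSON array
jq '[.[] | {timestamp, value}]' raw.json > cleaned.json

# sanity-check how many records you collected
jq 'length' cleaned.json
```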
Avoid Relying On Job/Internship Portals
Most aspirants randomly visit several online portals and start applying to every internship posting. Data science being a lucrative field, thousands of people apply for every possible opportunity to get started in the domain. If you follow the crowd and rely only on job portals, you will fail to differentiate yourself from the rest. This does not mean you should completely ignore online portals, but do not burn yourself out by randomly applying to as many internships as you can. So, where should you focus to get data science internships? LinkedIn.
LinkedIn is the best platform you can use to get data science internships. It can be leveraged to share your data science learnings and projects through posts and articles. This will increase your visibility among decision-makers and other data scientists, which will help you get an internship or job offer.
In addition, you can personally reach out to data scientists on LinkedIn, discuss data-science related topics, and eventually ask for referrals for internships. Referrals are one of the best techniques to get internships.
Outlook
While there are numerous ways to showcase your ability, the steps mentioned above can help you land a data science internship quickly. Undoubtedly, winning hackathons and progressing on Kaggle will help too, but those routes can be daunting for beginners as they take time.
Microsoft and NASSCOM’s FutureSkills have collaborated to launch the AI Classroom Series, which aims to impart artificial intelligence skills to 1 million students by 2021. The initiative, however, is only for Indians who are enrolled in Indian universities and are residents of India. This step reflects their commitment to making people ready for new-age technologies: while Microsoft wants to skill 25 million people through its global skilling initiative, NASSCOM is devising and executing plans to promote skilling as a national priority.
Microsoft AI Classroom is divided into three modules: Data Science Basics and Introduction to Microsoft AI Platform, Building Machine Learning Models on Azure, and Building Intelligent Solutions using Cognitive Services, starting from 21 September 2020. The sessions will be delivered in several formats, including live demos, workshops, and assignments, and you will have to choose your preferred timeslots across three days. Every session will be 150 minutes long.
For registration, you will need to provide your college name, email ID, name, and contact number. After verification, which usually takes 5-10 minutes, you will receive a confirmation mail about the booked timeslot. However, the tedious part is that you will have to register separately for all three sessions.
The registration page fields can be confusing, as they appear to target working professionals with mandatory fields like Job Role, Company Name, and Company Size. Although Microsoft has mentioned that one can enter the college name in the ‘company name’ field, it has not specified what you should enter in the ‘job role’ field. We entered ‘student,’ and the registration was verified.
After completing the modules, you will have to take an assessment to receive a certificate of participation, which can be taken between 28 September and 10 October. You will have 20 minutes to answer 30 multiple-choice questions and need to score 80% or more to qualify for the certificate. If you fail the first attempt, you will have another two attempts to clear the assessment.
As a prerequisite, you will have to set up your environment by installing VS Code and configuring Jupyter Notebook within it. You can follow the steps below.
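A minimal sketch of the setup, assuming you have already installed VS Code from code.visualstudio.com and have Python on your machine (the extension ID below is Microsoft's official Python extension, which bundles Jupyter Notebook support):

```
# install the official Python extension, which enables Jupyter notebooks in VS Code
code --install-extension ms-python.python

# install the Jupyter package so VS Code can execute notebook cells
pip install jupyter
```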
In addition, you will be required to activate your free Azure student account and create a GitHub profile. One can also leverage the GitHub Student Developer Pack, which offers free access to paid tools.
YouTube has released Shorts, a TikTok clone for short-form videos. The feature has been rolled out first in India and will be introduced in other countries later. For now, you can only create 15-second videos with YouTube Shorts. “User-generated short videos were born on YouTube starting with our first upload, a short 18-second video called ‘Me at the Zoo,’” mentioned YouTube in its blog.
Similar to TikTok, the beta of Shorts comes with a few tools that allow creators to edit videos on the go. YouTube is also committed to enhancing the features and putting more power in users' hands to make appealing videos. Currently, it offers features to string together multiple video clips, record with music, and control playback speed. YouTube has added a new create icon to its Android app, which can be seen on the homepage.
Since the ban of TikTok in India, a flood of applications has been trying to fill the void created by the most popular Chinese short-video app. TikTok had over 200 million users in India before its ban in June. While Instagram is striving to replicate TikTok with Reels, it has failed to engage users, mostly because of its less accurate recommendations. Along with useful in-app editing features, TikTok's state-of-the-art recommender system was its unique selling point.
Unlike on other social media platforms, creators were able to produce engaging content right from the TikTok application. Such intuitive features were a huge miss on platforms such as Instagram, Facebook, and LinkedIn, where one had to use external tools to create intriguing videos.
While Reels, Moj, Roposo, and other applications are replicating the video editing features, recommending content according to users' preferences has been a huge miss in these applications. As per reports, TikTok will not be sharing its code after any deal for its US operations with Oracle or another firm.
We will have to wait and see whether YouTube can triumph with Shorts, not only with in-app editing features but also with its recommender system.
Microsoft has enhanced its open-source deep learning optimization library, DeepSpeed, empowering developers and researchers to build models with one trillion parameters. When the library was initially released on 13 February 2020, it enabled users to build 100-billion-parameter models. With the library, practitioners of natural language processing (NLP) can train large models at reduced cost and compute while scaling to unlock new intelligence possibilities.
Today, many developers have embraced large NLP models as a go-to approach for developing superior NLP products with higher accuracy and precision. However, training large models is not straightforward. It requires computational resources for parallel processing, thereby increasing the cost. To mitigate such challenges, Microsoft, in February, also released Zero Redundancy Optimizer (ZeRO), a parallelized optimizer to reduce the need for intensive resources while scaling the models with more parameters.
ZeRO allowed users to train models with 100 billion parameters on existing GPU clusters, with 3x-5x speedups. In May, Microsoft released ZeRO-2, which further enhanced the workflow by supporting the training of 200-billion-parameter models at 10x the speed of the then state-of-the-art approach.
Now, with the recent release of DeepSpeed, one can even use a single GPU for developing large models. The library includes four new system technologies that support long input sequences, high-end clusters, and low-end clusters.
With DeepSpeed, Microsoft offers a combination of three parallelism approaches: ZeRO-powered data parallelism, pipeline parallelism, and tensor-slicing model parallelism.
In addition, with ZeRO-Offload, NLP practitioners can train models 10x bigger by using both CPU and GPU memory. For instance, on a single NVIDIA V100 GPU, you can build a model with up to 13 billion parameters without running out of memory.
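As a rough sketch of how this looks in practice: the config keys follow DeepSpeed's documented JSON format but may vary by version, and train.py stands in for a hypothetical training script that already integrates DeepSpeed (i.e., calls deepspeed.initialize and accepts the --deepspeed_config flag):

```
# minimal DeepSpeed config enabling ZeRO stage 2 with CPU offloading
cat > ds_config.json <<'EOF'
{
  "train_batch_size": 8,
  "fp16": { "enabled": true },
  "zero_optimization": { "stage": 2, "cpu_offload": true }
}
EOF

# launch training on a single GPU with the DeepSpeed launcher
deepspeed --num_gpus=1 train.py --deepspeed_config ds_config.json
```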
For long sequences of text, image, and audio data, DeepSpeed provides sparse attention kernels, which power 10x longer sequences with 6x faster execution. Besides, its 1-bit Adam, a new algorithm, reduces communication volume by up to 5x, making distributed training in communication-constrained scenarios 3.5x faster.
Wipro's survey, State of Intelligent Enterprises, received inputs from 300 decision-makers at US and UK firms. The survey focused on understanding organizations' current state as 'intelligent enterprises' and the use of AI across various business functions.
Divided into four key areas, Wipro's survey provides insights into the current state of the intelligent enterprise, its challenges and benefits, the adoption of AI, and future success.
One of the startling revelations: only 17% of respondents use AI across their entire organization. Yet 80% of decision-makers understand the importance of AI and believe the technology is vital to being an intelligent enterprise. So, what is stopping them from embracing AI and other automation solutions?
Of the many challenges organizations face, data security is the most pressing. Today, it is essential for tech companies to securely streamline data across various business functions to generate insights and make decisions in real time. However, companies struggle with data security and other data-related barriers; 91% of business leaders feel such barriers keep organizations from becoming intelligent enterprises.
One possible solution for organizations struggling with such challenges is to collaborate with the right partners. Along with IT partnerships, other ways of becoming an intelligent enterprise include investing in people by reskilling the workforce, and investing in technology and research and development. Besides, 95% of respondents said that the use of AI is essential to becoming an intelligent enterprise.
69% of respondents say that investment in technology is one of the top three enabling factors for becoming an intelligent enterprise.
Companies are willing to transform their businesses, but decision-makers see roadblocks such as the cost of new technology, security concerns over new technology, navigating organizational culture, and restrictions due to dated systems as major challenges.
Wipro's survey also notes that being an intelligent enterprise brings numerous advantages for organizations: it strengthens IT security, improves customer experience, and enhances agility, among others.
Data science aspirants, be they freshers or IT professionals, believe that obtaining online certifications is more than enough to land a job in this competitive domain. This is mainly because they consider themselves data scientists after acquiring knowledge of tools and techniques from different sources. Aspirants think they are ready to join any organization and analyze whatever data they are provided. However, this is a myth. You are not always provided with the necessary data in organizations, and you will not have everything readily available to apply your learnings from MOOCs or other courses. Consequently, building a data science career and succeeding in it is not as straightforward as aspirants envision.
To help our readers make the right data science career decision and understand why data scientists fail to deliver outside their theoretical knowledge, we interacted with Chiranjiv Roy, Chief of Data Science & AI Products at Access2Justice Technologies. Chiranjiv provided several valuable insights into the industrial practices of data science while also suggesting the right approach for aspirants to succeed.
We also asked Chiranjiv about his journey and the practices he embraced to build a fruitful data science career. In his 20-year-long career, Chiranjiv has become a learned data scientist while working for prominent organizations like ResoluteAI.in, Nissan Motors, Mercedes-Benz Research and Development, Hewlett Packard Enterprise, WNS Global Services, and HSBC.
Today, Chiranjiv is also a visiting faculty member at engineering and business schools. He has filed 14+ patents on the use of data science to solve real-world automotive and manufacturing problems by developing and enhancing products and gaining efficiency.
Chiranjiv’s Data Science Journey Has No Shortcuts
Chiranjiv has a Bachelor's in Statistics, dual Master's degrees in Statistics and Computational Mathematics, and a PhD in Applied Data Science for Industrial Engineering. His foundation was laid by a keen interest in computational mathematics, physics, and applied statistics during his education. The love for data helped him carry out his master's thesis and research work on failure models in a manufacturing factory, giving him the confidence to keep learning.
Starting his career at HSBC in 2001 as a Data Analyst, Chiranjiv’s expertise with data and statistics helped him quickly become a manager at the company. After his five-year and eight-month spell at HSBC, he moved to WNS Global Services as a Senior Data Manager. Later he went on to become a Lead Data Scientist at Mercedes Benz and then Nissan Motors. “Data science was not a field of great importance or as popular as it is now when I started, but just got lucky that I never had to make a shift in my career and had my journey from data engineering to data science in the last two decades,” says Chiranjiv.
While working in countries like the US and India for companies like Hewlett Packard, Nissan Motors, and Mercedes-Benz Research and Development India, Chiranjiv has worked extensively in the areas of risk management, automotive, manufacturing, and optimization systems. He believes that working with top-line researchers helped him learn and fall further in love with data science while developing real-time applications of data engineering, analytics, and data monetization.
Chiranjiv says it takes years to become a data scientist since the roadmap is quite linear. Over the years, he has learned about the field both in academia and while working for various organizations, gradually becoming the data scientist he is today. However, witnessing the restlessness among beginners and practitioners, he stresses that today's aspirants want to dive quickly into a data science career without understanding the intricacies of the landscape.
“Data engineering and analysis is the first step to be a good data scientist. But, aspirants believe learning some programming languages and algorithms will make them a successful data scientist. What they do not understand is that a data scientist is a problem solver not only a Python programmer,” says Chiranjiv.
To become a problem solver, one should know or understand the business challenges and convert them into data problems. Practitioners, in contrast, try to fit algorithms into problems. Given a business challenge, beginners immediately think of applying some machine learning models without even understanding the business domain or the real challenge.
This mostly happens due to the misinformation spread by various sources. Consequently, Chiranjiv explains the ideal approach one should follow in order to succeed in a data science career.
“From data science courses one can only obtain the fundamentals. But, that is not enough to solve problems. One needs to spend time in understanding the problem and knowing which domain the problem is related to. If you do not understand the business domain, talk to people who are well versed in the domain. The most critical aspect of becoming a data scientist is to understand the domain before forging towards data analysis. Unless you acquire domain knowledge, you should not jump into solving the problems by fitting data science algorithms,” says Chiranjiv.
“Once you have the business understanding, the next step is to assess the data and form business problems. Following this, you can start developing the models. That means, you need to evaluate which model is the right fit and then create a proof of concept (POC). However, this does not end here.”
“As a data scientist you also need to be a good storyteller. After the POC, you have to leverage your visualization skills and showcase how your models’ result can be effective in mitigating business challenges to the decision makers.”
“Storytelling is an essential skill for data scientists as top management do not know which machine learning or deep learning models are implemented to get the results. This is where visualization simplifies the job of decision makers in inferring outcomes and forecasting the value models can deliver. But value is only created if you are able to communicate effectively with the product developers. For this, you need to have knowledge of agile-based software development practices. In organizations, data scientists’ efforts can only bring business growth if their models are in the production,” he adds.
Is a PhD Essential To Succeed?
A PhD teaches you process, time management, approach perfection, and focus amidst enormous challenges; it instils a self-starting attitude and helps you become a problem-solver more than a researcher.
When asked whether PhD candidates have an edge over people who learn from online courses, Chiranjiv said that it depends on what aspirants want to become. There are two aspects of data science: approach and implementation. Academia focuses on teaching the best approaches to solve problems, but three- or six-month online courses can only teach the fundamentals of data science.
You should take at least a year-long course to ensure you go beyond the fundamentals and learn the best data science approaches. This is where long-term programs, primarily PhDs, help, as is apparent from the number of research papers their candidates publish.
Although long-term programs make you exceptional at approach, the second aspect, implementation, requires a product mindset. More often than not, the ideal approach for a solution might not be feasible to productize. For instance, while solving a business problem, if you develop one of the superior classes of neural networks or deep learning techniques, the practical implementation might not be possible because the required libraries or computational resources are absent. You then have to compromise: the best approach might give 99% accuracy, while an approach that can be implemented with the available libraries and computational resources delivers only 90%. Data scientists have to make such bargains because product development strategy includes return on investment, timelines, and agile principles.
Academia can give you knowledge of fundamentals and techniques. But your intelligence comes into the picture when you talk to the software development team and come up with the top five models, out of which they might be able to implement only numbers three, four, and five to achieve the same result as the non-feasible but best models, one and two. Optimizing the third, fourth, or fifth model to achieve the outcome the best models would deliver is your intelligence. This is what organizations expect from data scientists.
You cannot build products just by doing MOOCs or other academic courses. It is not always about the best approach; you can only have a successful data science career in organizations if your skills help deliver products. There is a high probability that your best models will never get into production; this is a practical reality.
Most business challenges in the world are solved with regression and support vector machines. So who wins the game: the data scientist who deploys basic approaches and ships products, or the one who has learned every approach in the world but keeps struggling to get their model into production? The former, since they can bring business value to organizations with data-driven products. This means a PhD is not necessary; you can learn from any resource as long as your fundamentals are clear.
However, if you do not want to work in organizations and would rather be a professor, a PhD is a must.
Working with a wide range of technology organizations and entrepreneurs has helped Chiranjiv gain knowledge in highly regulated industries such as financial services, manufacturing, IT systems, and automotive.
Currently, as a data science leader at Access2Justice Technologies, Chiranjiv manages a team of data scientists, communicates with stakeholders, and creates a culture in which the team can thrive. According to Chiranjiv, culture can be the difference between a successful and a failed data science initiative, which in turn can define organizational growth.
To build the right culture, Chiranjiv pinpoints the importance of a leader's role in companies. A leader has to set aside time to guide new joinees by educating them about the problems at hand, thereby ensuring the delivery of desired results in the future. But this seldom happens. Often, professionals are left to fit into the team on their own; as a result, organizations fail to harness practitioners' full potential.
As a part of his job, Chiranjiv also hires and promotes data scientists and ML engineers for potential roles. He assesses their intent to learn, knowledge of data structures and algorithms, and self-challenging attitude. “I also help leaders and HRs who ask me to recommend proficient data scientists. As a result, I always helped aspirants/professionals in obtaining the first role or a good transition.”
Final Thoughts For Aspirants
For aspirants, Chiranjiv explains that data science is value-driven; it is not about cost reduction or resource calibration for a business. He suggests that problems are everywhere, which means one can obtain data from a wide range of things around them; Kaggle is not the only place to get datasets.
To further clarify, Chiranjiv cited an instance where a student from IIM asked him how he could get an internship, a job, or newer datasets on Kaggle to solve real-world problems, given that hiring and activity on data science platforms had slowed down due to COVID-19. Chiranjiv explained how the student could place an Arduino in his hostel between the power source and televisions of different brands to collect data on power fluctuations during outages. He could then use the gathered information to create a time series plot and develop a model to analyze power surges across several television brands. Eventually, he could draw a conclusion and showcase which manufacturer is more reliable, thereby helping the college buy better televisions in the future.
From going back to coding and mentoring startups, to understanding actual business problems and developing system designs, data architectures, data models, DevOps, DataOps, and MLOps, what the world demands from a data scientist is a leader who can take a concept to a real-life product.
“Aspirants should come up with different use cases on their own, this will showcase their interest in solving business problems. However, aspirants are dependent on Kaggle for datasets because it prepares and provides the data. The reality, in contrast, is that you will not get data while working for organizations. You only get problem statements. One has to think through to find different sources to collect data, and trust me DATA IS ALL AROUND US,” concludes Chiranjiv.
Stay tuned to our website for our upcoming column: How To Become A Successful Data Scientist...
OpenAI has rolled out the pricing of GPT-3. A researcher named Gwern Branwen revealed the subscription plans through a Reddit post, mentioning that users with access to the beta API of GPT-3 had received the subscription plans for the largest natural language model. However, not many will be able to get their hands on the model extensively, as GPT-3 costs around $100 to $400 per month.
There are different pricing plans:
Explore: 3-month free trial with a limit of 100K tokens
Create: $100/month with a limit of 2M tokens. 8 cents for every additional 1K tokens
Build: $400/month with a limit of 10M tokens. 6 cents for every additional 1K tokens
Scale: You will have to contact OpenAI
As per Branwen, GPT-3 can generate around 3,000 pages of text with 2M tokens.
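As an illustrative back-of-the-envelope calculation using the plan figures above:

```
# overage cost on the Create plan for a month that consumes 5M tokens:
# 2M tokens are included; the extra 3M tokens bill at 8 cents per 1K tokens
echo $(( (5000000 - 2000000) / 1000 * 8 ))   # prints 24000 (cents), i.e., $240 on top of the $100 base
```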
OpenAI started the private beta on 11 July, during which one could request free access to the API. But from 1 October, users will have to pay to leverage the arguably superior artificial intelligence language model.
This does not mean the API will be available to the general public from 1 October; it will still be private. Nevertheless, the tiers give a sense of the pricing, although the plans may change in the future.
Since the release of GPT-3 API, users have created never-before-seen use cases such as code generation, semantic search, chat, synthetic writing, among others. A wide range of GPT-3 examples can be accessed on the official site of OpenAI.
Although many see GPT-3 as a game-changer, several practitioners, along with the CEO of OpenAI, believe that the model is overhyped.
Undoubtedly, use cases of artificial intelligence that look good on social media rarely come to life, due to the several constraints of productization.
Consequently, we will have to wait and see whether the model can assist researchers and data scientists in building groundbreaking real-world solutions.
Given the uncertainty around GPT-3's real-world effectiveness, the API's pricing might prove too costly for developers once it is released to everyone.
The Odisha Government has made all the courses on Coursera free for its residents. One can not only access data science courses but also all other existing Coursera courses. The initiative was undertaken under the banner “Skilled in Odisha” to ensure people can upskill during the COVID-19 pandemic. This will allow people of Odisha to come out stronger with new skills post the lockdown.
To get free access to the Coursera platform, you will have to register for the special skill development program here. In addition to Coursera courses, the initiative also offers access to SAP ERP; you can choose either based on your interest. However, Analytics Drift recommends opting for Coursera, as you can learn from a wide range of courses, from data science to web development and Android development.
It has been a few weeks since the start of the initiative, but only a few aspirants have shown interest, as is evident from the number of page visits (19,337 at the time of writing). Note that the Odisha Government will stop accepting requests for free access to these platforms on 29 September 2020.
Nevertheless, you can register for access by filling in the necessary details like identity card number, name, phone number, email address, and current address. After registration, you will immediately get an email with a registration ID, which can be used for further communication. However, you will have to wait a few days, around one to five, for a final confirmation from the Odisha Skill Development Authority (OSDA) after it verifies the provided details and documents.
On receiving the final confirmation email, you can create your Coursera account with the email you provided during registration. But, if you already have a Coursera account with your email id, Coursera will automatically identify that you have been provided free access to all the courses by the Odisha Skill Development Authority.
You can enrol in multiple courses for free and get certificates post-completion. The access, however, will be revoked on 30 December 2020. Consequently, you will have to complete the courses before the free access ends to get the certificates.
Hurry up! Learn as much as you can in the next four months.
Kneron, an AI chip maker for edge devices, has released its latest chip, which is at least 2x more power-efficient than Intel's and Google's existing processors. The KL720 AI SoC focuses on driving the processing needs of real-world use cases such as smart TVs, AI glasses and headsets, and high-end IP cams.
According to the company, the KL720 AI SoC delivers 0.9 TOPS per watt, providing better performance for AI workloads with 2 to 4 times the power efficiency of competing chips. Kneron pairs it with an Arm Cortex-M4 CPU to manage different workloads across the chip's resources.
The processor can handle heavy workloads at the edge by processing 4K images, Full HD videos, and 3D sensing for fool-proof facial recognition and gesture control for gaming, among others.
Kneron's earlier chip, the KL520, was focused on edge AI for smart locks, security cameras, drones, and home appliances. With its latest processor, however, it has doubled down on high-performance tasks at the edge. What differentiates Kneron is the high power efficiency of its products: the first-generation KL520 runs on 8 AA batteries for fifteen months, delivering value without hassle.
“KL720 combines power with unmatched energy-efficiency and Kneron’s industry-leading AI algorithms to enable a new era for smart devices. Its low cost enables even more devices to take advantage of the benefits of edge AI, protecting user privacy, to an extent competitors can’t match. Combined with our existing KL520, we are proud to offer the most comprehensive suite of AI chips and software for devices on the market,” said Kneron founder and CEO Albert Liu.
The robustness of its solutions has been recognized by Gartner, which named Kneron a Cool Vendor in AI Semiconductors 2020, and by $40M in funding raised in January 2020. The company is committed to enhancing end-to-end integrated hardware and software solutions for edge AI.