Have you ever wondered how your voice would sound if it came out of a musical instrument like a flute or violin? Google’s team–Magenta–explored the possibility of using machine learning to convert your voice into music with Tone Transfer.
Leveraging Differential Digital Signal Processing (DDSP), the team developed an open-source technology to convert your voice into music. “DDSP allows developers to combine interpretable structure of classical DSP elements (such as filters, oscillators, reverberation, etc.) with the expressivity of deep learning,” noted Magenta.
With DDSP, the researchers created complex, realistic signals by controlling various parameters, training neural networks to fit a dataset through standard backpropagation. This novel approach was also presented at ICLR 2020, where the researchers demonstrated conversions such as Voice → Violin and Violin → Flute.
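To make the idea concrete, here is a toy, stdlib-only Python sketch (not Magenta's actual DDSP code, which lives in its open-source repository) of the kind of interpretable DSP element DDSP differentiates through: a sinusoidal oscillator whose per-sample frequency and amplitude controls are exactly the sort of parameters a neural network would predict.

```python
import math

def sine_oscillator(freqs_hz, amps, sample_rate=16000):
    """Render a time-varying sinusoid from per-sample frequency and
    amplitude controls -- the kind of parameters a DDSP decoder predicts."""
    assert len(freqs_hz) == len(amps)
    phase = 0.0
    samples = []
    for f, a in zip(freqs_hz, amps):
        phase += 2 * math.pi * f / sample_rate  # accumulate instantaneous phase
        samples.append(a * math.sin(phase))
    return samples

# Constant 440 Hz tone with a linear fade-in over 100 samples.
n = 100
audio = sine_oscillator([440.0] * n, [i / n for i in range(n)])
```

In DDSP itself, these controls come out of a learned decoder and the oscillator is implemented with differentiable tensor operations so gradients can flow back into the network.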
DDSP learns by extracting the characteristics of a musical instrument and mapping them onto different sounds, thereby creating appealing music. You can use a wide range of audio, from someone singing to a dog's barking or people talking, and convert it into the sound of a violin, flute, or other instruments.
Google last week released a portal where you can upload or record your audio and then transform it into a wide range of sounds of instruments.
Since the researchers used western music to train the model, it might give unexpected results on other inputs. However, because DDSP transforms sound by modeling the frequencies in the audio rather than relying on the conventions of western music, the possibility of distorted or undesired output is reduced.
You can use the tool–Tone Transfer–to listen to how your voice sounds when matched with the sound of instruments. You can also download the audio to share across your network.
Besides, you can use Magenta, an open-source library for Python and JavaScript, to experiment and build your own machine learning models while playing with music to generate resounding audio.
Amazon introduces Amazon One–a device that enables contactless payment for faster, more secure transactions. Users can scan their palms, following prompted gestures, to verify their identity. Currently, the scanner is available at select Amazon Go stores (two stores in Seattle, including South Lake Union at 300 Boren Ave), but the company plans to offer the device to other organizations as well. Amazon One can be deployed in retail stores, stadiums, and office buildings, among others.
Although it looks like the technology was developed to mitigate challenges caused by the pandemic, Amazon was working on it even before the pandemic happened. On December 26, 2019, the e-commerce giant was granted a patent for its Hand ID technology, which focused on eliminating the use of cards and applications for transactions. Amazon One uses computer vision in real time to authenticate users and complete transactions within seconds.
To use Amazon One, users will have to undergo an initial setup, providing necessary details such as a phone number, credit card, and palm scan, which are combined for future authentication.
Undoubtedly, Amazon One has the potential to revolutionize the way we verify identity in our day-to-day lives, but security concerns can be a significant roadblock. Unlike passwords, which can be changed after a hack to fend off future attacks, users' biometric data in the hands of hackers can haunt them forever.
To address such security concerns, Amazon will store the data in its cloud instead of relying on on-device storage. In addition, customers will have control over their data and can request that the information be deleted from the cloud, thereby enhancing customer experience.
You can also check your usage history and manage your account if you connect Amazon One with your Amazon account, but doing so is not mandatory to use Amazon One.
GitHub Code Scanning is now generally available for users to evaluate their code for security flaws. The idea behind this initiative is to eliminate vulnerabilities before the code is in production. With GitHub Code Scanning enabled, every git push is scanned for potential security loopholes. The results are displayed directly in pull requests, making users aware of imperfections in the code.
The open-source GitHub Code Scanning solution is powered by CodeQL, which ships with 2,000+ default queries to scan code. GitHub has doubled down on security enhancements since it acquired Semmle on September 19, 2019; Semmle allowed developers to write queries to search for vulnerabilities.
Now, with Code Scanning, developers can mitigate security threats effectively. However, it does not guarantee 100% secure code, since security flaws can vary based on workflows. Nevertheless, one can eliminate common vulnerabilities in the code or use customized queries to uncover security issues.
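For intuition only, the following sketch mimics what a vulnerability query conceptually looks for. It is not CodeQL (which performs semantic dataflow analysis, not regex matching); it is a hypothetical toy that flags Python lines building SQL strings by concatenation or f-strings, a classic SQL-injection smell.

```python
import re

# Toy illustration only: real CodeQL queries track tainted data semantically.
# This just flags `execute(` calls whose argument is an f-string or a
# string literal concatenated with `+`.
SQLI_PATTERN = re.compile(r"""execute\(\s*(?:f["']|["'][^"']*["']\s*\+)""")

def scan(source: str):
    """Return (line number, line) pairs that match the injection smell."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if SQLI_PATTERN.search(line):
            findings.append((lineno, line.strip()))
    return findings

code = '''
cur.execute("SELECT * FROM users WHERE id = " + user_id)
cur.execute("SELECT * FROM users WHERE id = %s", (user_id,))
'''
issues = scan(code)  # only the concatenated query is flagged
```

The parameterized query on the last line passes, which is exactly the distinction a real query suite encodes far more robustly.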
Since the beta release in May, developers have run GitHub Code Scanning more than 1.4 million times on 12,000 repositories. More than 20,000 security issues, such as SQL injection, cross-site scripting (XSS), and remote code execution (RCE) vulnerabilities, were identified. Of the total flaws, 72% were fixed before the pull requests were merged. Such efficiency is far ahead of industry standards, where it takes more than 30 days to fix at most 30 percent of all flaws.
“We chose Advanced Security for its out-of-the-box functionality and the custom functionality that we can build off of. Instead of it taking a full day to find and fix one security issue, we were able to find and fix three issues in the same amount of time,” said Charlotte Townsley, Director of Security Engineering at Auth0.
GitHub Code Scanning is open-source and can be used for free with public repositories, but is only available to GitHub Enterprise to scan private repositories. To enable it in public repositories, you can visit here.
D-Wave Systems Inc, a quantum computing company, announces the general availability of its next-generation quantum computing platform, Leap™. The advanced system can power the development of deep learning solutions and assist in solving complex material science problems.
Leap, with 5,000+ qubits and 15-way qubit connectivity, is powered by the Advantage™ quantum system to enable developers to solve real-world problems. The current iteration of D-Wave's platform adds 3,000+ qubits over its predecessor, empowering developers to use the hybrid solver service (HSS), which combines quantum computation with classical resources for application development.
With HSS, users can run applications with one million variables, up from 10,000 variables in the previous service. Such capabilities place Leap uniquely in the computing landscape to support a wide range of applications.
Along with the 5,000+ qubit quantum computing platform, D-Wave released a slew of offerings: D-Wave Launch™ and the discrete quadratic model (DQM) solver. While Launch helps businesses get started with hybrid quantum applications, the DQM solver enables developers to tackle a new range of problem classes: instead of accepting only 0 and 1 as variable values, it works with discrete sets such as integers (say, 1 to 500) or categories like red, yellow, and blue. The DQM solver will become generally available on October 8, 2020.
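To illustrate what a discrete quadratic model expresses, here is a hypothetical, brute-force toy in plain Python (not D-Wave's Ocean SDK): coloring a 4-node cycle graph with the discrete set {red, yellow, blue} so that adjacent nodes differ, by minimizing a quadratic penalty over all variable assignments.

```python
from itertools import product

# Toy discrete quadratic model: each variable (graph node) takes one of
# three discrete values; quadratic terms add a penalty of 1 whenever two
# neighbors share a color. A DQM solver minimizes this kind of energy.
COLORS = ["red", "yellow", "blue"]
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-node cycle

def energy(assignment):
    """Count conflicting edges for a tuple of per-node colors."""
    return sum(1 for u, v in EDGES if assignment[u] == assignment[v])

# Exhaustively search all 3^4 assignments for the minimum-energy coloring.
best = min(product(COLORS, repeat=4), key=energy)
```

A real DQM solver minimizes the same kind of energy function, but over problems with up to a million variables, where brute force is hopeless.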
Companies like Menten AI have been benefiting from D-Wave's hybrid quantum solutions. Menten AI has used them to determine protein structures for de novo protein design, outperforming other classical solvers. In addition, Accenture and Volkswagen have been early adopters of D-Wave's hybrid quantum service, developing applications for banking and paint shop scheduling, respectively.
According to a report, 39% of surveyed enterprises are experimenting with quantum computing, and 81% of Fortune 1000 decision-makers have a quantum computing use case in mind for the next three years. As quantum computing gains momentum, D-Wave is moving at the right time to capture market share with its advanced computing services while helping organizations solve complex problems.
This year has delivered a completely different AI conference experience, as events have moved online due to the pandemic. Many machine learning conferences were rescheduled to the end of the year in the hope of still hosting physical events. Unfortunately, things are not under control yet, resulting in the adoption of virtual conferences. Although networking at remote AI conferences is strenuous, the format eliminates the geographic barrier of physical events, and in 2020 you can gain interesting insights from top researchers without leaving home.
Here are the top 4 AI conferences that will be organized before the year ends.
RAISE (Responsible AI For Social Empowerment)
Organized by the Government of India, RAISE is a global five-day artificial intelligence summit. Many tech leaders, including Arvind Krishna, CEO of IBM; Mukesh Ambani, chairman and MD of Reliance Industries Ltd; and N Chandrasekaran, chairman of Tata Sons, will be speaking at the event. Researchers and developers from across the world will join RAISE to talk about a wide range of topics related to artificial intelligence.
Yann LeCun, Chief AI Scientist at Facebook; Dr Milind Tambe, Director of “AI for Social Good” at Google Research India; and Ms Nivruti Rai, Country Head of Intel, among others, will share their views on topics such as Explainable AI, Responsible AI, NLP, and more.
Starting on October 5, 2020, RAISE is a free AI conference that everyone should sign up for.
NeurIPS (Neural Information Processing Systems)
NeurIPS is one of the oldest machine learning conferences catering to ML enthusiasts’ needs since 1987. Due to COVID-19, NeurIPS 2020 will be a virtual event, which will be held from December 6, 2020, to December 12, 2020. After ‘industry expo’ on the opening day, the rest of the days will be filled with tutorials, talks, and workshops. You can register for the summit here, which has varying prices for regular attendees and students.
GTC (GPU Technology Conference)
GTC is organized by NVIDIA and will run from October 5, 2020, to October 9, 2020. With 600+ live and on-demand sessions, you will have an opportunity to learn from some of the best minds in the industry. Although the name suggests a hardware event, it is a deep learning conference, covering topics like autonomous machines, conversational AI, AR/VR, and computing.
GTC also includes a full-day workshop from the Deep Learning Institute (DLI) that teaches learners to work with neural networks, deep learning tools, frameworks, and SDKs. However, you will have to pay an additional $99 to attend this remote workshop.
EmTech
EmTech will be held virtually from October 20, 2020, to October 22, 2020, by MIT Technology Review. The AI conference will feature Geoffrey Hinton, Professor Emeritus at the University of Toronto and VP and Engineering Fellow at Google; Marc Benioff, Chair, CEO, and Cofounder of Salesforce; Parag Agrawal, CTO of Twitter; and more. The three-day event has been segmented into ‘leading with innovation,’ ‘forces of change,’ and ‘the path forward,’ where researchers and tech leaders will shed light on possible directions for embracing AI.
Aspirants often think data science skills like programming and fundamental analysis techniques are enough to land data science jobs. Although these skills are essential, they do not guarantee job offers, as every other applicant is equipped with similar skills. You cannot become a data scientist just by learning a programming language and some lucrative machine learning libraries. The key to getting a data science job is to differentiate yourself from the rest. Consequently, you need more than just programming languages to increase your chances in a highly competitive market.
We list down data science skills that will set you apart from other freshers applying for entry-level data science jobs.
Programming Language
There is no point debating Python vs R, as data science is not only about the tools; you can use any tool as long as you can get the desired output. Learn a programming language of your choice and start analyzing data by getting familiar with popular libraries. “A data scientist is a problem solver, not only a Python programmer,” says Chiranjiv Roy, chief of data science & AI products at Access2Justice Technologies.
Data Analysis and Machine Learning
Learn data analysis and machine learning processes from any free or paid course to strengthen your basics. You cannot learn everything in machine learning at once, so make one full pass to understand the concepts. Eventually, you can come back and go deeper into any subdomain of machine learning, such as computer vision or NLP. Trying to learn everything in data science at once can make you a jack of all trades and master of none, which will not help you during interviews.
Inferential Statistics and Mathematics
Statistics and mathematics are the foundation of data science. Even if you do not have programming skills, you can still analyze data with no-code platforms. However, beginners mostly focus on programming languages, ignoring the foundation of data science–statistics and mathematics. This does not mean that you should ignore programming languages. But, at the same time, you should have in-depth knowledge of statistics and mathematics.
In a rush to move fast in data science–mostly due to online courses that teach too many things at once–aspirants skim over statistics and mathematics. They rely on importing statistical modules from libraries to carry out analyses. Such practices fail them during interviews because they struggle to explain the statistical procedures behind the code.
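As a sketch of what “knowing the procedure behind the code” means, here is a 95% confidence interval for a sample mean written out step by step with only the Python standard library (normal approximation; the sample data is invented for illustration):

```python
import math
import statistics

# A 95% confidence interval for a sample mean, step by step, rather than
# hidden behind a single library call.
sample = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7]

mean = statistics.mean(sample)
sd = statistics.stdev(sample)               # sample std. deviation (n-1 denominator)
se = sd / math.sqrt(len(sample))            # standard error of the mean
z = statistics.NormalDist().inv_cdf(0.975)  # ~1.96 for a two-sided 95% interval
ci = (mean - z * se, mean + z * se)
```

Being able to narrate each of these steps (and to say when a t-distribution should replace the normal approximation for small samples) is precisely what interviewers probe.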
Moving too fast in data science is not the way forward. It takes years to become a data scientist. You should ensure you have obtained a strong foundation before you move on to other concepts in data science.
Data Intuition
Learning numerous data science techniques will not make you a data scientist if you do not know how to apply them. You have to contextualize while working with data to pick the best approach to solve a problem. Data intuition is often overlooked by beginners because they believe it only matters when handling large data. Although it is an essential skill when managing colossal amounts of information, the ability to extract the most from limited data is equally important. Given a dataset (small or big), you should be able to quickly think of the approaches best suited to deliver business value.
Storytelling
Storytelling is one of the most important data science skills. While working in organizations, you will have to work with decision-makers who are mostly unaware of data science terms. If you try to communicate with them using data science jargon, they might not assimilate your approaches. And if you fail to explain your analysis to business leaders in simple terms, your machine learning models might never go into production. Such instances hurt not only the organization but also your reputation as a data scientist.
Storytelling is not only for internal communication. In organizations, you will have to talk to clients who might not be aware of data science terms. To ensure they understand your models and results, you need to communicate in a way that is easy to understand by all.
Handling Unstructured Data
While learning from data science courses, learners often get tabular or structured data for their projects. In organizations, however, you are not always provided with prepared data; you need to gather unstructured data from various sources. As per a report, 80 to 90 percent of data in organizations is unstructured. Over the years, organizations have focused on tabular data, but businesses have now realized the value of unstructured data. Therefore, being proficient in handling unstructured data can differentiate you from other applicants and help you land a data science job.
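As a small, hypothetical example of turning unstructured text into structured records, the sketch below extracts fields from free-form support messages with a regular expression (the data and format are invented for illustration):

```python
import re

# Invented raw text: unstructured support messages with an order id,
# an email address, and a free-form note on each line.
raw = """\
Order #1043 from alice@example.com: screen arrived cracked
Order #2210 from bob@example.com: refund processed, thanks!
"""

pattern = re.compile(r"Order #(\d+) from (\S+@\S+): (.+)")
records = [
    {"order_id": int(m.group(1)), "email": m.group(2), "message": m.group(3)}
    for m in (pattern.match(line) for line in raw.splitlines())
    if m
]
```

Real pipelines apply the same idea at much larger scale, layering NLP on top for text that does not follow a fixed pattern.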
Software Development Understanding
Usually, beginners fall for deep learning techniques to create appealing projects or use cases. In reality, deep learning is not crucial for most organizations. Turning deep learning models into products requires huge computation power, which is not always feasible, even though it might give the best results.
Most business problems can be solved with simple supervised machine learning models, so avoid fancy deep learning techniques unless necessary. This will help software developers productize your machine learning models. Therefore, never bring in deep learning models to demonstrate your data science skills in interviews when simple machine learning models can, more or less, deliver the same results. “Remember, a Jupyter Notebook model running on the cloud will not derive the tangible benefits expected to solve the problem. Hence, application development is key for a data scientist, as everyone loves to see an application to play with,” said Chiranjiv.
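As a concrete example of “simple beats fancy,” here is a deliberately minimal supervised model, a 1-nearest-neighbour classifier on invented 2-D data, written with only the standard library; a baseline like this is far easier to productize than a deep network:

```python
import math

# Invented toy training data: 2-D feature vectors labeled "low" or "high".
train = [((1.0, 1.0), "low"), ((1.2, 0.8), "low"),
         ((4.0, 4.2), "high"), ((3.8, 4.5), "high")]

def predict(point):
    """Classify a point by copying the label of its nearest training example."""
    nearest = min(train, key=lambda ex: math.dist(ex[0], point))
    return nearest[1]

label = predict((1.1, 0.9))  # close to the "low" cluster
```

Starting from a baseline like this also gives you a number that any fancier model must beat before its extra complexity is justified.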
Data science podcasts are one of the best sources of information from some of the best minds in the industry. Machine learning practitioners who have made it big in the field talk about the latest developments, best methodologies, and the future of the data science space. Listening to researchers and developers who have furthered machine intelligence clears up confusion in the market and brings fresh perspectives to listeners. Whether you are an aspirant or a practitioner, you should listen to these artificial intelligence podcasts to stay abreast of the latest trends. Today, there are numerous machine learning podcasts, making it difficult to follow along, but you can always be selective and listen according to your area of interest.
We list down 10 data science podcasts that you can subscribe to and stay informed.
Note: This list is in no particular order.
Lex Fridman Podcast
Lex Fridman Podcast is considered by many to be the best artificial intelligence podcast. Over the years, top researchers as well as practitioners have been part of it. Unlike others, Lex Fridman hosts lengthy conversations that can run as long as 4 hours, though episodes are usually around 2 hours. Earlier, the podcast focused only on artificial intelligence, but Fridman now invites guests from other fields like neuroscience, physics, chemistry, history, and mathematics. Started in 2018, this weekly podcast quickly gained traction in the data science field; it is a must for any data science enthusiast.
Data Skeptic
Data Skeptic is one of the oldest data science podcasts and has covered a wide range of topics. Since 2014, it has been catering to the curiosity of machine learning practitioners every week. Episodes are usually 30 minutes long, an ideal length for most listeners. If you are interested in statistics, critical thinking, and the efficiency of machine learning approaches, this is the go-to podcast for all your needs.
The TWIML AI Podcast
Hosted by Sam Charrington, The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) started in 2016 and has over 410 episodes. Charrington hosts top influencers from the data science field to discuss trends and best practices. The episodes are 40 to 60 minutes long–just enough to share ideas without repetition.
Practical AI
As the name suggests, Practical AI focuses on real-world implementation alongside new developments in data science. The weekly artificial intelligence podcast has hour-long episodes, featuring technology professionals, researchers, and developers in engaging conversations on machine learning. Started in 2018, this is one of the best data science podcasts and one everyone should keep an eye on.
The AI Podcast
The AI Podcast is produced by NVIDIA–a leading graphics processing unit (GPU) maker. Started in 2016, it has over 120 episodes. NVIDIA's podcast is top-rated among professionals interested in the computing side of artificial intelligence. Top developers, researchers, and leaders from NVIDIA share their experience and knowledge of machine learning, and influencers from organizations like NASA, Lenovo, and Ford are invited to bring fresh perspectives to listeners.
Making Data Simple
Hosted by the VP of Data and AI Development at IBM, Making Data Simple is another highly recommended podcast in the data science landscape. It differs from other data science podcasts in that it focuses on demystifying the technologies for the general public. Instead of in-depth research topics, its 30-to-40-minute conversations are a perfect starting point for beginners.
Brain Inspired
Brain Inspired is a long-form podcast–episodes run around two hours–intensely focused on neuroscience and artificial intelligence. Experts from different walks of life are invited to talk about deep learning and machine learning techniques. It is the best deep learning podcast for experts and practitioners with a deep understanding of neural networks; for beginners, the episodes can be overwhelming due to the constant stream of information about data science techniques.
AI In Business
Although it started only late last year, AI in Business produces episodes at scale: you may see two or three episodes in a week, in which Daniel Faggella–the host–talks to the best minds in the data science space. The topics are fairly general, like the evolution of AI chips, trends in machine learning, and the adoption of facial recognition, making the podcast suitable for both beginners and industry experts.
Not So Standard Deviations
Not So Standard Deviations, a phenomenal data science podcast, recently celebrated its fifth anniversary. Roger Peng and Hilary Parker have been conversing with data science experts for the last five years to spread knowledge of the latest trends. You can find numerous hour-long episodes to stay informed of the ever-changing data science market.
Talking Machines
Talking Machines is another classic machine learning podcast, started in 2015, where the hosts talk to researchers from several blue-chip companies. Since June, the podcast has been on pause to reflect on anti-Black racism. Despite the break, you can still listen to the wealth of past talks and gain exciting insights.
Outlook
Several classic data science podcasts have been either paused or stopped entirely, but you can still access their episodes to learn from data science influencers. Data Crunch, Data Stories, Partial Derivative, and Linear Digressions are among the closed or inactive podcasts whose archives remain worth a listen, depending on your interests. Besides, you can also follow other active machine learning podcasts such as SuperDataScience, Eye On AI, and Data Engineering to gain data science knowledge.
Friends App, India’s latest venture in the digital world, has attracted a large number of users within a few days of its launch. The app is growing at a considerable pace: with 5,000+ downloads already, the number is increasing with each passing day.
Friends is a social networking app that lets you post short videos and share them within India and anywhere across the globe.
The Government of India wants to encourage the use of Indian apps among its residents, giving Indian technology an opening and recognition toward making the vision of Digital India a reality. The app can be perfectly categorized as a “Made in India” venture, having been founded by Manju Patil and Mrityunjay Patil from Bangalore.
These days, when all we want is to stay connected to our friends and family, this application will keep you close to your near and dear ones across the world. The best part of the app is that you can watch and share viral videos in your language within seconds. The languages supported include Hindi, Malayalam, Tamil, Telugu, Kannada, Marathi, and English.
Now that TikTok has been banned in various countries, this app has turned out to be a savior for the youth. You can capture fascinating moments in the form of videos and share them with friends with a single tap! Apart from this, the app also allows you to chat and interact with new people, browse through the feed, and follow the creators you like, among other things.
The application that is available on the Google Play Store is an all-purpose entertainment platform that works with lightning speed.
What makes the Friends app even more appealing is that you can easily share your created videos and posts on Facebook or WhatsApp. Even people who do not have the app can see the videos through the shared link.
The Friends app is a splendid platform for all types of creators. For singers, actors, comedians, dancers, and other talented people, the app is a safe space to showcase your talent. Friends app users can get innovative with WhatsApp statuses, videos, audio clips, GIFs, and photos.
The application has been trending on Google Play Store for quite some time now. Not only this, but the Friends app has also got a 4.8 rating on the same store.
It is easy to use and offers data privacy. The application is designed to understand user preferences and filter content accordingly, providing personalized content to all its users and curating their feeds.
You can follow your favorite artists, from the glam world to creators of funny and viral videos. See the content you like and get updates whenever a favored artist posts a video.
With 5,000+ downloads within a month, the Friends app is already trending on the Google Play Store. It is a secure platform for content creation, and the number of users is expected to rise in the coming days, making it one of India's fastest-growing apps.
Microsoft announces access to GPT-3 with an exclusive license to integrate the largest language model in its products and services. An exclusive license of GPT-3 will allow Microsoft to blaze the trail and develop an advanced Azure platform to further the development of artificial intelligence.
“Our mission at Microsoft is to empower every person and every organization on the planet to achieve more, so we want to make sure that this AI platform is available to everyone–researchers, entrepreneurs, hobbyists, businesses–to empower their ambition to create something new and interesting,” wrote Kevin Scott, executive vice president and chief technology officer, Microsoft, in the blog.
But, how did Microsoft get access to the exclusive license of GPT-3?
In June last year, Microsoft invested $1 billion in the AI research lab OpenAI to support the development of artificial general intelligence (AGI). During the announcement, they also shed light on a plan to develop a supercomputer for OpenAI researchers to build large-scale AI models. In return, OpenAI agreed to license intellectual property to Microsoft for commercializing its AGI technologies.
Although Microsoft holds the exclusive license to GPT-3, access to the model will continue through the Azure-hosted OpenAI API. The pricing of GPT-3 was revealed earlier this month to users who had received access to OpenAI's largest language model.
Since the API's release to selected researchers, hobbyists, and developers, users have built solutions that can generate code automatically, write articles, and more. But people were equally critical of such solutions for misleading the general public into believing in a superiority of GPT-3 that does not actually exist: it is just a large model trained on a colossal amount of data, without any cognitive abilities.
Even OpenAI CEO, Sam Altman, tweeted about the GPT-3’s hype.
However, with GPT-3 at its disposal, Microsoft can gain an edge over its competitors; it will have access to the in-depth functionality and code of GPT-3, empowering it to innovate beyond what is possible with the API alone.
TikTok has bowed to the abuse of power by the Trump administration and made a clumsy deal with Oracle and Walmart. The two US firms will collectively own 20% (Oracle 12.5% and Walmart 7.5%) of the new company–TikTok Global–that will spring out of ByteDance and go public within a year. TikTok Global will be listed on a US stock exchange after raising pre-IPO funds, allowing Walmart and existing investors like Sequoia Capital, General Atlantic, and Coatue Capital to pour in money and take US companies' stake to a lion's share of 53%, up from the current 40%. The deal also includes creating 25,000 TikTok jobs in the US.
Oracle will not get access to TikTok's core algorithm, which gives the video-sharing application its edge. However, TikTok will use Oracle's cloud to store US users' data, and Oracle will monitor how the data is processed to ensure China has no hand in manipulating or obtaining sensitive information. Even though the deal did not happen the way Trump initially expected, he was ecstatic with this messy agreement, because the intention was to disorient the company, not to protect privacy.
TikTok's deal with Oracle, in reality, has little significance for the privacy of users' data. The US is paranoid about TikTok's technology and its potential to drastically reduce the dominance of social media giants like Facebook, Instagram, and Twitter. This is apparent in the exponential rise of TikTok's user base over the past few years; ByteDance is the most valuable private company in the world, with a valuation of $150 billion, revenue of $17 billion in 2019, and a net profit of $3 billion.
TikTok has an advanced recommender system that no other social media company has been able to replicate. The Indian short-video apps introduced after TikTok's ban in India failed to cater to users' demand, and Instagram's Reels could not deliver either; TikTok remains head and shoulders above these applications. More recently, in a desperate attempt to fill the void left by TikTok in India after its ban in June, YouTube released Shorts in India.
With no competitors in sight, the US government seized on TikTok's unprecedented growth and ran a narrative of a possible threat to the nation. The Trump administration simply cannot resist the rise of any other tech company, especially a Chinese one. Huawei, another technology giant that outpaces American companies in 5G, is banned from participating in 5G development in the US. While these unfair practices have proven fruitful for the US and its firms, how long will the strategy work? Undoubtedly, China started it by using government power to minimize the penetration of Google and other tech companies in its country, but the US has now started walking its competitor's path. If we continue to slow the proliferation of technology to protect bogus national interests, we will kill innovation. Instead of embracing new technology, we are at a stage where jingoism has taken over innovation.
If baseless beliefs about privacy breaches hijack our decisions, we might in the future witness other countries doing the same. What if India does to Facebook, Google, and Microsoft what the US did to TikTok? It would be an endgame for tech innovation. Unlike TikTok, Facebook, Google, and Microsoft have actually been guilty of failing to protect users' data; by that logic, it would be more reasonable to ban those companies if they do not sell stakes to organizations in the countries where they operate.
Unfortunately, TikTok, a company that revamped the way content is delivered and consumed, became the scapegoat–all while no fraudulent practices have been established. TikTok's deal with Oracle is the death of tech innovation.