
Free Data Science Courses


With approximately 2.5 quintillion bytes of data generated daily, the need to understand and manage big data keeps growing. Companies collect data, process it, and derive meaningful insights about areas they can improve. As a result, data science is one of the most sought-after skills in today's technology-driven world. With 97.2% of organizations currently investing in big data and data science, data science skills are a clear advantage. Here is a list of seven free data science courses to help you master data science concepts:

  1. An Introduction to Data Science – Udemy

An Introduction to Data Science is one of the best free data science courses you can take if you are a beginner and want to start your journey in data science. Created by Kumar Rajmain Bapat, this course helps you visualize, play, and manage data efficiently. Through this Udemy course, you will understand the primary data science concepts. The course will go through the history behind data science. This course will give you a detailed and easy-to-follow road map to mastering data science and becoming a full-fledged data scientist. You will also learn about the applications of data science in various fields and the skills required to build a career in the data domain.

The course primarily focuses on essential aspects of data science, like the difference between noise and data and the types of graphs used for data visualization. As this is an introductory course, there are no prerequisites, and the total duration is just 43 minutes, making it suitable for anyone interested in data science.

Link to the course: An Introduction to Data Science

  2. IBM Data Science Professional Certificate – Coursera

Offered by IBM, the Data Science Professional Certificate comprises ten free data science courses that will boost your career in data science and machine learning (ML). The certification guides you from basic data science skills, such as Python and Structured Query Language (SQL), to creating advanced machine learning models. There is no prior experience or degree requirement for this course.

With this professional certification, you will be able to learn various data science skills that are the minimum requirements needed for a job in the data science field. You will work on multiple projects to create an impressive portfolio of your skills to showcase to employers. 

This course is entirely online and takes approximately 11 months to complete at a pace of four hours per week. You can start instantly and learn at your own pace and schedule; all the deadlines are flexible. Coursera courses are free to audit, but if you want access to graded assignments or a course completion certificate, you will have to pay.

You will study these modules: data science tools, Python programming for data science, databases and SQL, data analysis, data visualization, machine learning, and artificial intelligence (AI) development using Python. The certification ends with an applied data science capstone project that gives you real-life experience of a data scientist's role.

Link to the course: IBM Data Science Professional Certificate

Read more: Reddit Launches New NFT Avatar Marketplace for its users

  3. Introduction to Data Science – SkillUp

SkillUp by Simplilearn provides free data science courses for beginners to help them understand the basics of data science and how to become a data scientist in today's world. The "Introduction to Data Science" course will guide you through the essential data science workflows, tools, and techniques you need to start your career as a data science professional. Upon completing the course, you will be able to perform exploratory data analysis, descriptive and inferential statistics, model building and fine-tuning, ensemble learning, and supervised and unsupervised machine learning.

The prerequisites for this course are a basic understanding of mathematics and programming concepts. At the beginning of the course, you will be briefed on the top five Python libraries used for data science. The course also provides details on data science job roles, required skills, salary ranges, and more, and it will guide you on writing an impressive data science resume and preparing for interviews.

This course is seven hours long, but you will have only 90 days of access to your free lesson. On completion, you will receive a completion certificate which you can include in your resume to stand out from your competitors.

Link to the course: Introduction to Data Science 

  4. Intro to Data for Data Science – Udemy

Created by Matthew Renze, Intro to Data for Data Science is another good course for absolute beginners in data science. It provides insights into data and its importance, and you will learn about the data lifecycle, from data collection to processing and analysis. The course is approximately one hour long, making it excellent for those who want a short introduction to data science to check whether the field is right for them.

There are no requirements for this lesson as it is an introductory course for data science beginners. This lesson will cover nominal, ordinal, interval, and ratio data. With examples, you will learn scalar and composite data types. The course will conclude with tabular data and related concepts like relationships, queries, variables, and observations.
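The four measurement levels covered in the lesson determine which operations make sense on a column of data. A minimal Python sketch (the values below are made up for illustration):

```python
# Hypothetical examples of the four classic levels of measurement.
nominal = ["red", "blue", "red"]    # labels only: can count, but no order
ordinal = ["low", "high", "medium"] # ordered labels: rank matters, distance does not
interval = [20.0, 25.0, 30.0]       # e.g. degrees Celsius: differences are meaningful
ratio = [0.0, 2.5, 5.0]             # e.g. kilograms: true zero, so ratios are meaningful

rank = {"low": 0, "medium": 1, "high": 2}  # ordinal order must be supplied explicitly
print(max(set(nominal), key=nominal.count))  # the mode is the only sensible summary
print(sorted(ordinal, key=rank.get))         # sorting uses the supplied ranking
print(interval[1] - interval[0])             # 5.0: a meaningful difference
print(ratio[2] / ratio[1])                   # 2.0: "twice as heavy" is meaningful
```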

Note that you will only be given the online video content and will not have access to the instructor Q&A. You cannot directly message the instructor if you want to clear your doubts. Once you have completed the course, you will not receive any certificate from Udemy.

Link to the course: Intro to Data for Data Science

  5. Data Science for Everyone – DataCamp

For those without coding experience, this course from DataCamp is a good choice. The Data Science for Everyone course is approximately two hours long and will answer all the questions you were afraid to ask about data science. Without writing a single line of code, you will work through hands-on exercises covering concepts like A/B testing, machine learning workflows, and time series analysis.


The four chapters in this course will help you learn how data science quickly solves many real-world problems. In the first chapter, you will understand the different data science lifecycle processes and job roles in the data science domain, the workflow, and the data lifecycle. The second chapter will teach data collection, storage, and data pipeline automation. The next lesson is on preparation, exploration, and visualization of data and how to diagnose problems with your data. The last lesson consists of data experimentation, prediction, A/B testing, and forecasting. You will be briefed about machine learning workflows, clustering, and supervised learning.
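As a taste of the A/B testing idea in the last chapter, here is a minimal Python sketch comparing two conversion rates with a two-proportion z-score. The function name and figures are illustrative, not taken from the course:

```python
from math import sqrt

def ab_conversion_summary(conv_a, n_a, conv_b, n_b):
    """Compare conversion rates of groups A and B with a pooled two-proportion z-score."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the difference
    return p_a, p_b, (p_b - p_a) / se

# Hypothetical experiment: 120/1000 conversions in A, 150/1000 in B.
p_a, p_b, z = ab_conversion_summary(120, 1000, 150, 1000)
print(p_a, p_b, round(z, 2))  # a |z| near 2 or above suggests a real difference
```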

Link to the course: Data Science for Everyone

  6. Data Science: R Basics – edX

Among programming languages, R is extensively used for data science. If you want to kick-start your career in data science, learning R is one of the essential requirements. Data Science: R Basics by edX helps build a strong foundation in R. Upon completing the course, you will be able to manage, analyze, and visualize data efficiently. This introductory course is the first part of their professional certificate program in data science. If you dedicate one or two hours a week, you should be able to complete it in approximately eight weeks.

The course is created by Rafael Irizarry, professor of biostatistics at Harvard University. It teaches basic coding features like data types, vectors, indexing, arithmetic, and loop commands to sort, manage, and analyze data. Further in the series, you will learn topics like probability, regression, inference, and machine learning algorithms. With R, you will use dplyr for data wrangling, ggplot2 for visualization, and UNIX/Linux for file organization.

Link to the course: Data Science: R Basics

  7. Become a Data Scientist – LinkedIn Learning

LinkedIn Learning is one of the best platforms for free learning resources in any domain. This free data science learning path has twelve courses and about 20 hours of content. You will learn the fundamentals of data science, from statistics to systems engineering to data mining and machine learning. These lessons build a strong foundation in statistics and math, which is essential for any data science-related domain, and teach you to source, explore, and interpret data easily through graphs.

The twelve courses cater to the non-technical skills and the technical skills you will have to develop to master data science. The lessons cover various fundamental topics of statistics and mathematics like standard deviation, probability distribution, central tendency, variability, and more. You will also learn about the relationship between big data and AI, the Internet of Things (IoT), data science, and social media. One of the free data science courses is dedicated to lessons learned from data scientists, which will give you insights and advice from current data scientists. 

Link to the course: Become a Data Scientist

Conclusion

All these free data science courses are beginner-friendly and easy to follow. For more resources, you can check out Kaggle, a data science and analysis website with over 50,000 public datasets on which you can practice analysis. It even has short courses on essential data science skills like Python, machine learning, pandas, NumPy, deep learning, and more. If you want to go deeper after completing an introductory course, a professional certification course is a good way to extend your knowledge.


What caused crypto exchange platform Vauld to suspend withdrawals?


Vauld, a Singapore-based cryptocurrency lending and exchange start-up, recently announced that it had suspended withdrawals, deposits, and trading on its platform with immediate effect. The three-year-old start-up cited ‘navigating through financial challenges amid the market downturn’ as the reason for suspension. 

Vauld is a Singapore-based crypto platform that enables customers to lend, borrow, and trade crypto assets, including Bitcoin (BTC), Ethereum (ETH), Tether (USDT), and other major cryptocurrencies, from one unified platform. The company counts Valar Ventures, Coinbase Ventures, and Pantera Capital among its backers.

In a blog post on the company’s website, Darshan Bathija, Vauld’s co-founder and CEO, talked about the financial difficulties of business partners and customer withdrawals. He explained how the circumstances had prompted customer withdrawals of about $198 million since the 12th of June as the cryptocurrency market declined after the collapse of Terraform Lab’s UST stablecoin, followed by Celsius Network pausing withdrawals and Three Arrows Capital defaulting on loans.

Read More: Would Cryptocurrency Play An Influential Role In Ukraine’s Future Amid Russian Invasion?

Bathija said the start-up is considering restructuring options and has reached out to Kroll for financial advice. The company has also consulted Rajah & Tann and Cyril Amarchand Mangaldas for legal advice in Singapore and India, respectively. The start-up has expressed intentions to apply for a moratorium at the Singapore courts. 

While announcing the hiatus, Bathija said that the company is seeking the understanding of its customers, as it is not in a position to process any new or further requests. Certain arrangements will be made for customer deposits as necessary to meet margin calls in connection with collateralized loans. The announcement came about two weeks after Vauld cut its workforce by 30%.

While the reason for the suspension is apparent, the move by Vauld still comes as a surprise to the industry. On the 16th of June, after crypto lending platform Celsius announced mounting financial challenges, Bathija assured Vauld's customers in a tweet that the platform was not headed toward Celsius's predicament. He also affirmed that Vauld was far from the fate of Three Arrows Capital, another high-profile cryptocurrency firm that filed for bankruptcy.

Bathija had stated earlier that Vauld would remain liquid despite market conditions. He also said that all withdrawals had been processed as usual over the preceding days and would continue to be in the future. A suspension announcement after such an affirmative statement might therefore be hard for some to wrap their heads around.

Recently, FTX’s US-based subsidiary signed an agreement with BlockFi, another financially strained crypto lender. The deal gives FTX the option to buy the startup for up to $240 million based on its performance. BlockFi was among the firms that liquidated some of Three Arrows Capital’s positions, and it was valued at $3 billion in an earlier financing round.

According to the company website, Vauld enables customers to earn the industry’s highest interest rates on major cryptocurrencies: 6.7% annual yield on staked Bitcoin and Ethereum tokens and 12.68% annual yield on stablecoins such as USDC and BUSD. It also lets customers borrow against their tokens, at up to a 66.67% Loan To Value (LTV) ratio, and offers several other trading services.
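For a sense of what these numbers mean in practice, here is a rough Python sketch of the LTV and yield arithmetic. The function names are hypothetical, and it assumes simple, non-compounding annual interest, which may not match Vauld's actual payout schedule:

```python
def max_borrow(collateral_value: float, ltv_pct: float = 66.67) -> float:
    """Maximum loan against collateral at a given loan-to-value percentage."""
    return round(collateral_value * ltv_pct / 100, 2)

def simple_annual_yield(principal: float, apy_pct: float) -> float:
    """Balance after one year, assuming simple (non-compounding) interest."""
    return round(principal * (1 + apy_pct / 100), 2)

# Hypothetical figures: $10,000 of collateral, $1,000 staked in stablecoins.
print(max_borrow(10_000))                 # 6667.0 -- at the 66.67% LTV cap
print(simple_annual_yield(1_000, 12.68))  # 1126.8 -- at the quoted stablecoin yield
```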

Many crypto tokens, such as Ethereum and Binance Coin, have fallen by over 50% in value in the past six months, much like several tech stocks. A major reason for the fall is panic selling by investors, especially whales, amid increasing fear of inflation. In a recent podcast, Binance founder and chief executive Changpeng Zhao said that Binance has engaged with over fifty firms in recent weeks to strategize funding and bailout opportunities for struggling companies. Several crypto experts have warned that many more decentralized finance (DeFi) platforms are on the verge of collapse as financial constraints cripple their businesses, just as with Vauld.

While this might seem like an endgame for Vauld, the temporary hiatus is not expected to be the end of the cryptocurrency startup. In a recent tweet, Bathija expressed confidence that, with the advice of its financial and legal advisors, Vauld will be able to reach a solution that best protects the interests of its stakeholders and customers.


Reddit Launches New NFT Avatar Marketplace for its users


Reddit is introducing a brand-new NFT-based avatar marketplace that lets you pay a fixed price to buy blockchain-based profile photos. These NFTs will be hosted on Polygon’s blockchain. According to the company, you don’t need a cryptocurrency wallet to buy them; using your credit or debit card should be sufficient. You can also use Reddit’s wallet product to hold them.

According to Reddit’s announcement, 90 different NFT designs, with a total supply in the “tens of thousands,” are available during this early-access phase. The avatars will initially only be accessible to subscribers of the invite-only r/CollectibleAvatars subreddit, at quoted prices of $9.99, $24.99, $49.99, $74.99, or $99.99.

The company stated that if you buy one of its limited-edition NFTs, you will be granted permission to use it as an avatar both on and off Reddit. These privileges do not equate to those that come with owning an NFT from Yuga Labs’ Bored Ape Yacht Club collection, which permits you to create products or TV shows based on the bored ape you own. The avatars’ appearances can be customized with items from the avatar builder, and owners get a “glow-like effect next to their comments in communities.”

In the upcoming weeks, everyone will be able to purchase these collectible avatars on Reddit’s avatar builder page. Community members will get unique Ask Me Anythings (AMAs) from artists, behind-the-scenes posts regarding the one-off profile pictures, and instructions for setting up a wallet in the meantime.

But only local fiat currencies, like US dollars, will be accepted for purchasing Collectible Avatars; neither a cryptocurrency payment option nor an auction on a secondary market like OpenSea is currently available. Reddit explains that the marketplace is built on Polygon’s blockchain because of its commitment to sustainability and low-cost transactions. Alongside the Polygon marketplace, Reddit will introduce Vault, an Ethereum-compatible wallet.

The Reddit NFT marketplace isn’t Reddit’s first foray into NFTs. In January, Reddit began testing a feature that lets users choose any Ethereum-based NFT as their profile picture. This came after Twitter introduced a feature allowing users to set their NFTs as profile pictures: hexagon-shaped images that, when clicked, reveal information about the NFT and stand out from the default Twitter profile picture. These social media companies are aware of the demand for crypto-related features like NFT profile images from users looking to use them as a status indicator of their digital presence.

Many social media companies are working to make their platforms NFT-enabled. Recently, it was revealed that YouTube and Instagram are also exploring NFTs, while Meta Inc. intends to create a new NFT marketplace.

Read More: Another Phishing attack on OpenSea: Are Phishing threats on rise in NFT Marketplaces?

Reddit said in a statement that, in the future, blockchain will bring more empowerment and independence to communities on Reddit, and that the company prides itself on being a model for what decentralization could look like online. “Our communities are self-built and run, and as part of our mission to better empower them, we are exploring tools to help them be even more self-sustaining and self-governed,” it added. Reddit previously released limited-edition NFTs based on its mascot “Snoo” in 2021, under the name CryptoSnoos.


IBM announces acquisition of Israeli startup Databand.ai 


IBM has recently announced its acquisition of Databand.ai, a Tel Aviv-based data observability startup. Databand.ai provides services that catch data errors, poor data quality, and pipeline failures before they impact a company’s bottom line.

IBM hopes to ensure data security at all times by acquiring Databand.ai. The tech giant expects to strengthen its software portfolio across artificial intelligence, automation, and data.

Data observability is an emerging discipline that helps engineers and companies understand the health of their data and efficiently troubleshoot and address issues as they arise.

Databand.ai is IBM’s fifth acquisition in 2022.

Read More: IBM Launches Automation Innovation Centre To Build Automation Solutions

Databand.ai employs an extendable and open approach that enables data engineering teams to integrate with and gain observability into their data infrastructure. Under IBM, Databand.ai will be able to expand its data integration capabilities to meet the needs of customers with commercial data solutions.

IBM will also benefit from the acquisition. Databand.ai has created a unified data pipeline observability solution for data engineers, and the software will work alongside IBM Watson Studio and IBM Observability by Instana APM to address the full spectrum of IT observability.

With the acquisition of Databand.ai, IBM can offer the most comprehensive set of observability capabilities for IT across machine learning, applications, and data. IBM continues to provide its partners and clients with the technology to deliver reliable data and artificial intelligence at scale.


Soccer’s Governing Body FIFA Aims to Improve Offside Decisions with AI


FIFA hopes to enhance one of Soccer’s most disputed rules with artificial intelligence (AI). In the upcoming World Cup, the organization will roll out the application of AI to improve offside decisions and alert the team of video referees to make accurate judgments. 

ESPN FC editor Dale Johnson elaborated on the need for improved offside decision-making and how AI can help. He explained that offside matters in every penalty, goal, and attacking move: at the moment the ball is played forward, at least two defending players (usually including the goalkeeper) must be between the attacker receiving it and the goal line. If that is not the case, the attacker has an advantage they should not have, which can lead to incorrectly awarded goals.

FIFA hopes that AI will make calls on the validity of a goal in far less time. A straightforward offside decision takes around 70 seconds, FIFA says; with AI, this should drop to about 25 seconds, cutting the decision-making process by almost two-thirds. It also means that people on the ground and those watching the match at home will probably not notice any delay.
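The "almost two-thirds" figure follows directly from FIFA's quoted timings:

```python
# FIFA's quoted timings: ~70 s for a manual offside check, ~25 s with AI assistance.
manual_s = 70
ai_s = 25
reduction = (manual_s - ai_s) / manual_s  # fraction of decision time saved
print(f"Decision time cut by {reduction:.0%}")  # ~64%, close to two-thirds
```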

Read More: Meta AI’s New AI Model Can Translate 200 Languages with Enhanced Quality

It was not until 2017-18 that FIFA accepted technological assistance in the game, and the few technologies that existed before, like goal-line technology, were heavily criticized. Now the organization has embraced technology and AI to produce 3D animations showing whether the ball crossed the line, and it is open to fixing offside issues with AI assistance.

Johnson said, “So this won’t be something that is just for the World Cup. We will see offside improve much more in the other leagues next year in 2023. So this is FIFA’s big idea. It’s to improve the game and drive it forward by introducing this AI technology.”


Meta AI’s New AI Model Can Translate 200 Languages with Enhanced Quality

AI researchers at Meta have created No Language Left Behind-200 or NLLB-200, an AI model to enhance machine translation capabilities for most of the world's languages.

Language is not just a communication tool but an expression of different cultures, societies, and opinions across the globe. Nonetheless, language is also the barrier separating them. Thanks to translation technologies and artificial intelligence (AI) taking over the linguistic world, people can now read in their preferred languages. The world would lose a significant portion of its cultural treasures if it weren’t for translation and, more recently, technologies for translation. 

Like other technological developments, translation technologies have evolved too. Currently, the most frequently used method of translation is Machine Translation (MT); other methods like Computer-Assisted Translation (CAT) are also prominent. These technologies have undoubtedly offered the seamless communication capabilities people have wanted for ages, yet they still have undeniable limitations.

All tools and technologies used for translation work on different principles and consequently deliver different results. Some offer more accurate results, while others are compatible with a larger number of languages. Moreover, high-end translation tools are not accessible to billions of people and are incompatible with hundreds of languages, so people cannot fully participate in online conversations and communities in their regional or native languages.

Read More: Measuring Weirdness In AI-Based Language-Translations

To remove some of these barriers and make people a part of the future metaverse, AI researchers at Meta have created ‘No Language Left Behind-200’ or NLLB-200, an AI model to enhance machine translation capabilities for most of the world’s languages. The company claims that the model translates 200 languages with an average of 44% higher accuracy. These include lesser-resourced languages such as Lao and Kamba, one of 55 African languages covered, many of which are incompatible with existing translation tools.

No Language Left Behind (NLLB) is part of Meta’s long-term effort to build language and machine translation tools. Launched in February 2022, the project builds advanced AI models that learn and decipher languages from fewer examples.

The NLLB-200 is made to truly serve everyone, as other AI systems are not designed to cater to hundreds of local languages and provide a real-time speech-to-speech translation. Covering 200 languages is a step forward in overcoming data scarcity and acquiring more training data in local/regional languages. The new AI model also aims to overcome some modeling challenges of expansion faced by the company in previous years. 

It is not the first time Meta has developed a translation model. It released the 100-language M2M-100 translation model in 2020 with improved architectures and data acquisition practices, and has now scaled to another 100 languages with NLLB-200. The model can be used to advance other technologies, such as developing assistants for languages like Uzbek or creating movie subtitles in Oromo or Swahili. There are endless possibilities to extend its application and democratize access for people in virtual worlds.

Meta evaluated NLLB-200 on FLORES-200, a benchmark dataset that enables performance assessment in 40,000 different language directions, and reports high translation quality in each of the 200 languages.
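The 40,000 figure is roughly the number of ordered source-to-target pairs among 200 languages:

```python
# Ordered translation directions among n languages: n * (n - 1)
# (translating A->B and B->A count as two distinct directions).
n_languages = 200
directions = n_languages * (n_languages - 1)
print(directions)  # 39800, the "40,000 directions" figure rounded up
```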

Adding to the upsides, Meta is open-sourcing the model and the FLORES-200 dataset to all developers. It has also open-sourced the model training code. The company has also provided a demo to show the application of this open-source translator. The sole reason behind providing open-source access is to help researchers improve their work and translation capabilities via machines. Since inaccessibility is a major drawback of other language translation technologies/tools, Meta’s AI would make technology accessible to ordinary people. 

Further, NLLB-200 will aid in promoting native languages and enabling people to read things without an intermediary language. Languages like Mandarin, English, and Spanish dominate the language webspace. Many people from other countries or regions cannot get the sentiments or context of things written in languages other than their own. NLLB-200 will bridge this gap and add meaning to the text, as people can now read in their preferred language.

As an incentive to use the AI model impactfully, Meta is awarding up to US$200,000 grants to researchers and nonprofit organizations. These researchers/organizations are invited to use NLLB-200 to translate underrepresented languages. 

Meta has also collaborated with the Wikimedia Foundation, a nonprofit organization, to offer translation services on Wikipedia. The model would help reduce the disparity between English articles on the website and those in other languages, especially languages spoken outside America and Europe. For instance, there are only about 3,260 Wikipedia articles in Lingala, a language spoken by 45 million people in the Democratic Republic of Congo, against 2.5 million articles in Swedish, which is spoken by far fewer people in Sweden and Finland.

Even though the AI model offers more accurate and meaningful translation of more languages than before, there is endless scope for improvement: 200 languages cannot cover the entire language space. The company also faced several challenges in expanding the model from 100 to 200 languages. Since many of these languages are regional, the challenge is acquiring data from low-resource datasets, and the model starts overfitting if trained for extended periods due to data scarcity. Such challenges will only grow as the number of languages increases. Long story short, there is a long road ahead for translation technologies, but NLLB-200 is a step in the right direction. Meta plans to strive for a more inclusive and connected world by breaking down linguistic and technological barriers and empowering people.


Meta sues Chinese tech company over claims of data scraping


Meta, the American multinational technology conglomerate that owns Facebook, Instagram, and WhatsApp, recently announced that it is suing the US subsidiary of Chinese tech company Shenzhen Vision Information Technology Co. on the grounds of data scraping from Facebook and Instagram.

Legal action is being taken against Octopus Data, the US offshoot of Shenzhen Vision, a Chinese “national high-tech enterprise” whose website says it launched in 2016.

Meta also revealed that it is suing a Turkish individual identified as Ekrem Ateş, who allegedly set up automated accounts on Instagram to scrape data from about 350,000 Instagram profiles. The social media giant alleges that Ateş published the scraped data on his own websites, so-called ‘clone sites’.

Read More: Meta And Microsoft To Use AI In Their Data Centers

In its most general form, data scraping refers to a technique in which a computer program extracts data from the output generated from another program. Data scraping commonly manifests itself in the form of web scraping, which is the process of using an application or automated tool to extract valuable information en-masse from a website.
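As a minimal illustration of the extraction step, the sketch below uses only Python's standard library to pull link targets out of an HTML snippet. The page content and class name are made up, and real scrapers add fetching, rate limiting, and terms-of-service compliance on top of this:

```python
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Collects href targets from anchor tags -- the core of a toy web scraper."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Record the href attribute of every <a> tag encountered.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical page markup; a real scraper would fetch this over HTTP.
page = '<html><body><a href="/profile/1">Alice</a> <a href="/profile/2">Bob</a></body></html>'
scraper = LinkScraper()
scraper.feed(page)
print(scraper.links)  # ['/profile/1', '/profile/2']
```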

Social media giants and internet companies such as Meta are common targets of web scraping. Despite the potential threat to sensitive private information, scraping of publicly accessible data is, perhaps surprisingly, legal in the US.

Almost three months earlier, a US court had reaffirmed in a ruling that web scraping of public data is legal, the verdict in a long-standing legal battle between LinkedIn, a Microsoft-owned platform, and hiQ Labs, a data science company. The latter had scraped publicly available profile data from LinkedIn to help its customers predict employee attrition. The court ruled that scraping publicly accessible information does not infringe the Computer Fraud and Abuse Act (CFAA), a US cybersecurity law governing computer hacking.

The decision by the US court came as a relief for professionals and amateurs across the industrial sectors, including journalists, researchers, and archivists whose jobs regularly involve scraping publicly available data. However, it sparked legitimate privacy and security concerns among users about how their publicly accessible data is harnessed without their permission. 

It seems that in light of the decision in the ‘LinkedIn vs. hiQ Labs’ case, Meta is pursuing matters against Octopus Data through the Digital Millennium Copyright Act (DMCA) instead of targeting the entities under the Computer Fraud and Abuse Act (CFAA). The DMCA is focused on intellectual property and copyright infringement rather than hacking.

In its accusations, Meta affirms that Octopus Data charges customers a fee for access to a software product named ‘Octoparse,’ which can launch scraping attacks; alternatively, customers can pay Octopus Data to scrape websites directly. For the software to work, customers must give it access to their accounts, which allows it to scrape data that is usually only available to logged-in users, including email addresses, phone numbers, birthdates, Facebook friends, Instagram followers, and more.

In a blog post, Jessica Romero, director of platform enforcement and litigation at Meta, wrote that their lawsuit alleges Octopus Data violated Meta’s Terms of Service and the Digital Millennium Copyright Act. She added that the accused did so by engaging in unauthorized automated scraping of data and by attempting to conceal their scraping to avoid being detected and blocked from Instagram and Facebook. 

In its court filing, Meta specifically points toward certain parts of Section 3 of its Terms of Service, which state that users own the intellectual property rights such as copyrights and trademarks in any content they may create and share on any Meta company platforms. The terms of service also state that users have rights to their content and are free to share it with anyone or wherever they want.

Further down the section, the terms also state that users cannot collect other users’ content or information, and forbid accessing Facebook through automated means, such as harvesting bots, without Meta’s prior permission.

This lawsuit against the Chinese tech company Octopus Data comes in light of Meta’s recent victory in a similar data-scraping case, filed about two years ago against BrandTotal, an Israeli company. The latter offered a browser extension that scraped data from Facebook users. Unlike the ‘LinkedIn vs. hiQ Labs’ case, the court agreed with Meta’s claim that BrandTotal breached Facebook’s terms of use. The court also found that BrandTotal violated the CFAA and California’s Comprehensive Computer Data Access and Fraud Act (CDAFA) by accessing password-protected pages using automated user accounts.

Web scraping, or more broadly data scraping, is a practice as old as the internet itself, and eliminating web scrapers entirely seems nearly impossible given their prevalence. However, considering the threat scraping poses to users’ private data, action is being taken on behalf of the victims. By targeting some of the top data-scraping offenders at both the individual and corporate level, Meta appears to be warning others against following suit. In light of Meta’s recent victory, the outcome of the ‘Meta vs. Octopus Data’ case is likely to be in the social media giant’s favor. 


Apollo Hospitals to receive new AI tool to predict risk of cardiovascular diseases


Apollo Hospitals has partnered with the Singapore-based organization ConnectedLife to deploy an artificial intelligence (AI) tool that predicts the risk of cardiovascular diseases. By flagging disease risk early, the tool will allow doctors to intervene earlier in treatment.

The Singaporean organization ConnectedLife provides digital solutions for condition management, wellness, and other health-focused applications. Under this tie-up, ConnectedLife will use information from the Apollo Hospitals’ database to compute a cardiovascular disease risk score for its patients.

According to ConnectedLife’s founder Daryl Arnold, the data from the hospitals would be secure, and the company would adhere to the standards set by the Singapore government. He added that the data would help physicians develop a personal care plan to provide preventive care and digitally monitor the patients proactively. 

Read More: AI Tool Allows Clinicians To Make Personalized Chemotherapy Doses For Cancer Patients

The joint managing director of the Apollo Hospitals group, Sangita Reddy, said that the collaboration with ConnectedLife amalgamates artificial intelligence and machine learning technology with reliable and easy-to-use risk prediction tools that provide indications for early action. She added that the partnership would boost research to understand health risk scores and that Apollo would soon expand the collaboration to other non-communicable diseases.

ConnectedLife captures and analyses patient-reported data from wearable devices like Fitbit, using AI to provide insights into patient health and wellness. The data is then shared with healthcare providers to help them develop a care plan. The technology offers near real-time information, including exercise, sedentary time, heart rate, breathing rate, and sleep, among other details.
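To make the idea of a wearable-derived risk score concrete, here is a purely illustrative sketch. The function name, thresholds, and weights below are all hypothetical and bear no relation to ConnectedLife’s actual model, which is proprietary AI rather than simple rules:

```python
# Toy cardiovascular risk score from wearable metrics.
# All thresholds and weights are invented for illustration only.

def toy_cardio_risk_score(resting_hr, sleep_hours, active_minutes):
    """Return a 0-100 score; higher means higher hypothetical risk."""
    score = 0
    if resting_hr > 80:        # elevated resting heart rate
        score += 40
    elif resting_hr > 70:
        score += 20
    if sleep_hours < 6:        # chronic short sleep
        score += 30
    if active_minutes < 30:    # low daily physical activity
        score += 30
    return min(score, 100)

print(toy_cardio_risk_score(resting_hr=85, sleep_hours=5.5, active_minutes=20))  # → 100
```

A real system would replace these hand-set rules with a model trained on clinical outcomes, but the input-to-score pipeline follows the same shape.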

According to the director of Fitbit Health Solutions International, Steve Morley, the program provides patients with a better view of their health metrics to help them better manage their cardiovascular health. 


US Army TITAN Program Led by Raytheon Technologies Opts for C3 AI to Enhance MLOps


C3 AI, an enterprise AI software firm, announced that it has been selected by Raytheon Technologies to deliver next-generation AI capabilities and enhanced MLOps. The platform will serve as a ready-now solution for the US Army’s Tactical Intelligence Targeting Access Node (TITAN) program. 

Raytheon Technologies is one of several companies competing to design TITAN, a tactical ground station that serves as the Army’s groundwork solution for multi-domain operations. TITAN tracks and locates threats to aid precision targeting and to advance the Department of Defense’s Joint All-Domain Command and Control (JADC2) strategy. 

To deliver enhanced MLOps and AI capabilities, Raytheon will use C3 AI’s platform to operate the most effective third-party ML/AI models across the TITAN enterprise, spanning both connected-edge and cloud-based environments.

Read More: AI Chipmaker Rebellions Raises US$22.8M Series A Expansion

Thomas M. Siebel, Chairman and CEO of C3 AI, said, “This work combines Raytheon Intelligence & Space’s expertise in aerospace and defense with C3 AI’s proven expertise in enterprise AI to support critical national security interests through next-generation technology.”

TITAN will ingest targetable data from high-altitude, aerial, space, and terrestrial sensors while providing multi-source intelligence support for commanders. Its AI capabilities will support pattern-of-life sensemaking and automated target recognition, helping operators make sense of the available data and prosecute targets.


Looking through the Glass: Key Contributions of NITI Aayog’s former CEO Amitabh Kant


Amitabh Kant, the former CEO of NITI Aayog (National Institution for Transforming India), the Government of India’s public policy think tank, stepped down on June 30, 2022. His tenure had been extended three times before the government signaled interest in a new direction for NITI Aayog, with a greater focus on social sector schemes and redistribution measures. 

Kant was appointed CEO on February 17, 2016, and oversaw several high-profile projects, including the Aspirational Districts Programme, the National Monetization Pipeline, the Production Linked Incentive scheme, and Transformative Mobility, among others. Before moving to NITI Aayog, Kant served as Secretary of the Department of Industrial Policy & Promotion. He was also the chief executive of the Delhi–Mumbai Industrial Corridor project, and he was the one who came up with the “Incredible India” campaign.

He graduated from St. Stephen’s College with a degree in economics and earned a master’s degree in international relations from Jawaharlal Nehru University. Kant has also received the Sir Edmund Hillary Fellowship from the Prime Minister of New Zealand. In 1980, he joined the Kerala cadre of the Indian Administrative Service. He has authored three books: Branding India: An Incredible Story, The Path Ahead: Transformative Ideas for India, and Incredible India 2.0: Synergies for Growth and Governance. 

The Modi administration first extended his term till June 30, 2019, then till June 2021, and finally, in 2021, by one more year till June 2022.

Kant will be succeeded by Parameswaran Iyer, a former IAS officer from the 1980 batch of the Uttar Pradesh cadre. Iyer voluntarily left the Indian civil services in 2009 and was appointed the World Bank’s manager of water resources the same year. He has also worked for the UN as a senior specialist in rural water sanitation. In 2016, he became Secretary of the Ministry of Drinking Water and Sanitation, where he oversaw the nationally launched Swachh Bharat Mission and the Jal Jeevan Mission, which aims to provide piped water supply to all households by 2024 through integrated grassroots water supply management.

Digital Payment Ecosystem

Given Kant’s legacy and contributions as head of NITI Aayog, Iyer has big shoes to fill. For instance, soon after taking charge, Kant led a high-level panel to examine all potential digital payment methods across sectors as part of the transition to a more cashless economy. According to NITI Aayog, this committee would identify and implement user-friendly digital payment methods in all economic sectors as soon as possible.

In order to promote the quick adoption of digital payment systems and aid in the quick transition to the cashless, digital payments economy across all states and sectors, the committee would also regularly engage with central ministries, regulators, state governments, district administration, local bodies, trade, and industry associations, etc.

A framework for implementation, monitored against stringent deadlines, was established to ensure that about 80% of Indian transactions move to digital platforms.

A year later, at the NDTV-Mastercard ‘Cashless Bano India’ event, Kant stated that India’s digital payments infrastructure was at the time five years ahead of the United States’. He was struck by the fintech industry’s pace of innovation as well as the number of digital options, like UPI, available to Indians today. 

Thanks to the rising use of smartphones driven by network penetration, the fairly widespread availability of biometrics among Indians, and a unified payments interface that was likely unique in the world, this initiative transformed India from one of the largest informal economies in the world into a country embracing digital payments.

As of June 2022, India’s Unified Payments Interface (UPI) had 330 participating banks and recorded 5.86 billion monthly transactions totaling INR 10,14,384 crore.

Fostering the Startup Culture

Under Kant’s leadership, NITI Aayog also gave the Indian startup ecosystem greater momentum, leading to its notable expansion over the past several years, particularly with the introduction of the government’s “Startup India” project. Startup activity has also visibly spread from Tier 1 cities to Tier 2 cities, indicating that innovation is increasingly taking a pan-Indian approach.

A growing young population that embraces fast-paced technology, an ambitious consumer market, the expansion of the investment-active middle-class and upper-middle-class segments, and India’s continued support for “Frugal Innovation” are some of the key drivers of the Indian startup ecosystem.

The government has also devised an action plan for startups, including establishing a new portal that allowed private organizations, particularly startups, to leverage public data from multiple ministries for innovation and the development of sector-specific solutions. Startups can employ artificial intelligence (AI) and data to address problems unique to India. Moreover, in order to help entrepreneurs obtain patents, NITI Aayog also assembled a team of attorneys from several patent offices. A startup hub has also been set up to help, support, and guide startups. NITI Aayog had also organized multiple funding events targeted at renewing funding for the Modi government’s Startup India scheme, thus enhancing the ease of doing business in India.

The country’s startup count rose from double digits to tens of thousands as a result of Kant’s radical reforms. Even the emergence of unicorns at the height of the COVID-19 pandemic has been quite impressive. Speaking at a FICCI Ladies Organization (FLO) event in March, Kant noted that there are currently 81 unicorns and more than 61,000 startups in India. In May, after receiving US$50 million from IIFL, the Bengaluru-based neo-banking startup Open became the nation’s 100th unicorn.

Kant also took a keen interest in ensuring women-based startups grew in prominence in recent years, to promote women’s empowerment as well as act as a catalyst for socio-economic transformations. He asserted that women-owned firms and enterprises are rapidly becoming the next major disruption in the Indian startup ecosystem and are already playing a significant role in society. 

Read More: Women in AI: 8 Women-led Indian-based companies Transforming AI Industry

Kant was also a proponent of the notion that innovation stems from the nurtured minds of young students. Therefore, as part of the Atal Innovation Mission (AIM), the government, in collaboration with NITI Aayog, built 500 tinkering labs in schools in 2016 and a considerable number of incubators at the college level in an effort to encourage the spirit of innovation in the youngest citizens. Another aspect of the project was ensuring that the IITs (Indian Institutes of Technology) and IIMs (Indian Institutes of Management) have research labs.

In the following year, an additional 1,500 schools were chosen by NITI Aayog to implement the Atal Tinkering Labs (ATLs) program. Last December, with the goal of empowering innovators and entrepreneurs across the country, NITI Aayog launched a first-of-its-kind Vernacular Innovation Program (VIP), which will provide innovators and entrepreneurs in India with access to the innovation ecosystem in 22 scheduled languages. AIM will train a Vernacular Task Force (VTF) in each of the 22 scheduled languages to develop the required capability for the VIP. Each task force is led by a regional Atal Incubation Centre (AIC) and includes vernacular language instructors, subject matter experts, and technical writers. With the help of VIP, Kant plans to minimize the language barrier in the fields of innovation and entrepreneurship.

Roadmap to AI Dominance

Under Kant’s supervision, NITI Aayog published the National Strategy for Artificial Intelligence (NSAI) as early as June 2018, making India one of the first nations to consider harnessing the emerging technology for social good by addressing inclusion and societal issues. The document noted that India’s vast geographic and cultural diversity presents specific developmental problems in agriculture, smart mobility, and healthcare, all of which might benefit from AI. The Indian government therefore anticipated that AI solutions proven across such diverse demographics would be applicable elsewhere as well.

The NITI Aayog also mentioned the cloud platform AIRAWAT, which stands for AI Research, Analytics and knoWledge Assimilation plaTform, in their AI Strategy Report.

AIRAWAT is envisioned as a cloud platform for big data analytics and assimilation, backed by a sizable, power-optimized AI computing infrastructure and cutting-edge AI processing. Through AIRAWAT, the Indian government intends to address the issues caused by limited access to computing resources. The government will soon begin developing compute infrastructure specifically designed for AI to support the computing requirements of Innovation Hubs, International Centers for Transformational AI, and Centres of Research Excellence (COREs).

According to a recent study by Microsoft and the Internet and Mobile Association of India (IAMAI), the artificial intelligence (AI) market in India is anticipated to grow by 20% over the next five years. The nation is also one of the top three talent markets, producing 16% of the global AI talent pool. These milestones can be thought of as a result of the domino effect that started with the announcement of strategic plans to boost the AI-backed domestic industries.

The incubative environment for startups and AI technologies built by NITI Aayog reforms also helped India strengthen its healthcare industry. With Covid-19 triggering the need for pharmaceutical research, on-spot testing, mental health bots, and more, the healthcare sector was pushed to its limits to cater to the new demands and urgencies. This resulted in numerous success stories for this industry. For instance, a Bengaluru-based Software-as-a-service (SaaS) platform for doctors recently released the mobile version of its AI-powered electronic medical records (EMR) app called “EMR on Mobile.” The app is developed using the same architecture as the company’s premier desktop EMR powered by AI. It can be used to gain access to patients’ information in real-time anywhere.

Doctors in more than 350 cities, including Tier II and Tier III cities, have used the mobile EMR. Further, they can use “EMR on Mobile” to manage both in-person and online consultations across the same pool of patients in the wake of the Covid-19 outbreak.

Boosting the EV Market

Under Kant’s leadership, Aayog has been at the forefront of the government’s effort to encourage electric vehicles (EVs) in order to reduce pollution. Aayog was also a major force behind the FAME (Faster Adoption and Manufacturing of Electric Vehicles in India) program, which announced a number of incentives for the EV industry. 

One of the major achievements of FAME was the launching of the e-Sawaari India Electric Bus Coalition in December 2021. This was possible due to NITI Aayog’s collaboration with Convergence Energy Service Limited (CESL), World Resources Institute, India (WRI India), and funding from the Transformative Urban Mobility Initiative (TUMI). 

Through the e-Sawaari India Electric Bus Coalition, central, state, and city-level government organizations, transit service providers, original equipment manufacturers (OEMs), financing institutions, and ancillary service providers will be able to share knowledge and lessons learned on e-bus adoption in India, opening a new chapter in the country’s electric vehicle market in line with India’s decarbonization strategy. 

In December last year, Kant stated that the government was attempting to lower the 18% GST then levied on EV batteries. The GST Council ultimately decided that EVs, battery packs included, will henceforth be taxed at 5%. The decision was announced at the 47th meeting of the GST Council, presided over by Union Finance Minister Sitharaman in Chandigarh on June 28 and 29.

Kant had previously said in February that the government was interested in Tesla manufacturing its cars in India. He explained that India levies two different rates of duty: around 110% for imported luxury cars and about 60% for cars made in India. While Tesla is welcome to import at the 110% duty, the company would benefit from setting up a manufacturing and assembly plant in India. 

Encouraging Data Interoperability

NITI Aayog introduced the National Data and Analytics Platform (NDAP) for free public use in the first half of this year. According to Kant, it will house fundamental datasets from multiple governmental organizations and offer analytics and visualization capabilities. Last August, a beta version was made available to a select group of users for testing and feedback.

To ensure that the datasets stored on the platform are curated to the needs of data consumers from different sectors, such as government, academia, journalism, civil society, and the corporate sector, NDAP will adopt a use-case-based approach. Because all datasets adhere to a uniform schema, NDAP makes integrating datasets and conducting cross-sectoral analysis simple.
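The value of a uniform schema can be sketched in a few lines: when two datasets share common keys, joining them for cross-sectoral analysis is trivial. The hypothetical “health” and “education” records below, including all field names and figures, are invented for illustration and are not actual NDAP data:

```python
# Two hypothetical ministry datasets sharing a common (state, year) schema.
health = [
    {"state": "Kerala", "year": 2021, "hospitals_per_lakh": 4.2},
    {"state": "Punjab", "year": 2021, "hospitals_per_lakh": 3.1},
]
education = [
    {"state": "Kerala", "year": 2021, "literacy_rate": 96.2},
    {"state": "Punjab", "year": 2021, "literacy_rate": 82.0},
]

# Index one dataset by the shared key, then join the other against it.
edu_index = {(r["state"], r["year"]): r for r in education}
merged = [
    {**h, "literacy_rate": edu_index[(h["state"], h["year"])]["literacy_rate"]}
    for h in health
    if (h["state"], h["year"]) in edu_index
]
print(merged[0])
# → {'state': 'Kerala', 'year': 2021, 'hospitals_per_lakh': 4.2, 'literacy_rate': 96.2}
```

Without a shared schema, this kind of join would require manually reconciling state names, year formats, and field conventions across every pair of datasets.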

Miscellaneous

On September 1 last year, NITI Aayog officially unveiled Ernakulam Karayogam’s “Clinic on Wheels,” a mobile medical unit. Bharat Petroleum Corporation Ltd has funded this project as part of its CSR initiatives. The mobile medical unit, outfitted with cutting-edge medical facilities, will travel to isolated and coastal villages in the Ernakulam, Alleppey, and Idukki districts to diagnose and treat poor and rural residents where they live.

Kant has also endorsed cryptocurrencies, stating that they are simply another asset class like bonds, gold, and mutual funds. He argued that because it is a new asset class and a large number of people are using it for transactions, the government would lose money if it did not tax it. He lauded the government’s move to tax cryptocurrency revenues in this year’s budget session.

Read More: Top 12 NFT Marketplaces in India 2022
