
AWS, Udacity Announce Free ‘AI & ML Scholarship Program’ 

AWS has collaborated with Udacity to award scholarships to 2,500 students in 2023. Through the ‘AWS AI & ML Scholarship Program’, students will learn crucial skills that open up new career opportunities in the in-demand field of artificial intelligence.

The scholarship program aims to help underrepresented and underserved students learn foundational machine learning concepts as they prepare for a career in one of today’s fastest-growing fields.

All participants will be able to access more than 20 hours of free training modules and tutorials on the basics of ML through the AWS DeepRacer Student platform. All 2,000 students in the AWS AI & ML Scholarship Program can join monthly “Ask Me Anything” seminars with industry-leading practitioners from Amazon, in addition to tutoring from Udacity program mentors and instructors. The top 500 participants will also receive 1:1 mentoring from advisors from AWS, Udacity, and other institutions through Udacity Connect.

Read More: OpenAI’s Sam Altman Launches Cryptocurrency Project Worldcoin

AWS AI & ML Scholarship applications opened on Udacity’s website on June 1 and will stay open until September 30, which is also the deadline to complete the prerequisites on the AWS DeepRacer Student platform and earn scholarship access on Udacity. Scholarship recipients will be notified on October 9, and the AWS AI & ML Scholarship Nanodegree program begins on October 16.

There is no application fee for the program. Moreover, all aspects of AWS DeepRacer Student, the service used to meet the application prerequisites, are free for students to use. Students over the age of 16 who identify as underserved or underrepresented in the tech industry and are currently enrolled in high school or college can apply.

Students can begin the application process by signing up for AWS DeepRacer Student and selecting the AI & ML Scholarship program. They can track progress for the scholarship program requirements on AWS DeepRacer Student. Students will get a unique code via email that can be used to submit the application once prerequisites are completed. 


Indian Army Secures Patent for AI-based Accident Prevention System

The Indian Army has secured a patent for an artificial intelligence-driven accident prevention system. The system, created by the Army’s research and development (R&D) division, has the potential to improve traffic safety and reduce collisions.

The Indian Army tweeted about the accomplishment and emphasized how the system can reduce accidents. The official Twitter account of Digital India, a flagship program of the Government of India, also shared the news.

This AI-driven accident avoidance system is credited to Colonel Kuldeep Yadav. The patent application was formally filed on February 2, 2021, and according to the Indian Army’s disclosure of the patent certificate, it will be valid for 20 years. On July 11, the patent was formally granted.


The system’s primary purpose is to use artificial intelligence to proactively prevent accidents, focusing on situations in which drivers risk dozing off while operating cars or other large vehicles. By instantly warning drowsy drivers, the Accident Prevention System can reduce the risk of fatigue-related crashes, potentially saving lives and dramatically improving road safety.

The AI-based accident prevention system underwent extensive testing in the buses of the Andhra Pradesh and Telangana state transport corporations before the patent was obtained. The device’s suitability for use in trucks further suggests that it could make a significant contribution to road safety beyond military use.

Colonel Kuldeep Yadav developed the device while supervising a military unit in Manipur, after seeing the heightened dangers of driver fatigue and accidents, particularly in hilly terrain. Over 1.54 lakh people nationwide lost their lives in traffic accidents in 2021, and more than half of truck-related collisions were caused by driver inattention due to sleepiness.


DPDP Bill to be Implemented Through Graded Approach, Says Rajeev Chandrasekhar

The Digital Personal Data Protection Bill, which is about to become law, will be implemented by the Centre in a graded approach across organizations. According to Rajeev Chandrasekhar, minister of state for electronics and IT, the government will initially enforce the law for major tech firms like Google, Microsoft, Amazon, and Apple, and then provide a longer transition period for smaller organizations and start-ups.

Chandrasekhar spoke to journalist Soumyarendra Barik, who took to Twitter to say, “With the data protection bill on verge of becoming law, the next big question on everyone’s mind is how much transition period it will allow entities. Rajeev Chandrasekhar tells me the approach will be graded, with Big Tech first in line.”

After receiving the President’s assent, the Digital Personal Data Protection Bill, 2023, which was approved by the Rajya Sabha on Wednesday, will become law. This is the country’s latest attempt at framing privacy legislation, coming after previous iterations of a data protection law were first considered and then shelved by the government.


The Bill does not, however, specify a date by which its rules will become operational for organizations that collect users’ personal data. Start-ups and less digitized companies, such as MSMEs, will most likely have to comply with the law later than big IT companies, according to Chandrasekhar.

The timelines will be decided after consultation with the industry, he said. The timelines, according to him, will be chosen so as not to interfere with companies’ continuing activities.

The General Data Protection Regulation (GDPR), the European Union’s privacy regulation, gave organizations two years to prepare before it began to apply to them: the European Parliament passed the GDPR in 2016, and it became applicable two years later, in May 2018.

The minister said, “We expect the industry to ask for a long time period, but the government will negotiate with them. GDPR was designed when the world’s knowledge about privacy laws was low, but that is not the case today. So, it is unlikely that we will allow the industry a two year transition window.” 


Rajya Sabha Passes Digital Personal Data Protection Bill 2023


The Digital Personal Data Protection Bill, 2023 was adopted by the Lok Sabha on August 7. Now the DPDP bill has been passed by the Rajya Sabha. 

The goal of the DPDP bill, according to the document, is “to provide for the processing of digital personal data in a manner that recognises both the right of individuals to protect their personal data and the need to process such personal data for lawful purposes.” It applies to the processing of digital personal data on Indian soil, including data that was initially obtained in non-digital form and later converted to digital form.

It also applies to the processing of digital personal data that takes place outside of India’s borders if that processing is necessary to engage in any activity that involves providing goods or services “to Data Principals within the Indian territory.” The person to whom the personal data refers is referred to as the “Data Principal.”


The Bill states that personal data may be processed only with consent or for specific “legitimate uses”. It gives the central government the authority to exempt government agencies from the Bill’s obligations on specified grounds such as national security, public order, and the prevention of offences.

According to the Bill, a Data Protection Board of India must be established by the central government. In the event of a data breach, it will direct data fiduciaries to take the required precautions and oversee the administration of sanctions. It will also monitor compliance and hear complaints from impacted parties.

Additionally, there are fines of up to Rs. 250 crore for failing to take security precautions to avoid data breaches and up to Rs. 200 crore for failing to fulfil obligations relating to children.

Jayadev Galla, a member of the Telugu Desam Party, expressed worry about potential control of the data protection body by the central government. Syed Imtiaz Jaleel, an AIMIM member of parliament, opposed the bill and stated, “The Bill poses major problems, one of which is the excessive centralization of power.”

The bill has been passed despite the opposition’s claims that it violates the right to privacy of the citizens of India. Only time will tell whether the apprehensions articulated by the opposition turn into reality.


Tesla Appoints Vaibhav Taneja as its New Chief Financial Officer 


Tesla, the world’s largest manufacturer of electric vehicles, has announced the appointment of Vaibhav Taneja as its Chief Financial Officer (CFO). Taneja replaces Zachary Kirkhorn, who recently announced his decision to resign from the role.

45-year-old Taneja is of Indian descent and is not new to the organization. Prior to his new position, he served as Chief Accounting Officer (CAO) at Tesla. The changeover, which happened on Friday, signifies a substantial change in Tesla’s financial management.

Taneja’s tenure at Tesla has been characterized by a string of increasing levels of responsibility. He started off in the position of Corporate Controller in May 2018, and due to his dedication and skill, he was promoted to CAO in March 2019. His responsibility for maintaining Tesla’s financial integrity was highlighted by the fact that his portfolio included the crucial areas of financial reporting, tax compliance, and internal controls.


Taneja’s connection to Tesla goes back much further: he provided his financial expertise to SolarCity Corporation, a solar panel maker that Tesla acquired in 2016. Taneja held a variety of accounting and finance positions during his time at SolarCity, showcasing his adaptability and versatility in a fast-paced corporate environment.

Taneja made significant contributions while working for PricewaterhouseCoopers (PwC) in both India and the US prior to joining Tesla. His time with PwC, which spanned from July 1999 to March 2016, gave him a solid foundation in financial practices and concepts, which he now uses in his present leadership position.

With Taneja in charge of the company’s financial operations, Tesla is well-positioned to seize opportunities, tackle problems, and maintain its status as a pioneer in the field of electric mobility.


OpenAI Offers $395,000 Grant to NYU Ethical Journalism Project 


A $395,000 grant will be given to the Arthur L. Carter Journalism Institute at New York University by the Sam Altman-run OpenAI to support a new media ethics programme. The announcement is part of a larger initiative by OpenAI to align itself with journalism, an effort that seeks to train its LLMs on accurate and ethical data.

The Journalism Venture Capital Fund of the Carter Journalism Institute, which provides seed money for faculty projects that tackle problems in journalism, democracy, freedom of expression, and allied fields, has also contributed $50,000 to the programme.

Stephen Adler, a former editor-in-chief of Reuters, will head the effort. According to Adler, the initiative will offer workshops and discussions on existing and emerging journalism ethics issues.

From its recent agreements with organizations like the Associated Press (AP), one of the largest US news agencies, and the $5 million in funding it received from the National Science Foundation, OpenAI appears to be one step ahead of rivals like Google in gathering clean data. According to reports, the collaboration with AP will explore ways to build AI that supports local journalism.


OpenAI will also link up with the 41 news organizations that the American Journalism Project supports. According to Sarabeth Berman, CEO of AJP, the money will also help establish a new product studio within the company, which will assist regional news organizations as they test out OpenAI’s technologies.

OpenAI has been secretive about where it obtained the data it used to train its most recent GPT model, despite the fact that the startup has been trying to address the complexities of ethical journalism amid the generative AI revolution. 

Recently, Google announced its Genesis initiative, which aims to generate news content grounded in accurate facts. Executives from The Times, The Washington Post, and News Corp participated in a demo of the tool. Opinions were divided: some said it seemed to take for granted the effort that goes into producing accurate and artful news stories, while others compared the technology to a personal assistant.


Nvidia Introduces AI Workbench Toolkit for Simplified Generative AI Model Tuning and Deployment


NVIDIA has just unveiled AI Workbench, a unified, user-friendly toolkit that enables developers to swiftly build, test, and customize pre-trained generative AI models on a workstation or PC before scaling them to almost any data center, public cloud, or NVIDIA DGX Cloud.

With the aid of AI Workbench, starting an enterprise AI project is no longer difficult. Using a streamlined interface running on a local system, developers can access models from well-known sources like Hugging Face, GitHub, and NVIDIA NGC and fine-tune them with custom data. The models can then be easily shared across other platforms.

Manuvir Das, vice president of enterprise computing at NVIDIA said, “Enterprises around the world are racing to find the right infrastructure and build generative AI models and applications. NVIDIA AI Workbench offers a streamlined path for cross-organizational teams to develop the AI-based applications that are increasingly crucial in modern business.”


Although hundreds of thousands of pretrained models are now accessible, customizing them with the many open-source tools available can mean searching numerous online repositories for the right framework, tools, and containers, as well as having the right skills to adapt a model to a particular use case.

With NVIDIA AI Workbench, developers can quickly customize and run generative AI models, compiling all the essential enterprise-grade models, frameworks, software development kits, and libraries from open-source sources and the NVIDIA AI platform into a single developer toolkit.

Leading providers of AI infrastructure, such as Dell Technologies, Hewlett Packard Enterprise, HP, Lambda, Lenovo, and Supermicro, are embracing AI Workbench for its capacity to enhance their most recent lineup of multi-GPU capable desktop workstations, high-end mobile workstations, and virtual workstations.


Nvidia, Hugging Face to Make Generative AI Supercomputing Available to Developers


Hugging Face and Nvidia are working together to increase access to AI compute. Nvidia announced this week that it will offer a new Hugging Face service called Training Cluster as a Service to make the development of fresh and unique generative AI models for the workplace simpler. The announcement was timed to coincide with the annual SIGGRAPH conference.

DGX Cloud, Nvidia’s all-encompassing cloud-based AI “supercomputer”, will power Training Cluster as a Service when it launches in the upcoming months. DGX Cloud offers access to a cloud instance with eight Nvidia H100 or A100 GPUs, 640GB of GPU memory, Nvidia’s AI Enterprise software for building large language models and AI applications, as well as consultations with Nvidia experts.

Companies can sign up for DGX Cloud on their own. The monthly price per instance starts at $36,999. However, Training Cluster as a Service combines the DGX Cloud infrastructure with the Hugging Face platform’s more than 250,000 models and over 50,000 datasets, making it a useful starting point for any AI project.


Hugging Face co-founder and CEO Clément Delangue said, “Our collaboration will bring Nvidia’s most advanced AI supercomputing to Hugging Face to enable companies to take their AI destiny into their own hands with open source to help the open source community easily access the software and speed they need to contribute to what’s coming next.”

The partnership between Hugging Face and Nvidia comes as the business is reportedly seeking new funding at a $4 billion valuation. Hugging Face, founded in 2016 by Delangue, Julien Chaumond, and Thomas Wolf, has grown quickly, transitioning from a consumer app to a hub for all things AI model-related.

Hugging Face is becoming the go-to place for AI developers to exchange ideas. As generative AI gains popularity, it has evolved into a GitHub equivalent for developers looking to stay current with the latest models and APIs.


Nvidia Introduces New AI Chip GH200 Grace Hopper


On Tuesday, Nvidia unveiled the GH200 Grace Hopper superchip, designed to run AI models, as it tries to fend off competitors such as AMD, Google, and Amazon in the budding AI chip market.

The new GH200 Grace Hopper superchip is built on a 72-core Grace CPU and 141 GB of HBM3e memory, organized into six 24 GB stacks with a 6,144-bit memory interface. Although Nvidia physically installs 144 GB of memory, only 141 GB is usable, a margin that improves manufacturing yields.
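As a quick sanity check, the reported figures are internally consistent, assuming the standard 1,024-bit interface per HBM stack (the per-stack width is not stated in the article):

```python
# Sanity-check the reported GH200 HBM3e memory layout.
stacks = 6             # six HBM3e stacks, as reported
gb_per_stack = 24      # 24 GB per stack, as reported
bits_per_stack = 1024  # interface width per HBM stack (assumed standard value)

physical_gb = stacks * gb_per_stack  # memory physically installed
bus_width = stacks * bits_per_stack  # aggregate memory interface width

print(f"installed: {physical_gb} GB, bus: {bus_width}-bit")
# installed: 144 GB, bus: 6144-bit (141 GB of the 144 GB is exposed to software)
```

The 3 GB held back out of the 144 GB installed is the headroom that lets Nvidia ship parts with minor memory defects, which is what improves yields.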

The California-based company said that the HBM3e memory, which is 50% faster than current HBM3 technology, will power its next-generation GH200 Grace Hopper platform.

Its dual configuration will provide up to 3.5x more memory capacity and 3x more bandwidth than chips currently on the market, letting developers run larger Large Language Models (LLMs).

CEO Jensen Huang stated at a presentation on Tuesday that the new technology would help “scale-out of the world’s data centers.” In addition, he predicted that “the inference cost of large language models will drop significantly,” alluding to the generative phase of AI computing that comes after LLM training.


The latest product launch by Nvidia comes after the hype surrounding AI technology propelled the company’s value above $1 trillion in May. Nvidia became one of the market’s brightest stars in 2023 due to soaring demand for its GPU chips and a forecasted shift in data center infrastructure.

According to estimates, Nvidia currently holds a market share of over 80% for AI chips. Graphics processing units, or GPUs, are the company’s area of expertise, and they are now the processors of choice for the large AI models that power generative AI applications, including Google’s Bard and OpenAI’s ChatGPT.

Despite all this, Nvidia’s chips are hard to come by as tech behemoths, cloud service providers, and startups compete for GPU power to create their own AI models.


OpenAI Introduces New Web Crawler GPTBot to Consume More of the Open Web

To increase its dataset for training its upcoming generation of AI systems, OpenAI has introduced a new web crawling bot called GPTBot. According to OpenAI, the web crawler will gather information from websites that are freely accessible to the public while avoiding content that is paywalled, sensitive, or illegal. 

However, the system is opt-out: GPTBot presumes that publicly available information is open for use by default, much like the crawlers of search engines such as Google, Bing, and Yandex. To stop the OpenAI web crawler from ingesting a website, the owner must add a “disallow” rule for GPTBot to the site’s robots.txt file.
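Per OpenAI’s crawler documentation, the opt-out uses the standard robots.txt mechanism; a minimal example that blocks GPTBot from an entire site looks like this:

```
# robots.txt at the site root
User-agent: GPTBot
Disallow: /
```

A narrower rule, such as `Disallow: /private/`, blocks only that directory while leaving the rest of the site crawlable.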

Additionally, according to OpenAI, GPTBot will check scraped material in advance to weed out personally identifiable information (PII) and anything that contravenes its rules. However, some technology ethicists believe that the opt-out strategy still poses consent-related concerns.


Some commenters on Hacker News defended OpenAI’s action by arguing that it needs to amass as much information as possible if people want to have a powerful generative AI tool in the future. Another person who was more concerned with privacy complained that “OpenAI isn’t even quoting in moderation. It obscures the original by creating a derivative work without citing it.”

The launch of GPTBot comes in response to recent criticism of OpenAI for allegedly collecting data unlawfully to train Large Language Models (LLMs) like the ones behind ChatGPT. The company changed its privacy policy in April to address these concerns.

Meanwhile, a recent GPT-5 trademark filing appears to hint that OpenAI might be working on its next version of the GPT AI model. Large-scale web scraping would probably be used by the new system to refresh and increase its training data. However, there is no official announcement concerning GPT-5 as of yet. 
