
Rajya Sabha Passes Digital Personal Data Protection Bill 2023

Rajya Sabha passes Digital Personal Data Protection Bill 2023
Image Credits: NDTV

The Digital Personal Data Protection Bill, 2023 was adopted by the Lok Sabha on August 7 and has now been passed by the Rajya Sabha.

The goal of the DPDP bill, according to the document, is “to provide for the processing of digital personal data in a manner that recognises both the right of individuals to protect their personal data and the need to process such personal data for lawful purposes.” It applies to the processing of digital personal data within Indian territory, whether the data was collected in digital form or collected in non-digital form and subsequently digitised.

It also applies to the processing of digital personal data that takes place outside of India’s borders if that processing is necessary to engage in any activity that involves providing goods or services “to Data Principals within the Indian territory.” The person to whom the personal data refers is referred to as the “Data Principal.”

Read More: OpenAI’s Sam Altman Launches Cryptocurrency Project Worldcoin

The Bill states that personal data may be processed only with consent or for specific “legitimate uses.” It gives the central government the authority to exempt government agencies from the Bill’s obligations on specified grounds such as national security, public order, and the prevention of offences.

According to the Bill, a Data Protection Board of India must be established by the central government. In the event of a data breach, it will direct data fiduciaries to take the required precautions and oversee the administration of sanctions. It will also monitor compliance and hear complaints from impacted parties.

Additionally, there are penalties of up to Rs. 250 crore for failing to take security safeguards to prevent data breaches and up to Rs. 200 crore for failing to fulfil obligations relating to children.

Jayadev Galla, a member of the Telugu Desam Party, expressed concern that the central government could end up controlling the data protection body. Syed Imtiaz Jaleel, an AIMIM member of parliament, opposed the bill and stated, “The Bill poses major problems, one of which is the excessive centralization of power.”

The bill was passed despite the opposition’s claims that it violates the right to privacy of the citizens of India. Only time will tell whether the apprehensions articulated by the opposition turn into reality.


Tesla Appoints Vaibhav Taneja as its New Chief Financial Officer 

Tesla appoints Vaibhav Taneja as new Chief Financial Officer
Image Credits: Tesla

Tesla, the world’s largest manufacturer of electric vehicles, has announced the appointment of Vaibhav Taneja as its Chief Financial Officer (CFO). Taneja replaces Zachary Kirkhorn, who recently announced his decision to step down from the role.

45-year-old Taneja is of Indian descent and is not new to the organization. Prior to his new position, he served as Chief Accounting Officer (CAO) at Tesla. The changeover, which happened on Friday, signifies a substantial change in Tesla’s financial management.

Taneja’s tenure at Tesla has been characterized by steadily increasing responsibility. He joined as Corporate Controller in May 2018 and, on the strength of his dedication and skill, was promoted to CAO in March 2019. His portfolio covered the crucial areas of financial reporting, tax compliance, and internal controls, underscoring his responsibility for maintaining Tesla’s financial integrity.


Taneja’s connection to Tesla goes back further still: he provided his financial expertise to SolarCity Corporation, a solar panel maker that Tesla acquired in 2016. At SolarCity he held a variety of accounting and finance positions, showcasing his adaptability and versatility in a fast-paced corporate environment.

Taneja made significant contributions while working for PricewaterhouseCoopers (PwC) in both India and the US prior to joining Tesla. His time with PwC, which spanned from July 1999 to March 2016, gave him a solid foundation in financial practices and concepts, which he now uses in his present leadership position.

With Taneja in charge of the company’s financial operations, Tesla is well-positioned to seize opportunities, tackle problems, and maintain its status as a pioneer in the field of electric mobility.


OpenAI Offers $395,000 Grant to NYU Ethical Journalism Project 

OpenAI Offers $395,000 Grant to NYU Ethical Journalism Project
Image Credits: NYU

A $395,000 grant will be given to the Arthur L. Carter Journalism Institute at New York University by the Sam Altman-run OpenAI to support a new media ethics programme. The announcement is part of a larger initiative by OpenAI to align itself with journalism, an effort that seeks to train its LLMs on accurate and ethically sourced data.

The Journalism Venture Capital Fund of the Carter Journalism Institute, which provides seed money for faculty projects that tackle problems in journalism, democracy, freedom of expression, and allied fields, has also contributed $50,000 to the programme.

Stephen Adler, former editor-in-chief of Reuters, will head the effort. According to Adler, the initiative will offer workshops and discussions on existing and emerging issues in journalism ethics.

With its recent agreements with organizations like the Associated Press (AP), one of the largest US news agencies, and the $5 million in funding it received from the National Science Foundation, OpenAI appears to be one step ahead of rivals like Google in gathering clean data. According to reports, the collaboration with AP will explore ways to use AI to support local journalism.


OpenAI will also link up with the 41 news organizations that the American Journalism Project supports. According to Sarabeth Berman, CEO of AJP, the money will also help establish a new product studio within the company, which will assist regional news organizations as they test out OpenAI’s technologies.

OpenAI has been secretive about where it obtained the data it used to train its most recent GPT model, despite the fact that the startup has been trying to address the complexities of ethical journalism amid the generative AI revolution. 

Recently, Google announced its Genesis initiative, which aims to generate accurate news content. Executives from The Times, The Washington Post, and News Corp attended a demo of the tool. Opinions were divided: some said it seemed to take for granted the effort that goes into producing accurate and artful news stories, while others compared the technology to a personal assistant.


Nvidia Introduces AI Workbench Toolkit for Simplified Generative AI Model Tuning and Deployment

Nvidia AI Workbench for generative AI
Image Credits: wccftech

NVIDIA has unveiled AI Workbench, a unified, user-friendly toolkit that lets developers swiftly build, test, and customize pre-trained generative AI models on a workstation or PC before scaling them to almost any data center, public cloud, or NVIDIA DGX Cloud.

With the aid of AI Workbench, starting an enterprise AI project is no longer difficult. Through a streamlined interface running on a local system, developers can access models from well-known sources like Hugging Face, GitHub, and NVIDIA NGC, customize them with their own data, and then easily share them across platforms.

Manuvir Das, vice president of enterprise computing at NVIDIA said, “Enterprises around the world are racing to find the right infrastructure and build generative AI models and applications. NVIDIA AI Workbench offers a streamlined path for cross-organizational teams to develop the AI-based applications that are increasingly crucial in modern business.”


Although hundreds of thousands of pretrained models are now accessible, customizing them with the many open-source tools available can mean searching numerous online repositories for the right framework, tools, and containers, as well as having the skills to adapt a model to a particular use case.

With NVIDIA AI Workbench, developers can quickly customize and run generative AI models, bringing all the essential enterprise-grade models, frameworks, software development kits, and libraries from open-source repositories and the NVIDIA AI platform into a single developer toolkit.

Leading providers of AI infrastructure, such as Dell Technologies, Hewlett Packard Enterprise, HP, Lambda, Lenovo, and Supermicro, are embracing AI Workbench for its capacity to enhance their most recent lineup of multi-GPU capable desktop workstations, high-end mobile workstations, and virtual workstations.


Nvidia, Hugging Face to Make Generative AI Supercomputing Available to Developers

Nvidia and Hugging Face to make generative AI supercomputing available to developers
Image Credits: Nvidia

Hugging Face and Nvidia are working together to increase access to AI compute. Nvidia announced this week that it will power a new Hugging Face service called Training Cluster as a Service to simplify the creation of new, custom generative AI models for enterprises. The announcement was timed to coincide with the annual SIGGRAPH conference.

Nvidia’s all-encompassing cloud-based AI “supercomputer,” DGX Cloud, will power Training Cluster as a Service when it launches in the coming months. DGX Cloud offers access to a cloud instance with eight Nvidia H100 or A100 GPUs, 640GB of GPU memory, Nvidia’s AI Enterprise software for building large language models and AI applications, as well as consultations with Nvidia experts.

Companies can sign up for DGX Cloud on their own. The monthly price per instance starts at $36,999. However, Training Cluster as a Service combines the DGX Cloud infrastructure with the Hugging Face platform’s more than 250,000 models and over 50,000 datasets, making it a useful starting point for any AI project.
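Those figures imply straightforward per-unit numbers. A quick sanity check, assuming the instance’s 640GB is split evenly across eight 80GB GPUs (the HBM capacity of the H100 and the 80GB A100 variant):

```python
# DGX Cloud instance as described: 8 GPUs sharing 640 GB of GPU memory.
GPUS_PER_INSTANCE = 8
MEMORY_PER_GPU_GB = 80       # H100 / 80 GB A100 (assumed even split)
MONTHLY_PRICE_USD = 36_999   # starting price per instance per month

total_memory_gb = GPUS_PER_INSTANCE * MEMORY_PER_GPU_GB
price_per_gpu = MONTHLY_PRICE_USD / GPUS_PER_INSTANCE

print(total_memory_gb)  # 640, matching the advertised capacity
print(price_per_gpu)    # roughly $4,625 per GPU per month
```

The per-GPU figure is only an average, of course; Nvidia prices the instance as a whole, not individual GPUs.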


Hugging Face co-founder and CEO Clément Delangue said, “Our collaboration will bring Nvidia’s most advanced AI supercomputing to Hugging Face to enable companies to take their AI destiny into their own hands with open source to help the open source community easily access the software and speed they need to contribute to what’s coming next.”

The partnership between Hugging Face and Nvidia comes as Hugging Face is reportedly seeking new funding at a $4 billion valuation. Founded by Delangue, Julien Chaumond, and Thomas Wolf, Hugging Face has grown quickly, transitioning from a consumer app into a hub for all things AI model-related.

As generative AI gains popularity, Hugging Face has become the go-to place for AI developers to exchange ideas, evolving into a GitHub equivalent for developers who want to keep up with the latest models and APIs rather than be left behind by the technology.


Nvidia Introduces New AI Chip GH200 Grace Hopper

Image Credits: Nvidia

On Tuesday, Nvidia unveiled the GH200 Grace Hopper superchip, which is designed to run AI models, as it tries to fend off competitors such as AMD, Google, and Amazon in the fast-growing AI market.

The new GH200 Grace Hopper superchip is built around a 72-core Grace CPU and 141 GB of HBM3e memory, organized into six 24 GB stacks on a 6,144-bit memory interface. Nvidia physically installs 144 GB of memory, but only 141 GB is usable, a trade-off made to improve yields.
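The stated capacity and interface width follow directly from the stack configuration. A small sanity check, assuming the standard 1,024-bit interface per HBM stack:

```python
# GH200 memory subsystem as described: six 24 GB HBM3e stacks.
STACKS = 6
GB_PER_STACK = 24
BITS_PER_STACK = 1024  # standard interface width of a single HBM stack

physical_gb = STACKS * GB_PER_STACK      # memory physically installed
bus_width_bits = STACKS * BITS_PER_STACK

print(physical_gb)     # 144, of which 141 GB is exposed for yield reasons
print(bus_width_bits)  # 6144, the quoted 6,144-bit memory interface
```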

The California-based company said that HBM3e memory, which is 50% faster than current HBM3 technology, will power its next-generation GH200 Grace Hopper platform.

Its dual configuration will provide up to 3.5x more memory capacity and 3x more bandwidth than chips currently on the market, allowing developers to run larger Large Language Models (LLMs).

CEO Jensen Huang stated at a presentation on Tuesday that the new technology would help “scale-out of the world’s data centers.” In addition, he predicted that “the inference cost of large language models will drop significantly,” alluding to the generative phase of AI computing that comes after LLM training.


The latest product launch by Nvidia comes after the hype surrounding AI technology propelled the company’s value above $1 trillion in May. Nvidia became one of the market’s brightest stars in 2023 due to soaring demand for its GPU chips and a forecasted shift in data center infrastructure.

According to estimates, Nvidia currently holds over 80% of the market for AI chips. The company’s specialty is graphics processing units, or GPUs, which have become the processors of choice for the large AI models behind generative AI applications such as Google’s Bard and OpenAI’s ChatGPT.

Despite all this, Nvidia’s chips are hard to come by as tech behemoths, cloud service providers, and startups compete for GPU power to create their own AI models.


OpenAI Introduces New Web Crawler GPTBot to Consume more Open Web

To increase its dataset for training its upcoming generation of AI systems, OpenAI has introduced a new web crawling bot called GPTBot. According to OpenAI, the web crawler will gather information from websites that are freely accessible to the public while avoiding content that is paywalled, sensitive, or illegal. 

However, the system is opt-out. Like the crawlers of search engines such as Google, Bing, and Yandex, GPTBot presumes that publicly available information is open for use by default. To stop the OpenAI web crawler from ingesting a site, its owner must add a “disallow” rule to the site’s robots.txt file.
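OpenAI’s opt-out works through the standard robots.txt mechanism, with `GPTBot` as the user-agent token. A site owner who wants to block the crawler entirely would add something like the following to the robots.txt file at the site root:

```
User-agent: GPTBot
Disallow: /
```

A narrower rule, such as `Disallow: /private/`, blocks only part of the site while leaving the rest crawlable.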

Additionally, according to OpenAI, GPTBot will screen scraped material in advance to weed out personally identifiable information (PII) and anything that contravenes its rules. However, some technology ethicists believe that the opt-out strategy still raises consent-related concerns.


Some commenters on Hacker News defended OpenAI’s action by arguing that it needs to amass as much information as possible if people want to have a powerful generative AI tool in the future. Another person who was more concerned with privacy complained that “OpenAI isn’t even quoting in moderation. It obscures the original by creating a derivative work without citing it.”

The launch of GPTBot follows recent criticism of OpenAI for allegedly collecting data unlawfully to train Large Language Models (LLMs) like ChatGPT. The company updated its privacy policy in April to address these concerns.

Meanwhile, a recent GPT-5 trademark filing hints that OpenAI may be working on the next version of its GPT model. The new system would likely rely on large-scale web scraping to refresh and expand its training data. However, there has been no official announcement concerning GPT-5 as of yet.


IIM Lucknow Introduces Executive Programme in AI for Business

An executive programme in AI for Business has been launched in partnership between the Indian Institute of Management (IIM) Lucknow and Imarticus Learning. It seeks to equip graduates who have at least five years of relevant work experience with the knowledge and skills required for AI and machine learning.

Classes for this executive programme will be held on weekends (Saturday or Sunday) over six months, entirely online, and the programme will conclude with a three-day campus immersion.

The course begins on October 1. The fee is Rs. 2.35 lakh + GST (including a registration fee of Rs. 47,000 + GST), and there are 50 seats in total.


The goal of this programme is to give ambitious professionals a solid foundation in AI. The programme offers a pedagogy that mixes project-based learning with the case-based methodology used by the IIM and focuses on real-world business outcomes. 

This will aid in developing abilities including teamwork, critical thinking, and problem solving. The curriculum also provides a chance to network with influential businesspeople and subject matter experts. There are eight modules in the curriculum.

Candidates must have earned a bachelor’s or master’s degree in computer science, engineering, mathematics, statistics, economics, or another related field with a minimum of 50% on their final exam to be eligible for the programme.

An offer letter will be given to shortlisted candidates. Candidates will obtain a certificate from IIM Lucknow once the course is finished. Candidates who meet the requirements can apply here.


Zoom’s Updated Terms Say It can now use Customer Data to Train AI 

Zoom’s updated terms can now use customer data train AI
Image Credits: Zoom

The most recent revision to Zoom’s terms of service states that the company intends to use some customer data to train its artificial intelligence models. Buried in the sections on software licensing, beta services, and compliance, the fine print signals a significant choice Zoom has made about its AI strategy.

The modification, which became effective on July 27, gives Zoom the ability to use specific consumer data for developing and fine-tuning its AI or machine learning models. Customer information on product usage, telemetry and diagnostic data, and other comparable material or data gathered by the company are all examples of the “service-generated data” Zoom may now employ to train its AI. 

In accordance with the terms of Zoom, “You consent to Zoom’s access, use, collection, creation, modification, distribution, processing, sharing, maintenance, and storage of Service Generated Data for any purpose, to the extent and in the manner permitted by applicable Law, including for the purpose of machine learning or artificial intelligence (including for the purpose of training and tuning of algorithms and models).”


Messages, files, and documents from customers do not appear to fall under this category. Zoom stated in a subsequent blog post that “for AI, we do not use audio, video, or chat content for training our models without customer consent.”

The update comes amid a heated public debate over how much personal data, however aggregated or anonymized, should be used to train AI systems. Chatbots like OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s Bing, as well as image-generation programmes like Midjourney and Stable Diffusion, are trained on vast amounts of online text and images. Recent months have seen a rise in legal actions brought by authors and creatives who claim to see their own work reflected in the output of generative AI tools.


Alibaba Open Sources Two LLM Models

Alibaba Cloud open sources its LLM Models

Alibaba Cloud, the digital technology backbone of the Chinese tech giant, Alibaba Group Holding, has open-sourced two of its large language models (LLMs). With this move, Alibaba intends to expand its influence in the generative AI field.

The two open-source models, Qwen-7B and Qwen-7B-Chat, are smaller versions of Tongyi Qianwen, which is Alibaba’s largest language model. Roughly translated to “seeking truth by asking a thousand questions,” Tongyi Qianwen is the LLM launched by Alibaba’s cloud computing service unit in April.

Both open-source models have 7 billion parameters each. Qwen-7B-Chat is a fine-tuned version of Qwen-7B and can conduct human-like conversations.

Read More: Alibaba to Roll Out its Generative AI Tech Tongyi Qianwen in All Apps

As per the company’s statement, the models’ internal mechanisms, including the code and documentation, will be made freely accessible to academics, researchers, and commercial institutions worldwide. They can be accessed through Alibaba Cloud’s AI model repository ModelScope and the US-based collaborative AI platform Hugging Face.

This development comes after Meta released its open-source LLM—Llama 2—with Microsoft on July 16.

While companies with fewer than 100 million monthly active users can deploy the open-source models for commercial use free of charge, those with more users will have to request a license from Alibaba Cloud. This is similar to Meta’s Llama 2, which requires a license from companies with more than 700 million users.

Set to be spun off from its parent company next year to become a publicly listed company, Alibaba Cloud has been doubling down on generative AI development and commercialization amid the global frenzy around ChatGPT.

Zhou Jingren, chief technology officer of Alibaba Cloud Intelligence, said, “We aim to promote inclusive technologies and enable more developers and small and medium-sized enterprises to reap the benefits of generative AI.”
