
Digital Personal Data Protection Bill 2023 Tabled in Lok Sabha

The controversial Digital Personal Data Protection Bill 2023 was introduced in the Lok Sabha yesterday by Minister of Communications, Electronics, and Information Technology Ashwini Vaishnaw. The bill seeks to regulate how digital personal data is processed in a way that upholds individuals' right to privacy while allowing that data to be used only for lawful purposes.

The opposition, however, pushed back strongly against the bill's introduction, expressing concern that it might violate citizens' fundamental right to privacy. They argued that the bill should be referred to the standing committee for thorough analysis, particularly since the government had withdrawn a related data privacy bill the previous year.

Minister Vaishnaw responded by stating that the bill is not a money bill and assured the opposition that any concerns raised would be addressed during the bill's debate. Recently, Minister of State for IT Rajeev Chandrasekhar, responding to claims from fellow member John Brittas, said the DPDP Bill cannot be referred to any committee unless Parliament itself chooses to do so.

Read More: OpenAI’s Sam Altman Launches Cryptocurrency Project Worldcoin

Through his official Twitter account, MoS IT Rajeev Chandrasekhar discussed the importance of digital personal data protection. According to him, the Bill is an important step towards the goal of global-standard cyber laws for India's $1 trillion digital economy and the India Techade.

Chandrasekhar said the Ministry of Electronics and Information Technology conducted extensive stakeholder consultations that shaped the legislation. If the bill is approved by Parliament, he said, it will safeguard the rights of all citizens, promote economic innovation, and give the government legal authority to act in emergencies such as pandemics and earthquakes.

He characterized the Digital Personal Data Protection Bill as a modern, future-ready, yet straightforward and simple law that meets international standards, securing India's position at the forefront of the digital world.


Artificial Intelligence Software Flies Air Force XQ-58A Valkyrie Drone

The Air Force Research Laboratory said on August 2 that artificial intelligence software had successfully flown the XQ-58A Valkyrie drone.

On July 25, the US lab conducted the three-hour sortie with test units at the Eglin Test and Training Complex in Florida. The autonomous aircraft capability was developed over the course of two years by Skyborg Vanguard, a team drawn from the lab and the Air Force Life Cycle Management Centre.

“This sortie officially enables the ability to develop artificial intelligence agents that will execute modern air-to-air and air-to-surface skills that are immediately transferrable to the CCA programme,” said Colonel Tucker Hamilton, Air Force’s chief of AI test and operations. 


According to the Air Force, the lab's Autonomous Air Combat Operations team developed the flight's algorithms, which accumulated millions of hours of development time through simulations, sorties with the X-62 VISTA experimental aircraft, work with the XQ-58A, and ground test operations. Earlier flights of the XQ-58A Valkyrie have supported the Air Force's research into loyal-wingman aircraft.

The Air Force Research Laboratory is the service's primary scientific research and development centre, responsible for discovering, developing, and integrating low-cost warfighting technologies for the nation's air, space, and cyberspace forces.

The head of the lab, Brig. Gen. Scott Cain, stated in the announcement, “AI will be a critical element to future warfighting and the speed at which we’re going to have to understand the operational picture and make decisions. We require the concerted efforts of our government, academic, and business partners to stay up with the exceptional rate of evolution of AI, autonomous operations, and human-machine teaming.”


AI Cameras Catch 19 MLAs, 10 MPs for Traffic Violations in Kerala 


After a review meeting on Thursday, Transport Minister Antony Raju of Kerala announced that the AI-enabled video surveillance system installed to detect traffic violations has caught the vehicles of 19 MLAs and 10 MPs.

The Minister denied the claim that the Motor Vehicles Department (MVD) gave VIPs special treatment, noting that one MP had six traffic violations and one MLA had seven.

The traffic surveillance system has flagged 328 government vehicles, including those of MLAs and MPs, for violating traffic laws. Raju declined to share further details, but said the MLAs and MPs had been caught for offences such as speeding and driving without a seatbelt. All traffic violators were issued e-challans, he said.

Read More: Vimeo Introduces AI-Powered Script Generator And Text-Based Video Editor

Since the project's launch on June 5, a total of 32.42 lakh traffic offences had been detected as of Wednesday. Of these, 15.83 lakh violations had been validated by the department, and 5.89 lakh were uploaded to the Integrated Transport Monitoring System, which has already issued 3.82 lakh challans.

Although challans worth a total of ₹25.81 crore have been issued, the department has collected only ₹3.37 crore. It was considering a proposal to withhold insurance renewal for anyone who has not paid their fine, and the Minister said the department would shortly discuss this with insurance providers.
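Taken together, the reported figures describe an enforcement funnel with a steep drop-off at every stage. A quick back-of-the-envelope calculation (using the figures above, in Indian lakh/crore units) makes the rates explicit:

```python
# Reported figures from Kerala's AI camera rollout (lakh = 1e5, crore = 1e7).
LAKH, CRORE = 100_000, 10_000_000

detected  = 32.42 * LAKH   # violations flagged by the cameras since June 5
validated = 15.83 * LAKH   # violations confirmed by the department
uploaded  = 5.89  * LAKH   # uploaded to the Integrated Transport Monitoring System
challans  = 3.82  * LAKH   # e-challans actually issued

levied    = 25.81 * CRORE  # total fines billed, in rupees
collected = 3.37  * CRORE  # fines actually recovered, in rupees

print(f"validated: {validated / detected:.0%} of detections")  # 49%
print(f"challaned: {challans / detected:.0%} of detections")   # 12%
print(f"collected: {collected / levied:.0%} of fines levied")  # 13%
```

Roughly half of all detections survive validation, about one in eight ends in a challan, and only around 13% of the fine amount has been recovered, which is the gap the insurance-renewal proposal targets.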

Traffic accidents fell significantly after the surveillance system was put in place. In July 2022, 313 people lost their lives in 3,316 traffic accidents; in July 2023, 67 people died in 1,201 accidents. Injuries fell from 3,992 in July 2022 to 1,329 over the same period, according to Mr. Raju.
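The year-on-year improvement the Minister cites is easy to quantify from the reported July figures:

```python
# Reported Kerala road-safety figures, before and after the camera rollout.
july_2022 = {"accidents": 3316, "deaths": 313, "injured": 3992}
july_2023 = {"accidents": 1201, "deaths": 67, "injured": 1329}

# Percentage reduction for each metric, July 2022 -> July 2023.
reduction = {k: round(100 * (july_2022[k] - july_2023[k]) / july_2022[k], 1)
             for k in july_2022}
print(reduction)  # {'accidents': 63.8, 'deaths': 78.6, 'injured': 66.7}
```

That is roughly a two-thirds drop in accidents and injuries and nearly a four-fifths drop in deaths, though a single month-on-month comparison cannot by itself isolate the cameras' contribution.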


Ministry of Skill Development Partners with AWS to Provide Free Courses in Emerging Technologies 


To give students foundational knowledge of cutting-edge technologies, the Ministry of Skill Development and Entrepreneurship (MSDE) has partnered with Amazon Web Services (AWS) India. The pact aims to offer students enrolled in DGT institutions valuable self-paced online learning programmes at no cost.

Through the Bharat Skills Platform of the Directorate General of Training, the collaboration will provide free online courses in cloud computing, data annotation, artificial intelligence (AI), and machine learning (ML).

Under the terms of the agreement, selected educational institutions will receive AWS’ cloud computing curriculum, enabling students to get ready for certification exams and positions that are relevant to cloud technology.


The programme targets students from Industrial Training Institutes (ITIs) and National Skill Training Institutes (NSTIs) across India, with the aim of enhancing their skill sets.

Atul Kumar Tiwari, Secretary of MSDE, highlighted the collaboration's importance, saying students would gain practical knowledge and in-demand skills in cloud computing, data annotation, AI, and ML.

Amazon Web Services continues to invest in education and the growth of the digital workforce in India. During the announcement, Sunil PP, Lead for Education, Space, Non-profits, Channels, and Alliances at AWS India, stressed the company's commitment to offering educational tools to young people and instructors.


Google Unveils AI Model RT-2 to Help Robots Interpret Visual and Linguistic Patterns


With the launch of the Robotic Transformer 2 (RT-2), a cutting-edge AI learning model, Google is taking a big step towards making its robots more intelligent. Building on its preceding vision-language-action (VLA) model, RT-2 gives robots a better understanding of visual and linguistic patterns, helping them interpret instructions accurately and pick the objects best suited to a given request.

In recent tests, researchers used a robotic arm to put RT-2 through its paces in a mock kitchen environment. The robot was asked to identify an improvised hammer, for which it picked up a rock, and to select a beverage for a fatigued person, for which it chose a Red Bull. The researchers also instructed the robot to move a Coke can to a photo of Taylor Swift.

RT-2 was trained by Google on a combination of web and robotics data, taking advantage of advances in large language models such as Google's own Bard. This combination of linguistic data with robotic expertise, particularly an understanding of how robotic joints should move, proved to be a successful strategy. Additionally, RT-2 can understand instructions delivered in languages other than English, a significant advance in the cross-lingual capabilities of AI-driven robots.
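A key idea behind RT-2 is that robot actions are emitted as short strings of discretized integer tokens, so the same transformer that produces text can also produce motor commands. The toy decoder below illustrates that concept only; the token layout and scaling here are illustrative assumptions, not Google's actual format:

```python
def decode_action(token_str: str) -> dict:
    """Decode an illustrative 8-token action string into continuous commands.

    Assumed layout (for illustration): [terminate, dx, dy, dz,
    droll, dpitch, dyaw, gripper], each token an integer bin in 0..255.
    """
    tokens = [int(t) for t in token_str.split()]
    if len(tokens) != 8:
        raise ValueError("expected 8 action tokens")

    def to_unit(t: int) -> float:
        # Map a 0..255 bin back to a continuous value in [-1.0, 1.0].
        return (t / 255.0) * 2.0 - 1.0

    return {
        "terminate": tokens[0] != 0,
        "xyz": tuple(to_unit(t) for t in tokens[1:4]),  # position deltas
        "rpy": tuple(to_unit(t) for t in tokens[4:7]),  # rotation deltas
        "gripper": tokens[7] / 255.0,                   # 0 closed .. 1 open
    }

action = decode_action("0 128 91 241 5 101 127 217")
```

In the actual RT-2 setup, the model is fine-tuned so that action tokens share the vocabulary of its language tokens; the simple binning above is only a stand-in for that scheme.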


Before VLA models like RT-2, teaching robots required laborious, time-consuming programming for each distinct task. Thanks to the strength of these advanced models, robots can now draw on a massive pool of data, allowing them to quickly draw conclusions and make judgements.

Not everything about the new robot is ideal, however. In a live demonstration covered by The New York Times, the robot struggled to correctly identify soda flavours and repeatedly misidentified fruit as the colour white. These flaws underline the continuing difficulty of refining AI technology for practical applications.


More Than 200,000 OpenAI Compromised Credentials Available for Sale on Dark Web 


Security researchers have found over 200,000 OpenAI credentials in stolen logs for sale on the dark web. The compromised credentials could let buyers access chats containing private information, including trade secrets, source code, and business plans, as well as use ChatGPT's premium services for free.

According to a study by Flare that examined 19.6 million leaked logs, 400,000 business credentials from various online accounts were found, including Google Cloud Platform, AWS, Salesforce, QuickBooks, and HubSpot.

Flare also found 205,447 compromised OpenAI account credentials stolen via commodity-malware log harvesting. It is still not clear whether Flare's findings overlap with Group-IB's.


The finding comes after the threat intelligence team at cybersecurity company Group-IB reported that over 100,000 ChatGPT account credentials were sold on dark-web markets between June 2022 and May 2023. Those credentials were stolen using the malware strains Raccoon Infostealer (78,348), Vidar (12,984), and RedLine (6,773).

Asia-Pacific (40.5%), the Middle East and Africa (24.6%), and Europe (16.8%) were the regions where the most compromised credentials were offered for sale on dark-web marketplaces. With 12,632 credentials exposed on the dark web, India took the top spot, followed by Pakistan (9,217), Brazil (6,531), Vietnam (4,771), and Egypt (4,588); the United States came sixth with 2,995 compromised accounts.
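For a sense of scale, tallying the country figures reported above shows how much of the leak the listed countries account for (a quick sketch; the six countries named are only the top of a much longer tail):

```python
# Compromised ChatGPT credential counts by country, as reported by Group-IB.
by_country = {
    "India": 12_632, "Pakistan": 9_217, "Brazil": 6_531,
    "Vietnam": 4_771, "Egypt": 4_588, "United States": 2_995,
}

# Rank countries by count, highest first.
ranked = sorted(by_country.items(), key=lambda kv: kv[1], reverse=True)
listed_total = sum(by_country.values())

print(ranked[0])     # ('India', 12632)
print(listed_total)  # 40734 of the 100,000+ credentials reported
```

The six named countries thus cover roughly 40% of the 100,000-plus credentials Group-IB reported; the remainder is spread across other markets.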

However, OpenAI later clarified that the compromised login credentials were not the result of any OpenAI data breach. Instead, they were the by-product of commodity malware-based log harvesting.


OpenAI Files Trademark Application for GPT-5 with USPTO


OpenAI has filed a trademark application for "GPT-5" with the United States Patent and Trademark Office (USPTO). The move suggests that development of a new version of its language model may be underway. The news was shared by trademark attorney Josh Gerben in a tweet on July 31.

According to the trademark application, the GPT-5 mark covers computer programmes for producing human-like speech and text, as well as for processing, generating, understanding, and analyzing natural language. Following the March release of GPT-4, GPT-5 is believed to be the next powerful version of OpenAI's generative chatbot.

Despite the trademark filing, there is no evidence that GPT-5 is currently under development. Although an advanced language model is probably in OpenAI's future plans, the main goal of the application might be to protect the name "GPT-5" and stop unauthorized use by others.


The next version of OpenAI's large language model is expected to be GPT-5. Sam Altman decided to delay training GPT-4's successor for some time after the pause letter signed by luminaries such as Elon Musk and Steve Wozniak, saying the company still had a lot of work to do before launching the model.

In June, Altman said OpenAI was concentrating on developing new ideas and had not yet begun training GPT-5. OpenAI has not, however, formally confirmed GPT-5's precise features and improvements.

Additionally, in December 2022, OpenAI applied to the USPTO for a trademark on the term "GPT". In April, OpenAI petitioned the USPTO to expedite the process, citing the many apps appearing with "GPT" in their names. The application is still pending and could take another four to five months to be decided, according to Jefferson Scher, a partner in the intellectual-property practice at Carr & Ferrell.


Meta Launches AudioCraft AI Tool to Generate Audio and Music from Text


Meta has released a new open-source AI tool called AudioCraft. According to Meta AI, the programme is designed to let both experienced artists and non-professionals generate audio and music from straightforward text prompts.

AudioCraft comprises three models: MusicGen, AudioGen, and EnCodec. MusicGen, trained on Meta's own music library, generates music from text inputs. AudioGen, trained on public sound effects, produces general audio from text inputs. The EnCodec decoder has also been upgraded, enabling higher-quality music generation with fewer unwanted artefacts.

Thanks to Meta's pre-trained AudioGen models, users will be able to create environmental sounds and sound effects such as dogs barking, cars honking, or footsteps on a wooden floor. Meta is also releasing the code and all of the model weights for AudioCraft. Applications for the new tool include audio production, sound-effect creation, compression research, and music composition.


By making these models open source, Meta wants to enable researchers and practitioners to train their own models on their own datasets. According to Meta, generative AI has advanced significantly in images, video, and text, but less so in audio. AudioCraft fills this gap by offering a more approachable, user-friendly framework for producing high-quality audio.

According to Meta's official blog, producing realistic, high-fidelity music is particularly difficult because it requires modelling complicated signals and patterns at many scales. Because music consists of both local and long-range patterns, it poses a special challenge for audio generation.

AudioCraft supports extended, high-quality audio generation. The company argues that simplifying the design of generative audio models will make it easier for users to experiment with the models already in place.


Ministry of Education Selects Oracle to Revamp National Education Platform DIKSHA


Oracle announced on August 2 that the Ministry of Education has selected Oracle Cloud Infrastructure (OCI) to revamp the Digital Infrastructure for Knowledge Sharing (DIKSHA) national education technology platform. The move will make DIKSHA more accessible and cut its IT expenses.

Shailender Kumar, senior vice-president and regional managing director, Oracle India and NetSuite Asia Pacific and Japan, stated during a briefing held today, “We have transitioned and migrated DIKSHA onto Oracle Cloud Infrastructure.”

Oracle Cloud Infrastructure will assist the ministry in using DIKSHA to make educational resources available to millions more students, teachers, and collaborators around the nation as part of the seven-year collaboration agreement.


Since its recent switch to OCI, DIKSHA has improved its scalability, security, cost-effectiveness, and ability to adapt capacity to demand, according to the press release. This has allowed it to deliver more material and serve more students and teachers as the platform grows.

Indu Kumar, head of department, ICT and Training, Central Institute of Educational Technology (CIET), Ministry of Education, said, “We need to embrace modern tools and technology to make education more easily available and securely accessible to everyone.”

One of India's largest and most successful Digital Public Infrastructure (DPI) initiatives, DIKSHA was created for school education and foundational learning courses. Through the open-source Sunbird platform, created by the EkStep Foundation, DIKSHA helps teachers promote inclusive learning for underprivileged communities and children with disabilities across the nation.

On the platform, more than 200 million students and 7 million teachers from public and private institutions access content from more than 11,000 contributors.


Neuralink uses Realistic Brain Models to Ensure Patient-Specific Surgeries 


Neuralink, Elon Musk's neurotechnology company developing implantable brain-computer interfaces, recently took to Twitter to demonstrate in a video how it uses realistic head and brain models to ensure patient-specific surgeries.

In a tweet, Neuralink said, “Our surgical team enhances their skills by training on realistic, patient-specific head and brain models, ensuring surgeries are tailored to each individual for safety and success.” In the video, a scientist can be seen holding a head model. Through a rectangular opening on the top of the model, the scientist shows the pulsating lifelike brain inside.

Neuralink recently announced that US regulators had given it permission to test its brain implants in humans. The US Food and Drug Administration's (FDA) approval of Neuralink's first in-human clinical study marked an important first step for its chip technology.


“We’ve been working hard to be ready for our first human (implant), and obviously we want to be extremely careful and certain that it will work well before putting a device in a human,” Elon Musk stated while discussing the implant. The company hasn’t announced a human trial yet. 

In 2021, Neuralink demonstrated coin-sized prototype devices implanted in the skulls of monkeys. During the presentation, several monkeys could be seen playing simple video games or moving a cursor on a screen. Notably, 15 of the 23 monkeys implanted that year died.
