
Google Unveils AI Model RT-2 to Help Robots Interpret Visual and Linguistic Patterns

Image Credits: Google

With the launch of the Robotic Transformer 2 (RT-2), a cutting-edge AI learning model, Google is taking a significant step toward making its robots more intelligent. Building on the preceding vision-language-action (VLA) model, RT-2 gives robots a better grasp of visual and linguistic patterns, helping them accurately interpret instructions and identify the objects best suited to a particular request.

In recent tests, researchers evaluated RT-2 using a robotic arm in a mock kitchen office setting. Asked to identify an improvised hammer, the robot picked out a rock; asked to choose a beverage for a fatigued person, it selected a Red Bull. The researchers also instructed the robot to move a Coke can to a photo of Taylor Swift, revealing its unexpected preference for the well-known singer.

Google trained RT-2 on a combination of web and robotics data, taking advantage of developments in large language models such as its own Bard. Combining linguistic data with robotic expertise, particularly an understanding of how robotic joints should move, proved a successful strategy. RT-2 also shows competence in understanding instructions delivered in languages other than English, a significant advance in the cross-lingual capabilities of AI-driven robots.

Read More: Vimeo Introduces AI-Powered Script Generator And Text-Based Video Editor

Before VLA models like RT-2, teaching robots required laborious, time-consuming programming of each distinct task. Thanks to the strength of these advanced models, robots can now draw on a massive body of data, allowing them to quickly make inferences and judgements.

The new robot is not without flaws, however. In a live demonstration covered by The New York Times, the robot struggled to correctly identify soda flavors and repeatedly misidentified fruit as the color white. These shortcomings underscore the continuing challenges of refining AI technology for practical applications.


More Than 200,000 OpenAI Compromised Credentials Available for Sale on Dark Web 

Image Credits: Shutterstock

Security researchers have found over 200,000 OpenAI credentials for sale on the dark web in stolen logs. Through the compromised credentials, buyers can access chats containing private information, including trade secrets, source code, and business plans, as well as use ChatGPT’s premium services for free.

According to a study by Flare that examined 19.6 million leaked logs, 400,000 business credentials from various online accounts, including Google Cloud Platform, AWS, Salesforce, QuickBooks, and HubSpot, were also exposed.

Flare also found 205,447 compromised OpenAI account credentials that were stolen via commodity malware log harvesting. It is still unclear whether Flare’s discovery overlaps with Group-IB’s.

Read More: OpenAI’s Sam Altman Launches Cryptocurrency Project Worldcoin

The finding follows a report by the threat intelligence team of cybersecurity firm Group-IB that over 100,000 ChatGPT account credentials were sold on dark web markets between June 2022 and May 2023. The OpenAI credentials were stolen by the malware variants Raccoon Infostealer (78,348), Vidar (12,984), and RedLine (6,773).

Asia-Pacific (40.5%) accounted for the largest share of OpenAI credentials offered for sale on dark web marketplaces, followed by the Middle East and Africa (24.6%) and Europe (16.8%). India took the top spot among countries with 12,632 OpenAI credentials exposed on the dark web, followed by Pakistan (9,217), Brazil (6,531), Vietnam (4,771), and Egypt (4,588), with the United States coming in sixth with 2,995 compromised accounts.

However, OpenAI later clarified that the compromised login credentials were not the result of any OpenAI data breach. Instead, they were the by-product of commodity malware-based log harvesting.


OpenAI Files Trademark Application for GPT-5 with USPTO

Image Credits: AD

OpenAI has filed a trademark application for “GPT-5” with the United States Patent and Trademark Office (USPTO), suggesting that development of a new version of its language model may be underway. The news was shared by trademark attorney Josh Gerben in a tweet on July 31.

According to the trademark application, GPT-5 covers computer programmes that produce human speech and writing, as well as those that process, generate, understand, and analyze natural language. GPT-5 is believed to be the next major version of OpenAI’s generative chatbot, following the March release of GPT-4.

Despite the trademark filing, there is no confirmation that GPT-5 is currently in development. While a more advanced language model is probably part of OpenAI’s future plans, the main goal of the application may simply be to protect the name “GPT-5” and prevent unauthorized use by others.


GPT-5 is planned as the successor to OpenAI’s current large language model. Sam Altman chose to delay training GPT-4’s replacement for some time after the pause letter signed by luminaries such as Elon Musk and Steve Wozniak, saying the company still has a lot of work to do before launching the model.

In June, Altman said that OpenAI was concentrating on developing new concepts and had not yet begun training GPT-5. The company has not, however, formally confirmed the precise features and improvements GPT-5 will bring.

In December 2022, OpenAI also applied to trademark the term “GPT” with the USPTO, and in April it petitioned the office to expedite the process after a wave of apps bearing the GPT name started appearing. That application is still pending and could take another four to five months to be decided, according to Jefferson Scher, a partner in the intellectual property practice at Carr & Ferrell.


Meta Launches AudioCraft AI Tool to Generate Audio and Music from Text

Image Credits: Meta AI

Meta has released a new open-source AI tool called AudioCraft. According to Meta AI, the tool is designed to let both experienced artists and non-professionals generate audio and music from straightforward text prompts.

AudioCraft comprises three models: MusicGen, AudioGen, and EnCodec. MusicGen, trained on Meta’s own music library, generates music from text inputs, while AudioGen, trained on publicly available sound effects, produces environmental audio from text inputs. The EnCodec decoder has also been upgraded, enabling higher-quality music generation with fewer unwanted artefacts.

Thanks to Meta’s pre-trained AudioGen models, users will be able to create environmental noises and sound effects such as dogs barking, cars honking, or footsteps on a wooden floor. Meta is also releasing the code and all of the model weights for AudioCraft. Potential applications include audio production, sound-effect creation, compression methods, and music composition.


By making these models open source, Meta wants to enable researchers and practitioners to train their own models on their own datasets. According to Meta, generative AI has advanced significantly for images, video, and text, but less so for audio. AudioCraft fills this gap by offering a more approachable, user-friendly framework for producing high-quality audio.

According to Meta’s official blog, producing realistic, high-fidelity audio is particularly difficult because it requires modeling complicated signals and patterns at many scales. Music, which is made up of both local and long-range patterns, poses a special challenge for audio generation.

AudioCraft can produce extended stretches of high-quality audio. The company argues that simplifying the construction of generative audio models will make it easier for users to experiment with the models already in place.


Ministry of Education Selects Oracle to Revamp National Education Platform DIKSHA

Image Credits: Oracle

Oracle announced on August 2 that the Ministry of Education has chosen Oracle Cloud Infrastructure (OCI) to revamp the Digital Infrastructure for Knowledge Sharing (DIKSHA) national education technology platform. The move is expected to make DIKSHA more accessible and cut its IT expenses.

Shailender Kumar, senior vice-president and regional managing director, Oracle India and NetSuite Asia Pacific and Japan, stated during a briefing held today, “We have transitioned and migrated DIKSHA onto Oracle Cloud Infrastructure.”

As part of a seven-year collaboration agreement, Oracle Cloud Infrastructure will help the ministry use DIKSHA to make educational resources available to millions more students, teachers, and collaborators across the country.


Since migrating to OCI, DIKSHA has improved its scalability, security, cost-effectiveness, and ability to adapt capacity to demand, according to the press release. This has allowed it to produce more material and serve more students and teachers as the platform grows.

Indu Kumar, head of department, ICT and Training, Central Institute of Educational Technology (CIET), Ministry of Education, said, “We need to embrace modern tools and technology to make education more easily available and securely accessible to everyone.”

One of India’s largest and most successful Digital Public Infrastructure (DPI) initiatives, DIKSHA was created for school education and foundational learning courses. Built on the open-source Sunbird platform developed by the EkStep Foundation, DIKSHA helps teachers promote inclusive learning for underprivileged communities and children with disabilities across the nation.

On the platform, more than 200 million students and 7 million teachers from public and private institutions access content from more than 11,000 contributors.


Neuralink uses Realistic Brain Models to Ensure Patient-Specific Surgeries 

Image Credits: AD

Neuralink, Elon Musk’s neurotechnology company developing implantable brain-computer interfaces, recently shared a video on Twitter demonstrating how it uses realistic head and brain models to ensure patient-specific surgeries.

In a tweet, Neuralink said, “Our surgical team enhances their skills by training on realistic, patient-specific head and brain models, ensuring surgeries are tailored to each individual for safety and success.” In the video, a scientist can be seen holding a head model. Through a rectangular opening on the top of the model, the scientist shows the pulsating lifelike brain inside.

Elon Musk’s Neuralink recently announced that US regulators had given them permission to test their brain implants on human beings. The US Food and Drug Administration’s (FDA) approval of Neuralink’s first in-human clinical research marked an important first step for its chip technology.


“We’ve been working hard to be ready for our first human (implant), and obviously we want to be extremely careful and certain that it will work well before putting a device in a human,” Elon Musk stated while discussing the implant. The company hasn’t announced a human trial yet. 

In 2021, Neuralink demonstrated coin-sized prototype devices implanted in the skulls of monkeys. During the presentation, several monkeys could be seen playing simple video games or moving a cursor on a screen. Notably, the trials also resulted in the deaths of 15 of the 23 monkeys that year.


Microsoft Appoints Former AWS Executive Puneet Chandok as VP India

Image Credits: Business Standard

Microsoft has announced that former AWS India President Puneet Chandok has been named Corporate Vice President of Microsoft India and South Asia. The announcement comes as the two major international cloud providers, AWS and Microsoft, increasingly compete for market share in India.

Chandok will take over operational responsibilities on September 1, 2023, from outgoing Microsoft India President Anant Maheshwari, who resigned last month.

According to the company, Chandok will be responsible for integrating Microsoft’s operations across South Asia, including Bangladesh, Bhutan, the Maldives, Nepal, and Sri Lanka. He will also lead the company’s focus on key industries, with a customer-centric strategy built on generative AI.


Ahmed Mazhari, President of Microsoft Asia, said, “Puneet has a proven track record of starting, expanding, and using technology-based enterprises to create influence and change. Puneet’s leadership will be crucial in ensuring Microsoft’s continued success in South Asia as we embrace an AI-driven future. I also want to thank Anant Maheshwari for setting us on a growth path.”

In his previous role, Chandok oversaw AWS’s business operations in India and South Asia, working closely with large corporations, startups, small and medium-sized businesses (SMBs), and digital businesses to help them innovate, reduce technical debt, and become more agile. He was a partner at McKinsey for more than ten years in India and Asia, and has also held senior regional and global positions at IBM.

Chandok holds a bachelor’s degree in commerce and an MBA from the Indian Institute of Management Calcutta, along with diplomas in high-level computer systems, networking, and computer programming.


AWS, Accel Announce Accelerator Programme to Support Indian Generative AI Startups

A six-week accelerator programme called ML Elevate 2023 has been announced by venture capital firm Accel and Amazon Web Services (AWS) to support Indian businesses creating generative AI solutions. 

ML Elevate aims to support generative AI startups by giving them access to useful AI models and tools, commercial and technical mentoring, curated resources, the AWS Activate programme, and up to USD 200,000 in AWS credits.

Other advantages include the chance to scale production-ready generative AI apps on Amazon SageMaker JumpStart and peer support from a network of top AI and ML startup founders.

Generative AI startups that have built a Minimum Viable Product (MVP) and plan to raise funding in the next 12 to 18 months are eligible to apply. Selected companies will take part in live virtual masterclasses, including fireside chats and panel discussions with investors, industry leaders, and AWS specialists.


Poonacha Kongetira, SVP Engineering at SambaNova, Anupam Datta, co-founder, President, and Chief Scientist at TruEra, Tom Mason, Chief Technology Officer at Stability AI, Vishal Dhupar, Managing Director, Asia-South of NVIDIA Graphics, Apurva Kalia, Senior Researcher at Tufts University, and others will be on the panel of speakers. 

Through a dedicated Demo Week, the cohort will also have the chance to pitch to top VC funds, angel investors, and business titans in order to raise money.

“Our goal with ML Elevate is to help generative AI startups create solutions tailored to specific industries and invent to advance the digital economy,” said Vaishali Kasture, Director of AWS India and South Asia at Amazon Web Services India.


New AI Model FraudGPT Assists Cybercriminals With Illicit Activities

Rakesh Krishnan, a security researcher at Netenrich, recently reported in a blog post that he had discovered evidence of an AI model known as FraudGPT.

The seller behind the programme, operating under the alias CanadianKingpin, claims FraudGPT can generate malicious code, produce undetectable malware, locate data leaks, and pinpoint vulnerabilities. Such malware can circumvent conventional security measures, making it difficult for antivirus software to detect and remove threats.

Another advertised feature is the identification of non-Verified by Visa (non-VBV) cards, which lets attackers execute unauthorized transactions without additional security checks. The programme can also automatically create convincing phishing pages that mimic legitimate websites, raising the success rate of phishing attempts.


Beyond phishing pages, FraudGPT can reportedly build hacking tools tailored to particular targets or exploits, and search the internet for hidden hacker organizations, illicit websites, and marketplaces where stolen data is sold.

The application can also craft phishing emails to lure victims into scams, produce content that teaches coding and hacking methods, giving cybercriminals resources to advance their skills, and identify cardable sites where stolen credit card information can be used for unauthorized purchases.

FraudGPT has been circulating on darknet forums and Telegram channels since July 22, 2023. Subscriptions cost $200 per month, $1,000 for six months, or $1,700 per year. The large language model underlying it is unknown, but the author claims to have amassed more than 3,000 verified sales and reviews.


DPDP Bill Can’t Be Referred To Any Committee Unless Done By Parliament, Says Rajeev Chandrasekhar

The Digital Personal Data Protection (DPDP) Bill has not been referred to any committee, Minister of State for Electronics and Information Technology Rajeev Chandrasekhar has said, and such a referral can happen only after the bill is introduced in Parliament.

Chandrasekhar was responding to a July 28 letter from Rajya Sabha member John Brittas asking the Lok Sabha Speaker and the Rajya Sabha Chairman to block the tabling of a report by the Parliamentary Standing Committee on Communications and IT.

Curiously, Chandrasekhar also accused Brittas, a fellow committee member, of spreading false information about the bill, which Brittas said had been “adopted” by the committee. A few days earlier, Brittas and other opposition lawmakers on the IT committee had walked out of a meeting when the panel, chaired by Shiv Sena MP Prataprao Ganpatrao Jadhav, adopted a report backing the bill.


The members who opposed the report said they had not been given the updated version of the DPDP Bill approved by the Union Cabinet and were therefore unaware of the report’s preparation. In the report, the committee urged that the DPDP Bill be enacted without delay.

In a tweet, Chandrasekhar said that no bill, including the proposed DPDP Bill, can be referred to a committee except by Parliament, and that this can happen only after the Cabinet-approved bill is introduced in Parliament. He added that since the DPDP Bill has not yet been introduced, the question of a committee considering it does not arise.

Earlier, however, Ashwini Vaishnaw, the Minister of Electronics and Information Technology, courted controversy by saying that the IT committee had “approved” the bill. Committee members including Karti Chidambaram, Jawhar Sircar, and Brittas have called the minister’s assertion untrue.
