
Andrew Ng Introduces ‘Generative AI with LLMs’ with AWS

Image Source: Analytics Drift

Andrew Ng, a pioneer in machine learning, founder of DeepLearning.AI and Landing AI, and co-founder of Coursera, has unveiled a new course, ‘Generative AI with Large Language Models’, in collaboration with Amazon Web Services (AWS).

The three-week course on generative AI with Large Language Models (LLMs), created jointly by AWS and DeepLearning.AI, provides an in-depth understanding of LLM architecture and how LLMs work. It also guides learners on using LLMs effectively in applications by selecting the most appropriate model and applying suitable training techniques.

The course also covers the full life cycle of a generative AI project and focuses on specific techniques such as Reinforcement Learning from Human Feedback (RLHF), zero-shot, one-shot, and few-shot prompting with LLMs, advanced prompting frameworks like ReAct, and fine-tuning of LLMs. The course is available on Coursera and is designed especially for data scientists, ML engineers, research engineers, and anyone interested in generative AI.
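For readers unfamiliar with the zero-shot, one-shot, and few-shot concepts the course mentions, here is a minimal sketch of how such prompts are typically assembled before being sent to an LLM. The helper function and the commented-out `complete()` call are illustrative placeholders, not part of the course material or any specific provider's API.

```python
# Hypothetical illustration of zero-/one-/few-shot prompting.
# `complete` stands in for whatever LLM client you actually use.

def build_few_shot_prompt(task, examples, query):
    """Assemble a prompt from a task description, (input, output) examples, and a new query."""
    parts = [task]
    for x, y in examples:                          # zero-shot when `examples` is empty,
        parts.append(f"Input: {x}\nOutput: {y}")   # one-shot with one pair, few-shot with several
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

examples = [
    ("The battery dies in an hour.", "negative"),
    ("Setup took two minutes and it just works.", "positive"),
]
prompt = build_few_shot_prompt(
    task="Classify the sentiment of each product review as positive or negative.",
    examples=examples,
    query="The screen is gorgeous but the hinge feels flimsy.",
)
print(prompt)                      # inspect the assembled few-shot prompt
# response = complete(prompt)      # hypothetical call to your LLM provider
```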

Read More: Three New Generative AI Short Courses Available for Free for Limited Time by DeepLearning.AI

Apart from Andrew Ng, the course will be taught by Chris Fregly, principal solutions architect at AWS; Antje Barth, principal developer advocate at AWS; and Mike Chambers, developer advocate at AWS. Andrew Ng announced the course in a post on LinkedIn.

Andrew Ng is an AI and ML expert who consistently encourages people to learn about and adapt to generative AI. Last month, he introduced three new generative AI courses on Building Systems with the ChatGPT API, LangChain for LLM Application Development, and How Diffusion Models Work. In collaboration with OpenAI, he also offered a free short course, ‘ChatGPT Prompt Engineering for Developers,’ designed to help developers effectively utilize LLMs.


Twitter Announces Daily Reading Limits Due to Excessive Data Scraping 

Image Credits: AD

Elon Musk recently announced additional “temporary” limits on the number of posts users can view, continuing to blame the restrictions on AI businesses extracting vast amounts of data from Twitter. Now, unverified accounts can view only 600 posts per day, and “new” unverified accounts only 300.

Verified accounts are still limited to reading up to 6,000 posts per day. This presumably applies regardless of whether verification was purchased as part of the Twitter Blue subscription, granted through an organization, or forced on people like Stephen King, LeBron James, and anyone else with more than a million followers.

Shortly afterward, Musk tweeted that the rate limits would “soon” rise to 8,000 posts per day for verified users, 800 for unverified users, and 400 for brand-new unverified accounts.
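Twitter has not described how it enforces these caps, but as a purely illustrative sketch, a per-account daily read quota like the one quoted above could look as follows; the tier names mirror the figures Musk cited, and everything else is an assumption for demonstration.

```python
from datetime import date

# Daily post-read caps as announced (later raised to 8,000 / 800 / 400).
DAILY_READ_LIMITS = {"verified": 6000, "unverified": 600, "new_unverified": 300}

class ReadQuota:
    """Toy per-account counter that resets each calendar day."""
    def __init__(self, tier):
        self.limit = DAILY_READ_LIMITS[tier]
        self.day = date.today()
        self.count = 0

    def allow_read(self):
        today = date.today()
        if today != self.day:          # reset the counter when a new day starts
            self.day, self.count = today, 0
        if self.count >= self.limit:
            return False               # rate limited: further posts are hidden
        self.count += 1
        return True

quota = ReadQuota("unverified")
print(all(quota.allow_read() for _ in range(600)))  # True: within the cap
print(quota.allow_read())                           # False: the 601st read is blocked
```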

Read More: Vimeo Introduces AI-Powered Script Generator And Text-Based Video Editor

The restrictions came into effect a day after Twitter abruptly began denying access to users who aren’t logged in. Musk claimed this was necessary because a few hundred organizations (possibly more) were scraping Twitter data extensively, to the point where it was affecting the experience for real users.

Musk has tried to monetize Twitter in a number of different ways over the past few months, and this change is only one of them. Just three months after launching the redesigned $8-per-month Twitter Blue pay-for-verification programme, the company announced a three-tier API update in March that introduced charges for use of its API.


TSRTC Buses to Get AI-powered Driver Monitoring Systems 

Image Credits: TSRTC

In an effort to improve road safety, public transport buses in Telangana will soon be equipped with driver monitoring systems (DMS), which include CCTV cameras that can identify inattentive or fatigued drivers.

DMS with CCTV cameras are likely to be installed in up to 190 Telangana State Road Transport Corporation (TSRTC) buses, either all at once or in phases. The initiative is part of Project iRASTE, which uses artificial intelligence to reduce crash rates and mishaps on roadways, and tenders for it have been floated.

The system will track the driver’s facial expressions and eye movements in real time, analyze them, and issue voice alerts within a few seconds in the event of driver distraction or when the driver is frequently seen looking outside the bus.
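The article does not describe the underlying algorithm, but driver-fatigue systems of this kind commonly monitor eye closure from facial landmarks. Below is a minimal sketch of the eye aspect ratio (EAR) heuristic applied to landmark coordinates that a face tracker would supply per video frame; the thresholds and landmark format are illustrative assumptions, not TSRTC's actual implementation.

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks: small values indicate a closed eye."""
    d = lambda a, b: math.dist(a, b)
    return (d(eye[1], eye[5]) + d(eye[2], eye[4])) / (2.0 * d(eye[0], eye[3]))

EAR_THRESHOLD = 0.21      # illustrative: below this the eye is treated as closed
CLOSED_FRAMES_ALERT = 45  # ~1.5 s at 30 fps before a voice alert is raised

closed_frames = 0

def update(left_eye, right_eye):
    """Call once per frame with landmarks from any face tracker; returns True when an alert should fire."""
    global closed_frames
    ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
    closed_frames = closed_frames + 1 if ear < EAR_THRESHOLD else 0
    return closed_frames >= CLOSED_FRAMES_ALERT
```

In a real deployment the landmark coordinates would come from an on-board camera and face-tracking model, and a sustained low EAR would trigger the voice alert described above.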

Read More: Vimeo Introduces AI-Powered Script Generator And Text-Based Video Editor

Alerts will be issued in Telugu, English, or Hindi. Additionally, the system will transmit alarms if the driver is not wearing a seatbelt, is sleepy, or is smoking. The DMS will identify the driver using facial recognition technology and trace the journeys they have taken.

Using the available data, the technology will identify both the safest and the most reckless drivers. Motion is captured by CCTV cameras mounted on the windscreen.

There are 200 buses equipped with an Advanced Driver Assistance System (ADAS) along the corridors connecting Hyderabad to Vijayawada, Hyderabad to Bengaluru, and Hyderabad to Nizamabad. The plan calls for adding DMS to the existing ADAS, and buses operating on long-distance routes are likely to have the DMS fitted.


Meta Gives a Glimpse of Its Twitter Rival Threads on Google Play

Image Credits: AD

Early this morning, Alessandro Paluzzi, a developer who frequently examines app code to discover hidden functionality, tweeted that Meta’s Twitter rival, Threads, had been made available on the Google Play store. But given that the app is no longer accessible, the listing appears to have been a mistake.

The screenshots Paluzzi shared included a login screen that lets users sign in with their Instagram accounts and another screen listing the accounts they follow on Instagram, so users can choose whom to follow on Threads.

Threads resembles Twitter a lot. According to the screenshots, a new post displays a character count along with a little paperclip icon for attaching whatever else Threads will allow you to attach to posts. When viewing posts, there are icons for liking, reposting, reacting to, and sharing them, and user photos appear in little circles. Even blue checkmarks, like the ones Twitter uses, are present.

Read More: Vimeo Introduces AI-Powered Script Generator And Text-Based Video Editor

Since January, Threads has been in development at Meta under the title “Project 92.” Chris Cox, Meta’s chief product officer, said while introducing Threads at an internal company meeting, “We’ve been hearing from creators and public figures who are interested in having a platform that is sanely run, as opposed to Twitter.”

Although Meta has not given the app an official release date, its appearance in the Google Play store suggests that a launch is imminent. There is much discussion about what this development will mean.


Goa to Install AI Cameras in 70 Accident-prone Zones to Enhance Road Safety

Image Credits: Stock Images

Authorities in Goa have selected 70 priority areas as accident-prone zones around the state in an effort to minimize traffic accidents and improve road safety. By December, artificial intelligence (AI) cameras will be installed at these areas to track traffic and catch drivers breaking the law. This was decided during a State Road Safety Council meeting on Friday at the Altinho ITMS control center.

Rajan Satardekar, Director of Transportation, emphasized the necessity for technological intervention in these accident-prone areas because it is getting harder for law enforcement to personally monitor them. In these regions, erratic driving and damaged traffic medians, which allow two-wheelers to shift lanes carelessly, have been recognised as two of the main causes of safety problems and accidents.

To address this, the transport department intends to deploy more backend staff to monitor these areas remotely. The best locations and camera-positioning angles are being worked out in collaboration with the traffic cell of the Goa police, and the associated expenditure for placing cameras at these high-priority areas is being finalized.

Read More: Vimeo Introduces AI-Powered Script Generator And Text-Based Video Editor

To further punish drivers who violate the law, the list of violations recorded by AI cameras will be circulated. Since June 1, the traffic cell and the transport department have been using AI cameras to detect offenses and issue challans.

Due to this, common traffic violations including speeding, riding without a helmet, operating a vehicle without a seatbelt, and using a phone while driving have come under more scrutiny.

According to earlier reports, these high-resolution cameras can track automobiles, spot speeding cars, and even determine whether a car has been reported stolen, and they can run on Ethernet IP-based systems. By installing AI cameras in accident-prone regions and broadening the scope of identified violations, Goa seeks to strengthen its Intelligent Traffic Management Systems (ITMS) and make considerable headway in reducing road accidents.


Meta Releases Clues on How AI is Used for Instagram and Facebook

Image Source: Analytics Drift

Meta, the parent company of Instagram and Facebook, released tools and information to help users understand how AI influences what they see on its apps.

The nearly two dozen newly introduced explainers, focused on features like Instagram Stories and Facebook’s news feed, describe how Meta selects the content it recommends to users. Meta’s move is part of its “wider ethos of openness, transparency, and accountability,” according to a blog post by Meta’s President of Global Affairs, Nick Clegg.

In the blog post, Clegg also writes, “With rapid advances taking place with powerful technologies like generative AI, it’s understandable that people are both excited by the possibilities and concerned about the risks.”

Read More: Meta to stop sharing news on Facebook, Instagram in Canada to comply with Bill C-18

As European lawmakers swiftly advance legislation, companies that use AI could face requirements for explanation and transparency. US lawmakers hope to begin working on similar legislation later this year.

Meta’s 22 newly released system cards for Facebook and Instagram provide information on how its AI systems rank content and predict what might be most relevant to each user. They cover feeds, Reels, Stories, and other surfaces that people visit to find content from the people or accounts they follow, as well as AI systems that recommend “unconnected” content from accounts, people, or groups they don’t follow.
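The system cards describe the general recipe of such ranking: predict how likely a user is to engage with each candidate post, then combine those predictions into a single relevance score used to order the feed. The toy sketch below shows that kind of weighted scoring; the signal names and weights are invented for illustration and are not taken from Meta's cards.

```python
# Toy ranking pass: score candidate posts by combining predicted engagement
# probabilities with hand-picked weights (all values here are illustrative).

WEIGHTS = {"p_like": 1.0, "p_comment": 3.0, "p_share": 5.0, "p_hide": -10.0}

def relevance_score(predictions):
    """Weighted sum of per-post engagement predictions."""
    return sum(WEIGHTS[k] * predictions.get(k, 0.0) for k in WEIGHTS)

candidates = [
    {"id": "post_a", "p_like": 0.30, "p_comment": 0.05, "p_share": 0.01, "p_hide": 0.001},
    {"id": "post_b", "p_like": 0.10, "p_comment": 0.02, "p_share": 0.08, "p_hide": 0.020},
]
ranked = sorted(candidates, key=relevance_score, reverse=True)
print([c["id"] for c in ranked])   # feed order, highest relevance first
```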

Users can now customize their feeds on both Facebook and Instagram by accessing a menu from individual posts. While Meta had previously only offered the ability to tell Instagram to show less of something, the new tools also let users ask for more of a certain type of content.

Meta will also provide a content library and an application programming interface (API) featuring a variety of content from Facebook and Instagram for researchers to study its platforms.

Additionally, the platform is expanding the Why Am I Seeing This? feature to Facebook Reels, the Instagram Reels tab, and Explore over the coming weeks. The feature was previously found only in some ads and Feed content on both Facebook and Instagram.


“Godfather of AI” Urges Governments to Safeguard Humanity

Image Credits: Pindula

Speaking at the Collision tech conference, AI scientist Geoffrey Hinton, widely known as the “Godfather of AI,” urged governments to intervene and ensure that machines do not seize control of society. Hinton, who recently left Google after a decade of service, emphasized the need to address the potential risks associated with AI, especially after the release of the captivating ChatGPT.

Addressing a packed audience of more than 30,000 industry professionals in Toronto, Hinton highlighted the importance of understanding how AI could potentially try to take control. He expressed his concerns by stating, “Before AI is smarter than us, I think the people developing it should be encouraged to put a lot of work into understanding how it might try and take control away. Right now, there are 99 very smart people trying to make AI better and one very smart person trying to figure out how to stop it taking over, and maybe you want to be more balanced.”

Hinton’s concerns regarding AI were not without substance, as he emphasized the need to address the risks seriously. He stated, “I think it’s important that people understand this is not science fiction; this is not just fear-mongering. It is a real risk that we must think about, and we need to figure out in advance how to deal with it.”

Read More: OpenAI Sued for Stealing Massive Amounts of Personal Data

In addition to potential control issues, Hinton also expressed concerns about how AI could worsen inequality. He expressed his fear that the benefits of AI adoption would primarily flow to the wealthy, leaving workers behind. “The wealth isn’t going to the people doing the work. It will go into making the rich richer and not the poorer, and that’s very bad for society,” he added.

Hinton also addressed the dangers of fake news generated by AI-powered bots like ChatGPT. He expressed hope that AI-generated information could be watermarked, similar to how central banks watermark actual money. The European Union is currently exploring such a technique as part of its AI Act, which is being negotiated by legislators to establish AI standards in Europe.
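In research prototypes, the watermarking Hinton describes is often done by biasing the generator toward a pseudo-random “green list” of tokens and later testing whether a text contains suspiciously many of them. The sketch below shows only the detection side of that idea in a heavily simplified form; the hashing scheme and the statistical test are assumptions for illustration, not a description of any deployed system or of the EU's proposal.

```python
import hashlib, math

def is_green(prev_token, token):
    """Pseudo-randomly assign roughly half of tokens to the 'green list', keyed on the previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(tokens):
    """z-score of the observed green-token fraction against the 0.5 expected by chance."""
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)

# A large positive z-score over a long text suggests it was generated with the matching green-list bias;
# text written without the bias should hover near zero.
sample = "the model was encouraged to prefer green listed tokens while sampling".split()
print(round(watermark_z_score(sample), 2))
```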


IIT Jammu Introduces B.Tech in Mathematics and Computing to Signify Their Interdependence 

Image Credits: IIT Jammu

The Indian Institute of Technology (IIT) Jammu will offer a cutting-edge undergraduate programme, a B.Tech in Mathematics & Computing, in response to the rising demand for workers with both mathematics and computer science skills.

According to an IIT Jammu spokesperson, the aim of the course is to help students grasp the deep relationship between mathematics and the booming fields of artificial intelligence, machine learning, and data science, and its primary goal is to empower young minds to contribute to groundbreaking developments in these areas.

The four-year B.Tech in Mathematics & Computing programme is designed to give students a thorough understanding of how mathematics and computer science are interrelated. Topics covered include statistics, real and complex analysis, and optimisation methods, and students will also receive a solid grounding in the foundational computer science concepts needed for their chosen field.

Read More: Vimeo Introduces AI-Powered Script Generator And Text-Based Video Editor

The selection process will be based on the rank obtained in the JEE (Advanced) national entrance exam. Candidates must have passed their Class XII or equivalent examinations from a recognised board with Physics, Chemistry, Mathematics, a language, and one additional subject beyond these four.

The spokesperson added that the programme serves as a launching pad for young minds to shape the future of AI and contribute to its revolutionary potential across numerous industries, so job prospects for graduates in this field are broad and exciting.


Google and Startup India Launch Startup School 2023

Image Credits: Google for Startups

Google has launched the second edition of its virtual Startup School for 2023, supported by Startup India and the Department for Promotion of Industry and Internal Trade (DPIIT). The program will commence on July 11, 2023.

The program aims to empower the growth of startups, especially in India’s non-metropolitan areas. Spanning eight weeks, Startup School 2023 plans to reach 30,000 Indian startups by providing them with tailored knowledge and mentorship.

The comprehensive program designed by Google covers a wide range of essential topics, including AI, funding, leadership, global growth, government support for entrepreneurs, product and tech strategy, and more.

Read More: AI Startup Cohere Raises $270 Million in Series C Funding, Values at $2.2 Billion

Startup School 2023 will feature over 30 Google and industry experts in fireside chats with transformative entrepreneurs and VCs, who will share their insights on various topics through informative sessions aimed at promoting and supporting startups in India.

Some of the distinguished speakers leading these sessions include Bikram Bedi, Managing Director of Google Cloud India; Nithin Kamath, Founder & CEO of Zerodha; Shuvi Shrivastava, Partner at Lightspeed; and Aastha Grover, Vice President of Invest India and Head of Startup India. Other speakers include Gayatri Yadav, Chief Marketing Officer of Peak XV Partners; Ashish Kashyap, Founder & CEO of Indmoney; Afsar Ahmad, Co-founder of Gameberry Labs; Vani Kola, Managing Director of Kalaari Capital; and Sanjeev Barnwal, Founder & CTO of Meesho.

Manmeet K Nanda, Joint Secretary, DPIIT, Ministry of Commerce and Industry, expressed her gratitude and pleasure at the collaboration, emphasizing the objective of scaling up startups through the program.

Startup School 2023 is open to all DPIIT-recognised startups. Startup teams interested in learning from experienced entrepreneurs can quickly register through Google’s registration process.

Overall, the collaboration presents an exciting opportunity for startups in India to gain valuable mentorship and guidance on the resources required to thrive in the dynamic digital startup ecosystem.


OpenAI Sued for Stealing Massive Amounts of Personal Data

Image Credits: Analytics Drift

OpenAI, the company behind the widely-used ChatGPT tool, is facing a lawsuit claiming it unlawfully collected and utilized large quantities of personal data from the internet to train its AI models. 

According to the 160-page complaint filed in a California federal court, OpenAI allegedly scraped “massive amounts of personal data from the internet,” obtaining nearly every piece of exchanged information without authorization. The scale of this alleged data scraping is described as unprecedented. OpenAI and its major investor, Microsoft, have not provided immediate comments regarding the lawsuit.

The lawsuit further alleges that OpenAI’s products utilized stolen private information, including personally identifiable data, from millions of internet users, including children, without their knowledge or informed consent. Such uninformed data usage raises ethical concerns and highlights the potential risks of exploiting personal data.

Read More: Microsoft and Nvidia Invest in Inflection AI’s $1.3 Billion Funding.

The legal action seeks injunctive relief, calling for a temporary halt on the commercial use of OpenAI’s products. It also demands the payment of “data dividends” to compensate individuals whose information was used to develop and train OpenAI’s AI tools. These demands reflect the increasing importance of safeguarding individuals’ data rights and promoting responsible data practices within AI development.

Timothy K. Giordano, a partner at Clarkson, the law firm behind the suit, expressed his concerns, stating, “By collecting previously obscure personal data of millions and misappropriating it to develop a volatile, untested technology, OpenAI put everyone at an unacceptable level of risk in terms of responsible data protection and use.”

OpenAI gained significant attention following the launch of ChatGPT, a tool known for generating human-like responses to user prompts. Its success ignited an AI arms race, with tech companies large and small scrambling to integrate AI tools into their products. However, the outcome of this legal battle will have significant implications not only for OpenAI but also for the broader AI community. The lawsuit highlights the need for responsible data protection and usage to ensure an ethical AI ecosystem.
