AI-Generated Drake and The Weeknd Song Goes Viral

AI-generated Drake The Weeknd song goes viral
Image Credits: Song Charts

Universal Music Group (UMG) has pulled a viral song featuring AI-generated vocals that falsely imitated Drake and The Weeknd from streaming services. The record company denounced the track, Heart on My Sleeve, as “infringing content created with generative AI”.

The song was first uploaded under the artist name Ghostwriter on streaming sites after being posted on TikTok by a user going by the handle Ghostwriter977. It had 600,000 Spotify streams, 15 million TikTok views, and 275,000 YouTube views by the time it was taken down yesterday afternoon.

In a statement to Billboard, UMG said the viral postings show why platforms have a fundamental moral and legal obligation to stop people from abusing their services and to protect artists from harm.

UMG declined to say whether it had formally requested takedowns from social media platforms and streaming providers, but it has urged streaming services to block AI companies from accessing the label’s songs.

The Financial Times quoted UMG as saying that it had learned some services had been trained on copyrighted music “without obtaining the required consents”, and that it had warned the platforms it will not hesitate to take steps to protect the rights of its artists.

Microsoft’s New AI Tool PeopleLens Describes Surroundings to Visually Impaired

Microsoft’s PeopleLens for visually impaired
Image Credits: Microsoft

Microsoft has announced PeopleLens, a computer vision system that uses machine learning to help blind people interact with their social surroundings. The project aims to increase the independence and social engagement of people with visual impairments.

A group of Microsoft engineers and computer scientists spent two years creating PeopleLens. The goal was to develop a machine learning system that could recognise people and objects in images to help blind users navigate their social environments.

The team used a dataset of images labelled for the presence of people and objects, then applied deep learning techniques to train a computer vision model that could recognise those categories in new photos.
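
As a rough illustration of this kind of training setup (a hypothetical sketch, not Microsoft’s unreleased PeopleLens code), the snippet below fine-tunes an off-the-shelf object detector on labelled images using PyTorch and torchvision; the example categories and the single synthetic training sample are stand-ins for a real labelled dataset.

```python
# Hypothetical sketch: fine-tuning an off-the-shelf detector on labelled images.
# This is NOT Microsoft's PeopleLens training code; categories and data are made up.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 3  # background + "person" + "chair" (example categories only)

# Start from a detector pre-trained on COCO and swap in a new classification head.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()

# Stand-in for one labelled training example: a random image with one "person" box.
images = [torch.rand(3, 480, 640)]
targets = [{
    "boxes": torch.tensor([[100.0, 120.0, 300.0, 400.0]]),  # xmin, ymin, xmax, ymax
    "labels": torch.tensor([1]),                             # 1 = "person"
}]

# One training step: in train mode the model returns a dict of detection losses.
loss_dict = model(images, targets)
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```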

The PeopleLens platform has two components: a wearable device and a cloud-based service. The device captures images of its surroundings and uploads them to the cloud, where machine learning algorithms process them.

From this data, the service generates descriptions of the immediate surroundings and sends them back to the wearable device. The system can also recognise individual items in a scene, such as a chair or a table, and relays this information to the user in audio or Braille form.
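
To make the capture-and-describe flow concrete, here is a minimal hypothetical sketch of the cloud-side step: it runs an off-the-shelf detector over an uploaded frame and turns the detections into a short description to send back to the device. It only illustrates the general approach; the detector choice, confidence threshold, and wording are assumptions, not Microsoft’s actual PeopleLens service.

```python
# Hypothetical cloud-side step: detect people/objects in an uploaded frame and
# build a short description to return to the wearable (not Microsoft's actual code).
from collections import Counter

import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]  # COCO label names, e.g. "person", "chair"

def describe(frame: torch.Tensor, threshold: float = 0.7) -> str:
    """Return a one-line description of what the detector sees in `frame`."""
    with torch.no_grad():
        detections = model([frame])[0]
    names = [
        categories[int(label)]
        for label, score in zip(detections["labels"], detections["scores"])
        if float(score) >= threshold
    ]
    if not names:
        return "Nothing recognised nearby."
    counts = Counter(names)
    parts = [f"{n} {name}{'s' if n > 1 else ''}" for name, n in counts.items()]
    # The real system would render this as audio or Braille on the device.
    return "I can see " + ", ".join(parts) + "."

# Example call with a random frame; a deployment would use the camera image.
print(describe(torch.rand(3, 480, 640)))
```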

PeopleLens also benefits from being an open-ended AI system: it supports a wide range of applications and is regularly updated with new and improved features. The technology could also contribute to the development of more accessible buildings, products, and environments.

CoreWeave Secures $221M Investment in Series B Funding Round

CoreWeave secures $221M Series B funding
Image Credits: Coreweave

CoreWeave has announced that it raised $221 million in a Series B funding round led by Magnetar Capital, with participation from Nvidia, former GitHub CEO Nat Friedman, and ex-Apple executive Daniel Gross.

CoreWeave is a New York City-based firm that started out as an Ethereum mining operation and is now working to become a general-purpose cloud computing platform. A representative for Nvidia stated that the investment signifies a “deepening” relationship with CoreWeave.

The round values CoreWeave at $2 billion pre-money and brings the company’s total funding to $371 million. According to CEO Mike Intrator, it will help CoreWeave expand its network of US data centers, enabling the establishment of two new facilities this year.

Brian Venturo and Brannin McBee founded CoreWeave in 2017 to fill what they saw as “a void” in the cloud industry. Venturo, a casual Ethereum miner, bought GPUs at bargain prices from bankrupt cryptocurrency mining operations, opting for Nvidia hardware because of its larger memory.

CoreWeave’s initial focus was solely on cryptocurrency applications. However, it has recently shifted to generative AI technologies, such as text-generating AI models, as well as general-purpose computing.

CoreWeave today offers cloud access to more than a dozen SKUs of Nvidia GPUs, including H100s, A100s, A40s, and RTX A6000s, for use cases such as AI and machine learning, visual effects and rendering, batch processing, and pixel streaming.

India Today Launches India’s First AI Anchor Sana 

India Today launches India’s first AI Anchor Sana
Image Credits: Take Mitra

On March 30, India Today debuted the country’s first-ever AI news anchor. Known as “Sana,” the anchor made her Aaj Tak debut on the programme Black & White with journalist Sudhir Chaudhary.

Chaudhary introduced the AI-powered anchor to the audience, saying, “Now, Sana is with me.” In conversation with Sana, he added, “And for the first time today, you’ll be presenting the news.” The AI anchor then took over for the remainder of the show.

“Welcome to the future of news broadcasting! India’s first AI virtual news anchor, Sana, just made her debut on Black&White and it’s truly revolutionary. With advanced AI technology Sana is changing the game in journalism. Exciting times ahead!” Chaudhary wrote on Twitter.

In the launch video, Sana displayed her linguistic prowess by addressing the audience in three languages, including Gujarati. PM Modi and other dignitaries were present at the ceremony.

In another on-screen appearance, the AI anchor took on the role of a weather reporter. The segment, however, made the case for the charm of human journalists who go the extra mile to bring liveliness and feeling to the news rather than simply reciting statistics: Sana’s English still sounded noticeably mechanical.

Berkeley Researchers Deploy Robots to Speed Up Research by 100 Times

Berkeley researcher robots speed up research 100 times
Image Credits: Berkeley Lab

A team of researchers led by Yan Zeng, a scientist at Lawrence Berkeley National Laboratory (Berkeley Lab), has created a new materials research laboratory where robots do the labour and artificial intelligence (AI) makes routine decisions. This allows work to continue around the clock and quickens the pace of research.

Although research facilities and equipment have advanced significantly over time, the fundamentals of research have not changed: each experiment still has a human at its core who takes the measurements, interprets the results, and chooses the next steps. By using robotics and AI, the researchers under Zeng’s direction at Berkeley Lab’s A-Lab aim to speed up that cycle.

The 600-square-foot A-Lab houses three robotic arms and eight furnaces. With funding from the Department of Energy, construction got under way in 2022 and was finished in a little over a year, after concept work began in 2020.

The A-Lab’s purpose is to create novel materials that can be used in a variety of new products, including thermoelectrics (which use temperature differences to produce energy), fuel cells, solar cells, and other clean energy technologies.

For many years, scientists have used computational methods to predict new materials, but actually making and testing those materials has been a significant bottleneck because it is so time-consuming. At the A-Lab, the process can run up to 100 times faster than it would with a human.

German Editor Fired for AI-Generated Interview with Michael Schumacher

German editor fired AI-Generated interview Michael Schumacher
Image Credits: NDTV

According to the BBC, the chief editor of a German magazine who published an artificial intelligence-generated ‘interview’ with Michael Schumacher has been fired. The tabloid Die Aktuelle has also apologized to the family of the legendary Formula One racer.

This offensive and false piece should never have been published, said Bianca Pohlmann, managing director of Funke magazines. “It in no way corresponds to the standards of journalism that our readers expect from a publisher like Funke,” she said.

The publication of this piece will have immediate consequences for personnel, she added. Die Aktuelle’s editor-in-chief, Anne Hoffmann, who has been in charge of the publication’s journalism since 2009, will be relieved of her duties as of today.

The April 15 front cover of Die Aktuelle featured a photo of a smiling Schumacher, 54, under the heading “Michael Schumacher, the first interview.” The strapline next to the headline read, “It sounded deceptively real.”

The phony interview, which runs under the headline “My life has changed completely,” appears on page eight of the publication. It contains made-up remarks that Mr. Schumacher is supposed to have made about his health and his family life since the accident. Only later in the piece does it become clear that the interview was generated by AI.

The Schumacher family has made it known that they want to sue the German publication. Seven-time Formula One champion Michael Schumacher has not been seen in public since sustaining a brain injury in a skiing accident in December of 2013.

Kerala Government Installs 726 AI Cameras to Monitor Traffic 

Kerala Government Installs 726 AI Cameras Monitor Traffic
Image Credits: NDTV

Starting April 20, the ‘Safe Kerala’ project will use artificial intelligence (AI) cameras to identify traffic infractions and impose fines. The Kerala Motor Vehicles Department has installed 726 AI cameras to detect violations.

The violations the AI cameras will pick up include riding a two-wheeler without a helmet, riding with more than two passengers, using a phone while driving, and jumping red lights.

Because roadside checks inconvenience the general public, the Motor Vehicles Department has chosen to adopt the “Fully Automated Traffic Enforcement System” as part of the “Safe Kerala” initiative, detecting offences through cameras instead.

The AI cameras are solar-powered and transmit data over a 4G LTE SIM. A visual processing unit in each camera box examines every passing vehicle, and photos of offending vehicles and their drivers are sent to the Motor Vehicles Department’s control room.
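
As a rough, hypothetical sketch of how such an edge-camera pipeline could be wired together (the actual Safe Kerala software has not been published), the snippet below captures frames, runs a placeholder violation check on the in-box processing unit, and uploads flagged photos to a control-room endpoint over the camera’s data link. The URL, the detect_violation helper, and the payload fields are all assumptions for illustration.

```python
# Hypothetical edge-camera loop -- NOT the actual Safe Kerala software.
# Capture frames, run a placeholder violation check locally, and upload
# flagged photos to a made-up control-room endpoint over the 4G link.
import time
from typing import Optional

import cv2       # OpenCV, for camera capture and JPEG encoding
import requests

CONTROL_ROOM_URL = "https://control-room.example.gov.in/api/violations"  # placeholder

def detect_violation(frame) -> Optional[str]:
    """Placeholder for the on-device model (helmet check, red-light check, etc.).
    Returns a violation code, or None if the frame is clean."""
    return None  # a real unit would run its vision model here

def main() -> None:
    camera = cv2.VideoCapture(0)  # in a real unit: the roadside camera feed
    while True:
        ok, frame = camera.read()
        if not ok:
            time.sleep(1)
            continue
        violation = detect_violation(frame)
        if violation:
            _, jpeg = cv2.imencode(".jpg", frame)
            requests.post(
                CONTROL_ROOM_URL,
                files={"photo": ("frame.jpg", jpeg.tobytes(), "image/jpeg")},
                data={"violation": violation, "timestamp": str(time.time())},
                timeout=10,
            )
        time.sleep(0.1)

if __name__ == "__main__":
    main()
```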

Video of infractions will be retained for six months. The Motor Vehicles Department states that up to 30,000 penalty notices may be sent in a single day; motor vehicle inspectors will meticulously review the captured images for traffic violations before alerting offenders.

The Safe Kerala project, which aims to improve road safety, recently received approval from the state cabinet. The Kerala Road Safety Authority (KRSA) and the Motor Vehicles Department (MVD) will lead the project.

Microsoft Releases Copilot for Microsoft Viva

Microsoft Releases Copilot for Microsoft Viva
Image Credits: Microsoft

Microsoft on Thursday unveiled Copilot for Microsoft Viva, its employee-experience platform built on Microsoft 365 that went live in February 2021, one month after introducing Copilot for Microsoft 365.

Microsoft 365 Copilot is a generative AI system built on GPT-4, a large language model developed by OpenAI, the AI lab behind the ChatGPT chatbot.

The Redmond-based company unveiled that Copilot in March to automate a range of activities across several Microsoft Office programmes, such as summarising the key points of a Teams chat or pulling pertinent information from user-created documents into a PowerPoint presentation.

Viva Copilot, which is built on the Microsoft 365 Copilot System, combines data from the Viva apps Goals, Engage, Learning, Topics, and Answers with the power of large language models. According to Kirk Koenigsbauer, Corporate Vice President for Microsoft 365, it will “assist users in leveraging next-generation AI to boost productivity and drive enhanced business outcomes.”

“We believe this next generation of AI will enable a new wave of productivity growth,” Satya Nadella, Microsoft’s CEO, said in a live-streamed presentation.

By combining the tools and applications businesses need for communication and feedback, analytics, objectives, and learning, Microsoft Viva equips organizations with next-generation AI and insights, enabling them to continuously enhance employee engagement and business performance.

Srinivas Narayanan Appointed as VP Engineering at OpenAI

Srinivas Narayanan appointed VP Engineering OpenAI
Image Credits: TechCircle

Srinivas Narayanan has been appointed vice president of engineering at OpenAI. He most recently held the same position at Meta (formerly known as Facebook).

“Over the last year, I’ve gotten to know the fantastic staff at OpenAI and the amazing job they’re doing. I am beyond thrilled to be joining them,” he wrote in a LinkedIn post.

Narayanan worked at Meta for more than 13 years, conducting in-depth research in a variety of fields, including computer vision, natural language processing, speech, and personalization, to advance the company’s artificial intelligence work and its line of high-tech products.

He also developed the interest graph, launched the location product, and oversaw engineering for photos, among other important projects. He was also instrumental in starting Meta’s forays into computer vision and deep learning during his time there.

He co-founded Viralizr, a firm that specialized in creating consumer products with a social collaboration focus, before joining Meta. Additionally, Narayanan spent almost three years as a software engineer at IBM. 

Narayanan is not the only top technology professional to join OpenAI: researchers from Google’s AI division, including Hyung Won Chung, Jason Wei, and Shane Gu, have also moved to the company in the past.

Disney Set for Another Round of Job Cuts After 7,000 Layoffs Announced in February

Disney another round of job cuts
Image Credits: Disney

The Walt Disney Company is expected to lay off thousands of employees starting next week. The cuts will affect about 15% of the staff in the company’s entertainment division.

According to people familiar with the matter, the job losses will affect employees in Disney’s TV and film divisions as well as those in theme park and corporate roles.

According to the Bloomberg report, the latest job losses will affect every region where Disney operates, with notifications to affected staff going out as early as April 24. In February, Disney announced 7,000 job cuts as part of cost-cutting measures intended to lower its annual spending by $5.5 billion.

The number of workers at Disney Entertainment, which includes the company’s streaming, distribution, and film and television production activities, will significantly decline as a result of the layoffs.

Other major media businesses, including Paramount Global, NBCUniversal, and Warner Bros., have also reduced headcount as the streaming industry grapples with the cost of running online video platforms. Disney’s current CEO, Bob Iger, took over after Bob Chapek was ousted in November following the company’s $1.47 billion quarterly loss in its streaming division.
