
Vimeo Introduces AI-powered Script Generator and Text-based Video Editor

Vimeo, a provider of online video content, today announced plans to assist creators with their video production needs by integrating new AI-powered tools and services, such as a script generator and a text-based video editor.

Vimeo’s AI-powered script generator frees users from having to write their own scripts before they begin to create a film. The new functionality was developed by Vimeo using a generative AI model from OpenAI, the provider of the well-known ChatGPT AI chatbot.

From a brief description and a few drop-down selections, it can generate a complete video script with a preset duration of one, five, or ten minutes and a tone such as inspirational, formal, or humorous. The user can then edit the resulting script or regenerate it entirely.

Read More: Microsoft Announces AI Personal Assistant Windows Copilot for Windows 11

Once the user is satisfied with the script, they can record the video with an on-screen teleprompter that displays the script at eye level, directly below the camera, so the user never needs to look away.

The video editor also gains a text-based editing feature that works alongside the timeline-based editor and shows a transcript of what was actually said in the video. Whereas timeline-based editing cuts and clips portions of a video by their position in time, the text-based editor lets users find the exact moments when particular words were spoken and delete them.

Beginning on July 17, the new AI-powered feature set will be accessible and priced according to the company's Standard Plan. Users can also sign up on the official website for early-access invitations to be among the first to try the new services.


Coca-Cola Appoints Pratik Thakar as Global Head of Generative AI 

Image Credits: Everything Exxperimental

The Coca-Cola Company has promoted Pratik Thakar to serve as its global head of generative AI. He announced the appointment in a LinkedIn post, writing, “I’m happy to share that I’m starting a new position as Global Head of Generative AI // Marketing Transformation Office at The Coca-Cola Company.”

Thakar will be in charge of creating innovative platforms that make use of AI technology to enhance the consumer experience across the whole portfolio of brands and product categories owned by The Coca-Cola Company. 

He has previously held executive positions at organizations including the World Federation of Advertisers, McCann WorldGroup, Grey Group, and Saatchi & Saatchi Asia. Before taking on the generative AI role, Thakar was the global head of creative strategy and integrated content for the Coca-Cola portfolio.


The Coca-Cola Company has already forayed into generative AI, using the technology to create its latest advertisement. Coca-Cola's AI-powered campaign, Masterpiece, has taken the world by storm, showcasing some of history's most renowned artworks with the help of breakthrough artificial intelligence.

The VFX team at Electric Theatre Collective and creative firm Blitzworks collaborated to produce a magnificent commercial that flawlessly transitions between several forms of artwork using a mix of live-action images, digital effects, and AI. 

In a recent blog post, Coca-Cola introduced “Create Real Magic,” an artificial intelligence (AI) platform that combines OpenAI’s GPT-4 and DALL-E technologies.


Chinese Vendors Sell Nvidia AI Chips at Double the Regular Cost Due to High Demand

Image Credits: NVIDIA

Despite export limitations put in place by the United States, Chinese sellers are profiting from the soaring demand for high-end Nvidia chips, particularly the A100 artificial intelligence (AI) chips. Buyers seeking the sought-after chips have begun gathering in Shenzhen's Huaqiangbei electronics district, known for its extensive selection of electronic products, including drones and camera parts.

Although the chips are not advertised publicly, they can be found with careful searching. The A100 chips, created by US chip company Nvidia, are reportedly offered for a stunning $20,000 per unit, double the standard price.

High-end US chips can be purchased or sold in China legally, but because of US export restrictions, a black market has emerged. Vendors take care to avoid drawing the attention of Chinese or US authorities.


In an effort to halt Chinese advancements in artificial intelligence (AI) and supercomputing in the face of escalating political and trade tensions, President Joe Biden’s administration introduced export restrictions. Thought to be the best for machine learning tasks, Nvidia’s microprocessors are now in high demand because of the worldwide AI boom, which was spurred by the popularity of OpenAI’s ChatGPT.

The precise number of A100 and H100 chips entering China is still unknown, but Reuters discovered that several dealers in Hong Kong and mainland China have easy access to small amounts of A100 chips. Buyers often include entrepreneurs, gamers, researchers, and app makers. Notably, a few municipal governments in China are also among the buyers.

In response, Nvidia declared that it does not allow the export of the A100 or H100 to China and instead provides lower-capability alternatives that comply with US law. The company also stated that it will take action against customers found to be violating the terms of their agreement.


RadioGPT Introduces World’s First AI DJ

Image Credits: ANI

A US radio station has made history by deploying the first full-time DJ powered by artificial intelligence. For its midday broadcast, Alpha Media's KBFF Live 95.5 FM used Futuri Media's RadioGPT software to develop an AI version of its host, Ashley Elzinga.

The creative project aims to improve the station’s capacity for content creation and provide audiences and clients with more timely and thorough information.

Phil Becker, EVP of Content at Alpha Media, expressed his excitement about the possibilities of AI in radio broadcasting and claimed that it allows the station to be more adaptable than ever. They are now able to feature their content creators more frequently and in more situations, thanks to the deployment of RadioGPT.


In a recent video the radio station posted on Twitter, Elzinga jokes about the AI host and hints that she is off work. To allay worries among regional DJs, Alpha Media emphasizes that the AI DJ will not replace the real Ashley Elzinga, who will continue to be paid as usual while working alongside her AI counterpart.

In the video, the AI presenter, modeled on Elzinga, replicates her voice convincingly, sounding warm and relaxed while speaking with listeners. In one clip, AI Ashley tells a lucky caller they have won tickets to a Taylor Swift concert.


8 Lakh Jobs in Hong Kong Would Be Lost to AI Technologies by 2028, Says Report

Image Credits: Stock Images

According to the 2023 Hong Kong Salary Guide, artificial intelligence (AI) technology could eliminate almost 800,000 jobs, or 25% of Hong Kong's workforce, by 2028. The report names customer service agents, administrative workers, and data entry clerks as the job categories most vulnerable to AI.

Beyond these industries, the impact of AI is anticipated to affect careers including law, translation, illustration, and content creation. Concerns about significant employment losses have been raised by the rising popularity of AI models like ChatGPT. According to sources, several organizations in Hong Kong are now encouraging people in roles that previously did not require IT skills to become proficient in ChatGPT.

According to a study by the major investment bank Goldman Sachs, AI has the potential to automate 25% of all vocations, with administrative jobs being the most automatable at 46%, followed by legal jobs at 44%, and professionals in architecture and engineering at 37%.


Researchers from Nanyang Technological University in Singapore and Alibaba Group's Damo Academy found that using large language models (LLMs) such as GPT-4 for data analysis can cost less than 1% of the price of hiring a human analyst while providing equivalent results.

The report emphasizes the possible risk to employment security posed by the growing use of generative AI. Additionally, GPT-4 completes jobs substantially faster than people do, and in some circumstances, it outperforms human data analysts in terms of accuracy and analysis.


OpenAI Releases New Feature Called Function Calling for AI Models GPT-3.5-turbo and GPT-4

Image Credits: Analytics Drift

OpenAI has updated its text-generating AI models GPT-3.5-turbo and GPT-4 with a feature called function calling. Developers can now describe functions to GPT-4-0613 and GPT-3.5-turbo-0613, and the models will intelligently output a JSON object containing the arguments needed to call those functions.

This is a novel approach to connecting GPT's capabilities with external tools and APIs more securely, according to a blog post OpenAI published on Tuesday. Using function calling, developers can build chatbots that answer queries by calling external tools, much as ChatGPT Plugins do.

For instance, it can convert a query such as “What’s the weather like in Boston?” into a function call like get_current_weather(location: string, unit: 'celsius' | 'fahrenheit'), or “Email Anya to see if she wants to get coffee next Friday” into send_email(to: string, body: string). OpenAI also indicated that it will begin upgrading and deprecating the initial releases of GPT-4 and GPT-3.5-turbo, which it announced in March.
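The flow can be sketched end to end: a developer registers a JSON Schema description of a function, the model replies with a function_call message containing the arguments as a JSON string, and the application parses those arguments and runs the real function. The schema and message shapes below follow the get_current_weather example above; the weather stub and the sample payload are illustrative, not an actual API response:

```python
import json

# Function schema a developer would pass to the Chat Completions API via the
# `functions` parameter. The model never executes the function itself; it only
# emits a JSON object of arguments matching this schema.
WEATHER_FUNCTION = {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City, e.g. Boston, MA"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}

def get_current_weather(location, unit="fahrenheit"):
    # Illustrative stub; a real app would query a weather API here.
    return {"location": location, "unit": unit, "temperature": 72}

def dispatch(message):
    """Route a model message containing a `function_call` to local code."""
    call = message["function_call"]
    args = json.loads(call["arguments"])  # arguments arrive as a JSON string
    if call["name"] == "get_current_weather":
        return get_current_weather(**args)
    raise ValueError(f"unknown function: {call['name']}")

# A response message shaped like the one the 0613 models emit when they
# decide the weather function should be called (sample payload, not live):
reply = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"location": "Boston, MA", "unit": "fahrenheit"}',
    },
}
print(dispatch(reply))
```

In a full chatbot loop, the result of dispatch would be sent back to the model in a follow-up message so it can phrase the answer for the user.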


On June 27, all software using the stable model names (GPT-3.5-turbo, GPT-4, and GPT-4-32k) will be automatically upgraded to the new models. The company also announced a 25% price reduction for GPT-3.5-turbo.

Developers can use GPT-3.5-turbo for $0.0015 per 1,000 input tokens and $0.002 per 1,000 output tokens, which works out to roughly 700 pages per dollar. OpenAI is also reducing the price of text-embedding-ada-002, one of its most popular text embedding models.
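At those rates, the cost of a request is a simple linear function of its token counts. A quick sketch, using the per-token prices quoted above (the token counts in the example are illustrative):

```python
# Quoted GPT-3.5-turbo prices, in USD per 1,000 tokens.
INPUT_PRICE = 0.0015
OUTPUT_PRICE = 0.002

def request_cost(input_tokens, output_tokens):
    """Cost in USD of a single chat completion at the quoted rates."""
    return input_tokens / 1000 * INPUT_PRICE + output_tokens / 1000 * OUTPUT_PRICE

# e.g. a 2,000-token prompt with a 500-token reply:
cost = request_cost(2000, 500)
print(f"${cost:.4f}")  # $0.0040
```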


Grammys to Accept Music with AI-generated Elements as Entries  

Image Credits: Variety

The Recording Academy has revealed major changes for music's biggest award show: the Grammys will now accept entries for music created with artificial intelligence.

However, there is a catch. According to the New York Post, AI-generated music must also include meaningful and relevant human elements, and under the latest regulations only human creators are eligible to be nominated for or awarded a Grammy.

According to the guidelines, AI-based content without human authorship is not eligible for the competition, while AI-based content with human authorship is. The adjustments coincide with a surge in deepfake tracks and AI-generated music. This year, David Guetta surprised his audience by playing a track with AI-generated vocals in the style of Eminem, a song the rapper never recorded.


Recently, Universal Music Group had a viral song with AI-generated vocals falsely attributed to Drake and the Weeknd removed from streaming sites. The record company denounced the song, Heart on My Sleeve, as “infringing content created with generative AI.”

Spotify, the biggest player in the audio streaming industry, recently removed tens of thousands of songs, or roughly 7% of the recordings that the platform Boomy had uploaded. According to an individual familiar with the incident, the major record label Universal Music had alerted all of the major streaming services of unusual streaming activity on Boomy recordings.


Meta Introduces AI Model Voicebox to Revolutionize Speech Synthesis 

Image Credits: Meta

Meta has announced a ground-breaking generative AI model called “Voicebox” that has the power to transform speech synthesis. Voicebox, according to a blog post by Meta, is the first model that can perform well for speech-generation tasks, even without specialized training for such tasks.

Unlike conventional models that generate visuals or text, Voicebox specializes in creating high-quality audio clips. It can produce speech in a variety of styles, either from scratch or by adjusting existing samples, and supports speech synthesis in six languages: English, French, German, Spanish, Polish, and Portuguese. Voicebox also provides noise reduction, content editing, style conversion, and varied sample generation.

Voicebox is distinguished by its learning methodology: rather than relying on autoregressive models, it learns directly from raw audio and the corresponding transcriptions. As a result, the model is more flexible and versatile, since it can alter any portion of a given sample.


According to Meta, Voicebox can be trained to predict a speech segment when given the surrounding speech and its associated transcript. Once the model has learned to complete speech from context, it can be applied to a variety of speech generation tasks, producing only the necessary parts of an audio recording rather than the entire recording.
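The infilling setup described above can be illustrated with a toy data-preparation step: hide a contiguous span of audio frames and treat the hidden span as the prediction target, conditioned on the visible context (and, in Voicebox's case, the transcript). This is not Meta's code; the feature dimensions, masking fraction, and zero-fill convention are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_infilling_example(frames, mask_frac=0.3):
    """Build one training example: mask a contiguous span of frames.

    Returns the masked context, the hidden target span, and its bounds.
    """
    n = len(frames)
    span = max(1, int(n * mask_frac))
    start = int(rng.integers(0, n - span + 1))
    context = frames.copy()
    context[start:start + span] = 0.0           # region the model must fill in
    target = frames[start:start + span].copy()  # ground truth for that region
    return context, target, (start, start + span)

# 100 frames of 80-dimensional audio features (stand-in for a real clip).
frames = rng.normal(size=(100, 80))
context, target, (s, e) = make_infilling_example(frames)
```

A model trained on many such examples learns to reconstruct the hidden span from context, which is what lets it regenerate only the necessary part of a recording at inference time.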

Voicebox excels in a variety of applications because of its adaptability, such as in-context text-to-speech synthesis, cross-lingual style transfer, voice denoising and editing, and diversified speech sampling. Performance and adaptability of the model open up new avenues for creative audio generation and advanced speech manipulation.


Google Introduces New Shopping Feature Using AI to Preview Clothing on Different Body Types

Image Credits: Stock Images

Google unveiled a virtual try-on function earlier this week, powered by generative AI. The new function displays how clothing appears on a variety of body types. Google claims that the new function would enable customers to tweak products until they discover the ideal one.

Machine learning and new visual matching algorithms that enable users to fine-tune inputs like color, pattern, and style make this possible. Users will be able to see selections from stores all across the internet, which is the main benefit in this case.

Customers can now view how a piece of clothing will drape, stretch, cling, fold, and create wrinkles and shadows on a variety of models, thanks to the virtual try-on. Additionally, the tool will assist customers in locating complementary apparel items in various sizes, styles, or colors. 


Google, in the company’s blog post, said, “We chose persons ranging in sizes XXS-4XL, representing varied skin tones (using the Monk Skin Tone Scale as a guide), body forms, ethnicities, and hair types.”

Google used multiple sets of photos of more than 80 models in a variety of stances, representing a range of sizes, skin tones, body shapes, and ethnicities, to create the virtual try-on function. In order to produce accurate photographs of the subject from all angles, the AI-powered tool learned from all the models how to match the shape of certain apparel in various postures.

According to Google, the feature will initially work only with women's clothing from retailers such as H&M, Everlane, and Anthropologie. It will eventually expand to men's clothing, starting mostly with shirts. Over time, the tool is expected to grow more precise and accurate.


Google’s New AI Feature DermAssist to Help Identify Skin Conditions

Image Credits: Google

This week, Google introduced a Lens image search function called DermAssist that can help identify skin conditions such as an odd mole or a rash. Users can get more information about a bump on the lip, a line on a nail, or hair loss on the scalp. The tool works on all areas of the body.

“Just take a picture or upload a photo through Lens, and you’ll find visual matches to inform your search,” the blog post says. Importantly, Google makes it clear that the material provided is informational only and not a diagnosis, and it advises users to consult medical professionals for an accurate diagnosis.

According to Google spokesperson Craig Ewer, the feature is accessible to everyone in the US and supports all major languages. For years, Google has been investigating the application of AI picture identification to skin diseases. 


A project that aimed to diagnose skin, hair, and nail issues using a combination of photographs and survey responses was showcased by Google during its I/O developer conference in 2021. At the time, Google claimed that the technology could distinguish between 288 different conditions and would, 84% of the time, present the proper condition in the top three possibilities.

According to Google’s DermAssist tool page, it is “currently undergoing further market testing through a limited release.” It also states that despite being CE-marked as a Class 1 Medical Device (a label for products in the European Economic Area), it has not undergone FDA review.
