
Amazon’s New Generative AI Tool for Advertisers

Source: Amazon

Amazon Ads has launched a new generative AI tool in beta to help advertisers improve their ad performance. 

According to Amazon executives, creating captivating and authentic advertising content can be costly and often requires additional expertise, a hurdle for small-scale advertisers in particular. With this in mind, Amazon aims to help advertisers with their ad campaigns by incorporating the new image generation tool into its system. 

Colleen Aubrey, senior vice president of Amazon Ads products and technology, says the new image generator is simple and easy to use while delivering better outcomes. She further states that not only will advertisers benefit from it, but customers will also see more “engaging and visually rich” ads. 

Read More: Amazon Launches New Generative AI Service Bedrock 

All an advertiser needs to do is go to the Amazon Ad Console, choose their product, and click “Generate.” Using generative AI, this tool promptly produces a collection of lifestyle and brand-focused images tailored to the product specifications. 

These images can be further refined with concise text prompts, allowing advertisers to rapidly create and test multiple versions to fine-tune performance. 

Amazon also claims that products showcased in a lifestyle context can achieve a 40% increase in click-through rates compared to advertisements featuring typical product images.



Carry Your AI Everywhere With Humane’s AI Pin

Source: Humane

Humane has announced that its first intelligent clothing-based wearable device, the “Humane AI Pin,” is launching on November 9.

According to the company, the Humane AI Pin is engineered to integrate seamlessly with one’s unique style and clothing choices. The wearable device also employs a variety of sensors to facilitate natural and instinctive computing.

The AI Pin, which operates independently without requiring a connection to a smartphone or any other companion device, features AI-driven optical recognition and a laser-projected display, both powered by the advanced Snapdragon platform from Qualcomm Technologies.

Read More: AI startup Humane raises another $100 million

According to Humane, privacy is of utmost importance to the device. Features like the absence of a wake word, which eliminates “always-on” listening, align with Humane’s commitment to creating products that prioritize trust and security. The Humane AI Pin also has a “trust light,” which lights up whenever the wearable device records data. 

The device, which made its way onto Time magazine’s “Best Inventions of 2023” list, also debuted at Paris Fashion Week through a collaboration between Humane and Paris-based fashion house Coperni. 


Meta Announces Tools to Advance Socially Intelligent Robots

Image source: Meta

Facebook AI Research (FAIR) has announced the launch of Habitat 3.0, the Habitat Synthetic Scenes Dataset (HSSD-200), and HomeRobot, a trio of tools that take AI-powered assistants to a new level. Meta aims to develop socially intelligent robots that adapt to human preferences and assist with everyday tasks, underscoring the critical role of embodied AI in shaping the future of AR and VR experiences.

These tools are designed to address key challenges facing AI-powered assistants, including scalability, standardized benchmarking, collaborative robotics, and safety concerns.

FAIR’s Habitat 3.0 enables large-scale training for human-robot interaction in realistic indoor environments. AI agents trained using these simulations can demonstrate collaborative behaviors, from navigating narrow corridors to efficiently dividing tasks.
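
For a sense of how agents are driven in such environments, here is a minimal sketch using the Python API of habitat-sim, the open-source simulator underlying the Habitat platform. The scene path is a placeholder, and this is a generic navigation example under those assumptions, not the exact Habitat 3.0 human-robot benchmark setup.

```python
import habitat_sim

# Configure the simulation backend with a 3D scene (placeholder path)
backend_cfg = habitat_sim.SimulatorConfiguration()
backend_cfg.scene_id = "data/scene_datasets/example_scene.glb"

# Give the agent a single RGB camera sensor
rgb_spec = habitat_sim.CameraSensorSpec()
rgb_spec.uuid = "rgb"
rgb_spec.sensor_type = habitat_sim.SensorType.COLOR

agent_cfg = habitat_sim.agent.AgentConfiguration()
agent_cfg.sensor_specifications = [rgb_spec]

sim = habitat_sim.Simulator(habitat_sim.Configuration(backend_cfg, [agent_cfg]))

# Step the agent with discrete navigation actions and read back observations
for action in ["move_forward", "turn_left", "move_forward"]:
    observations = sim.step(action)
    print(action, observations["rgb"].shape)

sim.close()
```

Habitat 3.0 layers humanoid avatars and human-in-the-loop tooling on top of this kind of simulation loop, so policies can be trained against simulated humans at scale.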

Read more: DeepMind’s New ML Model, UniSim, Simulates Reality to Train Robots

The introduction of HSSD-200 addresses critical issues in training robots in simulated environments. It offers highly detailed 3D environments, fine-grained semantic categorization, and efficient asset compression. FAIR’s experiments show that HSSD-200’s smaller yet higher-quality dataset can produce AI agents with comparable or superior performance in real-world scenes.

To advance collaborative robotics, FAIR built HomeRobot, a software stack for physical and simulated autonomous manipulation. It consists of benchmarks, baseline behaviors, and interfaces that focus on tasks like delivering requested objects. 

Habitat 3.0, HSSD-200, and HomeRobot represent a significant step toward developing socially intelligent embodied AI agents that can collaborate with and assist humans in their everyday lives. FAIR’s work could bridge the gap between simulation and the physical world, opening new frontiers in human-robot collaboration and interaction.


NVIDIA Introduces New AI Agent to Revolutionize Robot Learning

Image Source: multiplatform.ai

NVIDIA Research has developed a new AI agent called Eureka that teaches robots to perform complex tasks. For the first time, it has taught a robotic hand to perform rapid pen-spinning tricks as deftly as a human. 

According to NVIDIA’s official blog post, pen spinning is just one of some 30 tasks that robots have learned to perform expertly with Eureka’s help. Others include opening drawers, manipulating scissors, and tossing and catching balls. 

Eureka teaches robots by writing reward algorithms that train bots autonomously. The tool leverages the cutting-edge capabilities of the GPT-4 large language model (LLM) and NVIDIA’s own Isaac Gym, a GPU-accelerated physics simulation platform for reinforcement learning research. 

Read more: “AI Factories,” the New Vision of Nvidia and Foxconn

Importantly, the tool works autonomously, without task-specific human prompting or pre-defined reward templates. It teaches robots by formulating a reward algorithm tailored to the task and the robot’s morphology, and this reward scheme lets robots learn through trial and error. 
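
To make that concrete, here is a hand-written illustration of the kind of reward function Eureka generates: a batched PyTorch function over Isaac Gym’s parallel environments, shown for a hypothetical pen-spinning task. The signature and weights are invented for illustration, not actual Eureka output.

```python
import torch

def pen_spin_reward(pen_angvel: torch.Tensor,   # (num_envs, 3) pen angular velocity
                    pen_pos: torch.Tensor,      # (num_envs, 3) pen position
                    target_pos: torch.Tensor    # (num_envs, 3) desired hold position
                    ) -> torch.Tensor:
    # Reward fast rotation about the pen's spin axis
    spin_reward = torch.abs(pen_angvel[:, 2])
    # Penalize the pen drifting away from the fingertips
    drift_penalty = 2.0 * torch.norm(pen_pos - target_pos, dim=-1)
    # One scalar reward per parallel environment
    return spin_reward - drift_penalty
```

In the full system, GPT-4 proposes many candidate rewards of this form, each is scored by training a policy in Isaac Gym, and the resulting training statistics are fed back to the model so it can write improved rewards, closing the trial-and-error loop described above.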

When tested against rewards written by human specialists across 29 environments on ten different robot platforms, Eureka produced superior reward programs. NVIDIA’s paper reports that the tool yields an average performance improvement of more than 50% for the bots. 

Eureka’s code and benchmarks will be open source, allowing developers to build on the research and engage the wider community. NVIDIA has also released a video showing the tool at work. 


The introduction of Eureka by NVIDIA has the potential to revolutionize robot learning by giving robots a platform for learning complex skills with high efficiency. As the technology matures, we can expect these AI-trained robots to be increasingly integrated into various industries.


Mount Sinai Develops the HistoAge Model, an Algorithm That Can Predict Age at Death

Source: Mount Sinai

Researchers at Mount Sinai Hospital have developed an algorithm called the HistoAge Model, which can predict age at death and assess neurodegeneration. 

Mount Sinai researchers trained a machine learning model on almost 700 digitized images of human hippocampal sections from aged brain donors to develop the histological brain age estimation algorithm, which predicts a person’s age at death.  

The research focused on the hippocampal region, as the hippocampus is involved in both brain aging and age-related neurodegenerative disorders. 
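
The paper’s architecture isn’t detailed here, but the general recipe, regressing age directly from histology images with a convolutional network, can be sketched as follows. The ResNet-18 backbone, optimizer, and loss are illustrative assumptions, not the authors’ implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

# Generic image-regression setup: predict age at death from a
# hippocampal section image (backbone choice is an assumption)
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)  # single output: predicted age

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # mean absolute error, in years

def train_step(images: torch.Tensor, ages: torch.Tensor) -> float:
    # images: (batch, 3, H, W) slide images; ages: (batch,) donor ages at death
    optimizer.zero_grad()
    predictions = model(images).squeeze(-1)
    loss = loss_fn(predictions, ages)
    loss.backward()
    optimizer.step()
    return loss.item()
```

“Age acceleration” then falls out naturally: it is the model’s predicted age minus the donor’s actual age at death, with positive values flagging tissue that looks older than expected.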

Read More: Google DeepMind’s AlphaMissense Predicts Harmful Genetic Mutations

Age acceleration based on HistoAge exhibits robust correlations with cognitive impairment, cerebrovascular disease, and the aggregation of Alzheimer’s-related proteins, surpassing existing age-acceleration metrics such as those based on DNA methylation.

According to the researchers, the HistoAge model, along with subsequent similar algorithms, introduces a fresh paradigm for evaluating aging and neurodegeneration in human samples, one that scales seamlessly for use in clinical and translational research facilities. Moreover, the methodology offers a more rigorous, unbiased, and robust metric of the cellular changes underlying degenerative diseases.


“AI Factories,” the New Vision of Nvidia and Foxconn

Source: Infosys

Foxconn and Nvidia are collaborating to establish “AI factories,” a new class of advanced data centers. These AI factories will underpin the development of self-driving vehicles, autonomous machinery, and other pioneering technologies. 

The partnership between the two tech giants was announced at Foxconn’s annual tech showcase in Taipei. Nvidia CEO Jensen Huang noted the emergence of a new manufacturing paradigm, one focused on producing “intelligence,” with AI factories being the data centers responsible for this production. 

Huang further noted that Foxconn possesses the requisite expertise and global scale to construct such facilities, while Nvidia’s role will be to supply the factories with its AI chips and software for processing vast amounts of data. 

Read More: DeepMind’s New ML Model, UniSim, Simulates Reality to Train Robots

These AI factories will depend on Nvidia’s GPU computing infrastructure, which is purpose-built for processing, refining, and transforming data into AI models and information. 

According to Huang, the AI factories could continuously receive and analyze vehicle data, consequently updating and enhancing their software and the entire AI fleet. 

This is not the first time Nvidia and Foxconn have come together; back in January, Nvidia announced its partnership with Foxconn to build automated EVs. 


Honda, GM, Cruise to Bring Driverless Taxi Service to Japan

Image Source: Cruise

A joint venture between Honda, General Motors (GM), and Cruise is set to revolutionize taxi services in Japan. The three companies aim to launch a driverless ride service in Japan by 2026, and preparations have already begun. 

Passengers will be able to request a cab using a smartphone app. The taxi will pick them up from a specified location and drive them to their destination entirely autonomously. For this service, Honda will use the Cruise Origin, a self-driving vehicle with no steering wheel or even a driver’s seat. 

The Cruise Origin, jointly developed by Honda and General Motors, is a fully autonomous taxi, or robotaxi, designed for cab services, with a comfortable interior and face-to-face seating for six people. The vehicle also includes amenities such as Wi-Fi access and entertainment devices.

Read more: AMD Prioritizes AI by Acquiring Nod.ai

Honda stated that the three companies’ joint venture for self-driving cars will be established in the first half of 2024, subject to regulatory approval. The driverless service is expected to launch in early 2026 around central Tokyo, starting with a handful of Cruise Origins before expanding to more vehicles and locations across Japan.

This venture between GM and Honda represents an extension of their longstanding partnership, which spans a decade of collaboration. In 2013, GM and Honda joined forces to develop cutting-edge hydrogen fuel cell systems. Then, in 2018, Honda committed a $2 billion investment in Cruise over a 12-year period, solidifying their dedication to innovation and the future of mobility.

The launch of driverless taxi services in Japan signifies a transformative shift in how we envision transportation, and it is expected to redefine the transportation industry in Japan and beyond. 


Theft, Not Innovation, Says UMG in Its Copyright Lawsuit Against Anthropic

Source: PetaPixel

Universal Music Group (UMG) and other music publishers, including Concord and ABKCO, have filed a lawsuit against artificial intelligence company Anthropic for “unlawfully” copying and “disseminating” a “vast amount of copyrighted works” without permission.

In its lawsuit against Anthropic, UMG claims that the Amazon-backed American startup’s conduct amounts to “theft,” not “innovation.” 

The complaint, filed in Tennessee federal court on Wednesday, October 18, states that Anthropic trains its AI model, Claude, by gathering and incorporating extensive volumes of text from the internet and other sources. This text includes lyrics from innumerable musical compositions whose copyrights the publishers hold or administer. 

Read More: Google to Protect its Generative AI Users against Copyright Lawsuits

The lawsuit alleges that Anthropic infringes on the publishers’ rights through its use of lyrics from at least 500 songs, ranging from classics like “God Only Knows,” “Gimme Shelter,” and “Sweet Home Alabama” to contemporary hits like “Uptown Funk,” “Somewhere Only We Know,” and “Halo.” 

More precisely, the alleged infringement occurs when a user prompts Anthropic’s Claude to provide a song’s lyrics: the AI chatbot produces a response containing significant portions of, or in some cases, the entire lyrics. 

UMG’s complaint also points out that numerous lyrics aggregators and websites perform the same function, but do so under proper licenses for the publishers’ copyrighted works.


LLaVa 1.5, an Alternative to OpenAI’s GPT-4 Vision 

Image Source: Encord

OpenAI’s GPT-4 Vision has revolutionized how we engage with AI systems by accepting both text and images as inputs. However, its closed-source nature and subscription model limit its use at scale. In response, the open-source community is rallying around LLaVa 1.5 (Large Language and Vision Assistant) as an alternative to GPT-4 Vision. 

LLaVa 1.5 is an open-source large language model (LLM) tool. It accepts text and images as input and returns surprisingly accurate answers. At its core, it is an AI model that combines a vision encoder with a large language model for visual and language understanding. 

As the vision encoder, LLaVa 1.5’s developers use the CLIP (Contrastive Language-Image Pre-training) model, which OpenAI released in 2021 after training it on massive datasets of image-description pairs to learn the correlation between images and text. The LLM component of LLaVa 1.5 is Vicuna, a version of Meta’s open-source LLaMA fine-tuned for instruction following.
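
For readers who want to experiment, LLaVa 1.5 checkpoints have been ported to the Hugging Face transformers library; the following minimal sketch assumes the community llava-hf/llava-1.5-7b-hf port and uses a placeholder image path.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # community port of LLaVa 1.5
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# LLaVa 1.5 uses a USER/ASSISTANT prompt format with an <image> placeholder
prompt = "USER: <image>\nWhat is happening in this picture? ASSISTANT:"
image = Image.open("example.jpg")  # placeholder path

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```

Under the hood, the processor routes the image through the CLIP encoder and projects the resulting features into Vicuna’s embedding space, which is exactly the encoder-plus-LLM pairing described above.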

Read more: Meta Launched AI Chatbots Embodied by Celebrities.

While LLaVa 1.5 is designed to mimic GPT-4, it shows robust performance, with impressive results for a small open-source model. It is important to note, however, that it is trained on data generated by ChatGPT, whose terms restrict developers from commercial use.

With all its cutting-edge technology, LLaVa 1.5 is not yet ready to compete with GPT-4 Vision: it trails in ease of use and lacks features like external plugins and integration with other OpenAI tools. That said, this is just the start for open-source generative AI tools, and if growth continues at this pace, we can expect a revolution in generative AI technologies. 


IBM Signs Three MoUs with the Indian Government to Bolster the Country’s Technological Sector


On Wednesday, October 18, IBM announced the signing of three MoUs with the Indian government, aiming to expedite advancements in artificial intelligence (AI), semiconductors, and quantum computing. 

Under the MoUs, the American multinational will collaborate with IndiaAI-Digital India Corporation to establish a national AI Innovation Platform (AIIP), focusing on AI skill development, ecosystem building, and the integration of advanced foundation models and generative AI capabilities to advance India’s progress in the technology sector.  

It is also reported that the Indian government will have access to IBM’s newly unveiled WatsonX platform, which will be employed to harness models for language, code, and geospatial science, with the goal of training models for other domains as required. 

Read More: IBM Collaborates with Indian Ministries to Empower Youth with Future-Ready Skills

IBM also expressed its commitment to collaborate with the India Semiconductor Mission (ISM). The objective is to drive innovation in semiconductor technologies, including logic, advanced packaging, heterogeneous integration, and advanced chip design.

IBM will also collaborate with the Centre for Development of Advanced Computing (C-DAC), supporting the FutureSkills program in partnership with the National Institute of Electronics and Information Technology, and will partner with emerging startups in the fields of quantum computing and AI within the FutureDesign initiative. 
