
AI Startup UVeye Raises $100M in Series D Funding Round


Startup UVeye, which develops automated vehicle inspection systems, announced today that it has closed a $100 million funding round led by Hanaco VC.

The Series D round, which also included existing investors W.R. Berkley Corporation, GM Ventures, CarMax, F.I.T. Ventures L.P., and several Israeli institutional investors, brings the company's total funding to roughly $200 million.

UVeye's drive-through system uses artificial intelligence and sensor-fusion technology to quickly identify mechanical and exterior defects on the underbody and sides of any vehicle. It can also spot changes and foreign objects that might pose a problem for the vehicle.

Read More: OpenAI Closes $300 Million Funding Round at $27–$29 Billion Valuation

According to the firm, a system like this is needed as electric and driverless vehicles become more complex. The ability to perform low-cost, high-frequency predictive maintenance will become crucial for businesses that run fleets of such vehicles, it claims.

The co-founder and CEO of UVeye, Amir Hever, stated that the company’s objective is to standardize how the car industry identifies damage and mechanical problems on automobiles and to develop new quality standards. He said, “Our patent-protected technology delivers unmatched solutions for swiftly and precisely identifying vehicle problems to automakers, dealers, and fleet operators.”


Spotify Removes Thousands of AI-made Songs Uploaded by Boomy Platform


According to sources, Spotify, the biggest player in the audio streaming industry, recently removed tens of thousands of songs, roughly 7% of the recordings uploaded by the platform Boomy.

According to an individual familiar with the incident, the major record label Universal Music had alerted all of the major streaming services of unusual streaming activity on Boomy recordings.

The Boomy songs were taken down over suspicions of "artificial streaming," in which online bots impersonate real listeners to inflate play counts for particular songs. AI has made this kind of activity easier, because the technology enables the instant generation of many music files that can then be posted online and streamed.


Boomy, a platform introduced two years ago, lets users select genres or descriptors, such as "rap beats" or "rainy nights," to produce automatically generated music. Songs can then be released on streaming platforms, where users earn royalties. According to Boomy, a company based in California, its customers have created over 14 million songs.

In a statement, Spotify said: "Artificial streaming is a long-standing, industry-wide problem that Spotify is working to stamp out across our service."

Last month, according to a report in the Financial Times, Universal Music wrote to streaming services requesting that they take action to limit the use of generative AI on their platforms. The same week, a song imitating the voices of Drake and The Weeknd went viral on streaming services.


Meta Introduces Open-source Multisensory AI Model ImageBind that Combines Six Types of Data


Meta has released ImageBind, an open-source AI model that can learn from six modalities simultaneously. Machines can now comprehend and link different types of data, including text, image, audio, depth, thermal, and motion-sensor readings. With ImageBind, machines learn a single shared representation space without having to be trained on every possible combination of modalities.

ImageBind is significant because it lets machines learn holistically. By fusing modalities, researchers can explore new possibilities, such as building multimodal search tools and immersive virtual environments. By enabling richer media generation, ImageBind could also enhance content recognition and moderation while supporting creative design.

The creation of ImageBind reflects Meta's broader goal of developing multimodal AI systems that can learn from every kind of data. As the number of modalities grows, ImageBind gives researchers new options for building fresh, all-encompassing AI systems.


ImageBind gives multimodal AI models plenty of room to grow. It learns a single joint embedding space from image-paired data, which lets different modalities "talk" to one another and discover relationships even when they were never observed together. This makes it possible for other models to work with new modalities without time-consuming retraining.

The model also shows strong scaling behavior: its performance improves with the size and strength of the vision model, suggesting that larger vision models can benefit even non-visual tasks such as audio classification. ImageBind outperforms earlier work on zero-shot retrieval as well as audio and depth classification tasks.
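The joint-embedding idea can be illustrated with a toy sketch. This is a hypothetical stand-in, not Meta's actual model or API: the embedding arrays below simply fake the effect of joint training, where paired items from different modalities land near each other in the shared space, so nearest-neighbor search by cosine similarity lets one modality retrieve matches in another.

```python
import numpy as np

def normalize(x):
    # Project embeddings onto the unit sphere so a dot product equals cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def cross_modal_retrieve(query_emb, gallery_emb):
    # For each query embedding, return the index of the most similar gallery embedding.
    sims = normalize(query_emb) @ normalize(gallery_emb).T
    return sims.argmax(axis=1)

# Toy stand-ins for encoder outputs in a 32-dimensional shared space.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=(5, 32))
# Paired audio embeddings would land near their images after joint training;
# we fake that here by adding small noise to the image embeddings.
audio_emb = image_emb + 0.05 * rng.normal(size=(5, 32))

matches = cross_modal_retrieve(audio_emb, image_emb)
# Each audio clip retrieves its paired image: matches == [0, 1, 2, 3, 4]
```

The point of the sketch is that once everything lives in one space, retrieval across any pair of modalities reduces to the same similarity search, which is why new modality pairs need no dedicated training.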


IBM Unveils New AI and Data Platform IBM Watsonx 


At its annual Think conference today, IBM unveiled IBM Watsonx, a new AI and data platform meant to let businesses scale and accelerate the impact of AI using trusted data.

Enterprises adopting AI today need access to a complete technology stack that lets them build, test, and deploy AI models across the organization, with reliable data, speed, and governance, all in one place and on any cloud.

With Watsonx, IBM is offering businesses a seamless end-to-end AI workflow that makes AI easier to adopt and scale, with access to foundation models curated and trained by IBM as well as open-source models. The platform will also include a data store for gathering and cleaning training and tuning data, plus a toolkit for AI governance.


Arvind Krishna, chairman and CEO of IBM, said, “With the development of foundation models, AI for business is more powerful than ever. Using AI is vastly more scalable, economical, and effective when foundation models are used.”

“In order for clients to be more than just users and gain the benefits of AI, we designed IBM Watsonx specifically for the needs of businesses. While maintaining complete control over their data, companies can swiftly train and deploy customized AI capabilities across their whole business with IBM Watsonx,” he added.

Clients will have access to the tools, technology, infrastructure, and consulting expertise needed to build their own AI models on their own data, or to refine and adapt existing models, and then deploy them at scale in a more reliable and open environment to drive business success.


Chinese Police Arrest Man for Allegedly Using ChatGPT to Lie about Train Crash


Chinese police detained an individual on Sunday in what may be the first case of someone allegedly spreading false information using the hugely popular AI chatbot ChatGPT.

According to the South China Morning Post, authorities in the northern Chinese province of Gansu say they detained a man for allegedly fabricating news reports about a train accident that claimed nine lives. 

The man, whose last name was given as Hong by the authorities, was said to have created the “fake” news using ChatGPT and propagated it via numerous blogs online. Hong was expressly accused of “picking fights and causing trouble,” a general political charge leveled at activists and dissidents.

Read More: OpenAI Closes $300 Million Funding Round Between $27-$29 billion Valuation

The disputed posts are said to have appeared across several hundred accounts on the Chinese blogging platform Baijiahao. Police claim Hong circumvented Baijiahao's rules against posting the same content from multiple accounts by using ChatGPT to create different versions of the news article, which drew about 15,000 clicks.

Hong reportedly owns a business that runs numerous blog-style websites. The platforms are said to be registered in Shenzhen, a major tech manufacturing and business hub in southern China.


OpenAI Releases Shap-E, A 3D Asset Conditional Generative Model


OpenAI has released Shap-E, a conditional generative model for 3D assets. According to the paper, Shap-E can directly generate the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields (NeRFs) from a single text prompt, in contrast to conventional 3D generative models that produce a single output representation.

Shap-E is one of the few OpenAI products that is available as open source, and it can be found on GitHub together with the model weights, inference code, and an example. 

The paper explains that Shap-E is trained in two stages. First, an encoder is trained to deterministically map 3D assets into the parameters of an implicit function. Second, a conditional diffusion model is trained on the latents produced by that encoder.
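For readers unfamiliar with implicit 3D representations, a minimal sketch shows what it means for a function's parameters to encode a shape: the surface is wherever the function evaluates to zero. The hand-written signed-distance sphere below is only an illustration, not Shap-E's learned parameterization, where the "parameters" are the weights of a neural network.

```python
import numpy as np

def sphere_sdf(points, center, radius):
    # Signed distance to a sphere: negative inside, zero on the surface,
    # positive outside. The pair (center, radius) plays the role of the
    # implicit-function parameters that, in Shap-E, a diffusion model generates.
    return np.linalg.norm(points - center, axis=-1) - radius

# "Parameters" of one toy asset: a unit sphere at the origin.
params = {"center": np.zeros(3), "radius": 1.0}

d_inside = sphere_sdf(np.array([[0.0, 0.0, 0.0]]), **params)   # -1.0, inside
d_surface = sphere_sdf(np.array([[1.0, 0.0, 0.0]]), **params)  #  0.0, on the surface
d_outside = sphere_sdf(np.array([[2.0, 0.0, 0.0]]), **params)  #  1.0, outside
```

Renderers recover geometry from such a function by querying it at many points, which is why a compact set of parameters can stand in for a full mesh.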


The research states that when trained on a sizable dataset of linked 3D and text data, “our models can generate complex and diverse 3D assets in just a few seconds.” 

What is intriguing about OpenAI's Shap-E is that it converges faster and generates samples of comparable or better quality than Point-E, despite modeling a higher-dimensional, multi-representation output space.

Even though the generated 3D objects can look rough and pixelated, the models can produce them from a single sentence. Another drawback, as the study notes, is that Shap-E currently struggles to combine multiple attributes and works best with prompts describing a single object with simple attributes.


OpenAI Doubles Losses to $540 Million Due To Rising ChatGPT Costs


According to reports, OpenAI, the AI research company behind well-known models like ChatGPT and DALL-E 2, saw its losses double to $540 million in 2022 as a result of rising ChatGPT costs.

The company's goal is to create artificial general intelligence (AGI), an AI capable of enhancing its own capabilities, and it now expects to raise up to $100 billion in the coming years.

The Information reports that OpenAI's losses also stem from the high cost of hiring qualified personnel, particularly engineers and research specialists. As more consumers use AI technology and the company develops new software, the cost of training machine-learning models and acquiring fresh datasets is also expected to rise.


Even though OpenAI's revenue has grown, reportedly reaching hundreds of millions of dollars annually just weeks after the launch of a paid edition in February, expenditures are likely to keep climbing. Data prices are also expected to soar as the AI arms race heats up, with sites like Reddit and Stack Overflow introducing policies that charge AI firms for access to their previously free datasets.

According to CEO Sam Altman, OpenAI may attempt to raise $100 billion to fund the development of AGI, a move that some experts fear could result in an AI monopoly. By purchasing the domain name AI.com and applying for the trademark “GPT,” OpenAI has already started moving in this direction. 


Google to Integrate Short Videos and Social Media Posts into Search Results


According to the Wall Street Journal, Google plans to incorporate AI-powered conversational features, short video clips, and social media posts into search results. According to corporate documents and people familiar with the matter, the changes would push the service further away from its conventional format, known as the "10 blue links."

In response to the changing ways people find information online, Google is planning major changes to its search engine. The goal is to make search more engaging, personalized, and relatable, with a particular focus on younger users around the world.

According to the documents, Google intends to improve its search engine by making it even more “visual, snackable, personal, and human.” The documents state that as part of the move, it intends to include more human voices and support content creators the same way it has in the past with websites.


The search giant is expected to introduce new features that let users converse with an artificial intelligence program, a project code-named "Magi," at its annual I/O developer conference this coming week, according to several people familiar with the matter.

For years, Alphabet-owned Google has made only minor changes to the look and feel of search, which drives its advertising business and generated more than $162 billion in revenue in 2022. That is now changing, however, due to the rapid growth of AI chatbots and short-video apps like TikTok, both of which have drawn in younger users.


Biden to Invest $140 Million to Launch Seven New AI Research Institutes


The Biden administration on Thursday announced a broad range of planned actions meant to reduce the risks that emerging AI technologies pose to the American public. The announcement came ahead of a meeting between Vice President Kamala Harris and the leaders of four top American AI companies: Alphabet, OpenAI, Anthropic, and Microsoft.

The plan includes directing the Office of Management and Budget (OMB) to draft policy guidance for federal agencies and asking leading AI companies to commit to a "public evaluation" of their AI systems at DEF CON 31. It also allocates $140 million to establish seven new AI research and development institutes through the National Science Foundation.

A senior administration official stated during a call with reporters on Wednesday that "the Biden-Harris administration has been leading on these issues since long before these latest generative AI products debuted last autumn."


According to a White House news release, the administration unveiled its blueprint for an "AI Bill of Rights" last October to help guide the design, development, and deployment of AI and other automated systems and to protect the rights of the American public.

“In a time of rapid innovation,” the administration document continued, “it is essential that we make clear the values we must advance and the rights we must protect. We have provided business, legislators, and the people creating these technologies with some clear ways that they can limit the risks with the framework for an AI bill of rights.”


White House to Meet Google, Microsoft CEOs to Discuss AI Dangers


Vice President Kamala Harris and other top officials from the Biden administration will meet with top executives from leading technology companies, including Google, Microsoft, OpenAI, and Anthropic, to discuss critical concerns relating to artificial intelligence (AI). 

The White House representative who confirmed the discussion emphasized that President Joe Biden expects companies to prioritize the safety of their products before making them available to the general public.

The main concerns raised by the rapid development of AI technology include privacy invasions, bias, and the possible spread of scams and false information. In April, while acknowledging that it is too soon to say whether AI is dangerous, President Joe Biden emphasized that technology companies have a duty to make sure their products are safe.


He continued, "Social media has already shown the damage that strong technologies can do without the proper safeguards." As worries grow about how AI may affect national security and education, the administration has been actively seeking public feedback on proposed accountability mechanisms for AI systems.

Recently, representatives from the White House Office of Science and Technology Policy and the Domestic Policy Council published a blog post cautioning about the potential hazards that AI can present to workers.

Senior members of the administration, including Chief of Staff Jeff Zients, National Security Adviser Jake Sullivan, and Secretary of Commerce Gina Raimondo, are expected to attend the meeting on Thursday.
