
Martin Kon, Ex-YouTube CFO, Joins AI Startup Cohere as the COO


Martin Kon, YouTube’s CFO (Chief Financial Officer) since 2021, has left the company to join artificial intelligence (AI) startup Cohere as its COO (Chief Operating Officer) and President.

Kon is an experienced business leader who has worked with globally recognized firms such as Oliver Wyman and Boston Consulting Group. He joined YouTube in 2019, where he was responsible for strategy, finance, business operations, and commercial data analytics, working directly with YouTube CEO Susan Wojcicki and Google’s corporate CFO.

As Cohere’s COO, Kon will focus on understanding enterprise customer needs and bringing relevant commercial products and solutions to market. Aidan Gomez, Cohere’s CEO, added that Kon would also help businesses realize the “enormous value from harnessing the power of language models.”

Read More: Researchers introduce wearable electronic skin for tactile feedback in AR and VR

Cohere is an AI startup based in Toronto that helps businesses adopt and take advantage of natural language processing (NLP) in the real world. The startup’s offerings cover predictive text generation, copywriting, conversational AI, summarization, and content moderation.

Gomez said, “With Martin’s network and experience, I look forward to ushering in Cohere’s next chapter, one where we establish ourselves as the commercial leader of language AI.”


Altair leads seed funding of $10M for Xscape Photonics

Altair, an AI company providing software and cloud solutions in high-performance computing (HPC), data analytics, and AI, has invested $10M in Xscape Photonics. Xscape is a startup that has developed patented photonic chip technology for ultra-high bandwidth connections inside computing systems and data centers. 

James R. Scapa, founder and CEO of Altair, stated that the investment and collaboration with some of the world’s best innovators in photonics would enable Altair to focus on advanced technologies that help customers solve their problems more efficiently.

Until now, computing has relied on traditional electronic interconnects to transport vast amounts of data from chip to chip. These consume considerable space and power and generate significant heat, causing performance issues in high-performance computing applications, especially in AI and data science.

Read more: Carter secures £1.7m to use generative AI for gaming characters

Xscape has created a platform that integrates diverse computing elements in an environmentally sustainable way while providing the maximum possible performance using breakthrough photonics technology. The use of photonics cuts power consumption and heat output while increasing communication speed and power in applications. 

Alexander Gaeta, CEO and co-founder of Xscape Photonics, said Xscape is proud to reinvent the future of computing by developing photonics technology that delivers the bandwidth, energy efficiency, and performance needed to meet future demands. The investment and collaboration with Altair will enable Xscape to push the boundaries of its platform and integrate with the best HPC and AI software to help customers across all sectors.


Diver X, the Startup Behind HalfDrive Headsets, Launches VR Haptic Gloves


Diver X, a Japanese VR startup that pitched HalfDrive VR Headsets earlier this year, has launched a new Kickstarter campaign for a pair of Diver X VR haptic gloves that contain flexing and compressing membranes to mimic touch sensations. 

The HalfDrive Kickstarter campaign launched in January and secured enough backing to be fully funded. However, the Diver X team decided against proceeding and returned the funds, as the device, which clearly took inspiration from Sword Art Online, failed the scalability test.

Now, the company is back with another Kickstarter campaign for ContactGlove, a pair of Diver X VR haptic gloves that track finger and hand positions with SteamVR and offer input emulation via buttons.

Button input is an emulated process in which configuration software links specific buttons to hand motions, such as bending the right index finger to pull a trigger, and it is up to the user to decide if and when to use this function.
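The mapping described above can be pictured as a small lookup from gesture readings to emulated controller buttons. The sketch below is purely illustrative: the binding names, threshold, and function are hypothetical stand-ins, not Diver X's actual configuration software or SteamVR's API.

```python
# Hypothetical sketch of gesture-to-button emulation; all names and the
# 0.7 bend threshold are illustrative assumptions, not Diver X's software.
BINDINGS = {
    "right_index_bend": "trigger",  # bending the right index finger pulls the trigger
    "left_thumb_bend": "grip",
}

def emulate_buttons(finger_bend, threshold=0.7):
    """Map finger-bend readings (0.0 = straight, 1.0 = fully bent)
    to emulated button presses."""
    return {BINDINGS[gesture]: bend >= threshold
            for gesture, bend in finger_bend.items() if gesture in BINDINGS}

presses = emulate_buttons({"right_index_bend": 0.9, "left_thumb_bend": 0.2})
# presses == {"trigger": True, "grip": False}
```

The point of such a scheme is that games see ordinary controller input, so no per-game integration is needed for the gloves to work.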

Read More: Mercedes Files for Five NFT Trademarks

The “pro” function on higher-end versions also offers haptic feedback by contracting and expanding to mimic touch on the user’s fingertips. As per the company, the VR haptic gloves are compatible with SteamVR and come with mounting adapters for Tundra Trackers and Vive Trackers.

The Diver X VR haptic gloves are available for pre-order on Kickstarter, where the project has already surpassed its original funding goal of US$200,000. The gloves start at US$490 for models without the touch membrane and US$710 for those with Tundra Trackers. The haptic versions start at approximately US$870.


For more information, refer to the project page.


Carter secures £1.7m to use generative AI for gaming characters

Carter, a London-based generative AI startup, has secured £1.7m in a pre-seed funding round to apply conversational AI to background video game characters. The round was led by Play Ventures, Connect Ventures, and a group of angel investors: Chris Lee, Steve Chard, GFR Fund, Jas Purewal, Affan Butt, and Rupert Loman.

Carter is working on conversational AI to help game developers build realistic computerized gaming characters. The company is developing an AI toolkit that allows developers to integrate conversational AI and create game characters that speak in local languages.

Read more: Hexo Raises $270,000 in Pre-Seed Funding by Antler India

Danial Ali, co-founder of Carter, stated that computerized gaming characters could bring unconditional love and friendship through the human-to-machine relationship. According to him, creating realistic video game characters is a big goal and might sound like sci-fi, but there is substance to the idea of such a computerized community.

Established in April 2022, Carter follows the vision of building meaningful relationships between humans and digital companions, allowing developers to create more meaningful conversations in their games and projects. Carter uses AI-enabled techniques inspired by human behavior and the industry’s leading research. Carter’s founders, Ali and Huw Prosser, have extensive experience in the AI field and previously founded the business automation software company Bloomware.


Mercedes Files for Five NFT Trademarks


The German automotive company Mercedes-Benz is paving its way into the metaverse by filing five NFT trademarks for its brands. Michael Kondoudis, a licensed trademark attorney, revealed the filings in a tweet showing the car manufacturer’s applications with the United States Patent and Trademark Office (USPTO).

As per the update, the Mercedes trademarks are filed for Mercedes Benz, S-Class, G-Class, Maybach, and Mercedes. The applications included goods and services like computer programs featuring textiles, beauty products, fragrances, food & drinks, trading cards, parasols, and audio/visual devices for online and virtual use. The Maybach filing additionally described plans for crypto-collectibles featuring animal furs, carpets, rugs, etc.

The trademark applications also mention plans to include financial services catering to digital currencies or tokens via a global computer network, digital currency exchange services, liquidity services for digital currencies, and blockchain assets. 

Read More: Point-E, OpenAI’s New Open-Source AI That Generates 3D Models

The Mercedes NFT trademarks come nearly a year after Mercedes Benz unveiled its first NFT project in January, wherein Art2People collaborated with five artists to compile their digital renditions of the Mercedes-Benz G-Class.

In May, the carmaker joined the Aura Blockchain Consortium of luxury brands as a founding member, gaining access to ready-to-use NFT and blockchain technology.


Point-E, OpenAI’s New Open-Source AI That Generates 3D Models


OpenAI has introduced a new open-source AI system that generates 3D models from text prompts. Point-E, a machine learning system, differs from traditional 3D generators in that it represents 3D shapes as discrete point clouds rather than continuous surfaces or meshes.

3D modeling is a highly applicable technology in movies, video games, AR, VR, metaverse, etc. However, producing photorealistic 3D graphics still requires a lot of resources and effort, and doing so using text prompts is a further achievement.

Taking inspiration from recently viral text-to-image systems like DALL-E, Lensa, and Stability AI’s Stable Diffusion, Point-E attempts to advance text-to-3D technology. Point-E, or Point Efficiency, uses point clouds because they are comparatively cheap to synthesize in terms of computational requirements. Unlike existing systems like DreamFusion, Point-E does not require hours of GPU time; however, its resolution is lower.

Read More: Zhejiang, Among Other Chinese Provinces, to Build a US$28.7b Industry Metaverse by 2025

OpenAI’s research team, led by Alex Nichol, said, “Other systems leverage a large corpus of (text, image) pairs, allowing it to follow diverse and complex prompts, while our image-to-3D model is trained on a smaller dataset of (image, 3D) pairs.”

When prompted with text, Point-E first creates a synthetic rendered image. It then runs this image through a series of diffusion models to generate a coarse RGB point cloud of 1,024 points, and a further step refines it into a denser version with 4,096 points. Each of these diffusion models was trained on “millions” of 3D models that had been converted into a standardized format.
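The staged pipeline described above can be sketched in miniature. This is a toy illustration of the data flow only, assuming placeholder functions; it does not use OpenAI's actual Point-E code, and the models here are random stand-ins for the real diffusion stages.

```python
import numpy as np

def text_to_image(prompt):
    # Stand-in for the text-to-image stage: Point-E first synthesizes a
    # rendered image from the prompt (here, just deterministic noise).
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.random((64, 64, 3))

def image_to_point_cloud(image, n_points=1024):
    # Stand-in for the image-conditioned diffusion model that emits a
    # coarse point cloud: n_points rows of xyz coordinates + rgb color.
    rng = np.random.default_rng(0)
    xyz = rng.standard_normal((n_points, 3))
    rgb = rng.random((n_points, 3))
    return np.hstack([xyz, rgb])  # shape (1024, 6)

def upsample_point_cloud(coarse, n_points=4096):
    # Stand-in for the upsampling diffusion model, which conditions on
    # the coarse cloud and produces a denser one (4,096 points).
    reps = n_points // len(coarse)
    jitter = np.random.default_rng(1).normal(scale=0.01, size=(n_points, 6))
    return np.tile(coarse, (reps, 1)) + jitter  # shape (4096, 6)

image = text_to_image("a red traffic cone")
coarse = image_to_point_cloud(image)   # coarse cloud, 1,024 points
fine = upsample_point_cloud(coarse)    # refined cloud, 4,096 points
```

The key design point is that each stage conditions on the previous stage's output, so the expensive 3D generation never has to start from the raw text alone.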


OpenAI has released the source code on GitHub, along with the research paper.


Researchers introduce wearable electronic skin for tactile feedback in AR and VR


A team of researchers from the City University of Hong Kong (CityU) has introduced ‘WeTac,’ a thin, wearable electronic skin that provides tactile feedback to users in AR and VR environments.

This wireless electro-tactile system uses a skin-friendly hydrogel layer that sticks onto the palm of the hand and collects personalized tactile sensing data to bring a more realistic virtual touch experience to the metaverse. 

The WeTac system developed by CityU consists of two parts: a palm patch with hydrogel electrodes as a tactile interface and a tiny flexible actuator that acts as a control panel. The whole actuator weighs only 19.2 grams and is small enough to be worn on one’s forearm. 

Read More: Zhejiang, Among Other Chinese Provinces, To Build A US$28.7b Industry Metaverse By 2025

It also has Bluetooth Low Energy (BLE) and a tiny rechargeable lithium-ion battery for wireless transmission and power. The thickness of the palm patch is a mere 220 microns to 1 mm, and the electrodes reach from the palm to the fingertips. 

Through this, users will be able to experience objects in virtual scenarios, such as grasping a tennis ball during sports practice or touching a cactus in virtual social networks or games.


Zhejiang, Among Other Chinese Provinces, to Build a US$28.7b Industry Metaverse by 2025


Zhejiang, a coastal province in China, is planning to build a US$28.7b industry metaverse by 2025, bringing multiple companies together to facilitate technology development. The province presented the plan on December 15 as part of its efforts to become one of the country’s largest metaverse hubs.

Over the last few years, many Chinese provinces have expressed interest in developing metaverse-oriented technologies and making the country a metaverse hub. Recently, it was reported that the Chinese metaverse industry raised US$780m in funding and is expected to grow into a US$5.8 trillion industry in the coming decade.

Read More: Chinese Platforms to Test Metaverse During Qatar FIFA World Cup 2022

In the document, Zhejiang authorities outline the plan and the actions starting in 2023. These include incubating 50 startups and ten industry leaders and developing essential metaverse-related technologies such as blockchain, virtual reality (VR), and artificial intelligence (AI). These technologies will bring companies together in production processes, industrial design, and governance.


Zhejiang’s document is not the only metaverse proposal; a few other local governments in China have outlined similar plans for developing a metaverse. In June, Shanghai presented a US$52m industry metaverse roadmap of its own.


Photo-Editing AI Startup ImagenAI Raises $30m in an All-Equity Growth Investment


ImagenAI, a startup that uses artificial intelligence (AI) to edit photos and automate media post-production, has raised US$30m in an all-equity growth investment from Summit Partners. The investment will advance the startup’s SaaS (software-as-a-service) offering.

ImagenAI, not to be confused with Google’s Imagen, was founded by Yotam Gil, Ron Oren, and Yoav Chai in 2020. The idea arose after Chai’s wedding, when he had to wait months to receive the wedding pictures and videos. After speaking with photographers, the founders identified a significant industry issue: post-production is “tedious and time-consuming.”

Imagen is available as a cloud plugin for Adobe Lightroom Classic and as an independent app designed to learn a photographer’s style from their previous work. It uses machine learning (ML) to model the editing process and predict editing parameters within half a second, at US$0.05 per photo.
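Learning a photographer's style from past work amounts to fitting a model that maps image characteristics to that photographer's chosen editing parameters. The sketch below illustrates the general idea with a trivial linear model on random data; it is an assumption-laden toy, not Imagen's actual algorithm, and the feature and parameter choices are invented for illustration.

```python
# Toy illustration of style learning: fit a linear map from simple image
# statistics to editing parameters (e.g. exposure, contrast, saturation).
# Everything here is a hypothetical stand-in for ImagenAI's real system.
import numpy as np

def image_features(img):
    # Crude global statistics as features: brightness, contrast,
    # and the mean of each color channel.
    return np.array([img.mean(), img.std(), *img.mean(axis=(0, 1))])

rng = np.random.default_rng(0)
past_images = [rng.random((32, 32, 3)) for _ in range(100)]
past_edits = rng.random((100, 3))  # the photographer's past slider values

# Fit the "style" as a least-squares linear map from features to edits.
X = np.stack([image_features(im) for im in past_images])
W, *_ = np.linalg.lstsq(X, past_edits, rcond=None)

new_image = rng.random((32, 32, 3))
predicted = image_features(new_image) @ W  # predicted editing parameters
```

In practice a service like this would use far richer features and a deep model, but the workflow is the same: train on a photographer's historical edits, then apply the learned mapping to new photos.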

Read More: Top AWS re:Invent 2022 Announcements

Gil said, “Imagen profiles evolve and learn with the user over time, allowing better accuracy and consistency in applying each photographer’s style to new photos ingested into Imagen.” He added that the service would benefit photographers who edit at scale.

ImagenAI also provides pre-trained profiles based on industry experts. These profiles, called Talent AI Profiles, have pre-determined editing styles so that users can directly optimize their clicks without setting up editing parameters.

ImagenAI generates over US$10m in annual recurring revenue and expects to be profitable in the near future. The company plans to use the all-equity investment to innovate further, with services such as automatically picking the best set of pictures from a shoot.


Helm.ai raises $31 million in Series C funding round 


Helm.ai, a California-based startup developing software designed for advanced driver assistance systems, recently raised $31 million in a Series C funding round led by Freeman Group, only one year after it secured $26 million in venture funding. 

Its partners, including Amplo and strategic investors Honda Motor, Goodyear Ventures, and Sungwoo Hitech, have pushed Helm.ai’s valuation to about $431 million.

Brandon Freeman, founder of the Freeman Group, is joining Helm.ai’s board of directors as part of this financing. The company has raised $78 million to date.

Read More: Hexo Raises $270,000 In Pre-Seed Funding By Antler India 

The six-year-old startup uses an unsupervised learning method to develop software that can train neural networks without requiring large-scale fleet data, simulation, or annotation.

Helm.ai provides its software to various Tier 1 suppliers and OEMs in the automotive industry to help them achieve software differentiation with high-end ADAS and L4 solutions.  

The recent funding will help grow the 50-person workforce, expand R&D, and establish several commercial partnerships.
