
HDFC Bank-backed Lentra Acquires AI Startup TheDataTeam to Enhance Digital Lending


Lentra AI, a digital cloud lending platform backed by HDFC Bank, has acquired TheDataTeam (TDT), a Chennai- and Singapore-based AI company, for an undisclosed amount. The acquisition will see Lentra take over TDT’s customer intelligence platform, Cadenz.

Through this agreement, Lentra will integrate TDT’s Cadenz platform for behavioral intelligence, which helps banks and finance companies determine a customer’s creditworthiness based on their particular financial journey. It will also enable a faster go-to-market for innovative products.

Launched in 2019, Cadenz makes it easier for organizations to remove friction and implement new initiatives by accelerating the transition from raw customer data to actionable intelligence.
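
For readers unfamiliar with behavioral credit scoring in general, the sketch below shows the basic idea of turning behavioral features into a repayment score. It is a minimal, hypothetical illustration using scikit-learn on synthetic data; the features, model, and numbers are invented and are not Cadenz’s actual implementation.

```python
# Hypothetical sketch of behavioral credit scoring in general -- NOT Cadenz's
# actual model. Feature names and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1_000

# Invented behavioral features per customer: average monthly balance,
# number of missed payments, and months of transaction history.
X = np.column_stack([
    rng.normal(50_000, 20_000, n),   # avg_monthly_balance
    rng.poisson(0.5, n),             # missed_payments
    rng.integers(6, 60, n),          # months_of_history
])
# Synthetic label: 1 = repaid on time, 0 = defaulted.
y = ((X[:, 0] > 40_000) & (X[:, 1] < 2)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Probability of repayment serves as a simple behavioral credit score.
print("Test accuracy:", model.score(X_test, y_test))
print("Sample scores:", model.predict_proba(X_test[:3])[:, 1].round(2))
```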

Rangarajan Vasudevan, founder and CEO of TDT, has joined Lentra as a co-founder and Chief Data Officer as part of the acquisition. He said that both companies share similar ambitions to scale in digital lending products and business-building practices, and that Lentra, already an established and fast-growing lending cloud, would benefit from Cadenz’s customer intelligence platform.

Read More: Voice Recognition Scaleup Speechmatics Raises $62M in a Series B Funding Round, for its Speech-to-Text Software

Lentra will incorporate Cadenz’s stack into its SaaS-based, API-driven modular design. Lentra’s modular, Open API-driven design helps banks customize customer experiences and lending journeys with a 95 percent Straight Through Processing (STP) rate. This expands the customer base available to banks and other financial institutions, lowers non-performing assets (NPAs), and boosts operational effectiveness.

The cloud platform can handle more than 1100 API calls per second and is highly scalable.

Sandeep Mathur, Chief Revenue Officer at Lentra, said that Cadenz’s integration into the Lentra cloud would enable the latter to provide an even bigger competitive edge in the entire loan disbursing and management system. He said, “With this acquisition of Cadenz, Lentra is now on track to become the leading platform of choice for financial institutions globally.” 


Voice Recognition Scaleup Speechmatics Raises $62M in a Series B Funding Round, for its Speech-to-Text Software


Speechmatics, a speech recognition software provider, has raised $62 million in a Series B round led by Susquehanna Growth Equity to support the growth of its speech-to-text software in the US and Asia-Pacific.

Speechmatics was founded in 2006 in Cambridge by Dr. Tony Robinson. Since then, it has grown a base of 170 customers in the B2B model, including renowned clients such as Deloitte, Veritone, and 3Play Media. The voice recognition company has built software that supports 34 languages. Speechmatics claims that listening to “millions of hours” of audio from “hundreds of thousands” of videos has enhanced its AI by eliminating bias and errors.

Katy Wigdahl, chief executive of Speechmatics, said, “What we have done is gather millions of hours of data in our effort to tackle AI bias. Our goal is to understand any and every voice, in multiple languages.”

Read More: Snowplow Partners with Databricks for Data-Driven Applications and CDPs

Previously, Speechmatics offered its technology to developers only via a private API.

To increase its customer base, it now offers more publicly accessible API tools, along with a drag-and-drop sampler on its website. The AI company will also use the Series B capital to increase the capacity of its data centers and fund research and development.
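
As a rough idea of how cloud speech-to-text APIs of this kind are typically consumed, here is a hedged sketch that posts an audio file to a placeholder REST endpoint. The URL, parameters, and response fields are assumptions for illustration, not Speechmatics’ documented API.

```python
# Hedged illustration of how a cloud speech-to-text REST API is typically
# consumed. The endpoint, parameters, and response shape below are
# hypothetical and NOT Speechmatics' documented API.
import requests

API_URL = "https://api.example-stt.com/v1/transcribe"   # placeholder URL
API_KEY = "YOUR_API_KEY"                                 # placeholder key

with open("meeting.wav", "rb") as audio:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"audio": audio},
        data={"language": "en"},   # one of many supported languages
        timeout=60,
    )

response.raise_for_status()
print(response.json().get("transcript", ""))
```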

Jonathan Klahr, Managing Director of Susquehanna Growth Equity, said, “We started tracking Speechmatics when our portfolio companies told us that again and again Speechmatics win on accuracy against all the other options including those coming from ‘Big Tech’ players.”

Klahr will soon join Speechmatics’ board as part of the funding. 


Snowplow Partners with Databricks for Data-Driven Applications and CDPs


Snowplow announced a partnership with Databricks at the Data + AI Summit 2022. Positioning itself as the industry leader in data creation, Snowplow provides AI-ready behavioral data in real time that can be used on the Databricks Lakehouse for building data-driven applications.

Additionally, data teams can now use behavioral data to power Snowplow’s new custom-built web models, sophisticated analytics, and automated insights as part of a composable CDP, right from their Databricks Lakehouse.

Roger Murff, VP, ISV Partners at Databricks, said, “Our partners are vital to bringing the power of Databricks’ Lakehouse Platform to more customers around the world. We are thrilled for this partnership since behavioral data created with Snowplow can be loaded directly to Databricks without complex preparation processes, which is a key competitive advantage for modern data teams.”

Read More: Weights & Biases and Run:ai Announce a Joint Partnership with NVIDIA for MLOps Stack

The partnership allows users to benefit from AI by accessing high-quality data to build ML/AI data applications within Databricks. Snowplow’s pioneering approach to data creation spares users extensive data preparation, saving considerable time. 
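
To illustrate the kind of downstream work this enables, here is a minimal PySpark sketch that reads behavioral events from a lakehouse table and derives a simple feature. The table name and columns are assumptions, not Snowplow’s actual schema.

```python
# Minimal PySpark sketch of querying behavioral event data in a lakehouse.
# The table and column names are assumptions, not Snowplow's actual schema.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("behavioral-events").getOrCreate()

# Hypothetical lakehouse table holding enriched behavioral events.
events = spark.read.table("analytics.snowplow_events")

# Example downstream use: page views per user over the last 7 days,
# the kind of AI-ready feature a composable CDP might consume.
page_views = (
    events
    .filter(F.col("event_name") == "page_view")
    .filter(F.col("collector_tstamp") >= F.date_sub(F.current_date(), 7))
    .groupBy("domain_userid")
    .agg(F.count("*").alias("page_views_7d"))
)

page_views.show(10)
```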

Alex Dean, CEO and co-founder at Snowplow, said, “With Snowplow and Databricks, data teams can harness this potential by making accurate predictions with AI-ready data and generating deep customer understanding in real-time within Databricks.”

Nick King, President and Chief Product and Marketing Officer at Snowplow, will hold a speaking session on June 29, 2022, to give more insight into the partnership. 


AI-Powered MyRaasta App Offers Complete Vehicle Inspection in 30 Seconds

Karn Nagpal designed the MyRaasta app to leverage state-of-the-art technology and offer customers complete vehicle inspections without human involvement. The car and bike service aggregator aims to provide convenience and hyper-scalability of its network for MyRaasta users across the country. 

Founded in 2021, MyRaasta is a trusted and robust service network of standardized garages and on-site services for vehicles. It provides real-time assistance and 24×7 customer support via a one-touch mobile app. With a far-reaching goal of reducing the cost of vehicle services in the country by 40%, the repair platform covers a 360-degree spectrum of two-wheelers and car services. Customers can simply fill in their demands for tyres, batteries, extended warranty, roadside assistance, and much more. 

Read More: Tesla Plans to Launch Optimus Humanoid Robot within the next few months

The AI added to the platform allows customers to submit pictures and videos of their vehicle for analysis. The AI engine applies computer vision to the customer’s input. Since there is no manual intervention beyond capturing the pictures and videos, the chance of bias remains negligible, and a detailed report for all panels is produced within 30 seconds. 

The report provides customers with information regarding any possible flaws or damage, along with their precise location and severity. It also gives accurate costs covering repair and replacement.
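
As a rough illustration of the general technique (image classification on user-submitted photos), the sketch below runs a pretrained torchvision classifier on a vehicle image. It is not MyRaasta’s engine; the model and labels are stand-ins, and a real system would be fine-tuned on labelled damage categories.

```python
# Generic computer-vision sketch using a pretrained classifier as a stand-in.
# This is NOT MyRaasta's engine; the model and labels are illustrative only.
import torch
from torchvision import models
from PIL import Image

# Pretrained backbone used purely as a placeholder classifier.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

image = Image.open("vehicle_panel.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)
    probs = torch.softmax(logits, dim=1)

top_prob, top_class = probs.max(dim=1)
print(f"Predicted class {top_class.item()} with confidence {top_prob.item():.2f}")
# A production system would instead use a model fine-tuned on labelled
# damage categories (dent, scratch, crack, ...) per vehicle panel.
```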

Karn Nagpal says, “MyRaasta is bringing much-required digitisation to the vehicle service experience nationwide. We are excited to be moving closer towards our goal of providing vehicle owners with world-class technology-enabled services with each integration.” He added, “We are confident that a real-time vehicle inspection will be extremely beneficial to customers and provide them with the tools to take better care of their vehicles in their fast-paced lives.”


Weights & Biases and Run:ai Announce a Joint Partnership with NVIDIA for MLOps Stack


To make AI compute orchestration easier and MLOps platforms more manageable, Run:ai and Weights & Biases have jointly partnered with NVIDIA. The three-way collaboration will enable data scientists to use Weights & Biases for execution while Run:ai orchestrates the workloads on NVIDIA GPUs. Before the association, firms that wished to use Run:ai and Weights & Biases simultaneously had to integrate each manually.

Omri Geller, CEO and co-founder of Run:ai, said that Run:ai was engineered as a plug-in to run machine learning on Kubernetes. It enables the visualization of NVIDIA GPU resources and fractionalizes them so that multiple containers can access the same GPU.

Scott McClellan, senior director of product management at NVIDIA, said, “Our strategy is to partner fairly and evenly with the overarching goal of making sure that AI becomes ubiquitous.” He furthered that the two vendors provide complementary technologies that can now plug into a single NVIDIA AI Platform for the users.

Read More: Tesla Plans to Launch Optimus Humanoid Robot within the next few months

McClellan added, “The point in time when a data science or AI project tries to go from experimentation into production, that is sometimes a little bit like the Bermuda Triangle where a lot of projects die.” With the partnership, he hopes to develop and operationalize machine learning workflows better.

Seann Gardiner, VP of business development at Weights & Biases, commented that the partnership would enable users to benefit from Weights & Biases’ training automation with Run:ai’s orchestration. 
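
For readers unfamiliar with Weights & Biases, the minimal sketch below shows the experiment-tracking calls a data scientist would keep using unchanged; the training loop is a dummy placeholder, and where the job actually runs (for example on Run:ai-orchestrated NVIDIA GPUs) is decided by the scheduler rather than by this code.

```python
# Minimal Weights & Biases tracking sketch. The training loop is a dummy
# placeholder; where the job runs (e.g. on Run:ai-orchestrated NVIDIA GPUs)
# is decided by the scheduler, not by this code.
import random
import wandb

run = wandb.init(project="demo-project", config={"lr": 1e-3, "epochs": 5})

for epoch in range(run.config["epochs"]):
    # Placeholder "training" step standing in for a real model update.
    loss = 1.0 / (epoch + 1) + random.random() * 0.05
    wandb.log({"epoch": epoch, "loss": loss})

run.finish()
```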


Tesla Plans to Launch Optimus Humanoid Robot within the next few months


Tesla is on the cusp of finishing its Optimus project. The company has pushed back its annual technology day, Tesla AI Day, by six weeks in the hope of having a working prototype of its Optimus humanoid robot ready. Initially scheduled for 19 August, the event’s second edition, focused on artificial intelligence, will again showcase Tesla’s AI ventures and innovations. 

Elon Musk, Tesla’s co-founder and CEO, tweeted, “Tesla AI Day pushed to Sept 30, as we may have an Optimus prototype working by then.” At the previous event, as part of the Optimus project, Musk surprised the audience with a hint that Dojo would be used to train a humanoid robot. However, no working prototype has been revealed so far. 

Read More: PyPI module gets compromised to steal AWS keys and credentials

Musk presented only a three-dimensional rendering with basic specifications: a humanoid robot five feet eight inches tall and weighing 125 pounds. The final version would be equipped with the Tesla FSD system for intelligence and powered by around 40 actuators. 

Musk has a record of repeatedly overpromising and underdelivering, given his short-lived interest in any one project. Any delay in the Optimus project would add to the company’s spotty track record of delayed launches, as happened at the beginning of this year when Musk announced a delay to the Cybertruck while shifting the company’s focus to the robot. Hopefully, Optimus will be ready before any other projects are delayed. 


Meta develops AI models for realistic sound experience in VR 


Meta has developed three new AI models – Visual-Acoustic Matching, Visually-Informed Dereverberation, and VisualVoice – to make the sound more realistic in mixed and virtual reality (VR) experiences. 

The three AI models focus on human speech and sound in video and are designed to push the industry faster toward immersive reality, the company said in a statement.

The AI models were built in collaboration with the University of Texas at Austin. The company is also making the audio-visual understanding models open to developers.

Read More: Microsoft Uses AI To Improve Audio And Video Quality On Microsoft Teams

Acoustics play a role in how sound will be experienced in the metaverse. According to Meta’s AI researchers and audio specialists, AI will be core to delivering realistic sound quality.

AViTAR, the self-supervised Visual-Acoustic Matching model, adjusts audio to match the space of a target image.
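
To make concrete what “adjusting audio to match a space” involves, here is a simplified signal-processing illustration: convolving dry audio with a room impulse response. This is the classical idea behind acoustic matching, not Meta’s AViTAR model, and the impulse response below is synthetic.

```python
# Simplified illustration of room-acoustic matching: convolving "dry" audio
# with a room impulse response (RIR). This is the classical signal-processing
# idea behind acoustic matching, not Meta's AViTAR model; the RIR is synthetic.
import numpy as np
from scipy.signal import fftconvolve

sample_rate = 16_000
t = np.arange(sample_rate) / sample_rate

# Dry source: a one-second 440 Hz tone standing in for recorded speech.
dry = np.sin(2 * np.pi * 440 * t)

# Synthetic impulse response: an exponentially decaying noise tail,
# roughly mimicking the reverberation of a medium-sized room.
rir_len = int(0.3 * sample_rate)
rir = np.random.randn(rir_len) * np.exp(-6 * np.arange(rir_len) / rir_len)
rir[0] = 1.0  # direct path

# "Place" the dry audio in the target room by convolution.
wet = fftconvolve(dry, rir)[: len(dry)]
wet /= np.abs(wet).max()  # normalize to avoid clipping
print(wet.shape)
```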

The self-supervised training objective learns acoustic matching from in-the-wild web videos despite their lack of acoustically mismatched audio and unlabelled data, said Meta. VisualVoice learns visual and auditory cues from unlabelled videos to achieve audio-visual speech separation.

VisualVoice generalizes well to the challenging real-world videos of diverse scenarios, said Meta AI researchers.

For instance, consider attending a group meeting in the metaverse with colleagues from around the world. Instead of people talking over one another, the acoustics and reverberation would adjust accordingly as they join smaller groups and move around the virtual space. 


ML-based approach forecasts lake ecosystem’s response to phosphorus pollution


George Sugihara and four international colleagues have developed a machine learning-based empirical dynamic modeling (EDM) approach to forecast and manage Lake Geneva’s ecological response to the threat of phosphorus pollution. Sugihara is a biological oceanographer at Scripps Institution of Oceanography.

Phosphorus inputs from detergents and fertilizers have degraded the water quality of Switzerland’s Lake Geneva throughout the middle of the 20th century. This led the officials to take action to remediate pollution in the 1970s.

The authors explain that their machine learning-based approach leads to substantially better predictions and a more actionable description of the biogeochemical and ecological processes that sustain water quality.

Read More: Artificial Intelligence Is Now Helping Forecast Amazon Deforestation

The hybrid model suggests that a 3°C (5.4°F) rise in air temperature would have the same impact on water quality as the phosphorus pollution of the previous century. It also implies that best management practices may no longer involve single-lever approaches such as reducing phosphorus inputs alone.

The team of researchers also includes Damien Bouffard of the Swiss Federal Institute of Aquatic Sciences and Technology. The new hybrid empirical dynamic modeling (EDM) approach was published in the journal Proceedings of the National Academy of Sciences.

EDM can also serve as a supervised machine learning tool, a way for computers to learn patterns in data and educate researchers about the mechanisms behind them.
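
For readers curious about the core mechanics of EDM, the sketch below implements its two basic ingredients, time-delay embedding and nearest-neighbour (“simplex”) forecasting, on a toy chaotic series. It is a simplified illustration, not the authors’ hybrid Lake Geneva model.

```python
# Compact sketch of the core idea behind empirical dynamic modeling (EDM):
# time-delay embedding plus nearest-neighbour ("simplex") forecasting.
# A toy illustration only, not the authors' hybrid Lake Geneva model.
import numpy as np

def delay_embed(series, dim, tau=1):
    """Stack lagged copies of a time series into state-space vectors."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

def simplex_forecast(train, target, dim=3, k=4):
    """Predict the next value of `target` from analogues found in `train`."""
    lib = delay_embed(train, dim)          # library of past states
    query = delay_embed(target, dim)[-1]   # current state vector
    future = train[dim:]                   # value following each library state
    lib = lib[:-1]                         # drop the state with no known future
    dists = np.linalg.norm(lib - query, axis=1)
    nn = np.argsort(dists)[:k]             # k nearest analogues
    weights = np.exp(-dists[nn] / (dists[nn].min() + 1e-12))
    return np.sum(weights * future[nn]) / weights.sum()

# Toy chaotic series (logistic map) standing in for an ecological observable.
x = np.empty(500)
x[0] = 0.4
for i in range(1, 500):
    x[i] = 3.9 * x[i - 1] * (1 - x[i - 1])

prediction = simplex_forecast(x[:400], x[380:400])
print(f"Predicted next value: {prediction:.3f}, actual: {x[400]:.3f}")
```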


PyPI module gets compromised to steal AWS keys and credentials


Several malicious Python packages accessible through PyPI were discovered stealing confidential data, including AWS keys and credentials, and sending it to openly accessible destinations. 

PyPI is an open-source repository of Python packages that developers use for their Python-based projects. The widely used PyPI package “ctx” was recently compromised to release versions that leak environment variables to an external server. “ctx” is a simple Python package that enables programmers to manipulate their “dictionary” or “dict” objects.

Companies like Sonatype specialize in software supply-chain security and employ automated malware detection methods to find such packages. Sonatype identified several more malicious packages, including:

  • loglib-modules
  • pyg-modules
  • pygrata
  • hkg-sol-utils
  • pygrata-utils

J. Cardona and C. Fernandez, Sonatype analysts, identified that ‘loglib-modules’ and ‘pygrata-utils’ were used to exfiltrate AWS credentials and other sensitive information. 

Read More: OpenAI’s New AI, trained on 70,000 in-game hours on YouTube, can play Minecraft.

The two analysts contacted the domain owners to alert them to the public exposure and to ask for an explanation, on the assumption that they might be missing something. The endpoint was quickly made inaccessible to the public without any other response, likely indicating illegitimacy. 

PyPI often responds quickly to reports of harmful packages on the platform, but because there is no filtering before submission, risky packages may remain available for some time. It is interesting to note that “pygrata” requires “pygrata-utils” as a dependency and itself lacks the data-stealing functionality. Because of this, even though four malicious packages were swiftly detected and deleted from PyPI, “pygrata” stayed there for a more extended period.

Software developers are recommended to examine package descriptions, upload dates, and release histories in addition to package names. These factors help tell whether a Python package is authentic or a risky imitation.
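
As a practical illustration of that advice, the sketch below queries PyPI’s public JSON API for a package’s release history and upload dates before installation; a dormant package that suddenly ships a new release deserves a closer look.

```python
# Small sketch of how a developer might check a package's release history and
# upload dates via PyPI's public JSON API before installing it.
import json
import urllib.request
from datetime import datetime

def release_history(package):
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)

    history = []
    for version, files in data["releases"].items():
        for f in files:
            uploaded = datetime.fromisoformat(f["upload_time"])
            history.append((uploaded, version))
    return sorted(history)

# A long, steady history is a (weak) signal of legitimacy; a dormant package
# that suddenly ships a new release deserves a closer look.
for uploaded, version in release_history("requests")[-5:]:
    print(uploaded.date(), version)
```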


OpenAI’s New AI, trained on 70,000 in-game hours on YouTube, can play Minecraft


In the most recent development, OpenAI claims its new AI model can play Minecraft, having trained on 70,000 hours of in-game visuals. The new AI uses standard keyboard-and-mouse inputs to play in the same world as humans, unlike many earlier Minecraft algorithms that function in far simpler “sandbox” versions of the game.

In recent years, numerous neural networks, like DeepMind’s MuZero for chess, have triumphed in various games with reinforcement learning. For the more complicated “open-world” game environment of Minecraft, Bowen Baker and his team sought to create a neural network. 

They broke ground by releasing “Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos”. They used a large dataset to train the neural network to mimic human keystrokes in solving different tasks in the game.
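
The sketch below shows the general shape of such behavioural cloning: a network maps a game frame to a distribution over keyboard-and-mouse actions and is trained to imitate the action a human took. The architecture and action space are illustrative stand-ins, not OpenAI’s actual VPT model.

```python
# Minimal behavioural-cloning sketch: a network maps a game frame to a
# distribution over keyboard/mouse actions and is trained to imitate the
# action a human took. Architecture and action space are illustrative
# stand-ins, not OpenAI's actual VPT model.
import torch
import torch.nn as nn

NUM_ACTIONS = 32  # assumed size of a discretised keyboard/mouse action space

policy = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 14 * 14, 256), nn.ReLU(),  # 14x14 feature map for 128x128 input
    nn.Linear(256, NUM_ACTIONS),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for (frame, human action) pairs from gameplay video.
frames = torch.randn(8, 3, 128, 128)           # 8 RGB frames
human_actions = torch.randint(0, NUM_ACTIONS, (8,))

logits = policy(frames)
loss = loss_fn(logits, human_actions)          # imitate the human's choice
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"behavioural cloning loss: {loss.item():.3f}")
```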

Read More: Ai-Da, Ultra-realistic Humanoid Robot Artist Portraits at Glastonbury Festival

After some tweaking, OpenAI discovered that the model could carry out a wide range of complex tasks, from swimming to tracking down prey and eating it. The AI also mastered the “pillar jump,” in which the player deposits a block of material beneath itself mid-jump to gain height. After further fine-tuning with reinforcement learning, it learned to construct a diamond pickaxe, a feat that typically requires human gamers 24,000 actions and 20 minutes to complete.

Baker and his team said, “While we only experiment in Minecraft, we believe that VPT provides a general recipe for training behavioral priors in hard, yet generic, action spaces in any domain that has a large amount of freely available unlabeled data, such as computer usage.”

OpenAI has been doing wonders with AI models trained on large datasets since its GPT-3 success in 2020, when it blew people away by ingesting billions of words into the algorithm and producing well-crafted sentences. VPT is yet another addition to the company’s outstanding AI portfolio. 
