
Biofourmis raises $300 million in Series D Funding Round


HealthTech firm Biofourmis has raised $300 million in a Series D funding round led by global growth equity firm General Atlantic. Other investors, including CVS Health, also participated in the round.

The new capital lifts Biofourmis’ valuation to over $1.3 billion, giving the company unicorn status.

Biofourmis intends to use the funds to extend its virtual care offerings, including delivering personalized, predictive in-home care to a growing number of seriously ill patients and expanding its newly announced virtual specialty care service, Biofourmis Care, to people with complex chronic conditions.

Read More: Philips’ Speech partners with Sembly AI to launch Speech Technology Solution for Meetings

Biofourmis’ device- and sensor-agnostic solutions use artificial intelligence (AI) and FDA-approved analytics to dynamically monitor patients, establish personalized baselines, and eliminate false-positive alerts.

Biofourmis’ round-the-clock virtual clinical care team can quickly confirm and respond to these alerts. The company’s certified experts can also discuss medication changes and updated care plans with primary care teams to keep patient care well coordinated.
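
As a loose illustration of the kind of baseline-and-alert logic described above, the sketch below keeps a rolling per-patient baseline and fires an alert only on sustained deviations, one simple way to suppress one-off false positives. The window size, threshold, and data are invented for the example and bear no relation to Biofourmis’ actual algorithms.

```python
# Illustrative per-patient baseline alerting (not Biofourmis' algorithm).
from collections import deque

class VitalsMonitor:
    def __init__(self, window=60, z_threshold=3.0, sustain=5):
        self.history = deque(maxlen=window)  # recent readings
        self.z_threshold = z_threshold       # deviation cutoff in std devs
        self.sustain = sustain               # consecutive hits required
        self._hits = 0

    def update(self, reading: float) -> bool:
        """Add a reading; return True if a sustained deviation alert fires."""
        alert = False
        if len(self.history) >= 10:  # need enough data for a baseline
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            std = var ** 0.5 or 1.0  # guard against a zero-variance baseline
            z = abs(reading - mean) / std
            self._hits = self._hits + 1 if z > self.z_threshold else 0
            alert = self._hits >= self.sustain
        self.history.append(reading)
        return alert

# Example: a heart-rate stream with a sustained spike near the end.
monitor = VitalsMonitor()
stream = [72, 74, 71, 73, 75, 72, 70, 74, 73, 72] + [71] * 30 + [115] * 6
for t, hr in enumerate(stream):
    if monitor.update(hr):
        print(f"alert at sample {t}: sustained deviation (hr={hr})")
```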

Managing Director and Global Head at General Atlantic, Robbert Vorhoff, said, “We believe Biofourmis is differentiated by technology solutions underpinned by its deep clinical research.” 

He further added that Biofourmis promotes personalized therapy and better outcomes and gives crucial patient health insights to health systems. 

Dr. Omar Ishrak, former Medtronic CEO and chairman of Intel’s board, will join the company’s board. Ishrak said he is delighted to join Biofourmis as Chairman of the Board at a pivotal moment in the company’s rapid rise to prominence as a leader in virtual care and digital medicine.

United States-based virtual care and digital medicine company Biofourmis was founded by Kuldeep Singh Rajput, Maulik Majmudar, and Wendou Niu in 2015. The firm specializes in discovering, developing, and delivering clinically tested software-based therapeutics for patients. To date, Biofourmis has raised more than $443 million from multiple investors over nine funding rounds. 

Co-founder and Chief Executive Officer of Biofourmis, Kuldeep Singh Rajput, said, “We are excited to partner with General Atlantic, which shares our vision for the future of virtual care and the urgency to bring the Biofourmis solution to customers and patients across the globe.”


University of Johannesburg launches Blockchain-based Certificates for Graduates


The University of Johannesburg (UJ) has launched blockchain-based certificates for graduates in an effort to strengthen certificate security.

According to the University, the move will help UJ minimize certificate-related fraud, including counterfeiting and the misrepresentation of qualifications.

With the launch of this initiative, each university-issued qualification document will have a QR code that can be used to verify its authenticity. Earlier, UJ had also implemented digital certification and a virtual qualification verification system, allowing graduates to digitally access their credentials. 

Read More: AI can model First Impressions based on Facial Features

The digital certificate system already gave graduates online access to their certificates and let them share the documents securely with third parties such as potential employers.

Dr. Tinus Van Zyl, Director of Central Academic Administration at UJ, said, “The new blockchain-based certificate features will enhance the security of certificates even more. Certificates issued from this year on will have QR codes printed on them, which anybody can scan with a smartphone to verify whether the information on the certificate is correct and has been issued legitimately by UJ.” 

Van Zyl added that the public can now confirm a UJ graduate’s qualifications simply by scanning the QR code on the certificate, without contacting the University or going through a verification service, and at no cost.
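
UJ has not published the details of its scheme, but a common pattern for blockchain-anchored certificates looks roughly like the sketch below: the issuer writes a cryptographic digest of the certificate to a blockchain at issuance, and a verifier scanning the QR code recomputes the digest and compares it to the anchored value. All field names here are illustrative assumptions.

```python
# Hash-anchored certificate verification, radically simplified.
import hashlib
import json

def certificate_digest(cert: dict) -> str:
    """Canonical SHA-256 digest of the certificate fields."""
    canonical = json.dumps(cert, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify(cert: dict, on_chain_digest: str) -> bool:
    """A certificate is authentic iff its digest matches the anchored one."""
    return certificate_digest(cert) == on_chain_digest

cert = {
    "student": "A. Graduate",
    "qualification": "BSc Computer Science",
    "institution": "University of Johannesburg",
    "year": 2022,
}
anchored = certificate_digest(cert)    # recorded on-chain at issuance
print(verify(cert, anchored))          # True: certificate is intact
cert["qualification"] = "PhD Physics"  # any tampering breaks verification
print(verify(cert, anchored))          # False
```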

According to officials, the blockchain-based certificates will protect not only the University’s credentials from fraud but also the institution’s reputation and the integrity of its qualifications.

“UJ is committed to applying new technologies to improve systems and service delivery. This continuous improvement strategy and use of cutting-edge technology, facilitated through the Fourth Industrial Revolution, are at the heart of our philosophy,” said Registrar of UJ, Prof Kinta Burger. 


MyValueVision plans to launch 100 Franchisee Stores with AI Technology


MyValueVision, one of India’s fastest-growing eyewear brands, is set to launch 100 new franchise stores equipped with modern technologies such as artificial intelligence (AI), virtual reality (VR), and augmented reality (AR).

This is a step toward the company’s goal of expanding into Tier II and Tier III cities in India as well as overseas markets. Beyond these 100 stores, MyValueVision also plans to open another 200 outlets in the coming years.

MyValueVision.com currently operates in Hyderabad, New Delhi, Bangalore, Ahmedabad, Vizag, Kakinada, Rajahmundry, Warangal, Khammam, and Karimnagar, among other Indian cities, and the expansion will significantly help the company reach more customers.

Read More: AI tool predicts Tumor Regrowth in Cancer Patients 

MyValueVision.com has partnered with Virtooal, a leading European technology company, to improve its customer experience with virtual mirror technology.

COO of MyValueVision, Prasanna Kumar, said, “We believe augmented reality product visualization is the future of fashion-based eCommerce.” 

He added that the virtual mirror technology lets customers try on the company’s newest and most fashionable spectacle frame designs digitally and in real time, enhancing the product experience and customer satisfaction.
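
Virtooal’s production system is proprietary, but the basic mechanics of a virtual try-on can be sketched with open tools: detect the face in each webcam frame and alpha-blend a transparent eyewear image over the eye region. The file name "frames.png" and the placement heuristic below are assumptions made for illustration.

```python
# Toy webcam "virtual mirror": overlay a transparent frame image on a face.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
frames_png = cv2.imread("frames.png", cv2.IMREAD_UNCHANGED)  # BGRA with alpha

def overlay(img, fg, x, y):
    """Alpha-blend BGRA image fg onto img at top-left (x, y), clamped."""
    h = min(fg.shape[0], img.shape[0] - y)
    w = min(fg.shape[1], img.shape[1] - x)
    if h <= 0 or w <= 0:
        return
    roi = img[y:y + h, x:x + w]
    alpha = fg[:h, :w, 3:4] / 255.0
    roi[:] = (alpha * fg[:h, :w, :3] + (1 - alpha) * roi).astype(roi.dtype)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Scale the eyewear to the face width, keeping its aspect ratio.
        new_h = int(w * frames_png.shape[0] / frames_png.shape[1])
        scaled = cv2.resize(frames_png, (w, new_h))
        overlay(frame, scaled, x, y + h // 4)  # roughly at eye level
    cv2.imshow("virtual mirror", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```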

The franchise business model has gained immense popularity over the last few years, and with India on the verge of becoming the world’s third-largest consumer market, MyValueVision’s new stores should help it grow its customer base.

Apart from lens companies such as VisionRX, Essilor, and Zeiss, MyValueVision.com has teamed up with leading brands including IDEE, Tommy Hilfiger, Marvel, Teddy, Ray-Ban, David Jones, Police, and Carrera.

“We are expecting to raise an investment of US$ 10 million for scaling up our services, technology and expanding our physical presence by introducing more than 200 plus stores,” added Kumar. 


Philips’ Speech partners with Sembly AI to launch Speech Technology Solution for Meetings


Speech Processing Solutions, the company behind Philips’ speech-to-text products, has partnered with voice and conversation analytics technology company Sembly AI to launch a new speech technology solution for meetings.

The two companies plan to combine high-quality microphones with smart meeting technology to dramatically improve users’ meeting experience.

According to the companies, their SmartMeeting Conference devices include top-of-the-line microphones that record in 360° and capture every speaker in the room, ensuring crisp, clear audio.

Read More: MIT Team Builds New algorithm to Label Every Pixel in Computer Vision Dataset

The solution supports both USB and Bluetooth, increasing its versatility and enabling a variety of use cases. Customers can join meetings seamlessly from anywhere, from conference rooms to the open street, using the devices’ connectivity options.

Moreover, the companies say that one of their devices is equipped with an intelligent camera that can be set up to automatically recognize and focus on the person who is speaking. 

CEO of Speech Processing Solutions, Dr. Thomas Brauner, said, “We provide best-in-class conference microphones and pair them up with a smart meeting assistant solution, which creates an automatic transcript, summary, and an action list of every meeting. Our unique solutions help busy professionals save time and conduct meetings more efficiently than ever before.” 

He further added that they are glad to partner with Sembly AI as this collaboration has opened up new market opportunities by combining the best of both worlds. 
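
Sembly’s models are proprietary, so the sketch below uses bare-bones keyword heuristics purely to illustrate the transcript-to-summary-to-action-list pipeline shape that Brauner describes. The speakers, cue words, and utterances are all invented for the example.

```python
# Toy transcript -> summary -> action-list pipeline (illustrative only).
import re

TRANSCRIPT = [
    ("Alice", "Thanks everyone. Bob, please send the revised budget by Friday."),
    ("Bob", "Will do. I also need to follow up with legal on the contract."),
    ("Carol", "The launch date moves to June 12th."),
]

# Naive cue words that often signal a commitment or request.
ACTION_CUES = re.compile(
    r"\b(please|will do|need to|follow up|send|schedule)\b", re.IGNORECASE)

def action_items(transcript):
    """Flag utterances that look like commitments or requests."""
    return [(who, what) for who, what in transcript if ACTION_CUES.search(what)]

def summary(transcript, max_items=2):
    """Crude extractive summary: keep the first few utterances."""
    return [what for _, what in transcript][:max_items]

print("Summary:", summary(TRANSCRIPT))
for who, item in action_items(TRANSCRIPT):
    print(f"Action ({who}): {item}")
```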

United States-based technology company Sembly AI was founded by Artem Koren and Gil Makleff in 2019. The startup is best known for providing solutions to help remote teams function more efficiently by simplifying their work lives and delivering sophisticated insights. 

“The Philips products seamlessly integrate with the included Sembly SaaS to provide more value to our end customers,” said Gil Makleff, Co-founder and CEO of Sembly AI. He also mentioned that they are particularly enthusiastic about the complementary nature and synergy between the two teams, which they believe will benefit both companies’ top and bottom lines.


AI can model First Impressions based on Facial Features


Researchers from the Stevens Institute of Technology, Princeton University, and the University of Chicago have developed a novel artificial intelligence (AI) algorithm that can accurately model first impressions based on images of a person’s facial features.

The research was published in the journal Proceedings of the National Academy of Sciences. It is natural for humans to form preconceptions about someone when meeting them for the first time.

The judgments we make regarding others at first glance might not be entirely accurate, but they play a vital role in shaping relationships. 

Read More: Hour One raises $20 million in Series A Funding Round

First impressions not only matter for interpersonal reasons; they also influence consequential decisions, from hiring choices to criminal sentencing.

Thousands of people were asked to score over 1,000 computer-generated photographs of faces based on qualities such as how intelligent, electable, religious, trustworthy, or outgoing the person in the picture appeared to be. 

The gathered data was then used to train a neural network to make similar snap decisions about people based merely on images of their faces. 
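
As a rough illustration of this kind of setup, the PyTorch sketch below trains a small network to regress trait ratings from face images. The trait count, architecture, and placeholder data are assumptions for the example, not the study’s actual configuration.

```python
# Sketch: regress human trait ratings from face images (illustrative).
import torch
import torch.nn as nn
from torchvision import models

num_traits = 5  # e.g. trustworthy, outgoing, ... (assumed trait count)
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, num_traits)  # rating head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Placeholder tensors; the study used over 1,000 rated synthetic face photos.
images = torch.randn(32, 3, 224, 224)
ratings = torch.rand(32, num_traits)  # ratings normalized to [0, 1]

for epoch in range(3):
    optimizer.zero_grad()
    pred = torch.sigmoid(model(images))  # squash predictions to [0, 1]
    loss = loss_fn(pred, ratings)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```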

Moreover, the algorithm can be used to edit pictures so that their subjects appear a certain way. For instance, it can modify an image so that the person looks more trustworthy or intelligent.

Jordan W. Suchow, a cognitive scientist and AI expert at the School of Business at Stevens, led the research along with other team members, including Joshua Peterson and Thomas Griffiths at Princeton and Stefan Uddenberg and Alex Todorov at Chicago Booth. 

Suchow said, “Given a photo of your face, we can use this algorithm to predict what people’s first impressions of you would be, and which stereotypes they would project onto you when they see your face.” 

He further added that the AI algorithm does not give specific feedback or explain why a particular image evokes a particular response. 


AI tool predicts Tumor Regrowth in Cancer Patients 


Researchers have developed a novel artificial intelligence (AI)-powered tool that can accurately predict tumor regrowth in cancer patients. 

After treatment, patients must be closely monitored to ensure that any cancer recurrence is addressed quickly. 

The newly developed AI tool has the potential to revolutionize cancer treatment by helping doctors monitor patients for tumor regrowth.

Read More: G3 Global continues to focus on AI and other IT solutions

Traditional methods involve doctors analyzing the original extent and spread of the cancer to judge regrowth, an approach that is not entirely accurate and is both time- and labor-intensive. That is why many clinical oncologists find this AI tool exciting.

Researchers from the Royal Marsden NHS Foundation Trust, the Institute of Cancer Research, London, and Imperial College London developed the AI model.

The team trained the model on clinical data from 657 non-small cell lung cancer (NSCLC) patients treated at five UK hospitals, including age, gender, BMI, smoking status, radiotherapy intensity, tumor characteristics, and other prognostic factors, to better predict a patient’s chance of tumor recurrence.
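
The study’s actual model is not described in detail here, but a hedged scikit-learn sketch of recurrence-risk modelling on this kind of tabular clinical data might look as follows. The feature distributions and outcomes below are synthetic, generated solely so the example runs.

```python
# Sketch: recurrence-risk classifier on synthetic clinical features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 657  # cohort size from the article; the data itself is synthetic
X = np.column_stack([
    rng.normal(68, 9, n),    # age
    rng.integers(0, 2, n),   # gender
    rng.normal(26, 4, n),    # BMI
    rng.integers(0, 3, n),   # smoking status (never/former/current)
    rng.normal(60, 8, n),    # radiotherapy intensity (dose, Gy)
    rng.normal(35, 12, n),   # tumor size (mm)
])
# Synthetic outcome loosely tied to dose and tumor size, for illustration.
risk = 0.04 * X[:, 5] - 0.03 * X[:, 4] + rng.normal(0, 1, n)
y = (risk > np.median(risk)).astype(int)  # 1 = recurrence

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```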

Dr. Richard Lee, a consultant physician in respiratory medicine and early diagnosis at the Royal Marsden NHS Foundation Trust, said, “This is an important step forward in being able to use AI to understand which patients are at highest risk of cancer recurrence and to detect this relapse sooner so that re-treatment can be more effective.” 

The AI tool can drastically help in the early detection of recurrence in high-risk individuals, ensuring that they receive treatment more quickly. An additional benefit of this AI model is that it would also reduce the workload on hospitals by minimizing unnecessary follow-ups of cancer patients. 

“Reducing the number of scans needed in this setting can be helpful, and also reduce radiation exposure, hospital visits, and make more efficient use of valuable NHS resources,” added Lee. 


MIT Team Builds New algorithm to Label Every Pixel in Computer Vision Dataset

Image Credits: Ilija Mihajlovic

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Microsoft, and Cornell University collaborated to develop “STEGO,” an algorithm that can jointly find and segment objects down to the pixel level without the need for any human labeling. 

For a long time, the machine learning domain has had a model-centric approach, with everyone trying to build the next best model. Machine learning algorithms, particularly computer vision models, require a large number of annotated pictures or videos to discover patterns, hence a labeled dataset is frequently required. When training a model for your task, you’ll probably discover that the most significant gains come from carefully curating and refining datasets through annotation rather than from worrying about the precise model architecture you’re employing.

In addition, labeling every image and object in a computer vision dataset can be a challenging task. Normally, humans draw boxes around certain items inside an image to create training data for computers to read. For example, in a clear blue sky, you can find a box drawn around a bird and labeled “bird.” Labeling the dataset is necessary as models will have a hard time recognizing objects, people, and other crucial visual features without it. However, even an hour of tagging and categorizing data can be taxing to humans. 

Data labeling takes a long time to complete, especially when done manually. Furthermore, there are other considerations to be made when labeling using bounding boxes and polygons. It’s crucial, for instance, to draw the lines slightly outside the item, but neither too far outside nor too close to the shape. The computer vision model may not learn the correct patterns needed for detection if an instance of an object type is not identified while doing data annotation. 

Machine learning models become more efficient if they are data-aware – which is made possible by training models with a properly labeled dataset. This is especially important when labeling necessitates the use of expensive expertise. For example, a computer vision model designed to detect lung cancer must be trained using lung images classified by competent radiologists. The model learns to pre-label the scans over time, and once the pre-labeling is precise enough, the task of verifying the presence of infected areas can be delegated to those who are less experienced. 

STEGO stands for Self-supervised Transformer with Energy-based Graph Optimization. It employs a method called semantic segmentation, which entails assigning a class label to each pixel in an image. These labels might include humans, vehicles, flowers, plants, buildings, roads, animals, and so on. In earlier approaches to image, object, or instance classification, all we cared about was acquiring labels for the objects in an image. This meant the target object was confined within a bounding box that also swept up other things in the surrounding pixels. With semantic segmentation, the user can precisely label every pixel that forms the object, and only those pixels: you get only bird pixels, not bird pixels plus some clouds. In other words, semantic segmentation is an upgrade over earlier techniques that could label distinct “things” like humans and cars but struggled with “stuff” like vegetation, sky, and mashed potatoes.
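
To make that concrete, the snippet below runs an off-the-shelf torchvision segmentation model and shows that the output assigns a class ID to every pixel. Note this model is supervised, unlike STEGO; it is included only to illustrate what pixel-wise semantic segmentation output looks like.

```python
# Pixel-wise semantic segmentation with a pretrained (supervised) model.
import torch
from torchvision import models
from torchvision.models.segmentation import DeepLabV3_ResNet50_Weights

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = models.segmentation.deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = torch.rand(3, 520, 520)  # stand-in for a real photo tensor in [0, 1]
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"]  # (1, classes, H, W)
labels = out.argmax(dim=1)  # (1, H, W): one class ID per pixel
print(labels.shape, labels.unique())  # e.g. class 3 is "bird" in VOC labels
```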

Only a few researchers, however, have attempted to tackle semantic segmentation without motion cues or human supervision. STEGO furthers this research by looking for comparable objects that appear throughout a dataset, finding them without human assistance. It clusters related items together to build a consistent picture of the world across all the images it learns from. Because STEGO can learn without labels, it can recognize objects in a wide range of domains, even some that humans don’t yet fully understand. The researchers tested STEGO on a variety of visual domains, including general photographs, driving images, and high-altitude aerial photography. In each domain, STEGO distinguished and segmented the relevant objects, and its segmentations aligned closely with human judgments.

The COCO-Stuff dataset, which includes images from all around the world, ranging from indoor scenes to people playing sports to trees and cows, was STEGO’s most extensive benchmark. COCO-Stuff augments all 164K images of the popular COCO 2017 dataset with pixel-wise annotations for 91 stuff classes and 80 thing classes. Scene understanding tasks such as semantic segmentation, object detection, and image captioning all benefit from these annotations.

Read More: Top 15 Popular Computer Vision Datasets

On the COCO-Stuff benchmark, MIT CSAIL’s STEGO doubled the performance of prior systems. When applied to data from driverless cars, STEGO distinguished streets, people, and street signs with far better precision and granularity than previous systems. According to the MIT researchers, earlier state-of-the-art techniques could capture a low-resolution gist of a scene in most instances but failed on fine-grained details: they mislabeled humans as blobs, misidentified motorcycles as people, and failed to spot any geese.

STEGO is built on the DINO algorithm, which learned about the world by viewing over 14 million images from the ImageNet database. STEGO fine-tunes the DINO backbone through a learning process that mimics DINO’s way of assembling environmental elements into meaning.

Consider two photographs of dogs strolling in the park. STEGO can tell, without human intervention, how each scene’s items relate to one another, even though they’re different dogs, with different owners, in different parks. The authors even probe STEGO’s reasoning to see how similar each small brown furry creature in the photographs is, along with other shared objects like grass and people. By linking objects across photos, STEGO builds a consistent representation of the world.
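
A radically simplified sketch of that intuition is to extract deep features and cluster them so that visually similar pixels share a label. To be clear, this is not STEGO’s actual training objective, which distills DINO feature correspondences with a contrastive loss; it is a toy approximation using a generic ResNet backbone.

```python
# Toy unsupervised segmentation: cluster deep features per spatial location.
import torch
from sklearn.cluster import KMeans
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
# Keep everything up to the last conv stage -> a (C, H/32, W/32) feature map.
features = torch.nn.Sequential(*list(backbone.children())[:-2])

img = torch.rand(1, 3, 224, 224)  # stand-in for a normalized photo batch
with torch.no_grad():
    fmap = features(img)[0]  # (512, 7, 7)

C, H, W = fmap.shape
pixels = fmap.permute(1, 2, 0).reshape(H * W, C).numpy()
segments = KMeans(n_clusters=4, n_init=10).fit_predict(pixels)
print(segments.reshape(H, W))  # a coarse pseudo-segmentation map
```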

Despite outperforming previous systems, STEGO has limitations. For example, it can recognize both pasta and grits as “food-stuffs” and items like bananas and chicken wings as “food-things,” but it isn’t particularly good at distinguishing between the two categories. It also struggles with nonsensical or conceptual imagery, such as a banana resting on a phone receiver. The MIT CSAIL team expects future iterations to be more flexible, allowing the algorithm to recognize objects across multiple classes.

“In making a general tool for understanding potentially complicated datasets, we hope that this type of algorithm can automate the scientific process of object discovery from images. There are a lot of different domains where human labeling would be prohibitively expensive, or humans simply don’t even know the specific structure, like in certain biological and astrophysical domains. We hope that future work enables the application to a very broad scope of datasets. Since you don’t need any human labels, we can now start to apply ML tools more broadly,” says Mark Hamilton, lead author of the study. Hamilton is also a Ph.D. student in electrical engineering and computer science at MIT, a research affiliate of MIT CSAIL, and a software engineer at Microsoft.


Hour One raises $20 million in Series A Funding Round


Artificial intelligence company Hour One has raised $20 million in a Series A funding round led by Insight Partners.

Other investors, including Galaxy Interactive, Remagine Ventures, Kindred Ventures, Semble Ventures, Cerca Partners, Digital-Horizon, and Eynat Guez, also participated in the funding round. 

Hour One is an artificial intelligence platform that transforms humans into virtual human characters that can be triggered with natural expressiveness in any language for various commercial and professional applications. 

Read More: Twitter board accepts Musk’s offer of $44 billion

According to Hour One, the freshly raised funds will be used to improve and streamline the process of becoming a virtual person on the platform, allowing it to be done from any mobile device with studio-quality video creation and complete automation. 

Lonne Jaffe, Managing Director of Insight Partners, will join Hour One’s board as part of the investment.

Jaffe said, “The team’s grand vision is to be able to embed this extraordinary capability within any software product or allow it to be invoked in real-time via API. We look forward to partnering with Oren and the team at Hour One as they ScaleUp and capture this fast-growing market opportunity.” 

He further added that Hour One is at the forefront of generative AI’s power and accuracy as it continues to advance at a breakneck speed. 

Israel-based artificial intelligence-powered video creation platform Hour One was founded by Lior Hakim and Oren Aharon in 2019. The startup specializes in offering a platform that uses photo-real AI people to digitize and improve the standard video production process, allowing for a fully scalable way of making and releasing a live-action video for professional use cases. 

To date, the company has raised $25 million from multiple investors over three funding rounds. Hour One’s partners include several industry-leading companies such as Intel, Microsoft, Cameo, NVIDIA, etc. 

“Very soon, any person will be able to have a virtual twin for professional use that can deliver content on their behalf, speak in any language, and scale their productivity in ways previously unimaginable,” said Co-founder and CEO of Hour One, Oren Aharon. He also mentioned that they are delighted that Insight Partners has joined them at this crucial moment in the company’s journey.


G3 Global continues to focus on AI and other IT solutions


G3 Global Berhad, a developer of artificial intelligence (AI) and other IT-based solutions for a variety of industries, remains committed to the goals of its Memorandum of Understanding (MoU) with SenseTime Group and China Harbour Engineering Company for the establishment of an AI Park.

The MoU was signed in 2019 and expired on 25th April 2022. The parties have nevertheless confirmed that the AI Park project is still in progress, albeit with changes to its landscape and planning.

Dirk Quinten, Managing Director of G3, said, “G3 will continue to focus on its AI and other IT-based solutions to grow the business. The parties to the MoU noted that the landscape for the development of the originally anticipated AI Park has changed and that the project may have to take on a new form.”

Read More: Twitter board accepts Musk’s offer of $44 billion

G3, SenseTime, and CHEC are all eager to collaborate on large-scale AI and IT projects that will be sustainable in the long run. 

“We have been exploring and discussing concepts that leverage each other’s strengths and expertise whilst considering Malaysia’s AI roadmap and strategic position at the same time,” added Quinten. 

The MoU was initially signed to develop Malaysia’s first AI-powered ‘technopolis’ over five years. The project attracted an investment of nearly $1 billion and aimed to promote artificial intelligence research and the development of public-service infrastructure.

The AI park, according to Dirk Quinten, would redefine intelligent city life through digital technologies that are rapidly merging and have already changed the way people live, work, and play.


Twitter board accepts Musk’s offer of $44 billion


Twitter announces that it has decided to accept Elon Musk’s first and final offer to acquire the microblogging company for $54.20 per share in cash, in a transaction valued at approximately $44 billion.

Twitter has agreed to enter into a definitive agreement with Tesla CEO and world’s richest man Elon Musk to close the deal. The acquisition is expected to take more than six months to complete.

The transaction requires shareholder and regulatory approval; once it closes, Twitter will become a private company.

Read More: Google to Ban Third-party Call Recording apps from Play Store

Musk had earlier stated that he was interested in buying Twitter because he believes the platform needs a complete transformation and can play a vital role in promoting and supporting free speech, an integral part of any democratic society.

Musk’s offer to buy 100% of Twitter’s shares represents a 38 percent premium over the stock’s closing price on 1st April. The acquisition has raised many questions and uncertainties among Twitter employees about their jobs.

Parag Agrawal, CEO of Twitter, said, “Once the deal closes, we don’t know which direction the platform will go. I believe when we have an opportunity to speak with Elon, it’s a question we should address with him.”

According to a recent CNBC report, Musk will meet with Twitter employees for a question-and-answer session at a later date. 

Moreover, former CEO of Twitter Jack Dorsey posted a series of Tweets suggesting that he is in support of Musk buying the company. According to him, this is the right step toward the future of Twitter. “Elon’s goal of creating a platform that is maximally trusted and broadly inclusive is the right one. This is also @paraga’s goal and why I chose him,” Dorsey mentioned. 

Though many Twitterati are celebrating the development, criticism has also emerged. Following the news of Musk’s takeover, actress Jameela Jamil announced her departure from the platform.

“I fear this free speech bid is going to help this hell platform reach its final form of totally lawless hate, bigotry, and misogyny,” she mentioned in a tweet. 
