Researchers have developed a novel artificial intelligence (AI)-powered tool that can accurately predict tumor regrowth in cancer patients.
After treatment, patients must be closely monitored to ensure that any cancer recurrence is addressed quickly.
The newly developed AI tool could transform cancer care by helping doctors monitor patients for tumor regrowth.
Traditionally, doctors assess regrowth by comparing follow-up scans against the original extent and spread of the cancer, a process that is neither entirely accurate nor quick, demanding considerable time and labor. Many clinical oncologists therefore see this AI tool as an exciting development.
Researchers from the Royal Marsden NHS Foundation Trust, the Institute of Cancer Research, London, and Imperial College London developed this AI model.
The team trained its model on clinical data from 657 non-small cell lung cancer (NSCLC) patients treated at five UK hospitals, including age, gender, BMI, smoking status, radiotherapy intensity, tumor characteristics, and various other prognostic factors, to better predict a patient's chance of tumor recurrence.
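To illustrate the general shape of such a system, and only that, here is a minimal risk classifier over two invented tabular features. The researchers' actual model, feature set, and training procedure are not described at this level of detail, so everything below (the features, the toy cohort, the logistic-regression choice) is an assumption for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Per-sample gradient descent on the logistic loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the logistic loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Predicted probability of recurrence for one patient."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Hypothetical toy cohort: [normalized tumor size, smoker flag] -> recurred?
X = [[0.9, 1], [0.8, 1], [0.7, 0], [0.2, 0], [0.1, 0], [0.3, 1]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(X, y)
high_risk = predict(w, b, [0.85, 1])  # large tumor, smoker
low_risk = predict(w, b, [0.15, 0])   # small tumor, non-smoker
```

The point of the sketch is only that tabular clinical features can be mapped to a recurrence probability, which can then be used to prioritize follow-up for the highest-risk patients.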
Dr. Richard Lee, a consultant physician in respiratory medicine and early diagnosis at the Royal Marsden NHS Foundation Trust, said, “This is an important step forward in being able to use AI to understand which patients are at highest risk of cancer recurrence and to detect this relapse sooner so that re-treatment can be more effective.”
The AI tool could substantially improve the early detection of recurrence in high-risk individuals, ensuring that they receive treatment more quickly. It would also reduce the workload on hospitals by minimizing unnecessary follow-up appointments for cancer patients.
“Reducing the number of scans needed in this setting can be helpful, and also reduce radiation exposure, hospital visits, and make more efficient use of valuable NHS resources,” added Lee.
Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Microsoft, and Cornell University collaborated to develop “STEGO,” an algorithm that can jointly find and segment objects down to the pixel level without the need for any human labeling.
For a long time, the machine learning field has taken a model-centric approach, with everyone trying to build the next best model. Yet machine learning algorithms, particularly computer vision models, require a large number of annotated images or videos to discover patterns, so a labeled dataset is frequently required. When training a model for your own task, you will probably find that the most significant gains come from carefully curating and refining datasets through annotation rather than from worrying about the precise model architecture you employ.
In addition, labeling every image and object in a computer vision dataset can be a challenging task. Normally, humans draw boxes around certain items inside an image to create training data for computers to read. For example, in a clear blue sky, you can find a box drawn around a bird and labeled “bird.” Labeling the dataset is necessary as models will have a hard time recognizing objects, people, and other crucial visual features without it. However, even an hour of tagging and categorizing data can be taxing to humans.
Data labeling takes a long time, especially when done manually. There are also subtleties to labeling with bounding boxes and polygons: it is important, for instance, to draw the lines slightly outside the item, but neither too far from nor too close to its shape. And if an instance of an object type is missed during annotation, the computer vision model may never learn the patterns needed to detect it.
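The guideline of drawing boxes slightly outside the object can be sketched as a small helper that pads a tight box by a fixed margin while clamping it to the image bounds. The margin value, image size, and coordinates below are illustrative assumptions, not a standard from any annotation tool.

```python
def expand_box(box, margin, img_w, img_h):
    """Pad a tight (x1, y1, x2, y2) box slightly outward, clamped to the image."""
    x1, y1, x2, y2 = box
    return (max(0, x1 - margin), max(0, y1 - margin),
            min(img_w, x2 + margin), min(img_h, y2 + margin))

tight = (40, 30, 120, 90)            # tight box around a hypothetical "bird"
padded = expand_box(tight, 4, 640, 480)
# padded -> (36, 26, 124, 94): a few pixels of slack on every side
```

A small, consistent margin like this keeps the full object inside the box without pulling in too much background, which is exactly the trade-off annotators are asked to make by hand.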
Machine learning models become more efficient if they are data-aware – which is made possible by training models with a properly labeled dataset. This is especially important when labeling necessitates the use of expensive expertise. For example, a computer vision model designed to detect lung cancer must be trained using lung images classified by competent radiologists. The model learns to pre-label the scans over time, and once the pre-labeling is precise enough, the task of verifying the presence of infected areas can be delegated to those who are less experienced.
STEGO stands for Self-supervised Transformer with Energy-based Graph Optimization. It performs semantic segmentation, which entails assigning a class label to every pixel in an image; these labels might include humans, vehicles, flowers, plants, buildings, roads, animals, and so on. In earlier image, object, or instance classification approaches, all that mattered was acquiring labels for the objects in an image. This meant the target object was often confined within a designated box that also included other things in the surrounding pixels. With semantic segmentation, every pixel in the dataset is still precisely labeled, but only the pixels that form the object: you get only bird pixels, not bird pixels plus some clouds. In other words, semantic segmentation is an upgrade over earlier techniques that could label discrete "things" like humans and cars but struggled with amorphous "stuff" like vegetation, sky, and mashed potatoes.
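The bird-pixels-versus-box distinction can be made concrete with a toy per-pixel label mask. The grid values below are invented for illustration: each cell holds the class of one pixel, and a bounding box drawn around the bird necessarily sweeps in sky pixels that a segmentation mask excludes.

```python
# 0 = sky, 1 = bird; each cell is one pixel's class label
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
]

# semantic segmentation: exactly the pixels that form the bird
bird_pixels = [(r, c) for r, row in enumerate(mask)
               for c, v in enumerate(row) if v == 1]

# a bounding box around the same bird also covers background pixels
rows = [r for r, _ in bird_pixels]
cols = [c for _, c in bird_pixels]
box_area = (max(rows) - min(rows) + 1) * (max(cols) - min(cols) + 1)
# 6 bird pixels, but the tightest enclosing box spans 9 pixels
```

Even the tightest possible box here contains 9 pixels while the object occupies only 6, which is the gap that pixel-level labeling closes.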
Only a few researchers, however, have attempted to tackle semantic segmentation without motion cues or human supervision. STEGO advances this line of research by looking for similar objects that appear throughout a dataset, finding them without human assistance. It clusters related items together to build a consistent picture of the world across all of the photos it learns from. Because STEGO can learn without labels, it can recognize objects across a wide range of domains, even some that humans do not yet fully understand. The researchers tested STEGO on a variety of visual domains, including general photographs, driving images, and high-altitude aerial photography. The results show that STEGO was able to distinguish and segment relevant objects in each domain, and that these objects aligned closely with human evaluations.
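STEGO's actual grouping relies on a learned, energy-based graph optimization over transformer features; a plain k-means over toy 2-D "patch embeddings," as sketched below, only illustrates the underlying idea of clustering similar features into object groups without any labels. The feature vectors and cluster names are invented for the example.

```python
def kmeans(points, k, iters=20):
    """Minimal k-means with deterministic init (first k points as centroids)."""
    cents = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, cents[c])),
            )
        # update step: each centroid moves to the mean of its members
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                cents[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return assign

# hypothetical patch embeddings: two tight clusters ("dog-like" vs "grass-like")
feats = [(0.1, 0.1), (0.2, 0.1), (0.15, 0.2),    # cluster A
         (0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]   # cluster B
labels = kmeans(feats, 2)
```

Nothing here names the clusters "dog" or "grass"; the grouping emerges purely from feature similarity, which is the label-free principle STEGO exploits at far larger scale.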
The COCO-Stuff dataset, which includes images from all around the world, ranging from indoor scenes to people playing sports to trees and cows, was STEGO's most extensive benchmark. COCO-Stuff augments all 164K images of the popular COCO 2017 dataset with pixel-wise annotations for 91 stuff classes and 80 thing classes. Scene-understanding tasks such as semantic segmentation, object detection, and image captioning can all benefit from these annotations.
MIT CSAIL's STEGO doubled the performance of prior systems on the COCO-Stuff benchmark. When applied to data from driverless cars, STEGO distinguished streets, people, and street signs with far better precision and granularity than previous systems. According to the MIT researchers, previous state-of-the-art technologies could capture a low-resolution essence of a scene in most instances but failed at fine-grained details, mislabeling humans as blobs, misidentifying motorcycles as people, and failing to spot any geese.
STEGO is built on the DINO algorithm, which learns about the world by viewing over 14 million photos from the ImageNet database. STEGO fine-tunes the DINO backbone through a learning process that mimics its approach of piecing together environmental elements to build meaning.
Consider, for example, two photographs of dogs walking in the park. STEGO can tell (without human intervention) how the items in each scene relate to one another, even when they are different dogs, with different owners, in different parks. The authors even probe STEGO's reasoning to see how similar it considers each small brown fuzzy creature in the photographs, along with other shared objects like grass and people. By linking objects across photos, STEGO builds a consistent representation of the world.
Despite outperforming previous systems, STEGO has limitations. For example, it can recognize "food-stuffs" like pasta and grits and "food-things" like bananas and chicken wings, but it is not particularly good at distinguishing between them. It also struggles with nonsensical or conceptual imagery, such as a banana resting on a phone receiver. The MIT CSAIL team hopes that future iterations will add flexibility, allowing the algorithm to recognize objects under multiple classes.
“In making a general tool for understanding potentially complicated datasets, we hope that this type of algorithm can automate the scientific process of object discovery from images. There are a lot of different domains where human labeling would be prohibitively expensive, or humans simply don’t even know the specific structure, like in certain biological and astrophysical domains. We hope that future work enables the application to a very broad scope of datasets. Since you don’t need any human labels, we can now start to apply ML tools more broadly,” says Mark Hamilton, lead author of the study. Hamilton is also a Ph.D. student in electrical engineering and computer science at MIT, a research affiliate of MIT CSAIL, and a software engineer at Microsoft.
Artificial intelligence company Hour One has raised $20 million in a Series A funding round led by Insight Partners.
Other investors, including Galaxy Interactive, Remagine Ventures, Kindred Ventures, Semble Ventures, Cerca Partners, Digital-Horizon, and Eynat Guez, also participated in the funding round.
Hour One is an artificial intelligence platform that transforms real people into virtual human characters, which can be activated with natural expressiveness in any language for various commercial and professional applications.
According to Hour One, the freshly raised funds will be used to improve and streamline the process of becoming a virtual person on the platform, allowing it to be done from any mobile device with studio-quality video creation and complete automation.
Lonne Jaffe, Managing Director of Insight Partners, will now join the board of Hour One as a part of the investment.
Jaffe said, "The team's grand vision is to be able to embed this extraordinary capability within any software product or allow it to be invoked in real-time via API. We look forward to partnering with Oren and the team at Hour One as they scale up and capture this fast-growing market opportunity."
He further added that Hour One is at the forefront of generative AI’s power and accuracy as it continues to advance at a breakneck speed.
Israel-based artificial intelligence-powered video creation platform Hour One was founded by Lior Hakim and Oren Aharon in 2019. The startup specializes in offering a platform that uses photo-real AI people to digitize and improve the standard video production process, allowing for a fully scalable way of making and releasing a live-action video for professional use cases.
To date, the company has raised $25 million from multiple investors over three funding rounds. Hour One’s partners include several industry-leading companies such as Intel, Microsoft, Cameo, NVIDIA, etc.
"Very soon, any person will be able to have a virtual twin for professional use that can deliver content on their behalf, speak in any language, and scale their productivity in ways previously unimaginable," said Co-founder and CEO of Hour One, Oren Aharon. He also mentioned that they are delighted that Insight Partners has joined them at this crucial moment in the company's journey.
G3 Global Berhad, a developer of artificial intelligence (AI) and other IT-based solutions for a variety of industries, remains committed to the goal of its Memorandum of Understanding (MoU) with SenseTime Group and China Harbour Engineering Company (CHEC) for the establishment of an AI Park.
The Memorandum of Understanding was signed back in 2019 and expired on 25th April 2022. This development, however, confirms that the AI Park project is still in progress, albeit with some changes in landscape and planning.
Dirk Quinten, Managing Director of G3, said, "G3 will continue to focus on its AI and other IT-based solutions to grow the business. The parties to the MoU noted that the landscape for the development of the originally anticipated AI Park has changed and that the project may have to take on a new form."
G3, SenseTime, and CHEC are all eager to collaborate on large-scale AI and IT projects that will be sustainable in the long run.
“We have been exploring and discussing concepts that leverage each other’s strengths and expertise whilst considering Malaysia’s AI roadmap and strategic position at the same time,” added Quinten.
The MoU was initially signed to develop Malaysia's first AI-powered "technopolis" over five years. The project attracted an investment of nearly $1 billion and aimed to promote research in artificial intelligence and the development of public-service infrastructure.
The AI park, according to Dirk Quinten, would redefine intelligent city life through digital technologies that are rapidly merging and have already changed the way people live, work, and play.
Twitter has entered into a definitive agreement to be acquired by Tesla CEO and world's richest man Elon Musk. The deal, however, is expected to take over six months to close.
The acquisition, which requires shareholder and regulatory approval, will take Twitter private.
Musk earlier stated that he was interested in buying Twitter as he thinks that the platform is in need of a complete transformation, and he believes that Twitter can play a vital role in promoting and supporting free speech, which is an integral part of any democratic society.
Musk's offer to buy 100 percent of Twitter's shares represents a 38 percent premium over the stock's closing price on 1st April. The acquisition has raised many questions and uncertainties among Twitter employees regarding their jobs.
Parag Agrawal, CEO of Twitter, said, "Once the deal closes, we don't know which direction the platform will go. I believe when we have an opportunity to speak with Elon, it's a question we should address with him."
According to a recent CNBC report, Musk will meet with Twitter employees for a question-and-answer session at a later date.
Moreover, former CEO of Twitter Jack Dorsey posted a series of Tweets suggesting that he is in support of Musk buying the company. According to him, this is the right step toward the future of Twitter. “Elon’s goal of creating a platform that is maximally trusted and broadly inclusive is the right one. This is also @paraga’s goal and why I chose him,” Dorsey mentioned.
Dorsey's full tweet read: "Elon's goal of creating a platform that is 'maximally trusted and broadly inclusive' is the right one. This is also @paraga's goal, and why I chose him. Thank you both for getting the company out of an impossible situation. This is the right path…I believe it with all my heart."
Though many Twitter users are celebrating the development, criticism has also emerged. Following the news of Musk buying Twitter, actress Jameela Jamil announced her departure from the platform.
“I fear this free speech bid is going to help this hell platform reach its final form of totally lawless hate, bigotry, and misogyny,” she mentioned in a tweet.
Google has announced that it plans to ban all third-party call recording apps from its Play Store. The move follows Google's latest Play Store policy, which takes effect on 11th May 2022.
Under the new policy, apps may no longer use Google's accessibility APIs for call recording; developers submitting apps to the Play Store will be barred from using the Android Accessibility API to record calls.
This move is a step toward better privacy for Android users, barring the use of Android's accessibility APIs for non-accessibility purposes.
The changes to the Google Play Policy, first reported on Reddit by user /u/NLL-APPS, mean that developers' access to the Accessibility API will be restricted even further. In a recent developer webinar, Google also clarified doubts regarding the policy change.
"Apps not eligible for IsAccessibilityTool may not use the flag and must meet prominent disclosure and consent requirements. The Accessibility API is not designed and cannot be requested for remote call audio recording," the policy states.
Over the last few years, Google has taken several steps against call recording software on Android, notably restricting microphone access for call recording with the release of Android 10.
Nowadays, many smartphone brands already provide built-in call recording options in their smartphones, allowing users to seamlessly record calls without downloading suspicious third-party applications.
However, anyone who still wants to use third-party call recording apps on their device can do so by downloading them directly from the developer's website or from another Android app marketplace. This is possible because Android allows app sideloading, unlike iOS, which restricts users from installing third-party applications.
Autonomous taxi developer Pony.ai has announced that it has received a taxi license for its robotaxis in China.
This new development will now enable Pony.ai to charge fees for its autonomous taxi services in some parts of China where it has been testing its robotaxi.
Officials say that Pony.ai has been granted permission to operate 100 self-driving vehicles in the Nansha neighborhood of Guangzhou.
This makes Pony.ai the first company in the country to receive such a license, allowing it to charge for its autonomous taxi services. The license adds to the clearance Pony.ai received last year to launch a paid robotaxi service in Beijing.
The recent approval is a significant step up from last year's, under which Pony.ai could offer its paid services only in a small, restricted area.
According to the company's latest plan, it will begin charging fees across Nansha's 800 square kilometers. Interested passengers can use Pony.ai's official app to book and pay for rides in the company's robotaxis.
Initially, Pony.ai will deploy onboard safety drivers in its vehicles to ensure the complete safety of passengers, but it plans to make the cars fully driverless over the coming months.
“We will expand the scale of our services, provide quality travel experiences to the public in Guangzhou, create an industry benchmark for robotaxi services and continue to lead the commercialization of robotaxis and robotrucks,” said the Co-founder and CEO of Pony.ai, James Peng.
However, things did not go as per the company’s expectations in the United States. The National Highway Traffic Safety Administration (NHTSA) of the US said earlier this month that it would review the actions of robotaxi startup Pony.ai regarding crash reporting norms set up by the government.
Pony.ai recently decided to issue a recall for some autonomous driving system software versions in response to an October incident in California. Authorities claim that this is the first-ever autonomous driving system recall.
ECommerce giant Flipkart has acquired ANS Commerce, a provider of full-stack software-as-a-service (SaaS) eCommerce solutions.
This acquisition will allow Flipkart to further strengthen its capabilities and offerings as a part of its effort to meet the needs of India’s rapidly developing and evolving digital retail industry.
Direct-to-consumer (D2C) is a booming sector of the eCommerce industry, and Flipkart aims to establish its position in it using ANS Commerce's expertise in the Indian market, as marketers increasingly seek direct contact with their customers.
Flipkart said in a statement that ANS Commerce would continue to operate as an independent eCommerce solutions platform under its current leadership team. Neither company disclosed any information regarding the valuation of this acquisition deal. However, Flipkart said that the deal is expected to close in the second half of 2022.
Ravi Iyer, Senior Vice President and Head of Corporate Development at Flipkart, said, “Our efforts focus on ensuring that businesses, including MSMEs and smaller brands, can leverage the opportunities that e-commerce offers to provide greater value and deeper experiences for Indian customers who are rapidly adopting digital commerce.”
He further added that they first associated with ANS Commerce last year when the startup participated in Flipkart’s tech startup accelerator program, Flipkart Leap, and they are delighted to welcome them to the Flipkart Group.
Gurgaon-based D2C SaaS platform provider ANS Commerce was founded by Amit Monga, Nakul Singh, Sushant Puri, and Vibhor Sahare in 2017. The startup is best known for its full-stack eCommerce software, which provides end-to-end solutions including platform support, performance marketing, marketplace management, warehousing, and fulfillment.
ANS Commerce supports the shift to digital commerce for over 100 clients across the enterprise, mid-market, and direct-to-consumer companies in various categories.
“Over the past few years, we’ve seen a dramatic change in consumer behavior, and as a result, brands have also pivoted in their approach on how to engage with consumers,” said Co-founders of ANS Commerce in a joint statement.
The statement also mentioned that they are excited to be part of the Flipkart Group as they continue to assist brands in leveraging the power of technology to reach customers and create more value.
The National Aeronautics and Space Administration (NASA) announces that it has selected Elon Musk’s Aerospace company SpaceX and technology giant Amazon to develop new commercial space communication services.
Apart from SpaceX and Amazon, NASA has also picked Inmarsat, SES, Telesat, and Viasat to develop the technology as it plans to decommission its near-Earth satellite fleet over the coming years.
NASA mentioned that its Communications Services Project (CSP) funded agreements’ combined value is $278.5 million. Each organization has offered a technical method to reduce costs, expand flexibility, and improve performance for a wide range of tasks.
According to NASA, it has been investigating the viability of using commercial SATCOM networks for near-Earth operations for more than a year, and with this new development, NASA would be able to devote more time and resources to deep space exploration and science missions.
Eli Naffah, CSP Project Manager at NASA’s Glenn Research Center, said, “We are following the agency’s proven approach developed through commercial cargo and commercial crew services. By using funded Space Act Agreements, we’re able to stimulate industry to demonstrate end-to-end capability leading to operational service.”
Naffah further added that the flight demonstrations would establish several capabilities and provide operational ideas, performance validation, and acquisition models needed to plan the future acquisition of commercial services for each class of NASA missions.
According to the plan, the selected companies will complete the technology development and in-space demonstrations to showcase their proposed solution to provide mission-oriented operations by 2025.
“NASA intends to seek multiple long-term contracts to acquire services for near-Earth operations by 2030 while phasing out NASA owned and operated systems,” mentioned NASA in a press release.
Global technology and eCommerce giant Amazon acquires fast-growing online reselling platform GlowRoad. Neither company provided any information regarding the valuation of this recently signed acquisition deal.
Experts suggest that this is a step by Amazon to further strengthen its offerings and capabilities to compete against similar emerging platforms like Meesho and Flipkart’s recently launched Shopsy.
According to an Amazon spokesperson, GlowRoad’s highly popular service will be enhanced by Amazon’s technology, infrastructure, and digital payment capabilities, resulting in increased efficiency and cost savings for everyone.
India is one of the largest eCommerce markets globally and has witnessed tremendous growth over the last few years. The reselling sector of the eCommerce industry has also grown exponentially, thanks to platforms like Meesho.
Therefore, to establish a presence in this domain, Amazon will use GlowRoad's platform and improve it with its own expertise to provide an unmatched shopping experience to its customers.
An Amazon spokesperson said, “Amazon continues to explore new ways to digitize India and delight customers, micro-entrepreneurs, and sellers, and bringing GlowRoad onboard is a key step in this direction.”
The spokesperson further added that Amazon and GlowRoad would work together to help innumerable creators, homemakers, students, and small business owners around the country become more entrepreneurial.
Bengaluru-based reselling platform GlowRoad was founded by Kunal Sinha, Nilesh Padariya, Nitesh Pant, Shekhar Sahu, and Sonal Verma in 2017. The company provides a smartphone app that allows online retailers and small businesses to make money by selling selected items and services through their online and offline social networks.
According to GlowRoad, it has a reseller network of 6 million+ resellers selling across 1000+ cities. To date, GlowRoad has raised more than $31 million from multiple investors like Accel, Vertex Ventures, Korea Investment Partners, CDH Investments, and others over four funding rounds.
“GlowRoad lets users resell products (from 100+ categories) directly from manufacturers and wholesalers with the power of social networks. We truly believe in the potential of Digital India, with our majority of users being homemakers from Tier II, III, and IV cities,” mentioned GlowRoad in a statement.