
Google Health and iCAD to Develop AI in Breast Cancer Screening


Google will partner with iCAD, a global mammography AI vendor, to integrate Google Health’s artificial intelligence into iCAD’s portfolio of breast screening solutions. The partnership aims to develop better tools for detecting and assessing short-term cancer risk. 

iCAD already offers an advanced AI suite of breast screening technologies for detection, density assessment, and risk evaluation. The medical imaging vendor plans to incorporate Google’s AI into ProFound AI Risk, the world’s first clinical decision-support tool to provide such risk estimation, and currently the only one of its kind commercially available to clinicians. 

Stacey Stevens, President and CEO of iCAD, said, “With the addition of Google Health’s technology, we are positioned to improve the performance of our algorithms for both 2D and 3D mammography, which will further strengthen our market leadership position and drive worldwide adoption.”

Read More: Sony Launches Wearable Motion Trackers to Bring Metaverse to Smartphones

As per the press release, this is the first time Google Health has entered a commercial partnership to bring breast imaging AI into clinics. Breast cancer is one of the most common cancers, and early detection can save lives, so Google is actively working to improve the accuracy and availability of screening.


Google is also working with Imperial College London and three NHS trusts to test different screening technologies, as screening systems vary globally.


LG files most metaverse patent applications since 2016, followed by Samsung, Meta and Huawei


According to a survey by Nikkei Asia, LG Electronics has filed the most metaverse patent applications since 2016, up from 11th place in the 2010–2015 period. Samsung Electronics held second place, and Huawei ranked fourth with various patents related to image and display processing. 

According to the report, Meta, Microsoft, Apple, and Intel were among the six American companies in the top 10, with Meta third and Microsoft fifth; Sony, in sixth place, was the only Japanese company on the list.

South Korean and Chinese companies such as LG, Samsung, and Huawei are climbing the rankings with large numbers of metaverse patents as the electronics industry looks beyond smartphones. 

Read More: University Of Southern California Launches Center On AI Research For Health

South Korean companies’ strengths lie in components such as semiconductors and display technology rather than in headsets and other finished products.

The top 20 companies filed a total of 7,760 patents, with the United States accounting for 57%, South Korea at 19%, and China at 12%. Japanese firms accounted for 8%, said the report. 

The global metaverse market is expected to reach $996 billion by 2030, growing at a CAGR of 39.8%. In 2021, the market was valued at $22.79 billion, according to GlobalData, a leading data and analytics company. 


Sony Launches Wearable Motion Trackers to Bring Metaverse to Smartphones


Sony has launched wearable motion trackers called Mocopi. The Mocopi system brings the metaverse to smartphones: six small tracker pucks strapped to the user’s head, hip, wrists, and ankles capture full-body movement. 

These wearable trackers contain sensors that capture body movements and animate avatars inside metaverse applications on both Android phones and iPhones. They are intended as a cost-effective accessory through which ordinary users can experience the metaverse. 

Read More: Bionaut Labs Develop Robots to Deliver Drugs Directly into the Brain

Sony plans to release the trackers in Japan in January 2023 at approximately 49,500 yen (around INR 29,222). The company is targeting YouTubers and online content creators as the primary audience. 

Sony is one of many companies exploring the metaverse. Others, like Meta, are also developing immersive virtual reality (VR) devices such as the Meta Quest Pro. Sony, however, is focusing on gamers with its PlayStation VR devices. Alongside the Mocopi trackers, the company will also release its next-generation PlayStation VR2 headset next year. 


University of Southern California Launches Center on AI Research for Health

Researchers at the University of Southern California (USC) have announced the launch of an Artificial Intelligence Research Center for Health (AI4Health) to enable the use of AI and big data to improve healthcare.

Michael Pazzani, Ph.D., director of AI4Health and principal scientist at USC’s Information Sciences Institute (ISI), noted that ISI has already been using AI for health research. One of AI4Health’s objectives is to make it easier for medical researchers to find people with AI expertise. 

Read more: Notate: A New Jupyter notebook extension that turns sketches and handwriting into code

AI4Health will hold many events in partnership with USC’s Keck School of Medicine to strengthen the connection between AI and medicine. AI4Health research also focuses on increasing the amount of health data available for analysis. That data will support multiple research areas, such as data management, knowledge discovery, precision health, machine learning for health, data analytics, and more.

AI4Health’s data analytics and knowledge discovery work will help researchers analyze electronic health records (EHRs), medical images, and other health data to detect patterns that may lead to breakthroughs. One of AI4Health’s projects is to leverage cell phone mobility and health data to link food environments to diet-related diseases.


Notate: A New Jupyter notebook extension that turns sketches and handwriting into code

Researchers from Cornell University have recently developed Notate, a Jupyter notebook extension that lets users include handwriting and sketches in notebooks and convert them into code. Jupyter notebook extensions are simple add-ons that extend the functionality of the Jupyter environment. Notate was presented at the 35th Annual ACM Symposium on User Interface Software and Technology.

With the pen-based extension, users can draw circuit diagrams on canvases, which can then be included in the code. Using a deep learning model, Notate bridges handwritten and textual programming contexts: notation in a handwritten chart can reference textual code and vice versa.

Ian Arawjo, a doctoral student in information science at Cornell University, observed that modern development tools rarely support images and graphical interfaces inside code. Arawjo therefore developed Notate, an artificial intelligence-enabled pen-based coding tool, together with Anthony DeArmas, Michael Roberts, Tapan Parikh, and Shrutarshi Basu. Arawjo stated that Notate is an excellent data science tool for sketching plots and charts that interact with textual code.

Notate challenges conventional coding, which typically relies on typing. Tools like Notate are significant because they open new approaches to programming and show how different tools and practices can change how programming is done.

Read more: London researchers develop a new visual speech recognition model


NLSIU announces free certificate course in AI and Human Rights


The National Law School of India University (NLSIU) is offering a free online/hybrid certificate course in AI and Human Rights to foster a greater understanding of the challenges human rights face in India in the age of AI. The course will be conducted in collaboration with the German Consulate, Bangalore, between December 2022 and January 2023.

The course will begin with a two-week introductory module of online classes starting December 3. The second module will cover the evolution of conflicts between human rights and AI in India and prospective approaches to resolving them. In the third module, participants will attend an in-person workshop on the NLSIU campus in Bengaluru on January 7 and 8, 2023.

During the course, participants will be guided to understand the conflicts and interactions between AI and human rights in India, with a focus on tools to resolve tensions between the two. Participants will be selected from applicants who register at aiandhr.nls.ac.in.

Read More: Sony Acquires Beyond Sports To Foray Into Sports Metaverse

Since increasing AI adoption is challenging global human rights values in novel ways, the course will examine this impact within India’s economic, legal, social, and cultural contexts.

The course will bring together a diverse set of professionals from across the tech sector, from both large corporations and startups, as well as healthcare professionals, doctoral researchers, and academics working on AI from different perspectives. 


AI to be used to ensure foolproof security at Mahakumbh 2025


Artificial intelligence (AI) will be used to ensure foolproof security at Mahakumbh 2025, which will be held in Uttar Pradesh’s Prayagraj district. Drones and closed-circuit television (CCTV) cameras will also be used.

Prayagraj police officials have sent a proposal of ₹400 crore for equipment and other security arrangements for the mega religious fair, said officials familiar with the matter. The district police are ensuring tight security for pilgrims, as well as for the VIPs and foreign tourists expected to arrive in large numbers at Mahakumbh 2025, they said.

Shailesh Kumar Pandey, SSP, Prayagraj, said they would upgrade the Integrated Command and Control Centre (ICCC) and install a large number of hi-tech CCTV cameras. 

Read More: Sony Acquires Beyond Sports To Foray Into Sports Metaverse

“The high-quality cameras will enable security personnel to identify and trace suspects. Moreover, AI technology will be used to keep an eye on every part of Mahakumbh Mela,” he added.

“Policemen on duty will also be strictly monitored. Every cop will be provided with a radio frequency identification (RFID) card through which the control room will know their exact location and whether they are on their duty points or not,” he said.


Sony acquires Beyond Sports to foray into sports metaverse


Sony recently closed its acquisition of Beyond Sports, a 3D animation and imaging company whose technology transforms data from a live sports match into a representation in the metaverse. 

Combined with the technology of Hawk-Eye Innovations (another company owned by the conglomerate), the purchase will allow Sony to produce real-time content for basketball, baseball, tennis, and football matches. 

Hawk-Eye Innovations, acquired in 2011, produces technology that can pinpoint the ball’s position at any moment and has been used by the National Hockey League (NHL) and the National Football League (NFL).

Read More: London Researchers Develop A New Visual Speech Recognition Model

Combining the technology from these two companies might allow Sony to create a digital representation of a court or a field featuring realistic ball and player movement. 

Sony has already ventured into NFT (non-fungible token) technology and the metaverse, filing a set of patents to use NFTs to track the history and ownership of in-game assets. Sony CEO Kenichiro Yoshida has also said that Sony’s priority is to build a metaverse around entertainment using all of the brand’s tools.


Jio to launch an Indian short video app ‘Platform’


Jio Platforms Limited, Rolling Stone India, and Creativeland Asia have partnered to launch ‘Platform,’ an Indian short video app for entertainers, as a rival to Meta’s Reels feature. The app is expected to work much the same way as Facebook’s and Instagram’s Reels. 

The app is aimed at entertainers, with an ecosystem built for steady monetization and organic growth. Platform will be for musicians, dancers, comedians, fashion designers, creators, singers, actors, and everyone who wants to be a cultural influencer.

The first 100 founding members will join by invitation only and will receive a golden-tick verification on their profiles. These founding members will then invite new artists, who will sign up through referral programs. According to sources, the app is expected to go live in January next year, and beta testing has already begun.

Read More: London Researchers Develop A New Visual Speech Recognition Model

The upcoming Platform app is expected to let creators grow organically through reputation and rankings rather than the paid promotion algorithms Meta uses, eventually helping creators monetize their talent and earn a regular revenue stream.

Content creators’ rankings will be indicated by red, blue, and silver verification ticks. The app will also include a ‘Book Now’ feature for all creators, letting audiences and organizers contact creators and book them for events, collaborations, and more.


London researchers develop a new visual speech recognition model

Researchers from Imperial College London have developed a new deep learning model for visual speech recognition in multiple languages. The new model outperforms some previously proposed models trained on larger datasets.

Pingchuan Ma, a Ph.D. graduate from Imperial College London, and his colleagues noticed that most visual speech recognition projects deal only with English. Their objective was to recognize speech in languages other than English from speakers’ lip movements and to compare the results with models trained to recognize English speech.

The new model is similar to earlier speech recognition models, but its hyperparameters are optimized, its training data is augmented, and additional loss functions are used. Through this careful design, Ma and his colleagues achieved state-of-the-art performance in visual speech recognition.

Read more: UK to fine social media companies that fail to remove harmful content

Some earlier deep learning models have achieved strong performance on visual speech recognition tasks, but they were trained primarily to recognize English speech, since most existing training datasets contain only English. That limited their use to English-speaking contexts. The new model, by contrast, performs well across many languages.

In the future, the new model may inspire other researchers to develop alternative visual speech recognition approaches that effectively recognize speech from speakers’ lip movements.
