Friday, November 21, 2025

YogiFi launches 2nd Gen AI Yoga Mats: YogiFi Gen 2 and YogiFi Gen 2 Pro


YogiFi, an intelligent yoga mat producer, has launched the second generation of its AI-enabled mats. YogiFi Gen 2 and YogiFi Gen 2 Pro are upgraded versions of its first-generation mats, designed to offer a more integrated experience.

The smart mats offer enhanced posture tracking, real-time suggestions, stepwise instructions, and feedback. In the second-generation mats, YogiFi has also introduced research-backed therapy programs for conditions such as hypertension and diabetes. The programs additionally aid in relieving stress, moderating blood pressure, managing metabolism, attaining emotional stability, and supporting many other physiological and psychological functions.

The company provides a dedicated mobile application for users to track their progress. The second-generation mats come with a redesigned user interface for a better experience. The app tracks daily workouts and recommends workout routines based on the user's goals.

Read More: Ai-Da, Ultra-realistic Humanoid Robot Artist Portraits at Glastonbury Festival

Subscription: The mats also offer interactive sessions, consultation, and live support from nutrition experts, instructors, and therapists. However, the live sessions require a subscription to the therapy programs. Pricing differs between the two mats: the Pro version costs ₹24,999 and comes with a few more advanced features than the introductory YogiFi Gen 2, which costs ₹14,999. The Pro version ships with a dedicated Android tablet, a charging cable, and a tablet stand.

A subscription also gives the user one-on-one sessions with a coach, access to progress reports, and monthly live guided sessions. People can book the mat and a corresponding subscription directly from the YogiFi website.


Microsoft and Museum of Art & Photography Bengaluru develop new AI platform, INTERWOVEN


Microsoft and Bengaluru’s Museum of Art & Photography announced the launch of a new artificial intelligence (AI)-powered platform for linking artworks and cultures throughout the world. 

Named INTERWOVEN, the platform connects artworks and cultures globally. It has been developed as part of Microsoft’s AI for Cultural Heritage initiative and is based on MAP’s collection of South Asian textiles. 

INTERWOVEN brings together collections from prominent institutions worldwide, along with MAP’s own, to uncover links between artworks across cultures, mediums, and historical periods. 

Read More: Ai-Da, Ultra-realistic Humanoid Robot Artist Portraits at Glastonbury Festival

Kamini Sawhney, Director, MAP, said, “Covid 19 and the lockdown really forced us to reflect on how people interacted with the online space. Right from week one, we began looking at how we could engage with our online communities.” 

Sawhney further added that MAP aims to be more than simply a collection of objects: a venue for ideas and dialogues sparked by its collections. INTERWOVEN fits in perfectly with this idea. 

Users can see predefined journeys generated primarily by MAP’s educational and research arm, the MAP Academy. According to MAP, it is devoted to making the history of South Asian art more accessible and inclusive, both inside the Indian subcontinent and globally. 

Brad Smith, President and Vice-Chair, Microsoft, said, “Using technology to enhance human ingenuity, celebrate human creativity, and enable human connection is at the heart of Microsoft’s work.” 

He also mentioned that INTERWOVEN reminds them both of the vibrancy of other cultures and how these traditions are shared in a conversation that spans time and space.


Ai-Da, Ultra-realistic Humanoid Robot Artist Portraits at Glastonbury Festival


Ai-Da, a humanoid robot artist, used a camera and computer memory to draw portraits of artists like Billie Eilish, Diana Ross, Kendrick Lamar, and Sir Paul McCartney at the Glastonbury Festival. The ultra-realistic humanoid robot Ai-Da scans the images and uses its robotic arm to make a layered multi-dimensional portrait. 

Ai-Da, named after Ada Lovelace, was created by Aidan Meller in association with Engineered Arts, a robotics company. Ai-Da uses AI technology to make drawings and sculptures, and the robot honed its drawing skills at the University of Oxford.

Ai-Da is all set to attend the festival and will be at the Shangri La field, where she will demonstrate her skills. Festival-goers can see Ai-Da Robot in action, with two painting sessions scheduled for each day of the festival.

Read More: Fujitsu and Intel display Real-Life AI- Solutions at AI Summit 2022

The robot said, “Well, it’s a kind of fun thing for me to do. I’ll be at ShangriLa, and I’m doing some portraits – I hope that my art encourages discussion about art, music, and of course our futures. See you there!”

Prints of Ai-Da Robot’s portraits will be accessible for festival attendees to purchase in the ShangrilART gallery at Glastonbury. They will be available for purchase on the ShangrilART website following the festival.

Aidan Meller said, “After making history with her self portraits, Ai-Da is continually developing her skills. It’s an exciting time as her painting ability is progressing, and there’s a lot of innovation.”  

The festival will host Ai-Da at Worthy Farm in Somerset from June 22-26.


Deep learning algorithm interprets dental X-rays to detect periodontal disease


A deep learning algorithm, a form of AI, has successfully detected periodontal disease from 2D bitewing radiographs. The finding was reported in research presented at EuroPerio10, a periodontology and implant dentistry congress organized by the European Federation of Periodontology (EFP).

According to Dr. Burak Yavuz of Eskisehir Osmangazi University, Turkey, the study shows the potential of artificial intelligence (AI) to automatically identify periodontal pathologies that might otherwise be missed. He added that AI could reduce radiation exposure by avoiding repeat assessments, enable earlier treatment, and prevent periodontal disease’s silent progression.

The study utilized 434 bitewing radiographs from patients who had periodontitis. A convolutional neural network with a U-Net architecture was used to segment the images for processing.

Read More: Dental care Startup Smiles.ai raises $23 million in Series A Funding Round

Experienced specialist physicians also evaluated the images with the segmentation method. Assessments included:

  • Vertical bone loss
  • Horizontal bone loss
  • Total alveolar bone loss around the upper and lower teeth
  • Calculus around maxillary and mandibular teeth
  • Furcation defects

The neural network identified 2,215 cases of horizontal bone loss, 859 cases of total alveolar bone loss, 508 cases of dental calculus, 340 cases of vertical bone loss, and 108 furcation defects. 

The algorithm’s success at identifying defects was compared against the physicians’ assessments and reported as sensitivity, precision, and F1 score. The F1 score is the harmonic mean of precision and sensitivity. For all three metrics, 0 is the worst score and 1 is the best. 
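For illustration, all three metrics can be derived from true-positive, false-positive, and false-negative counts. The sketch below (plain Python, with hypothetical counts, not the study's data) shows the standard definitions:

```python
def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    """Sensitivity (recall), precision, and F1 score from detection counts."""
    sensitivity = tp / (tp + fn)   # share of real defects the model found
    precision = tp / (tp + fp)     # share of flagged defects that were real
    # F1 is the harmonic mean of precision and sensitivity.
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": round(sensitivity, 2),
            "precision": round(precision, 2),
            "f1": round(f1, 2)}

# Hypothetical counts for illustration only:
print(detection_metrics(tp=90, fp=10, fn=0))
```

With these counts the model misses nothing (sensitivity 1.0) but raises some false alarms, which pulls precision and F1 below 1, mirroring the pattern reported for several defect types in the study.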

The sensitivity, F1 score, and precision for total alveolar bone loss were 1, 0.96, and 0.94, respectively. The corresponding scores for horizontal bone loss were 1, 0.95, and 0.92. The AI could not detect vertical bone loss. 

The sensitivity, F1 score, and precision results for dental calculus were 1.0, 0.82, and 0.7, respectively. For furcation defects, the values were 0.62, 0.66, and 0.71, respectively.

According to Dr. Yavuz, the study illustrates that artificial intelligence (AI) can pick up many types of defects from 2D images, which can aid in periodontitis diagnosis. More comprehensive studies on larger datasets are required to maximize the success of the models and extend their usage to 3D radiographs. 


Meta and Microsoft to use AI in their Data Centers


Microsoft and Meta are looking to develop AI-based solutions to make their data centers safer and more secure. These centers drive the apps, services, and websites that people rely on every day, yet they are a risky work environment for those who build and maintain the systems. The companies plan to use AI to anticipate how data centers can keep operating safely in extreme conditions. 

Microsoft is working on an AI system that generates alerts for operations and construction teams as a mitigation strategy. The company is also working on a few complementary systems to prevent workplace accidents. A Microsoft spokesperson said, “These initiatives are both in early testing phases and are expected to begin expanding into our production environments later this year.”

Meta is also investigating AI technologies that would let its data centers operate in extreme conditions without endangering employee health. The company says it is developing models to simulate conditions and optimize power consumption, airflow, and cooling across its systems. 

Read More: Google AI Launches LIMoE, a Large-Scale Architecture for Pathways

“We have significant operational data from our data centers, in some areas at high frequency with built-in sensors in servers, racks, and in our data halls. Each server and network device, taking on different workloads, will consume different amounts of power, generate different amounts of heat, and make different amounts of airflow in the data centers,” said a Meta spokesperson.

The exploration of AI solutions has a further motive: preventing expensive power outages. Both companies operate numerous data centers around the world; Meta manages around 20, whereas Microsoft manages more than 200.

The companies also claim to use AI for energy-tuning purposes, following Google’s report that its DeepMind AI systems delivered 30% energy savings in its data centers. 


Eros Investments and Wipro to evolve AI & ML based content localization solution


Eros Investments has announced an alliance agreement with Wipro to scale and evolve its Machine Learning (ML)- and Artificial Intelligence (AI)-based content localization solution. 

Eros Investments is the entertainment, media, and technology portfolio behind ventures such as Eros Now, Eros Media World, and Xfinite’s Mzaalo, while Wipro is a technology services and consulting company.

The AI- and ML-based solution will automate the time-consuming manual content localization processes of dubbing and subtitling, with near human-level accuracy, driving significant time and cost savings for global media organizations, direct-to-consumer over-the-top (OTT) streaming platforms, and post-production. 

Read More: Wipro and DataRobot Announce Partnership to Offer Augmented Intelligent Solutions

Wipro’s Vantage solution is an ML/AI-powered content intelligence platform that uses Google Cloud’s Translation AI suite of services, along with Text-to-Speech and Speech-to-Text models, to generate computer voices. The platform also includes emotion tagging, voice cloning, and speed syncing in various languages. Wipro’s Vantage also assists in extracting intelligence and metadata from multiple forms of content, including audio, video, images, printed text, and more.
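A content-localization pipeline of this kind chains speech-to-text, translation, and text-to-speech stages. The sketch below uses hypothetical stub functions (the article does not describe Vantage's actual APIs) purely to show the flow of a dubbing-and-subtitling pipeline:

```python
# Hypothetical stubs standing in for real speech and translation services.
def speech_to_text(audio: bytes) -> str:
    return "hello world"                 # placeholder transcript

def translate(text: str, target_lang: str) -> str:
    return f"[{target_lang}] {text}"     # placeholder translation

def text_to_speech(text: str) -> bytes:
    return text.encode("utf-8")          # placeholder synthesized audio

def localize(audio: bytes, target_lang: str) -> tuple[str, bytes]:
    """Dubbing pipeline: transcribe, translate, then re-synthesize speech."""
    transcript = speech_to_text(audio)
    subtitle = translate(transcript, target_lang)
    dubbed_audio = text_to_speech(subtitle)
    return subtitle, dubbed_audio

subtitle, dubbed = localize(b"...", "hi")
```

In a production system each stub would call a managed service, and stages like emotion tagging or voice cloning would slot in between translation and synthesis.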

In collaboration with Wipro’s technology team, Eros Investments’ data science experts will leverage the platform. 

The first phase of automated translation and subtitling will be available in numerous languages, including French, Spanish, English, Arabic, Malay, Mandarin, Bahasa, Tamil, Hindi, Telugu, and Bengali. Use cases from these languages will later be used to train models that extend the solution to other languages.

According to Victor Morales, Managing Director, Global Systems Integrator Partnerships, Google Cloud, the Eros Investments and Wipro content localization service, combined with Google Cloud’s machine learning capabilities, will give customers across the media sector the functionality they need to deliver exceptional viewing experiences to their audiences everywhere. 


Fujitsu and Intel display Real-Life AI Solutions at AI Summit 2022


AI Summit London: Info-tech company Fujitsu and processor manufacturer Intel showcased their AI solutions at the AI Summit in London. The demonstrations showed how seamlessly AI technologies can be integrated into daily life. 

Intel displayed its voice creation software, a 3D athlete-tracking wearable, and a food redistribution network to feed those in need. The three technologies demonstrate tech that genuinely impacts people’s lives, with the athlete tracking model and food redistribution offering the most practical AI benefits.

“There’s a lot of talk in the industry about AI, and we wanted to highlight where it’s making a real difference. They’re powerful stories with relatable AI, which we thought people would get immediately,” said Chris Feltham, a tech specialist at Intel. 

He also said, “The best AI is the one that you don’t even realize is there. Our goal is to be as accessible as possible and overcome any skills gaps existing in the AI and make it as easy as possible.”

Read More: NVIDIA DRIVE Orin Powers Intelligent Robo-01 Concept Vehicle by JIDU

Fujitsu displayed its supercomputing and digital twin technologies. The company co-developed the supercomputer Fugaku and also showcased its optimization system, Digital Annealer. 

Fujitsu’s CTO Matthew Chase said, “The supercomputer can determine the potential outcome of something depending on the pattern it recognizes. With something like traffic flows, we can use AI to spot and monitor human behavior to identify people in trouble and preemptively prevent any problems.”

Fujitsu is also working on digital twin technologies to provide a “quantum harness” while focusing on interoperability and synergy. The idea is to provide a platform for quantum scientists to practice on.


Google AI Launches LIMoE, a Large-Scale Architecture for Pathways


Google AI launched a new technology called LIMoE as a step towards its far-reaching goal of an AI architecture known as Pathways. Pathways is a single-model AI architecture that can accomplish multiple tasks. LIMoE encapsulates Google’s research goal of using sparse models that handle text and images simultaneously. 

LIMoE stands for Learning Multiple Modalities with One Sparse Mixture-of-Experts Model. It is not the only architecture that can multi-task, but what sets it apart is its use of sparse models, one of the most promising approaches for the future of deep learning. They differ from ‘dense’ models in that a sparse model routes each input to specific “experts” in the network, using conditional computation instead of activating the entire network for every input. 
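The conditional-computation idea can be sketched in a few lines: a small gating function scores the experts for each input, and only the top-scoring expert actually runs. This is an illustrative top-1 routing toy, not LIMoE's actual implementation (all names and weights here are made up):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route_top1(x, gate_weights, experts):
    """Run only the single best-scoring expert for input x (conditional computation)."""
    # Gate: one score per expert, a dot product with a per-expert weight vector.
    scores = [sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in gate_weights]
    probs = softmax(scores)
    best = max(range(len(experts)), key=lambda i: probs[i])
    # A dense model would run every expert; here only experts[best] executes.
    return experts[best](x), best

# Two toy "experts": one doubles the input, one negates it.
experts = [lambda x: [2 * v for v in x], lambda x: [-v for v in x]]
gate_weights = [[1.0, 0.0], [0.0, 1.0]]  # expert 0 favors dim 0, expert 1 favors dim 1

out, chosen = route_top1([3.0, 1.0], gate_weights, experts)  # routes to expert 0
```

The compute saving comes from skipping the unchosen experts entirely, which is what lets sparse models grow capacity without growing per-input cost.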

Read More: Lightning AI Unveils an Open-source Platform and Raises $40M in Series B Funding

The Google AI team presented the sparse mixture of experts in “Multimodal Contrastive Learning with LIMoE: the Language Image Mixture of Experts.” The technology analyzes words and images simultaneously with sparsely activated experts. 

It outperforms other dense multimodal models and techniques in zero-shot image classification. Because of its sparsity, LIMoE can learn to handle a range of inputs and scale up. 
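Zero-shot classification with a contrastive image-text model reduces to a similarity search: embed the image and each candidate label, then pick the label whose text embedding is closest. A minimal cosine-similarity sketch with made-up 3-dimensional embeddings (a real model would produce these jointly from its image and text towers):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def zero_shot_classify(image_emb, label_embs):
    """Pick the label whose text embedding best matches the image embedding."""
    return max(label_embs, key=lambda label: cosine(image_emb, label_embs[label]))

# Made-up embeddings for illustration only.
image_emb = [0.9, 0.1, 0.0]
label_embs = {"cat": [1.0, 0.0, 0.0], "dog": [0.0, 1.0, 0.0], "car": [0.0, 0.0, 1.0]}

print(zero_shot_classify(image_emb, label_embs))  # → cat
```

Because no classifier head is trained for the label set, new categories can be added at inference time just by embedding their names, which is what "zero-shot" means here.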

In the announcement, Google experts explained how LIMoE works. “There are also some clear qualitative patterns among the image experts — e.g., in most LIMoE models, there is an expert that processes all image patches that contain text. …one expert processes fauna and greenery, and another processes human hands.”


Maruti Suzuki invests ₹2 Cr in AI startup Sociograph Solutions


India’s largest automobile manufacturer, Maruti Suzuki India, said on Friday that it had invested ₹2 crore in Sociograph Solutions, an artificial intelligence (AI) firm. 

This new investment is part of the company’s MAIL program, which aims to encourage emerging mobility entrepreneurs. Maruti acquired a 12.09 percent stake in the firm, which it says is the first investment made under the Maruti Suzuki Innovation Fund. 

Maruti intends to leverage SSPL’s Dave.AI visual artificial intelligence platform to improve its clients’ digital sales experience. 

Read More: Snowflake invests in Domino Data Lab to provide deeper integrations

According to the company, the Dave.AI technology will provide 3D visualization to improve the consumer experience. Since 2019, Maruti has run the ambitious Mobility & Automobile Innovation Lab (MAIL) initiative to strengthen the country’s mobility startup ecosystem. 

MSIL Managing Director & CEO Hisashi Takeuchi said, “Our investment in SSPL demonstrates our resolve towards improving business metrics using contemporary technology.” 

He further added that the Maruti Suzuki Innovation Fund was established with the goal of investing in early-stage firms that are part of Maruti Suzuki programs. 

According to Dave.AI’s CEO Sriram PH and CTO Ananth, the partnership with Maruti Suzuki greatly benefits the firm, not only proving its concepts but also helping it learn and assimilate the skills necessary to scale up operations sustainably. 

They said, “Post our collaboration with Maruti Suzuki under the MAIL program, we registered 300 percent growth in revenues and are on track to achieve USD 1 million annual revenue milestone this financial year.” 


Snowflake invests in Domino Data Lab to provide deeper integrations


The venture capital arm of data cloud company Snowflake has announced an investment in the Series F round of enterprise MLOps platform Domino Data Lab. Domino also unveiled Snowflake-powered capabilities in Domino 5.2.

The investment by Snowflake Ventures will allow both companies to co-develop deeper Domino-Snowflake product integrations. Through Domino’s platform and Snowpark, the developer framework for Snowflake, the companies will be able to deliver an end-to-end enterprise data-science lifecycle solution on a single shared data and deployment platform. 

With Domino, data science teams can easily embed model-driven applications throughout the enterprise while leveraging Snowflake’s Data Cloud. The partnership between Domino and Snowflake boosts the MLOps lifecycle by eliminating workarounds caused by inflexible tools and complex processes, automating non-data-science tasks, and easing DevOps tensions between IT and data scientists. 

Read More: Snowflake to bring Python to its Data Cloud platform

Luca Foschini, Chief Data Scientist at Evidation, said that Domino and Snowflake enable their data science team to connect to person-generated health data (PGHD) seamlessly and prototype rapidly within a secure environment.

The partnership strengthens Domino’s position as the industry’s most open and flexible enterprise MLOps platform, thanks to capabilities spanning each stage of the MLOps lifecycle. 

Integrating Snowpark will bridge the gap between IT, data scientists, and the business by allowing direct execution on Snowflake as the common platform. The new features will be available this month.

Christian Kleinerman, Snowflake SVP of Product, said that the investment in Domino solidifies their partnership and joint commitment to helping enterprise customers leverage the power of the Data Cloud through the adoption of machine learning models.
