
Anantnag school introduces AI-enabled teaching solutions


St. Luke’s Convent School, Anantnag, Jammu & Kashmir, has partnered with EMBIBE, a leading education technology platform, to train teachers in Artificial Intelligence (AI)-enabled personalized teaching solutions.

This initiative aims to create an adaptive learning environment for students through their teachers. The training will catalyze high-quality digital education with syllabus-aligned 3D content that encourages imagination.

Using AI-based teaching, teachers can create impactful graphic lessons and easily switch between online and offline modes. The platform will also enable teachers to assign personalized homework to students and conduct tests easily. 

Read More: Deloitte Partners With The University Of Sydney Business School On AI

Moreover, the AI technology also tracks students’ grades, score progress, syllabus coverage, behavioral changes, and class engagement. Overall, teachers will be able to impart quality education and provide personalized attention to each student while delivering learning outcomes consistent with the New Education Policy.

According to Masood Ahmad, Chairman of St. Luke’s Convent School, the initiative will provide students with better access to high-quality content and detailed reports on their performance and learning patterns. Also, he added that teachers could understand every student’s strengths and weaknesses and take necessary action through this platform. 

EMBIBE is a leading educational technology platform that uses artificial intelligence to close learning gaps, designed for personalized learning and for strengthening student-teacher interaction. The platform serves the education ecosystem with top-notch content aligned with every prescribed curriculum, in every language.


Salesforce Open-Sources OmniXAI an Explainable AI Machine Learning Library


OmniXAI, which stands for Omni eXplainable AI, is a Python-based machine learning library that provides deeper insights into AI models through explainable AI (XAI). It is an open-source framework from Salesforce that offers a range of explanation methods, covering both ‘model-agnostic’ and ‘model-specific’ approaches to explaining AI decisions.

Modern AI models can be too complex for humans to understand, which deters their use in many critical applications. The complexity of AI models based on deep neural networks has caused a surge in the development of XAI methods, which offer more transparency and persuasiveness and help improve model performance.

However, existing XAI libraries have some limitations. They can handle only limited types of data and models, and each has a different interface, making it difficult to switch from one to another. Furthermore, the existing frameworks lack visualization and comparative explanations.

Read More: YogiFi launches 2nd Gen AI Yoga Mats: YogiFi Gen 2 and YogiFi Gen 2 Pro

OmniXAI is designed as a one-stop, comprehensive library that addresses these flaws. It makes XAI accessible across the machine learning workflow, including data exploration, feature engineering, model development, and decision-making. With OmniXAI, users can take a ‘model-agnostic’ approach, in which the framework provides insights without any prior knowledge of the AI model, or a ‘model-specific’ approach, in which the framework generates explanations using some prior knowledge of the model.

Salesforce’s open-source library, OmniXAI, is also compatible with PyTorch, TensorFlow, and other frameworks. Users can choose from several explanation methods for various elements of the model. OmniXAI facilitates this process by providing a standard interface that generates explanations with a few lines of code.

The primary design philosophy of OmniXAI is to let users apply many explanation methods simultaneously and visualize the resulting explanations. Salesforce continues to develop and improve OmniXAI, adding more algorithms and compatibility with more data types.
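To make the ‘model-agnostic’ idea concrete, here is a minimal, library-free sketch (this is not OmniXAI’s actual API, just the underlying concept): the explainer treats the model purely as a black-box function, scoring each feature by how much the prediction shifts when that feature is perturbed.

```python
def black_box_model(x):
    # Stand-in for any trained model: here, a fixed linear scorer.
    # A model-agnostic explainer never looks inside this function.
    return 3.0 * x[0] + 0.5 * x[1] - 2.0 * x[2]

def perturbation_importance(model, x, delta=1.0):
    """Score each feature by how much the prediction moves when it is perturbed."""
    base = model(x)
    importances = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += delta
        importances.append(abs(model(perturbed) - base))
    return importances

print(perturbation_importance(black_box_model, [1.0, 1.0, 1.0]))  # [3.0, 0.5, 2.0]
```

The key property illustrated here is that the explainer only ever calls `model(x)`; swapping in a neural network or a gradient-boosted tree would require no code changes, which is what makes the approach model-agnostic.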


OSRTC installs AI-based smart toilet in Puri district 


Odisha State Road Transport Corporation (OSRTC) installed an innovative ultra-modern artificial intelligence-based toilet-cum-integrated commercial facility in the Puri district.

The AI-based facility has ambient lighting and an aesthetic design, keeping women’s safety in mind. The UV-light-enabled toilet will be kept free of viruses and bacteria. The facility will operate on a pay, use, and redeem model and includes a lounge.

The toilets with refreshment rooms would provide a superior sanitation facility to passengers and act as a relaxation center.

Read More: Odisha-based Startup’s Aircraft to be displayed in Paris

The AI-powered facility has been developed with Freshrooms Hospitality Services, a Madhya Pradesh-based startup. The modern facility, introduced in the eastern part of the country, will open to tourists and travelers from Monday.

According to Diptesh Pattnayak, managing director of OSRTC, as the state’s only State Transport Undertaking (STU), the corporation has always had a mandate to provide better and more hygienic passenger facilities. He added that OSRTC employs advanced artificial intelligence, better enforcement, and visionary administration to fulfill that mandate.

He added that the innovative AI-based system is expected to mitigate the issues faced by the tourism industry and benefit tourists from various parts of the world.


ML detects autism speech patterns in different languages


A team of researchers from Northwestern University (NU) has successfully used machine learning to identify speech patterns in children with autism that were consistent between languages like English and Cantonese. The research suggests that speech features may be a valuable tool for diagnosing the condition.

The study’s results could assist scientists in differentiating between the environmental and genetic factors shaping the communication capabilities of people with autism, and can potentially help them learn more about the condition’s origin and develop new therapies.

NU scientists Molly Losh and Joseph C.Y. Lau, in collaboration with Hong Kong-based Patrick Wong and his team, successfully used supervised machine learning techniques to identify speech differences associated with autism.

Read More: Researchers Use Neural Network To Gain Insight Into Autism Spectrum Disorder

To train the algorithm, the researchers used recordings of English- and Cantonese-speaking young people, with and without autism, telling their own version of a story depicted in a wordless children’s picture book.

According to Joseph Lau, using machine learning to recognize the critical elements of speech predictive of autism represents a crucial step forward for researchers. He added that autism research had been limited by an English-language bias and by humans’ subjectivity when categorizing speech differences between people with and without autism.
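The supervised-learning setup described above can be sketched in miniature. This is a toy illustration with hypothetical feature values (the study’s actual features and models are not detailed here): each recording is reduced to a feature vector, e.g. pitch variation and speech rate, and a simple classifier is fit to labeled examples.

```python
def centroid(samples):
    """Mean feature vector of a list of equal-length samples."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def nearest_centroid_predict(x, centroids):
    """Assign x to the label of the closest class centroid (squared distance)."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(x, centroids[label]))

# Toy labeled feature vectors for two groups (entirely illustrative values).
autism_features = [[0.9, 0.4], [0.8, 0.5]]
non_autism_features = [[0.2, 0.7], [0.3, 0.8]]

centroids = {
    "autism": centroid(autism_features),
    "non-autism": centroid(non_autism_features),
}
print(nearest_centroid_predict([0.85, 0.45], centroids))  # autism
```

Real studies of this kind would extract many more acoustic features per recording and use stronger classifiers, but the workflow is the same: labeled examples in, a decision rule out.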

The researchers believe this work has the potential to contribute to a better understanding of autism. AI could make autism diagnosis easier by reducing the burden on healthcare professionals and making diagnosis more accessible to the public.


VideaHealth receives Regulatory License from Health Canada


VideaHealth, a virtual care platform that provides health coaching from experienced healthcare providers, has received a regulatory Medical Device Establishment License (MDEL) from Health Canada for its artificial intelligence-powered platform, Videa Caries Assist.

The platform is an AI-driven system for detecting dental caries (cavities). Apart from this license, the company had earlier received FDA 510(k) clearance for Videa Caries Assist following its clinical trial.

The trial showed how the company’s cavity-detection AI technology enhanced dentists’ diagnostic accuracy without introducing new workflow procedures. The AI and software solutions from VideaHealth help dentists better analyze patient X-rays, capture faster reimbursements, and provide greater transparency and accuracy in treatment suggestions.

Read More: Microsoft and Museum of Art & Photography Bengaluru develops new AI platform, INTERWOVEN

Founder and CEO of VideaHealth, Florian Hillen, said, “We’re excited to partner with dentists and industry-leading DSOs within Canada’s innovative dental market as we work together to bring the power of AI to all.” 

Hillen further added that VideaHealth’s primary objective is to provide clinical access to our powerful AI so that millions of people can receive more effective preventative treatment.

VideaHealth, a leading United States-based dental artificial intelligence (AI) solution provider, was founded by Connie Chen, Jono Chang, Ozan Onay, and Stephanie Tilenius in 2014. The company specializes in providing a virtual care platform developed to treat a person’s overall health by addressing both mental and physical disorders concurrently.

The company’s app provides video sessions, messaging, and digital information to assist individuals in preventing, managing, and treating chronic disorders such as diabetes and hypertension. Videa Caries Assist is powered by VideaHealth’s Videa Factory, which holds over 100 million data points to help eliminate possible AI bias in dentistry and provide comprehensive standards for data trends, pathology diagnosis, and treatment suggestions.


YogiFi launches 2nd Gen AI Yoga Mats: YogiFi Gen 2 and YogiFi Gen 2 Pro


YogiFi, an intelligent yoga mat producer, has launched the second generation of its AI-enabled mats. YogiFi Gen 2 and YogiFi Gen 2 Pro are upgraded versions of its first-generation mats and offer a more integrated experience.

The smart mats support enhanced posture tracking, real-time suggestions, stepwise instructions, and feedback. In the second-generation mats, YogiFi has also introduced research-backed therapy programs for issues like hypertension and diabetes. The programs also aid in relieving stress, moderating blood pressure, managing metabolism, attaining emotional stability, and many other physiological and psychological functions.

The company provides a dedicated mobile application for users to track their progress, and the second-generation mats come with a redesigned user interface for a better experience. The app tracks daily workouts and recommends workout routines based on the user’s goals.

Read More: Ai-Da, Ultra-realistic Humanoid Robot Artist Portraits at Glastonbury Festival

Subscription: The mats also offer interactive sessions, consultation, and live support from nutrition experts, instructors, and therapists. However, the live sessions require a subscription to the therapy programs, which is priced differently for YogiFi Gen 2 and YogiFi Gen 2 Pro. The Pro version costs ₹24,999 and comes with a few more advanced features than the base YogiFi Gen 2, which costs ₹14,999; it includes a dedicated Android tablet, a charging cable, and a tablet stand.

A subscription also gives the user one-on-one sessions with a coach, access to reports, and monthly live guided sessions. People can book the mat and a corresponding subscription directly from the YogiFi website.


Microsoft and Museum of Art & Photography Bengaluru develop new AI platform, INTERWOVEN


Microsoft and Bengaluru’s Museum of Art & Photography announced the launch of a new artificial intelligence (AI)-powered platform for linking artworks and cultures throughout the world. 

INTERWOVEN has been developed as part of Microsoft’s AI for Cultural Heritage initiative and is based on MAP’s collection of South Asian textiles.

INTERWOVEN brings together collections from prominent institutions worldwide, along with MAP’s, to uncover links between artworks from many cultures, mediums, and historical periods.

Read More: Ai-Da, Ultra-realistic Humanoid Robot Artist Portraits at Glastonbury Festival

Kamini Sawhney, Director, MAP, said, “Covid 19 and the lockdown really forced us to reflect on how people interacted with the online space. Right from week one, we began looking at how we could engage with our online communities.” 

Sawhney further added that MAP would be more than simply a collection of objects; it would be a venue for ideas and dialogues sparked by its collections, and INTERWOVEN fits in perfectly with this idea.

Users can see predefined journeys generated primarily by MAP’s educational and research arm, the MAP Academy. According to MAP, it is devoted to making the history of South Asian art more accessible and inclusive, both inside the Indian subcontinent and globally. 

Brad Smith, President and Vice-Chair, Microsoft, said, “Using technology to enhance human ingenuity, celebrate human creativity, and enable human connection is at the heart of Microsoft’s work.”

He also mentioned that INTERWOVEN reminds them both of the vibrancy of other cultures and how these traditions are shared in a conversation that spans time and space.


Ai-Da, Ultra-realistic Humanoid Robot Artist Portraits at Glastonbury Festival


Ai-Da, a humanoid robot artist, used a camera and computer memory to draw portraits of artists like Billie Eilish, Diana Ross, Kendrick Lamar, and Sir Paul McCartney at the Glastonbury Festival. The ultra-realistic humanoid robot Ai-Da scans the images and uses its robotic arm to make a layered multi-dimensional portrait. 

Ai-Da, named after Ada Lovelace, was created by Aidan Meller in association with Engineered Arts, a robotics company. Ai-Da uses AI technology to make drawings and sculptures, and the robot developed its drawing skills at the University of Oxford.

Ai-Da is all set to attend the festival and will be at the Shangri La field, where she will demonstrate her skills. Festival-goers can see Ai-Da Robot in action, with two painting sessions scheduled for each day of the festival.

Read More: Fujitsu and Intel display Real-Life AI- Solutions at AI Summit 2022

The robot said, “Well, it’s a kind of fun thing for me to do. I’ll be at ShangriLa, and I’m doing some portraits – I hope that my art encourages discussion about art, music, and of course our futures. See you there!”

Prints of Ai-Da Robot’s portraits will be accessible for festival attendees to purchase in the ShangrilART gallery at Glastonbury. They will be available for purchase on the ShangrilART website following the festival.

Aidan Meller said, “After making history with her self portraits, Ai-Da is continually developing her skills. It’s an exciting time as her painting ability is progressing, and there’s a lot of innovation.”  

The festival will host Ai-Da at Worthy Farm, Glastonbury, in Somerset from June 22-26.


Deep learning algorithm interprets dental X-rays to detect periodontal disease


A deep learning algorithm, a form of AI, has successfully detected periodontal disease from 2D bitewing radiographs. The finding was reported in research presented at EuroPerio10, a periodontology and implant dentistry congress organized by the European Federation of Periodontology (EFP).

According to Dr. Burak Yavuz of Eskisehir Osmangazi University, Turkey, the study shows the potential of artificial intelligence (AI) to automatically identify periodontal pathologies that might otherwise be missed. He added that AI could reduce radiation exposure by avoiding repeat assessments, enable earlier treatment, and prevent the silent progression of periodontal disease.

The study utilized 434 bitewing radiographs from patients with periodontitis. A convolutional neural network with a U-Net architecture, designed to accurately segment images, was used for image processing.

Read More: Dental care Startup Smiles.ai raises $23 million in Series A Funding Round

Experienced specialist physicians also evaluated the images with the segmentation method. Assessments included:

  • Vertical bone loss.
  • Horizontal bone loss.
  • Total alveolar bone loss around the upper and lower teeth.
  • Calculus around maxillary and mandibular teeth.
  • Furcation defects.

The neural network identified 2,215 cases of horizontal bone loss, 859 cases of total alveolar bone loss, 508 cases of dental calculus, 340 cases of vertical bone loss, and 108 furcation defects.

The algorithm’s success at identifying defects was compared against the physicians’ assessment and reported as sensitivity, precision, and F1 score. The F1 score is the harmonic mean of precision and sensitivity. For sensitivity, precision, and F1 score, 0 is the worst and 1 is the best.

The sensitivity, F1 score, and precision for gross alveolar bone loss were 1, 0.96, and 0.94, respectively. The corresponding scores for horizontal bone loss were 1, 0.95, and 0.92. The AI could not detect vertical bone loss.

The sensitivity, F1 score, and precision for dental calculus were 1.0, 0.82, and 0.7, respectively. For furcation defects, the values were 0.62, 0.66, and 0.71, respectively.
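As a sanity check, the reported F1 scores can be recomputed from precision and sensitivity, since F1 is their harmonic mean:

```python
def f1_score(precision, sensitivity):
    # Harmonic mean of precision and sensitivity (recall).
    return 2 * precision * sensitivity / (precision + sensitivity)

# Furcation defects: precision 0.71, sensitivity 0.62 -> F1 ~ 0.66
print(round(f1_score(0.71, 0.62), 2))  # 0.66

# Dental calculus: precision 0.7, sensitivity 1.0 -> F1 ~ 0.82
print(round(f1_score(0.7, 1.0), 2))  # 0.82
```

Both values match the figures reported in the study, which confirms the metrics are internally consistent.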

According to Dr. Yavuz, the study illustrates that artificial intelligence (AI) can pick up many types of defects from 2D images, which can aid in periodontitis diagnosis. More comprehensive studies on larger datasets are required to maximize the models’ success and extend their use to 3D radiographs.


Meta and Microsoft to use AI in their Data Centers


Microsoft and Meta are looking to develop AI-based solutions to make their data centers safer and more secure. These centers power the apps, services, and websites that people use every day, yet they are a risky work environment for those who build and maintain the systems. The companies plan to use AI to anticipate how data centers can keep operating under extreme conditions.

Microsoft is working on an AI system that generates alerts for operations and construction teams as a mitigation strategy. The company is also working on a few complementary systems to deter work environment accidents. A Microsoft spokesperson said, “These initiatives are both in early testing phases and are expected to begin expanding into our production environments later this year.”

Meta is also investigating AI technologies to make its data centers workable in extreme conditions without endangering employee health. The company says it is developing models to simulate conditions and optimize power consumption, airflow, and cooling across its systems.

Read More: Google AI Launches LIMoE, a Large-Scale Architecture for Pathways

“We have significant operational data from our data centers, in some areas at high frequency with built-in sensors in servers, racks, and in our data halls. Each server and network device, taking on different workloads, will consume different amounts of power, generate different amounts of heat, and make different amounts of airflow in the data centers,” said a Meta spokesperson.

The exploration of AI solutions is also motivated by the goal of avoiding expensive power outages. Both companies operate data centers around the world: Meta manages around 20, while Microsoft manages more than 200.

The companies also plan to use AI for energy tuning, following Google’s claim that its DeepMind AI systems delivered 30% energy savings in its data centers.
