
Artificial Intelligence can be used to Plan Hydropower across Amazon Basin


Researchers have come up with a unique approach that allows them to use artificial intelligence to plan sustainable hydropower development across the Amazon basin. 

It is a groundbreaking demonstration that could considerably help harness hydroelectric power across regions of South America. The research demonstrates how artificial intelligence may be used to identify dams that are likely to be particularly environmentally destructive.

In addition, the system can be used to quantify the environmental benefits lost to the hundreds of existing dams already built in the region.

Read More: Does deployment of robotic dogs at US-Mexico border pose a serious ethical conundrum?

Over the years, the construction of dams has massively altered the course of the Amazon river and contributed to several climatic changes. Researchers believe that the demonstrated AI-powered technology can help reduce those adverse effects while harnessing sustainable hydropower energy.

Associate professor at Florida International University, and a member of the research team, Elizabeth Anderson, said, “The Amazon is the world’s largest and most biodiverse river system, home to more freshwater fish species than any other place on Earth. At the same time, an estimated 47 million people live in the Amazon Basin, and their lives are intertwined with rivers in so many ways—socially, economically, and culturally.” 

Researchers found that optimizing the size and location of dams requires simultaneous evaluation of many variables, such as sediment transport, river connectivity, flow regulation, fish biodiversity, and greenhouse gas emissions.

Researchers claim to have developed a unique computational method that allows them to evaluate these trade-offs both individually and simultaneously, making the approach applicable to other river basins as well.
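The study's exact algorithm is not detailed here, but the core idea of screening many candidate dams against competing criteria can be illustrated with a simple Pareto-frontier filter. The sketch below is a minimal illustration, assuming hypothetical per-dam scores for energy output and aggregated environmental impact; it is not the researchers' actual method.

```python
# Minimal sketch: screening hypothetical dam candidates with a Pareto filter.
# The scores below are illustrative, not data from the study.
from dataclasses import dataclass

@dataclass
class Dam:
    name: str
    energy_gwh: float    # annual generation (to maximize)
    impact_score: float  # aggregated environmental cost (to minimize)

def pareto_optimal(dams):
    """Keep dams that no other dam dominates (more energy AND less impact)."""
    frontier = []
    for d in dams:
        dominated = any(
            o.energy_gwh >= d.energy_gwh and o.impact_score <= d.impact_score
            and (o.energy_gwh > d.energy_gwh or o.impact_score < d.impact_score)
            for o in dams
        )
        if not dominated:
            frontier.append(d)
    return frontier

candidates = [
    Dam("A", 1200, 0.80),
    Dam("B", 900, 0.35),
    Dam("C", 400, 0.10),
    Dam("D", 850, 0.60),  # dominated by B: less energy, more impact
]
for d in pareto_optimal(candidates):
    print(d.name, d.energy_gwh, d.impact_score)
```

In a real basin-scale analysis, each criterion (sediment, connectivity, emissions, and so on) would be a separate objective, and the optimization would run over portfolios of dams rather than individual sites.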

“It is really a pleasure and uniquely rewarding that we are using computer science and AI to address this sustainability challenge,” said Carla Gomes from Cornell University. 

The research is ambitious in scope, offering insights into the world's largest and most biodiverse river basin, and it underscores the importance of international cooperation in planning across the entire Amazon basin, which spans eight countries.


University of Florida to Develop National AI Curriculum


The University of Florida has been selected to develop a new national artificial intelligence curriculum in the United States focusing on the ethical use of AI. 

The National Humanities Center has chosen UF, a top-ranked public university, to build a college-level curriculum that examines how to develop and deploy ethical AI systems.

According to the plans, faculty members from the selected universities will develop the new course by the 2023-2024 academic year.

Read More: Soul Machines Raises $70 million from SoftBank

The University of Florida says that it plans to hire more than 100 AI-focused faculty and is the first university in the US to provide AI courses across disciplines to prepare students for future jobs. 

Faculty members from more than 15 universities, including UF, will participate in the program to develop and implement courses, allowing students to understand how AI can be ethically used in their daily lives. 

UF already has more than 6,000 students enrolled in artificial intelligence courses, thanks to its public-private partnership with technology company NVIDIA. Additionally, Google will provide support for the new program.

Senior Vice President of Academic Affairs at the University of Florida, Joe Glover, said, “This partnership with the National Humanities Center is a powerful step forward in further positioning UF faculty and students as prominent voices in the national conversation about developing ethical, equitable AI technologies to help solve the planet’s most pressing problems.” 

He further added that they want to ensure that all University of Florida students have access to AI technology as they pursue careers as scholars and future professionals in an AI-enabled workforce. 

As part of the program, a UF faculty member will take part in a seven-day intensive session at the National Humanities Center.

“The University of Florida’s AI initiative is an impressive investment that promises to make them a leader in AI research and teaching in the years to come,” said President and Director of the National Humanities Center, Robert D. Newman. 

Earlier this month, UF partnered with technology giant IBM to tackle some of society's biggest challenges by jointly launching a comprehensive skills program.


Meta launches Mephisto Dataset Collection Tool


Meta, formerly known as Facebook, has announced the launch of its new dataset collection tool, Mephisto. The tool is designed to standardize and codify best practices and infrastructure for collecting and annotating research data.

The tool is capable of providing a straightforward way to open source and share Meta’s methods. Over time, Meta has worked towards stabilizing the platform to make it more reliable. 

Meta is now accepting applications from university faculty for research related to Mephisto. Researchers need to submit research proposals to Meta to apply to this program.

Read More: Policybazaar Launches AI-powered WhatsApp Chatbot

However, it is mandatory for applicants to disclose present or previous partnerships with Meta researchers related to the request for proposal before applying. 

Four awards will be presented, each ranging from $25,000 to $37,500. The award amounts include overhead of up to 40% of project costs, and the funds will be distributed to RFP winners under the terms of a Sponsored Research Agreement (SRA) that includes open science provisions.

Meta encourages researchers from diverse fields and skill sets to participate in this program. Applicants must be full-time faculty members at an accredited academic institution that awards research degrees to Ph.D. students. Government officials, other than faculty of public universities, are not eligible to participate in Meta's new program.

Interested applicants must submit the following documents to apply for this program – 

  1. A project summary in English that explains the target dataset and its importance to their research area.
  2. A draft budget description that includes an estimate of the award’s cost and an explanation of how they plan to utilize the funds. 
  3. The applicant's curriculum vitae (CV).
  4. Details about the organization, including tax and administrative contact information.

The application process has already started, and Meta will stop accepting applications after March 23 at 11:59 PM PT. Interested candidates can submit their applications through Meta's official website.


Does deployment of robotic dogs at US-Mexico border pose a serious ethical conundrum?


In a futuristic turn of events, the U.S. Department of Homeland Security (DHS) recently announced that it has been working with the Philadelphia-based company Ghost Robotics to develop robot dogs for patrolling the US-Mexico border.

These 100-pound quadrupeds were designed to scale all forms of natural terrain, such as sand, rocks, and hills, as well as man-made environments, such as stairs. Each robot dog is equipped with a variety of sensors and is capable of transmitting live video and data streams. The devices are not yet operational on the US-Mexico border, but they are being tested and evaluated in El Paso, Texas. The robots would eliminate the need to expose human agents to extreme temperatures and other dangers near the border.

“The southern border can be an inhospitable place for man and beast, and that is exactly why a machine may excel there,” said DHS Science and Technology Directorate manager Brenda Long.

Jiren Parikh, CEO of Ghost Robotics, said he could not reveal more details about the Border Patrol’s robot dogs, but some in development have unique sensors and can carry technology to detect narcotics, nuclear materials, and chemical weapons. The company has previously unveiled its robots at Tyndall Air Force Base in Florida as part of an effort to replace stationary surveillance cameras. Last year, during the Association of the United States Army’s annual conference in October, Sword International unveiled an armed robot dog from Ghost Robotics with a custom-made weapon termed a “special purpose unmanned rifle,” or SPUR.


One of Ghost Robotics’ most popular models, the Ghost Vision 60, measures 2.5 feet (76 cm) tall, weighs 70 lbs (32 kg), and can travel over 7.5 miles in 3 hours on a single battery charge. It is currently priced at $150,000, with specialized sensors and other add-ons costing extra. The government has not said how many robots it plans to buy or how much the program would cost.

The Science and Technology Directorate, which advises Homeland Security on research and development, began collaborating with US Customs and Border Protection and Ghost Robotics to create and test a border patrol robot over a 2.5-year period. To work at the border, the Ghost Vision 60 had to be modified to function in adverse weather conditions and isolated regions. The robots delivered to DHS can operate in temperatures ranging from -40 to 55 degrees Celsius and can also withstand submersion. Before being shifted to El Paso, Texas, for further trials in tougher environments, the robot dogs went through intensive testing in Lorton, Virginia.

Read More: Introducing LEO: Caltech’s Humanoid Robot can Fly, Walk, and Jump

While the idea of deploying robot dogs to patrol the US-Mexico border sounds like an exciting use of robotics, not everyone feels the same about this initiative. According to Fernando García, executive director of the Border Network for Human Rights, “Instead of fixing our immigration systems, understanding the pushes and pulls of the immigration process and try to actually fix what is broken. We continue to invest in this enforcement-only approach that includes this technology.” Many have objected to adding robot dogs to the already militarized US-Mexico border.

Earlier deployments of robot dogs have not earned a good reputation so far. For instance, in 2021, the New York City Police Department (NYPD) deployed a canine-like robot nicknamed Digidog at a Manhattan public housing complex. The remote-controlled bot was created by Boston Dynamics, a robotics company known for viral videos of robots dancing and sprinting with human-like skill. However, after a public outcry over ethics and privacy, the NYPD was forced to end the initiative.

Not long ago, the Honolulu Police Department recruited Boston Dynamics’ Spot to scan the eyes and measure the temperatures of homeless citizens to check for signs of Covid. This step, too, was heavily criticized as dehumanizing.

Meanwhile, Parikh assures that the Ghost Robotics robot dogs at the US-Mexico border are not yet autonomous and that people should not be worried. Still, the development of robotic dogs raises a question: do we seriously need robots to keep an eye on human activities, illegal or not? Are we racing towards a dystopian future, or a world mired in a quagmire of ethical questions?


Jaguar Land Rover Announces Partnership With NVIDIA for Autonomous Driving


One of the world’s leading car manufacturers, Jaguar Land Rover, has announced a partnership with technology company NVIDIA for autonomous driving systems.

The companies have signed a multi-year contract for developing next-generation automated driving systems and artificial intelligence-powered services for customers. According to the companies, Jaguar and Land Rover vehicles will be powered by the NVIDIA Drive software-defined platform from 2025. 

NVIDIA’s platform will provide a wide range of active safety features, automated driving and parking, driver assistance features, and more, making future JLR vehicles highly reliable and safe to use.

Read More: Mastercard AI Modeling Enhances Milliman Payment Integrity Solution

The newly announced strategic partnership between the two companies has already lifted Tata Motors’ shares by roughly 2%.

Chief Executive Officer of Jaguar Land Rover, Thierry Bolloré, said, “Collaboration and knowledge-sharing with industry leader NVIDIA is essential to realizing our Reimagine strategy, setting new benchmarks in quality, technology, and sustainability. Jaguar Land Rover will become the creator of the world’s most desirable luxury vehicles and services for the most discerning customers.” 

He further added that as the company continues its transformation into a truly global, digital powerhouse, its long-term strategic partnership with NVIDIA will open a world of possibilities for their future cars. 

NVIDIA’s driving system will provide multiple artificial intelligence functions, such as driver and occupant monitoring and advanced visualization of the vehicle’s environment, allowing the vehicles to operate more effectively.

NVIDIA DRIVE Hyperion is the foundation for this full-stack solution, including DRIVE Orin centralized AV processors, DRIVE AV and DRIVE IX software, safety, security, networking technologies, and surround sensors. 

In addition, the carmaker also plans to couple its in-house data center solutions with NVIDIA DGX for training AI models, and with DRIVE Sim software built on NVIDIA Omniverse for real-time simulation.

“Next-generation cars will transform automotive into one of the largest and most advanced technology industries. Fleets of software-defined, programmable cars will offer new functionalities and services for the life of the vehicles,” said founder and CEO of NVIDIA, Jensen Huang. 

He also mentioned that they are excited to collaborate with Jaguar Land Rover to reinvent the future of mobility and develop the most advanced vehicles.


Soul Machines Raises $70 million from SoftBank


Virtual beings startup Soul Machines has raised $70 million in its recently held Series B funding round, led by SoftBank Vision Fund 2. New investors such as Cleveland Avenue, Liberty City Ventures, and Solasta Ventures also participated in the round.

Additionally, the round witnessed participation from multiple existing investors, including Temasek, Salesforce Ventures, and Horizons Ventures. According to the company, it plans to use its fresh funds to accelerate its growth in the Enterprise market. 

Soul Machines will focus primarily on deepening its research into its Digital Brain technology. Additionally, the firm aims to launch the future of digital entertainment for the metaverse with hyper-realistic digital twins of real-life celebrities. 

Read More: Indian Engineering Student develops AI model to turn American Sign Language into English

Chief Business Officer of Soul Machines, Greg Cross, said, “We are in a transformational era where brands need to introduce different ways of personalization and ways to deliver unique brand experiences to customers in a very transactional digital world.” 

He further added that he is excited to continue working with forward-thinking global businesses that recognize the power of digital people to communicate, engage, and interact with the rest of the world. New Zealand-based technology company Soul Machines was founded by Greg Cross and Mark Sagar in 2016.

The firm specializes in designing intelligent and emotionally responsive avatars that change the way people interact with machines. Soul Machines’ product allows companies to offer customer service, advertising, and entertainment using artificial intelligence attached to synthetic voices and visuals in the metaverse. 

To date, the company has raised total funding of $135 million over three funding rounds. One of the most popular products of Soul Machines is Ella, a virtual assistant that the New Zealand Police Department uses. The highly capable virtual officer can interact with visitors to a police station via a standing kiosk. 

“With strong R&D capabilities and advanced back-end solutions, we believe that Soul Machines is at the cutting edge for creating digital people that can support companies across functions including customer service, training, and entertainment,” said Investment Director at SoftBank Advisers, Anna Lo. 


Mastercard AI Modeling Enhances Milliman Payment Integrity Solution


Global payment processing products and solutions company Mastercard has enhanced the Milliman Payment Integrity (MPI) solution using its expertise in artificial intelligence modeling.

The MPI solution will be able to detect suspected healthcare fraud, waste, and abuse (FWA) more effectively with the help of the newly deployed artificial intelligence models.

Mastercard and Milliman have signed a formal reseller agreement, understanding the importance of this combined solution for their clients. 

Read More: Sam Altman Invites Meta AI researchers to join OpenAI

Mastercard’s data scientists collaborated with Milliman to develop three artificial intelligence models based on the Milliman Payment Integrity solution’s outputs, using Mastercard’s six-step AI model creation process, AI Express.

According to Milliman, it intended to use modern technologies such as artificial intelligence to find incremental FWA savings for its clients using non-discrete testing methodologies. Milliman and Mastercard therefore jointly developed these AI models to meet that need.
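The actual Mastercard/Milliman models are proprietary and not described in detail, but the general pattern of scoring claims for suspected FWA can be sketched with a generic supervised classifier. The example below is a minimal illustration trained on synthetic data, with entirely hypothetical claim features; it is not the models developed for MPI.

```python
# Illustrative sketch only: a generic claim-level FWA classifier on synthetic
# data, not the Mastercard/Milliman production models.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical claim features, e.g. billed amount, procedure count,
# provider claim rate, patient visit frequency (all synthetic here).
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=5000) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]  # probability a claim is suspect
print("AUC:", round(roc_auc_score(y_te, scores), 3))
```

In practice, flagged claims would be routed to human investigators rather than denied automatically, which is where the operational-efficiency gains described above would come from.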

Chief Marketing and Communications Officer and President of Healthcare at Mastercard, Raja Rajamannar, said, “Leveraging our proprietary technology to build a custom AI model helped them (Milliman) to do just that – provide enhanced fraud detection and operational efficiencies to improve their customers’ experience.”

He further added that the healthcare business could benefit from Mastercard’s solutions designed to make payment systems run more efficiently. According to one report, annual financial losses due to healthcare fraud run into billions of dollars.

Moreover, some government agencies believe the cost to be as high as 10% of annual health expenditure in the United States, or more than $300 billion. A tool or solution that can considerably reduce these ever-increasing losses is therefore necessary.

David Cusick from Milliman said, “Our proof-of-concept with Mastercard shows there is a very compelling value proposition when coupling our existing technology solution with Mastercard’s advanced AI and machine learning capabilities.” 

He also mentioned that Milliman is pleased with the expanded fraud, waste, and abuse detection capabilities that the Mastercard FWA AI models now allow it to provide to its clients.


Policybazaar Launches AI-powered WhatsApp Chatbot


Online life and general insurance comparison portal Policybazaar launches its new artificial intelligence-powered WhatsApp chatbot to provide a better claim settlement process for group health insurance. 

According to the company, its newly launched AI chatbot will be used to automate and accelerate the claim settlement process for consumers. The COVID-19 pandemic caused health turmoil in the country, leading to a drastic increase in the number of claims. 

Policybazaar plans to use the new technology to handle the skyrocketing number of claims without compromising on the quality of service for its customers.

Read More: Sway AI launches No-Code Artificial Intelligence Platform

The chatbot is specifically designed to provide seamless support with claim intimation and settlement, through WhatsApp, for all employees and their families covered under a group health plan.

Customers will now be able to effortlessly upload documents, hospitalization details, expenses, and more to the AI chatbot directly through WhatsApp, making it highly accessible and user-friendly.

Chief Business Officer of General Insurance at Policybazaar, Tarun Mathur, said, “Conventionally, consumers have had to follow up over phone calls or emails regarding claims with insurers. This is not only cumbersome but also leads to unpredictable wait time, which is even more complicated in a remote work setup. To iron out the friction during the crucial moment of truth, we have launched an automated communications platform.” 

He further added that the chatbot is integrated with APIs from insurers and TPAs, removing the need for human intervention to accept claim data and paperwork. Once a claim ID is generated after documentation, users can track the status of their claim.
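Policybazaar has not published the technical details of these integrations, so the sketch below only illustrates the general pattern of submitting claim documents to an insurer/TPA API and then polling the claim's status so a chatbot can relay it to the user. All endpoint names, URLs, and fields are hypothetical.

```python
# Hypothetical insurer/TPA API integration for illustration only; the actual
# Policybazaar integrations are not public.
import requests

BASE_URL = "https://api.example-tpa.com"  # placeholder TPA endpoint

def submit_claim(policy_id: str, documents: list[str]) -> str:
    """Upload claim documents and return the generated claim ID."""
    resp = requests.post(
        f"{BASE_URL}/claims",
        json={"policy_id": policy_id, "documents": documents},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["claim_id"]

def claim_status(claim_id: str) -> str:
    """Poll the claim's settlement status for the chatbot to relay to the user."""
    resp = requests.get(f"{BASE_URL}/claims/{claim_id}", timeout=10)
    resp.raise_for_status()
    return resp.json()["status"]
```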

To date, more than 15,000 Policybazaar customers have already started enjoying the benefits of this AI-enabled WhatsApp chatbot, and the number is expected to grow exponentially over time. An added advantage of this chatbot is that it can be used to provide round-the-clock assistance to customers 365 days a year. 


Indian Engineering Student develops AI model to turn American Sign Language into English


Indian engineering student Priyanjali Gupta has developed a one-of-a-kind artificial intelligence model that is capable of translating American sign language into English in real-time. 

Priyanjali is a third-year computer science student at the Vellore Institute of Technology (VIT), a renowned institute in Tamil Nadu.

According to Priyanjali, her newly developed artificial intelligence-powered model was inspired by data scientist Nicholas Renotte’s video on real-time sign language detection. She developed the model using the TensorFlow Object Detection API, translating hand signs through transfer learning from a pre-trained model named ssd_mobilenet.

Read More: Sam Altman Invites Meta AI researchers to join OpenAI

Priyanjali said, “The dataset is manually made with a computer webcam and given annotations. The model, for now, is trained on single frames. To detect videos, the model has to be trained on multiple frames, for which I’m likely to use LSTM. I’m currently researching on it.” 

She further added that it is pretty challenging to build a deep learning model dedicated to sign language detection, and she believes that the open-source community, which is much more experienced than her, will find a solution soon. 

Additionally, she mentioned that it might be possible in the future to build deep learning models solely for sign languages. She said in her GitHub post that the dataset was developed by running the Image Collection Python file, which collects images from a webcam for multiple American Sign Language signs, including Hello, I Love You, Thank You, Please, Yes, and No.
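Her exact image-collection script is not reproduced here, but the general approach of capturing labeled webcam frames for each sign, as described in the GitHub post, can be sketched roughly as follows. This is a minimal OpenCV-based sketch; the folder names and frame counts are illustrative, not taken from the original repository.

```python
# Rough sketch of webcam image collection for a small sign-language dataset.
# Paths, sign labels, and counts are illustrative, not from the original repo.
import os
import time
import cv2

SIGNS = ["hello", "i_love_you", "thank_you", "please", "yes", "no"]
IMAGES_PER_SIGN = 15
OUTPUT_DIR = "collected_images"

cap = cv2.VideoCapture(0)  # open the default webcam
for sign in SIGNS:
    os.makedirs(os.path.join(OUTPUT_DIR, sign), exist_ok=True)
    print(f"Collecting images for '{sign}' in 3 seconds...")
    time.sleep(3)  # time to get the hand sign ready
    for i in range(IMAGES_PER_SIGN):
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(OUTPUT_DIR, sign, f"{sign}_{i}.jpg"), frame)
        time.sleep(1)  # pause between captures
cap.release()
```

The collected frames would then be annotated with bounding boxes and fed into the TensorFlow Object Detection API for transfer learning from the pre-trained ssd_mobilenet model mentioned above.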

Despite the fact that ASL is the third most widely spoken language in the United States, not much effort has been made to create translation tools for it. This newly developed AI model is a big step towards creating translation models for sign languages.

Priyanjali said in an interview, “She (mother) taunted me. But it made me contemplate what I could do with my knowledge and skill set. One fine day, amid conversations with Alexa, the idea of inclusive technology struck me. That triggered a set of plans.”


Expert.ai expands Research and Support Capabilities at Los Alamos National Laboratory


Artificial intelligence company Expert.ai has announced that it plans to expand research and support capabilities at Los Alamos National Laboratory.

According to Expert.ai, the United States National Security Research Center (NSRC) will use Expert.ai’s technology as a fundamental component of Titan Technologies’ Compendia solution to make digitized documents easier to search. 

The National Security Research Center (NSRC) is one of the world’s largest libraries, holding millions of items such as film, audio, journals, pictures, papers, drawings, microfiche, and aperture cards.

Read More: Intel plans to Acquire Tower Semiconductor for $5.4 billion

Expert.ai’s cutting-edge technology would allow the Compendia system from Titan Technologies to organize unstructured content into a safe digital library. 

The Compendia solution integrates knowledge and learning in the new NSRC system, Titan on the Red, with artificial intelligence-powered natural language understanding (NLU) and machine learning offered by expert.ai. 

Director, National Security Research Center, Rizwan Ali, said, “One of the greatest assets at Los Alamos National Laboratory is the information that we have generated in over 75 years of nuclear weapons work. This is what distinguishes us from any other weapons laboratory in existence. The Titan on the Red system will make this valuable information discoverable.” 

Before Expert.ai made this announcement, Los Alamos National Laboratory Weapons Program scientists and engineers completed a successful six-month proof-of-concept (PoC). The PoC program provided the researchers with automated research support using AI-based natural language processing and machine learning.

According to the company, its technology performs natural language understanding and user interface functions within Compendia by using proprietary semantics-based processes to generate granular metadata. 

Additionally, the technology can place textual content in context and surface even hidden information.
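Expert.ai’s semantics engine is proprietary, so the sketch below only illustrates the general idea of generating granular metadata (named entities and key terms) from unstructured text, using the open-source spaCy library as a stand-in. It is not expert.ai’s technology or the Compendia implementation.

```python
# Generic illustration of extracting granular metadata from unstructured text.
# Uses the open-source spaCy library, not expert.ai's proprietary engine.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline (installed separately)

def extract_metadata(text: str) -> dict:
    """Return entities and noun-phrase key terms that could index a document."""
    doc = nlp(text)
    return {
        "entities": [(ent.text, ent.label_) for ent in doc.ents],
        "key_terms": sorted({chunk.text.lower() for chunk in doc.noun_chunks}),
    }

sample = ("Los Alamos National Laboratory has generated over 75 years of "
          "nuclear weapons research records, now being digitized for search.")
print(extract_metadata(sample))
```

Metadata of this kind, attached to each digitized item, is what makes a large archive searchable by concept rather than by exact keyword alone.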

Italy-based artificial intelligence company Expert.ai, formerly known as Expert System, was founded by Marco Varone, Paolo Lombardi, and Stefano Spaggiari in 1989. The firm specializes in developing and deploying cutting-edge NLU solutions for the public sector, including law enforcement, the military, and others.
