Saturday, November 22, 2025

AI-powered iRASTE to make Roads in India Safer


A new project named ‘Intelligent Solutions for Road Safety through Technology and Engineering’ (iRASTE) has been launched in India to make roads safer by reducing the number of accidents. 

Currently, the project is being rolled out in Nagpur, Maharashtra. Road accidents are among the leading causes of death in the country: India records 150,000 fatalities and over 300,000 severe injuries annually because of road accidents. 

Therefore, developers took the aid of artificial intelligence to come up with a unique solution named iRASTE. 

Read More: Diane Staheli joins DoD as new Chief of Responsible AI

The one-of-a-kind project is being carried out by the I-Hub Foundation at IIIT Hyderabad, a Technology Innovation Hub (TIH) in the Data Banks and Data Services technology vertical funded by the Department of Science and Technology (DST) under the National Mission on Interdisciplinary Cyber-Physical Systems (NM-ICPS), together with INAI (Applied AI Research Institute). 

As a step to prevent accidents, iRASTE will automatically push notifications to drivers regarding potential accidents using artificial intelligence. Talks are already underway with the Telangana government to implement the technology in a fleet of buses as well as in Goa and Gujarat. 

Moreover, iRASTE will also discover “greyspots” through data and mobility analysis, along with continually monitoring dynamic dangers throughout the whole road network. 

“The project named ‘Intelligent Solutions for Road Safety through Technology and Engineering’ (iRASTE) at Nagpur will identify potential accident-causing scenarios while driving a vehicle and alert drivers about the same with the help of the Advanced Driver Assistance System (ADAS),” said the Indian Government. 

Recently, Nitin Gadkari, Minister for Road Transport & Highways in the Government of India, said that the Government aims to halve the total number of road accidents by the end of 2024, and this new development will help it achieve that goal. 


Oxford High School tests AI gun detection system


Oxford High School has started testing a new artificial intelligence (AI)-powered technology for detecting guns on the school campus. 

This development comes six months after a tragic shooting at the school that killed four students and severely injured eight others on school premises. The horrific incident elicited outrage and despair and prompted calls for action in the face of gun violence. 

The newly announced technology will work as a preventive system to minimize the chances of the recurrence of such mishaps in the educational institution. 

Read More: Clearview AI launches Consent-based product for Commercial use

This initiative is part of a year-long free trial project with ZeroEyes, a digital security business that uses artificial intelligence technology to identify weapons shown on surveillance cameras and inform police and security personnel of potential threats. 

ZeroEyes has also released a video describing the functioning of its new artificial intelligence-powered technology. “We have always paid great attention to safety. This isn’t a new thing since Nov. 30. We’re just looking at even more – even better – solutions,” said Jill Lemond, assistant superintendent of safety and operations for Oxford. 

When the artificial intelligence software detects a gun, an alert is forwarded to a team of ZeroEyes employees, who manually check whether the threat is real. Once confirmed, a text message is sent to the school’s security personnel and local law enforcement within three to five seconds. 
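The detect-verify-notify flow described above can be sketched as a small human-in-the-loop pipeline. All class, field, and recipient names below are hypothetical illustrations; ZeroEyes' actual software is proprietary and its interfaces are not public.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Detection:
    """A hypothetical model detection: which camera fired and how confident."""
    camera_id: str
    confidence: float

@dataclass
class AlertPipeline:
    """Sketch of a human-in-the-loop alert flow: model flags, human verifies,
    confirmed threats are forwarded to security and police."""
    notified: List[str] = field(default_factory=list)

    def on_detection(self, det: Detection, analyst_confirms: bool) -> bool:
        # Step 1: the model has flagged a possible weapon on a camera feed.
        # Step 2: a human analyst manually reviews the frame.
        if not analyst_confirms:
            return False  # false positive; no alert is sent
        # Step 3: once confirmed, notify school security and local police.
        for recipient in ("school_security", "local_police"):
            self.notified.append(f"{recipient}:{det.camera_id}")
        return True
```

The manual verification step is the design choice worth noting: it trades a few seconds of latency for a much lower false-alarm rate before police are dispatched.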

Initially, the AI system will monitor over 100 cameras installed in Oxford High School and will later be extended to other locations. However, the technology can currently only identify guns that are visible on camera; many therefore believe it cannot effectively prevent attacks, since gun bearers tend to keep weapons hidden inside bags. 


Diane Staheli joins DoD as new Chief of Responsible AI


The United States Department of Defense (DoD) appoints veteran artificial intelligence expert Diane Staheli as the new chief of its Responsible AI division, reports FedScoop. 

Staheli will assist in the development and implementation of policies, methods, standards, and metrics for procuring and producing AI that is trustworthy and responsible inside the Department of Defense. 

She took over about nine months after the Department of Defense’s first AI ethics chief left the Joint Artificial Intelligence Center (JAIC). 

Read More: Microsoft introduces Dev Box, a developer workstation in cloud

The Pentagon has recently been upgrading its technological capabilities, especially in new-age technologies like AI, and this new addition will help the DoD achieve its goals. 

“[Staheli] has significant experience in military-oriented research and development environments and is a contributing member of the Office of the Director of National Intelligence AI Assurance working group,” said Sarah Flaherty, CDAO’s public affairs officer. 

Flaherty further added that Staheli joins the CDAO from the MIT Lincoln Laboratory, where she gained extensive experience in AI ethics and research, data analytics, and technology development. 

In recent years, the Pentagon has embraced AI-based technology to facilitate activities ranging from the back office to combat. Earlier this month, the US DoD selected object data infrastructure provider OStream’s Percept for Vision AI solution to secure ports across the country. 

According to the Department of Defense, the solution will be used to address a variety of issues, including workflow issues. The DoD can link any camera to more than 300 artificial intelligence services using Percept’s AI Hub, with the resulting real-time insights saved in a consolidated, private data lake.


Clearview AI launches Consent-based product for Commercial use


Artificial intelligence-powered facial recognition technology company Clearview AI announces the launch of its new product called Clearview Consent, which is meant for commercial use. 

According to the company, Clearview Consent is designed to perform various identity-verification tasks for commercial companies using the firm’s facial recognition algorithm. 

With the newly announced solution, companies can employ Clearview AI’s face recognition technology (FRT), which the firm describes as highly accurate and bias-free, in consent-based company operations. 

Read More: Intel CEO Pat Gelsinger says Chip Shortage risks expansion 

Recently, the company was fined by the UK’s Information Commissioner’s Office (ICO) over privacy issues related to its operations. 

However, with the new consent-based approach, the company will be able to provide its services without breaching the privacy of individuals. 

CEO and founder of Clearview AI, Hoan Ton-That, said, “The launch of Clearview Consent is a gamechanger – for companies and consumers alike – who value the integrity and security of their identity and assets. Facial recognition is not the wave of the future; it is our present reality. Today, FRT is used to unlock your phone, verify your identity, board an airplane, access a building, and even for payments.” 

He further added that Clearview AI’s advanced, industry-leading FRT algorithm is now available to organizations that utilize facial recognition as part of a consent-based workflow, giving an improved degree of security and protection to the marketplace. 

According to the company, the newly announced Clearview Consent will be sold independently to enterprises across the globe and is intended for use in consent-based workflows; it will not be linked to Clearview AI’s existing database of over 20 billion images, which is reserved for government use only. 

“The launch of Clearview Consent helps further our mission to help combat crime and fraud. We are currently helping law enforcement agencies across the country to solve crimes after the fact, which harm many victims. Using facial recognition as a preventative measure means fewer crimes and fewer victims,” mentioned Ton-That.


Microsoft introduces Dev Box, a developer workstation in cloud


Microsoft introduced Dev Box at the Microsoft Build event. Built on the Windows 365 Cloud PC service, Microsoft Dev Box gives developers quick access to pre-configured, cloud-powered workstations that they can reach from a web browser on any device (Windows, macOS, iOS, Android). Users can set up assigned team members and images and begin coding their projects right away.

Dev Box empowers developers to concentrate on the code only they can write by making it easy to access the necessary resources and tools; developers don’t have to worry about workstation configuration or maintenance. Dev teams can pre-configure Dev Boxes for specific projects and tasks, letting members start with an environment that is ready to build and run their app. 

Users can set conditional access controls and set requirements for the connecting device. They can also set up multifactor authentication for better safety. However, the costs for high-spec developer virtual machines can be high.

Read more: Hugging Face and Microsoft to launch Hugging Face Endpoints on Azure

Microsoft Dev Box ensures security, unified management, and compliance by leveraging Windows 365 to integrate Dev Boxes with Endpoint Manager and Intune. Dev Box is based on Azure Virtual Desktop and is integrated with Windows 365/Cloud PC. 

This helps IT admins manage Cloud PC desktops along with any preconfigured Dev Box workstations running in their organizations. Dev Box is currently in private preview and will move to public preview in the coming months; interested users can sign up. However, Microsoft has not yet released any word on pricing or licensing.


Intel CEO Pat Gelsinger says Chip Shortage risks expansion


Pat Gelsinger, CEO of semiconductor manufacturing giant Intel, says that a shortage of advanced equipment to make semiconductors could hold up global expansion plans. 

According to him, supply timelines for chipmaking equipment for additional chip factories that the company expects to establish in the US and Europe have been significantly extended. 

“To us, this is now the No. 1 issue, is, in fact, the delivery of equipment,” Gelsinger said during a press conference on the sidelines of the World Economic Forum. 

Read More: Hugging Face and Microsoft to launch Hugging Face Endpoints on Azure

He also stated that “the most important pinch point to the build-out of capacity today” is the supply of chipmaking equipment. Gelsinger is now urging authorities in the United States and Europe to speed up the implementation of their respective “Chips Acts” to boost national semiconductor production. 

Earlier this year, Intel announced plans to invest $36 billion in microchip manufacturing in Europe, including a mega manufacturing facility in Germany. According to Intel, the German fabrication facilities will build chips with the company’s most advanced capabilities, and the plants are expected to open in 2027. Intel also plans to invest heavily in Ohio to establish new facilities. 

However, given the equipment shortage, it is unclear whether these projects will finish on time. “This broad initiative will boost Europe’s R&D innovation and bring leading-edge manufacturing to the region for the benefit of our customers and partners around the world,” said Gelsinger during the announcement. 


Hugging Face and Microsoft to launch Hugging Face Endpoints on Azure


Open-source machine learning platform Hugging Face partners with global technology giant Microsoft to launch its new Hugging Face Endpoints on Azure. 

According to the company, Hugging Face Endpoints is a new machine learning inference service powered by Azure ML Managed Endpoints. 

Hugging Face Endpoints, available through Azure Machine Learning Services, allows clients to leverage Hugging Face models with a few clicks or lines of Microsoft Azure SDK code, drastically increasing its usability. 
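Azure ML managed online endpoints are generally invoked over HTTPS with a bearer key and a JSON body. The sketch below assembles such a request; the endpoint URI, key, and `{"inputs": ...}` payload shape (a common Hugging Face inference convention) are illustrative assumptions, not the actual Hugging Face Endpoints contract.

```python
import json

def build_inference_request(scoring_uri: str, api_key: str, text: str):
    """Assemble URL, headers, and JSON body for a managed online endpoint.

    The payload schema here is a placeholder; a real deployment defines
    its own expected input format.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"inputs": text})
    return scoring_uri, headers, body

# The returned pieces would then be POSTed with, e.g., urllib.request or requests.
```

Keeping request construction separate from the network call makes the schema easy to test without hitting a live endpoint.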

Read More: Google Develops AI Text-to-image Generator Imagen

Interested users can directly visit Azure Marketplace to test this new solution. 

Corporate Vice President of Microsoft AI Platform, Eric Boyd, said, “Hugging Face has been on a mission to democratize good machine learning. With their Transformers open source library and the Hugging Face Hub, they are enabling the global AI community to build state-of-the-art machine learning models and applications in an open and collaborative way.” 

Boyd further added that he is delighted that they are combining the best of Hugging Face with the Azure platform to provide clients with new integrated experiences that are built on the secure, compliant, and responsible AI foundation that AzureML, their MLops platform, provides. 

United States-based ML platform provider Hugging Face was founded by Clément Delangue, Julien Chaumond, and Thomas Wolf in 2016. The company recently raised $100 million in a Series C funding round led by Lux Capital, with participation from Sequoia and Coatue. The fresh funding lifted Hugging Face’s valuation to over $2 billion. 

Co-founder and CEO of Hugging Face, Clément Delangue, said, “We’re striving to help every developer and organization build high-quality, ML-powered applications that have a positive impact on society and businesses.” 

He also mentioned that they have made it easier than ever to deploy cutting-edge models with Hugging Face Endpoints, and they are excited to see what Azure customers will create with them.


Google Develops AI Text-to-image Generator Imagen


Global technology giant Google has developed a one-of-a-kind artificial intelligence-powered text-to-image generator named Imagen that can create realistic-looking pictures from entered text. 

The AI platform generates accurate images based on the description provided, which Google claims to have “an unprecedented degree of photorealism.” 

Google’s Imagen platform follows in the footsteps of other text-to-image generators such as DALL-E, VQ-GAN+CLIP, and Latent Diffusion Models. 

Read More: Pony.ai loses permit to test autonomous vehicles with driver in California

The AI generator’s capabilities include sketching, painting, creating oil paintings, and producing CGI renders. According to Google, people comparing images made by Imagen and other text-to-image converters found that Google’s model surpassed competitors in accuracy and visual fidelity. 

Imagen’s diffusion technology starts with a noisy image and progressively refines it. The tool first generates a 64×64-pixel image, which is scaled up to a 1024×1024-pixel image through two super-resolution steps. 

“Imagen consists of a text encoder that maps text to a sequence of embeddings and a cascade of conditional diffusion models that map these embeddings to images of increasing resolutions,” mentioned Google. 
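The cascade Google describes can be illustrated as a simple resolution schedule: a 64×64 base image upsampled by two super-resolution stages. The 4× factor per stage below is inferred from the 64-to-1024 figures above, not taken from Google's implementation.

```python
def cascade_resolutions(base: int = 64, stages: int = 2, factor: int = 4) -> list:
    """Resolution at each step of a cascaded diffusion model: a base image
    followed by successive super-resolution stages, each scaling by `factor`."""
    sizes = [base]
    for _ in range(stages):
        sizes.append(sizes[-1] * factor)
    return sizes
```

With the defaults, this yields the 64, 256, and 1024 pixel widths implied by the article's description of the two-step cascade.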

On its Imagen website, the company published many examples of text prompts and the AI’s accompanying images, including “A lovely corgi lives in a house made out of sushi.” Other examples include a photo of a Shiba Inu dog with a backpack riding a bike, wearing sunglasses and a beach hat, and a high-contrast portrait of a very happy fuzzy panda dressed as a chef in a high-end kitchen making dough. 

However, it should be noted that this is only a text-to-image diffusion model and is not intended for public use.


GitLab launches new Cybersecurity and AI growth features


DevOps platform GitLab launches its new cybersecurity and artificial intelligence (AI) growth features. 

GitLab 15 debuts with its first release version, 15.0, bringing new cutting-edge DevOps capabilities to a single platform.

GitLab 15’s complete DevOps capabilities help companies build and collaborate around business-critical code to deploy software securely and achieve targeted business goals. 

Read More: Meta’s Myosuite can create Realistic Musculoskeletal Models

Some of the features have been launched already, and the remaining will be available to the public soon. GitLab’s new platform release will improve the company’s offering, particularly in the field of cybersecurity. 

The official press release mentions, “We believe upcoming releases will enhance the platform’s capabilities in solution areas including visibility and observability, continuous security and compliance, agile enterprise planning, and workflow automation and support for data science workloads.” 

The release further cited that GitLab customer Airbus was able to ship feature updates in just 10 minutes after adopting GitLab, compared to the full 24 hours previously required to set up for production and run manual testing. 

Since the launch of GitLab last year, the platform has gained tremendous popularity and currently has a huge customer base of leading companies such as NVIDIA. GitLab’s latest release provides companies with a purpose-built, unified DevOps platform that empowers teams to concentrate on achieving business transformation. 

“In today’s highly competitive landscape, organizations are under more pressure than ever to deliver software faster and more securely. They need a more mature, comprehensive platform to improve their time to market,” said David DeSanto, VP of Product at GitLab. 

He also mentioned that GitLab’s One DevOps Platform solves this issue by letting organizations replace their DIY DevOps toolchains with a single application that brings teams together from planning to production. 


Pony.ai loses permit to test autonomous vehicles with driver in California


Self-driving taxi developer Pony.ai loses its permit to test autonomous vehicles with onboard safety drivers in California, United States. 

The California Department of Motor Vehicles (DMV) recently revoked Pony.ai’s license, citing past incidents. 

According to authorities, the decision was made because the company failed to monitor the driving records of the safety drivers on its testing permit. 

Read More: Amazon installs AI Cameras in UK delivery vehicles

Earlier this year, Pony.ai also lost its permit to test fully driverless vehicles in the United States after an accident raised questions about passenger safety. That decision came after the National Highway Traffic Safety Administration (NHTSA) of the US said it planned to review the robotaxi startup’s actions regarding the government’s crash-reporting norms. 

A DMV spokesperson told TechCrunch, “While reviewing Pony.ai’s application to renew the testing permit, the DMV found numerous violations on the driving records of active Pony.ai safety drivers.” 

The official further added that the DMV canceled the permit effective immediately because safety drivers play a vital role in the safe testing of autonomous technology and must have clean driving records under the DMV’s autonomous vehicle standards. With this development, Pony.ai no longer holds any permit to test its autonomous vehicles on US roads. 

However, the good news for the company is that it recently received a taxi license for its robotaxis in China. The permit allows Pony.ai to charge fees for autonomous taxi services in parts of China where it has been testing its robotaxis; according to officials, the company has been granted permission to operate 100 self-driving vehicles in Guangzhou’s Nansha district.
