Microsoft limits access to AI facial recognition tool

Microsoft has changed its artificial intelligence ethics policies, announcing that it will no longer allow other companies to use its AI facial recognition tool for functions such as inferring gender, emotion, or age.

Microsoft has said that it aims to keep people and their goals at the center of system design decisions as part of the company's new Responsible AI Standard. The high-level principles will lead to tangible changes in practice, with some features being altered and others withdrawn from sale, the company added.

For instance, Microsoft’s Azure Face service, a facial recognition tool, is used by companies such as Uber for identity verification. Under the new rules, however, companies wanting to use the facial recognition features will need to actively apply for access and prove both that they meet Microsoft’s AI ethics standards and that the features benefit the end user and society as a whole.

Read More: AI Can Model First Impressions Based On Facial Features

According to Microsoft, even companies that have already been granted access to the tool will no longer be able to use its controversial features. The company is retiring the facial analysis technology that infers emotions and attributes such as gender or age.
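
In practice, this means API calls can still request attributes that remain supported, such as head pose or blur, but not the retired ones. Below is a rough sketch using the azure-cognitiveservices-vision-face Python SDK; the endpoint, key, and image URL are placeholders, and access itself now requires an approved application:

```python
# Rough sketch using the azure-cognitiveservices-vision-face Python SDK.
# Endpoint, key, and image URL are placeholders; access to the Face API
# now requires an approved application. Note that the retired attributes
# (emotion, gender, age) are not requested here -- only attributes such
# as head pose and blur remain available.
from azure.cognitiveservices.vision.face import FaceClient
from azure.cognitiveservices.vision.face.models import FaceAttributeType
from msrest.authentication import CognitiveServicesCredentials

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
KEY = "<your-key>"                                                 # placeholder

client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

faces = client.face.detect_with_url(
    url="https://example.com/photo.jpg",  # placeholder image
    return_face_attributes=[
        FaceAttributeType.head_pose,  # still supported
        FaceAttributeType.blur,       # still supported
    ],
)
for face in faces:
    print(face.face_id, face.face_attributes.head_pose)
```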

According to Sarah Bird, a product manager at Microsoft, the team collaborated with internal and external researchers to comprehend the shortcomings and potential advantages of the face recognition tool and navigate the tradeoffs. 

In the case of emotion classification specifically, these efforts raised important questions about privacy, she added. The lack of consensus on a definition of “emotions” and the inability to generalize the link between emotional state and facial expression also came into question.

Qualcomm Unveils its new Unified AI Stack

Qualcomm Inc. unveiled its new AI stack, which unifies its software approach to incorporating AI across its products and services. The company is on a path to transforming into a “full-stack” company providing intelligent cloud-to-edge solutions. Cristiano Amon, Qualcomm’s Chief Executive, explained that the company joins several semiconductor firms with a similar vision of helping developers accelerate AI integration.

According to the company, the Qualcomm AI Stack supports a wide range of intelligent devices with extensive AI software, access, and interoperability. It also supports a variety of AI frameworks, including the open-source TensorFlow and PyTorch frameworks and the ONNX model format.

The Qualcomm AI Stack portfolio gives developers direct access to the Qualcomm AI Engine, an AI library to delegate and deploy existing models directly to AI accelerators. The stack also provides specialized access to AI cores on Qualcomm Cloud AI 100. Further, it allows OEMs and developers to design one feature and then move the same model across other products and tiers.
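
The portability described above typically rests on framework-neutral model formats such as ONNX, which the stack supports. As a rough, generic illustration (standard PyTorch/ONNX usage, not Qualcomm’s SDK), a model trained in one framework can be exported once and then deployed to different runtimes and hardware tiers:

```python
# Illustrative only: exporting a PyTorch model to the ONNX interchange
# format, so one model artifact can be consumed by different runtimes
# and hardware tiers. This is generic PyTorch/ONNX usage, not the
# Qualcomm AI Stack API, and the file name is a placeholder.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model.eval()

dummy_input = torch.randn(1, 16)  # example input fixes the graph's shapes
torch.onnx.export(
    model,
    dummy_input,
    "feature.onnx",          # hypothetical output file
    input_names=["features"],
    output_names=["scores"],
)
# The resulting feature.onnx can then be loaded by any ONNX-compatible
# runtime, e.g. onnxruntime.InferenceSession("feature.onnx").
```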

Read More: Github’s AI Copilot is now generally available for all developers

Ziad Asghar, Qualcomm’s Vice President of Product Management, said, “We think with the Qualcomm AI stack, we are enabling developers and OEMs [Original Equipment Manufacturers] to be able to do so much more with the AI capability that we’re baking into our devices.”

The AI Stack portfolio also includes the AI Model Efficiency Toolkit, a dedicated AI development graphical user interface, model analyzers, and neural architecture search. The company announced a partnership with Google on Neural Architecture Search to expand its neural network technology across the platform. Asghar added, “I think hardware is of course, critical, but increasingly AI software is absolutely critical. We’ve made very large investments to be able to make this AI stack, the best in class, AI stack for the intelligent edge.”

FANUC introduces ML Tool for Predictive Maintenance

FANUC, a leading global automation solutions provider, has introduced its latest Industrial Internet of Things (IIoT) software, AI Servo Monitor, developed to help catch production problems before they occur. The software uses AI to predict possible failures of the drive systems in FANUC spindle motors and servo motors.

AI Servo Monitor, together with MT-LINKi and machine learning, analyzes the everyday performance of machines with FANUC CNCs. Daily data is depicted in intuitive graphs, enabling users to track abnormalities on these machines efficiently.

The AI automatically creates a baseline model of the machine running in its normal state. An anomaly score developed in the process expresses the difference between the daily recorded values and the baseline model.

Read More: Shell Scales AI Predictive Maintenance To 10,000 Pieces Of Equipment Using C3 AI

Users can easily view the anomaly scores in a graph on a web interface, and email notifications can be sent if the value exceeds predefined thresholds.
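
As a minimal sketch of this baseline-and-threshold pattern (a generic illustration of the technique, not FANUC’s actual model), a baseline can be fitted from normal-state data and each day’s readings scored by their deviation from it:

```python
# Minimal sketch of baseline/anomaly-score monitoring, in the spirit of
# what AI Servo Monitor is described as doing. Not FANUC's algorithm:
# the baseline here is just per-feature mean/std from normal operation,
# and the anomaly score is a mean absolute z-score.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily servo readings (e.g. current, vibration, temperature)
normal_days = rng.normal(loc=[5.0, 0.2, 40.0], scale=[0.5, 0.02, 2.0],
                         size=(60, 3))          # 60 days of normal data
baseline_mean = normal_days.mean(axis=0)
baseline_std = normal_days.std(axis=0)

def anomaly_score(day_readings):
    """Average absolute deviation from the baseline, in standard deviations."""
    z = np.abs((day_readings - baseline_mean) / baseline_std)
    return z.mean()

THRESHOLD = 3.0  # hypothetical alert threshold

today = np.array([6.8, 0.31, 47.0])  # readings from a drifting machine
score = anomaly_score(today)
if score > THRESHOLD:
    print(f"anomaly score {score:.1f} exceeds {THRESHOLD}: send alert email")
```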

According to Jon Heddleson, General Manager of Factory Automation for FANUC America, the IIoT software detects a failure before it happens, not after. He added that predictive maintenance is crucial in preventing unexpected downtime, and that FANUC’s AI Servo Monitor helps keep production running without interruption.

MT-LINKi is FANUC’s data collection and machine status monitoring software, connecting shop floor equipment including robots, machine tools, and PLCs. MT-LINKi collects, monitors, and presents data in color-coded graphical illustrations of the factory floor. The graphs provide further insight into manufacturing processes and historical data.

Data presented in MT-LINKi facilitates data-driven business decisions to enhance operations through advanced maintenance capabilities such as scheduling memory backups and presenting alarm/operator history. It also monitors the status of memory backup batteries, cooling fans, motor temperatures, and more.

Github’s AI Copilot is now generally available for all developers

GitHub’s Copilot, an AI code-writing assistant, had been in technical preview until now; GitHub is finally making it available to everyone. Copilot, which helps complete lines of code, was unveiled in June 2021 in collaboration with OpenAI.

With AI-assisted code writing on the rise, GitHub decided to make the tool available to all developers for a modest subscription fee. GitHub’s CEO Thomas Dohmke wrote, “That’s [Copilot’s] creating more time and space for developers to focus on solving bigger problems and building even better software.”

Dohmke also explained that this is the first time in the history of software development that AI-powered code completion is available to all developers. Microsoft will provide a 60-day free trial, after which developers can subscribe for $10/month or $100/year. It will be free for students as well as “certified” open source contributors, with about 60,000 engineers chosen from the network and students in the GitHub Education program as the first to benefit.

Read More: AI-Based Voicemod makes you sound like Morgan Freeman in Real-Time

Copilot allows developers to accept, reject, or manually change suggestions for Python, JavaScript, TypeScript, Ruby, Go, and dozens of other programming languages. Copilot adapts to the changes developers make, matching individual coding styles when autofilling repeated code patterns and recommending unit tests that match implementation code.
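
To illustrate the interaction model (a hypothetical, hand-written example rather than captured Copilot output), a developer typically writes a signature and docstring, and the assistant proposes a body and matching tests that can be accepted, rejected, or edited:

```python
# Hypothetical illustration of an AI completion workflow, not actual
# Copilot output. The developer writes the signature and docstring:
def is_palindrome(text: str) -> bool:
    """Return True if text reads the same forwards and backwards,
    ignoring case and non-alphanumeric characters."""
    # --- a suggested completion the developer can accept or edit ---
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]

# An assistant can also suggest unit tests matching the implementation:
assert is_palindrome("A man, a plan, a canal: Panama")
assert not is_palindrome("GitHub Copilot")
```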

In addition to Visual Studio Code, Copilot extensions are available for Neovim and JetBrains IDEs, as well as in the cloud on GitHub Codespaces.

Dohmke also added that AI-assisted coding has the potential to fundamentally change the nature of software and give developers a chance to code faster and easier.

ZIM invests $6 million in Data Science Consulting Group

ZIM Integrated Shipping Services has invested $6 million in the Data Science Consulting Group (DSG) in its Series A funding round. DSG is a leading technology company specializing in AI-based solutions, products, and services.

DSG will use the proceeds of the investment to bolster the development of its holistic AI governance and decision management system, E-volve. The company will also invest in expanding its operations and presence to more global territories. 

The investment will enable ZIM to develop its products and technologies further and expand its international presence in markets such as Japan and Australia.

Read More: Maruti Suzuki Invests ₹2 Cr In AI Startup Sociograph Solutions

According to an official statement from ZIM, the investment is a result of commercial collaboration between DSG and ZIM in creating a center of excellence for developing AI tools for the maritime shipping industry.

Eli Glickman, CEO of ZIM, said that the center of excellence, initiated with DSG, is an excellent example of a productive partnership ZIM has established with an Israeli startup. It also shows the successful production and implementation of artificial intelligence tools to improve ZIM’s business-related decisions.

He added that having worked closely with the DSG team, ZIM believes in DSG’s capabilities to support the ecosystem and that the investment will enable DSG to grow its business further.

IBM Researchers Make an AI-assisted E-Tongue called Hypertaste

The same team of researchers at IBM that created Watson is now channeling its AI expertise into an AI-assisted e-tongue called Hypertaste. The company plans to use the technology for chemical sensing. The e-tongue will serve various scientific and industrial applications by identifying liquids without a high-end laboratory.

Patrick Ruch said on behalf of IBM, “For the rapid and mobile fingerprinting of beverages and other liquids less fit for ingestion, our team at IBM Research is currently developing Hypertaste, an electronic, AI-assisted tongue that draws inspiration from the way humans taste things.”

With Hypertaste, the company aims to close the gap between powerful stationary instruments and portable sensors. Ruch added, “Closing this gap is crucial as most liquids of practical use are complex, meaning they comprise a rather large number of chemical compounds, none of which can serve as an identifier alone.”

Ruch explained that sending liquids back to a lab for routine analysis adds considerable cost and impracticality; the e-tongue would make such analysis much cheaper and more time-efficient.

Read More: AI-Based Voicemod makes you sound like Morgan Freeman in Real-Time

Hypertaste uses AI for combinatorial sensing. Ruch said, “In these liquids, it’s not so much the single components that matter but rather the properties that arise from combining them. Combinatorial sensing relies on the ability of individual sensors to respond simultaneously to different chemicals.”

The tongue consists of an array of electrochemical sensors that obtain a “fingerprint” of the liquid: electrodes measure voltage signals from the molecules present in the fluid, and these signals together form the fingerprint. A mobile application then passes this data to a cloud server.

Ruch said, “A trained machine learning algorithm compares the digital fingerprint just recorded to a database of known liquids. The algorithm figures out which liquids in the database are most chemically similar to the liquid under investigation, and reports the result back to the mobile app.”
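
A rough sketch of the matching step Ruch describes, assuming a simple nearest-neighbor comparison (the article does not disclose IBM’s actual model, and all fingerprint values below are invented):

```python
# Minimal sketch of fingerprint matching as described for Hypertaste:
# compare a new sensor fingerprint against a database of known liquids
# and report the chemically most similar entries. The database values,
# feature count, and nearest-neighbor choice are illustrative
# assumptions, not IBM's implementation.
import numpy as np

# Hypothetical database: each entry is a voltage fingerprint from the
# sensor array for a known liquid.
database = {
    "orange juice":  np.array([0.81, 0.12, 0.44, 0.67]),
    "apple juice":   np.array([0.78, 0.15, 0.40, 0.70]),
    "mineral water": np.array([0.10, 0.05, 0.08, 0.12]),
    "red wine":      np.array([0.55, 0.62, 0.71, 0.33]),
}

def most_similar(fingerprint, k=2):
    """Rank known liquids by Euclidean distance to the new fingerprint."""
    ranked = sorted(database.items(),
                    key=lambda item: np.linalg.norm(item[1] - fingerprint))
    return [(name, float(np.linalg.norm(vec - fingerprint)))
            for name, vec in ranked[:k]]

unknown = np.array([0.80, 0.13, 0.42, 0.68])  # fingerprint from the app
print(most_similar(unknown))  # closest matches reported back to the app
```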

NVIDIA Converts 2D Images into 3D with 3D MoMa AI Technology

At the Computer Vision and Pattern Recognition (CVPR) conference, NVIDIA presented another approach to transforming still 2D images into 3D objects with AI. The GPU giant uses a technique dubbed 3D MoMa for the conversion. 3D MoMa builds on photo measurements taken via photogrammetry and speeds up the process.

The company has also been researching neural radiance fields to create 3D scenes from 2D source images, but the newly unveiled 3D MoMa technology takes a very different approach.

3D MoMa uses AI to estimate physical attributes such as lighting and geometry in 2D images, then reconstructs realistic 3D objects from them. Objects made using 3D MoMa are triangle mesh models that can be imported into graphics engines, and the reconstruction can be completed within an hour on an NVIDIA Tensor Core GPU. The inverse rendering technique, which unifies computer graphics and computer vision, speeds up the process.

Read More: Salesforce Open-Sources OmniXAI an Explainable AI Machine Learning Library

NVIDIA’s research and creative teams used 3D MoMa to reconstruct jazz instruments as an example. The team then imported the newly created models into NVIDIA Omniverse and dynamically changed their characteristics.

David Luebke, Vice President of Graphics Research at NVIDIA, said, “Inverse rendering is a holy grail unifying computer vision and computer graphics.”

He added, “By formulating every piece of the inverse rendering problem as a GPU-accelerated differentiable component, the NVIDIA 3D MoMa rendering pipeline uses the machinery of modern AI and the raw computational horsepower of NVIDIA GPUs to quickly produce 3D objects that creators can import, edit, and extend without limitation in existing tools.”
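
The following toy example illustrates the principle of a differentiable rendering component, heavily simplified from 3D MoMa’s real pipeline (which recovers full meshes, materials, and lighting): rendering is expressed as a differentiable function so scene parameters can be recovered by gradient descent on an image loss.

```python
# Toy illustration of inverse rendering with differentiable components,
# not NVIDIA's 3D MoMa pipeline. The "renderer" here is a trivial
# shading model: pixel = albedo * light_intensity. Because it is
# differentiable, gradient descent can recover scene parameters that
# explain an observed image.
import torch

target = torch.tensor([0.6, 0.3, 0.15])    # observed pixel (RGB)

albedo = torch.rand(3, requires_grad=True)  # unknown material colour
light = torch.rand(1, requires_grad=True)   # unknown light intensity

optimizer = torch.optim.Adam([albedo, light], lr=0.05)
for step in range(500):
    optimizer.zero_grad()
    rendered = albedo * light               # differentiable "renderer"
    loss = torch.mean((rendered - target) ** 2)
    loss.backward()                         # gradients flow through the render
    optimizer.step()

# Note: albedo and light are only recoverable up to a shared scale
# factor -- a classic ambiguity in inverse problems like this one.
print(albedo.detach(), light.detach(), loss.item())
```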

3D MoMa is still in the works, but NVIDIA believes it will allow game developers and designers to swiftly edit 3D objects and integrate them into any virtual scenario.

Iktos and Zealand Pharma to develop AI for Peptide Drug Design

Iktos has announced collaborative research with Zealand Pharma A/S, a biotechnology company specializing in innovative peptide-based medicines, to co-develop generative and predictive AI technologies for peptide drug design. Iktos is a company specializing in developing AI solutions applied to chemical research.  

According to the agreement, Zealand Pharma will contribute its expertise in peptide drug discovery, to be combined with Iktos’ generative modeling technologies and its expertise in machine learning and AI.

In a statement, Iktos said that the company looks forward to working with Zealand Pharma’s experienced R&D team to build leading predictive and generative modeling technology in the field of peptides.

Read More: Meta Pharmaceuticals Raises $15M To Make Autoimmune Drugs With AI, New Immuno-Metabolism Tech

Iktos’ artificial intelligence technology is based on a unique data-driven chemical structure creation technology that brings new perspectives into the drug discovery procedure. The technology automatically designs virtual novel molecules with all the essential characteristics of an ideal drug molecule. 

Iktos has recently diversified its R&D efforts into developing an AI technology for peptide-based therapeutics. The company has also developed superior predictive and generative models to assist the design of new peptide therapeutics with desired properties.
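
As a loose illustration of what “designing molecules with desired properties” can mean in practice (generic RDKit usage on small molecules, not Iktos’ peptide technology), candidate structures can be scored against simple drug-likeness criteria:

```python
# Highly simplified sketch of property-guided molecule screening, to
# illustrate the general idea of selecting for "desired properties".
# This is generic RDKit usage, not Iktos' technology; real generative
# models propose novel structures rather than filtering a fixed list.
from rdkit import Chem
from rdkit.Chem import Descriptors

candidates = [
    "CC(=O)Oc1ccccc1C(=O)O",        # aspirin
    "CCN(CC)CCNC(=O)c1ccc(N)cc1",   # procainamide
    "CCCCCCCCCCCCCCCC",             # hexadecane (poor drug-likeness)
]

for smiles in candidates:
    mol = Chem.MolFromSmiles(smiles)
    mw = Descriptors.MolWt(mol)
    logp = Descriptors.MolLogP(mol)
    # Keep candidates inside a Lipinski-like window (illustrative criteria)
    if mw < 500 and logp < 5:
        print(f"keep   {smiles}  MW={mw:.1f}  logP={logp:.2f}")
    else:
        print(f"reject {smiles}  MW={mw:.1f}  logP={logp:.2f}")
```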

Zealand Pharma A/S has an excellent track record of inventing and developing novel peptide-based drugs. The company has extensive experience in improving the therapeutic characteristics of peptides.

Zealand Pharma is focusing on expanding its computational chemistry toolbox to integrate artificial intelligence and machine learning-based procedures to design novel therapeutic peptides. 

Yann Gaston-Mathé, President and CEO of Iktos, said that the company is pleased to have joined forces with Zealand Pharma. He added that the company expects to leverage Zealand Pharma’s extensive knowledge in peptide therapeutics with Iktos’ existing technology to initiate peptide drug discovery.

BlackSky gets the Joint Artificial Intelligence Center Contract for 5 years

BlackSky Technologies was awarded an order agreement by the Joint Artificial Intelligence Center (JAIC) to produce and optimize data sets used in DoD AI models and applications. The contract has a ceiling value of $241 million over the next 5 years.

The agreement opens doors for BlackSky to contribute its expertise to the various national security challenges faced by the extensive DoD community. BlackSky has demonstrated AI expertise in space-based dynamic monitoring.

According to Patrick O’Neil, BlackSky’s chief innovation officer, the company’s unique, data-rich platform brings a vital source of core AI-enabled, end-to-end capabilities to the DoD’s mission sets, from automatically tasking satellites to the low-latency delivery and analysis of high-frequency geospatial imagery and non-imagery data.

Read More: Artificial Intelligence Satellite From China Take Ultra High Pictures Of Earth

The JAIC, the Office of Advancing Analytics, and the Defense Digital Service have since been merged into a single organization called the Chief Digital and Artificial Intelligence Office.

BlackSky is a worldwide provider of real-time geospatial intelligence and delivers on-demand, high-frequency imagery, monitoring, and analytics of the most critical strategic locations, events, and economic assets on Earth.

BlackSky owns and operates one of the industry’s leading low-Earth-orbit small satellite constellations, optimized to capture imagery cost-efficiently wherever and whenever customers need it. BlackSky’s Spectra AI software platform processes data from its constellation and other third-party sensors to produce the critical analytics and insights that customers require.

AI-Based Voicemod makes you sound like Morgan Freeman in Real-Time

The voice changer app Voicemod is now using AI to make you sound like Morgan Freeman. The app has long transformed voices using established sound design techniques; more recently, it has begun to incorporate AI.

The app allows users to take on the polished voice of an actor, specifically that of Morgan Freeman, in what it calls the ‘Morgan’ voice. Voicemod’s real-time AI pilot makes it possible to use the voice to prank-call friends or stream live. The voice is recreated from English-speaking voice actors with similar vocal characteristics.

These voice actors read scripts to generate input data, and sound designers then work on this curated data with sound design technologies to turn the voices into characters. The AI voices include filters, background music, and dynamic effects.

Read More: Samsung Ventures Invests in NeuReality

These voices are processed in real time on your PC. However, the Morgan voice requires more CPU power than regular Voicemod effects. To start, Voicemod will open a beta version where users can sign up and test the effect on their computer to ensure there are no performance issues. The main version will later be made available for Mac as well.

Voicemod also debuted its PowerPitch technology, allowing users to build a lasting online voice identity for gaming, role-play, work, education, or even regular calls. People can use this technology for amusement and pranks, but it can also help millions of people with vocal abnormalities improve their pitch, volume, and quality.
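
Voicemod’s pipeline is proprietary, but the basic building block of pitch adjustment can be sketched with an off-the-shelf audio library; the example below runs offline, unlike Voicemod’s real-time processing, and the file names are placeholders:

```python
# Minimal offline pitch-shifting sketch using librosa, just to
# illustrate the kind of transformation a pitch technology performs.
# Voicemod's PowerPitch runs in real time and is far more sophisticated;
# the file names below are placeholders.
import librosa
import soundfile as sf

y, sr = librosa.load("input_voice.wav", sr=None)   # hypothetical input file

# Shift the voice up four semitones (negative values shift down)
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=4)

sf.write("shifted_voice.wav", shifted, sr)
```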
