
Infometry announces partnership with Snowflake to power AI analytics  

Infometry, a leading provider of data analytics products and solutions, has announced a new product integration and partnership with Snowflake, the leading cloud data warehouse company.

The partnership will allow Snowflake customers to bring data into InfoFiscus, software that helps customers unlock actionable insights and a 360-degree view of their business.

Infometry’s extensive domain knowledge and robust business processes reflect a deep understanding of which analytics trends lead to desired business results, built by capturing client activity from data sources. The bi-directional integration will give clients the benefits of cloud modernization, advanced analytics, data strategy, and data governance, along with ongoing self-service insights.

Read More: Snowflake To Bring Python To Its Data Cloud Platform

Users across both organizations can utilize Snowflake and Infometry analytics solutions to make data-driven decisions that fuel revenue growth and a quicker go-to-market.

The partnership will empower enterprises to monitor high-volume, high-frequency data to uncover significant insights for clients in data-intensive industries, including technology, retail, energy, finance, supply chain, life science, and healthcare.

Infometry’s partnership with Snowflake facilitates a next-gen cloud data analytics solution for leading financial services clients. The data platform is used to execute various analytical and AI use cases, offering continuous insight for fraud detection. It is also expected to improve field-specialist selling effectiveness and marketing effectiveness while reducing loan dispute processing time.

Google Developed Minerva, an AI That Can Answer Math Questions

Google unveiled Minerva, a sophisticated natural language processing model that can handle complex computations and solve math questions. Minerva is built on PaLM, a neural network Google introduced in April 2022. PaLM features around 540 billion parameters, the values that determine how the model makes decisions. By parameter count, it also surpasses OpenAI’s GPT-3, which has 175 billion parameters.

Google claims that existing neural networks have so far shown only a modest capacity to handle so-called quantitative reasoning issues, such as math problems. Google researchers Ethan Dyer and Guy Gur-Ari wrote that most language models tend to overlook quantitative reasoning and still fall short of human-grade performance. They added, “It is often believed that solving quantitative reasoning problems using machine learning will require significant advancements in model architecture and training techniques.”

Read More: Calantic AI Marketplace by Bayer analyzes CT and MRI Scans.

Google trained Minerva on 118 GB of scientific papers and web pages containing mathematical expressions. The training data retained the formatting and symbols that express the semantic meaning of mathematical expressions, which is why Minerva learned to express answers in standard mathematical notation.

Additionally, Google set up Minerva to produce a range of potential responses when analyzing a query. Minerva can even arrive at the answer to a math problem via several different calculation routes. The neural network then compares the solutions obtained from the different calculations and chooses the most appropriate one.
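This selection step resembles majority voting over independently sampled solutions. A minimal pure-Python sketch of the idea, with a stubbed `sample_answers` standing in for the model (all names here are hypothetical illustrations, not Google's code):

```python
from collections import Counter

def sample_answers(question, k=8):
    # Stand-in for the language model: the model is sampled k times and a
    # final answer is read off each sample. Canned values for illustration.
    return ["42", "42", "41", "42", "40", "42", "42", "39"][:k]

def majority_vote(question, k=8):
    # Keep the answer that occurs most often across the k samples.
    answers = sample_answers(question, k)
    answer, _count = Counter(answers).most_common(1)[0]
    return answer

print(majority_vote("What is 6 * 7?"))  # -> 42
```

Because errors in individual samples tend to disagree while correct derivations tend to converge, the most frequent answer is usually the most trustworthy one.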

Dyer and Gur-Ari concluded, “We hope that general models capable of solving quantitative reasoning problems will help push the frontiers of science and education.”

AI model predicts rate of crime a week in advance with 90% accuracy

An AI model developed at the University of Chicago has predicted the rate and location of crime across a city a week in advance with 90% accuracy.

The AI model was created by Ishanu Chattopadhyay and his colleagues at the University of Chicago. The model analyzed past crime data from Chicago from 2014 to 2016 and then predicted crime levels for the weeks that followed the training period.

The city was divided into squares of about 300 meters. The model predicted the occurrence of several crimes across the city a week in advance with an accuracy of 90%. The model was also trained and tested on data from seven other major US cities with a similar level of performance.
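The tiling works by bucketing event coordinates into fixed-size squares and counting events per square per time window. A small illustrative sketch of that binning step (not the Chicago team's released code):

```python
from collections import Counter

TILE_M = 300  # side length of each square tile, in meters

def tile_of(x_m, y_m):
    # Map a point (meters from a city origin) to its tile index.
    return (int(x_m // TILE_M), int(y_m // TILE_M))

def counts_per_tile(events):
    # events: iterable of (x_m, y_m) crime locations for one time window.
    return Counter(tile_of(x, y) for x, y in events)

week = [(10, 20), (250, 40), (310, 20), (305, 15)]
print(counts_per_tile(week))  # two tiles, two events each
```

A sequence of such per-tile counts, one per week, is the kind of spatio-temporal series a forecasting model can then be trained on.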

Read More: Staqu’s AI Systems Can Now Spot Crimes And Listen To Gunshots

According to Chattopadhyay, the data used by the model is inevitably biased, but efforts have been made to reduce the effect of that bias. He added that the AI does not identify suspects, only potential sites of crime.

The predictions of the AI model could be used to inform policy at a high level instead of being used directly to allocate police resources. The team has also publicly released the data and algorithm used in the process so that other researchers may investigate the results.

The team of developers also used the data to look for areas where human bias affects policing. The model analyzed the number of arrests following crimes in neighborhoods in Chicago with various socioeconomic levels. The results showed that crimes in wealthier city sections resulted in more arrests than in poorer areas, suggesting bias in the police response.

Modular closes $30M Seed Round to Provide a Unified Platform for AI-based System Development

Modular, a startup founded by former Google and Apple engineers, closed a $30M seed round led by Google Ventures, with participation from SV Angel, The Factory, and Greylock. The investment will be used to realize the founders’ vision of streamlined, platform-oriented AI system development.

Tim Davis and Chris Lattner, Modular’s co-founders, believe that AI has the potential to transform companies into ‘Big Tech’ companies. However, the software built for AI development is outdated and unable to keep pace with AI’s progress. Moreover, despite several AI development frameworks maintained by companies such as Google, progress remains constrained by tooling and infrastructure restrictions.

Read More: PyTorch announces the release of PyTorch 1.12

Modular plans to change that with this seed funding. Lattner said, “The industry is struggling to maintain and scale fragmented, custom toolchains that differ across research and production, training and deployment, server and edge.” He added, “AI can change the world, but not until the fragmentation can be healed and the global developer community can focus on solving real problems, not on the infrastructure itself.”  

Modular’s solution is to provide a platform that unifies AI framework frontends via “composable” standard components. It will supposedly enable developers to plug in custom hardware to train AI and deploy those systems to devices or servers. Lattner said the platform would let developers “seamlessly scale [the systems] across hardware so that deploying the latest AI research into production ‘just works.’”

Modular stands in contrast to the growing MLOps category of companies, which provide tools for obtaining, categorizing, and manipulating the data required to train AI systems, along with procedures for creating, deploying, and monitoring AI.

PyTorch announces the release of PyTorch 1.12

The PyTorch team announced the release of PyTorch 1.12, comprising 3,124 commits from 433 contributors. The updates include accelerated PyTorch vision models on CPU, a beta AWS S3 integration, PyTorch on Intel Xeon Scalable processors with bfloat16, and the FSDP API.

The update enables functional APIs that apply module computation with a given set of parameters. PyTorch also introduced a new beta release of TorchArrow, a library for ML preprocessing over batch data. Currently, it provides a Python-based DataFrame interface with a high-performing CPU backend.

PyTorch also introduced (beta) complex32 and complex convolutions. PyTorch already supports complex numbers, complex modules, complex autograd, and numerous complex operations, including Fast Fourier Transform (FFT) operators. Several libraries that build on PyTorch, such as torchaudio and ESPnet, already make use of complex numbers. PyTorch 1.12 adds complex convolutions and the experimental complex32 data type, which allows FFT computations in half precision.
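The role complex numbers play in FFT-style workloads can be seen in a tiny pure-Python discrete Fourier transform — a sketch of the underlying math, not PyTorch's implementation:

```python
import cmath

def dft(signal):
    # Naive discrete Fourier transform: each output bin is a complex sum
    # of the input against a complex exponential, so a native complex
    # dtype is essential for this kind of computation.
    n = len(signal)
    return [
        sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        for k in range(n)
    ]

# One full cosine cycle over four samples: energy lands in bins 1 and 3
# (the positive/negative frequency pair), and the DC bin stays near zero.
spectrum = dft([1.0, 0.0, -1.0, 0.0])
```

A complex32 dtype stores each of these complex values as a pair of half-precision floats, halving memory traffic for such transforms at some cost in precision.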

The update also introduced a Beta version of Forward-Mode Automatic Differentiation that allows computations based on directional derivatives. The 1.12 version of PyTorch enhances the operator coverage for forward-mode AD.
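Forward-mode AD propagates a directional derivative alongside each value as the computation runs. Dual numbers give a minimal pure-Python sketch of the idea (an illustration of the technique, not PyTorch's `torch.autograd.forward_ad` API):

```python
class Dual:
    # A dual number carries a value and a directional derivative (tangent).
    def __init__(self, val, dot):
        self.val, self.dot = val, dot

    def __add__(self, other):
        # Sum rule: (u + v)' = u' + v'
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        # Product rule: (u * v)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def f(x):
    return x * x + x  # f(x) = x^2 + x, so f'(x) = 2x + 1

x = Dual(3.0, 1.0)  # seed the tangent in the direction of x
y = f(x)
print(y.val, y.dot)  # 12.0 7.0
```

Unlike reverse mode, the derivative arrives in the same forward pass as the value, which is why forward mode suits directional derivatives with few inputs.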

Read More: New Android App Uses AI To Determine Coffee’s Roast Level

Besides the API updates, there are several performance enhancements via nvFuser, a deep learning compiler. For Volta and later CUDA accelerators, TorchScript changes its default fuser in PyTorch 1.12 to nvFuser, which supports a wider variety of operations and is faster than NNC, the previous fuser for CUDA devices.

Memory formats have a significant impact on performance when running vision models. The 1.12 release explains the fundamentals of memory formats and shows how popular PyTorch vision models run faster with Channels Last on Intel® Xeon® Scalable processors. Especially with newer numeric formats such as bfloat16, performance improves many-fold.

The new version also ships the Fully Sharded Data-Parallel (FSDP) API. PyTorch 1.11 included a prototype with minimal features; the PyTorch 1.12 beta adds a universal sharding strategy API, mixed-precision policies, a transformer auto-wrapping policy, and faster model initialization.

For more details, you can check out the official release.

ICLR, NeurIPS, and ICML are the top three Publications for Artificial Intelligence, According to Google’s Scholar Metrics 2022

Google announced the release of the 2022 version of Scholar Metrics, which tracks the visibility and influence of scholarly publications. In the artificial intelligence subcategory, ICLR, NeurIPS, and ICML are the top three publications.

With Google Scholar Metrics, authors can easily assess the popularity and impact of recent articles in scholarly publications. To assist authors in deciding where to publish new work, Scholar Metrics compiles recent citations to a wide range of publications. Based on these citations and the top publications’ h-index results, researchers can decide where to publish.

Users can start by browsing the top 100 publications in a number of languages, ranked by their five-year h-index and h-median metrics. By clicking on a publication’s h-index number, users can read the articles and citations that underpin the metrics, and see which articles in the publication were cited most frequently and by whom.
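The h-index behind these rankings is simple to compute: a publication has h-index h if h of its articles each have at least h citations. A minimal sketch:

```python
def h_index(citations):
    # Sort citation counts descending; h is the largest rank r such that
    # the r-th most cited article has at least r citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4: four articles have >= 4 citations
```

The five-year variant simply restricts the citation counts to articles published in the last five complete years; the h-median is the median citation count among those h articles.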

Read More: NVIDIA AI Enterprise Now Available via HPE Greenlake

Journals from websites that adhere to Google’s inclusion criteria are included in Scholar Metrics. Publications with fewer than 100 articles published between 2017 and 2021, or with zero citations over that period, are excluded.

One can also search for publications in specific categories like Sustainable Energy, Public Health, Business, Economics & Management, etc. The search can be made more specific by further looking in the “subcategories” and then opting for one. 

New Android App Uses AI To Determine Coffee’s Roast Level

Researchers from Thailand’s King Mongkut’s University of Technology Thonburi have created Coffee Roast Intelligence, an Android app that uses AI to determine a coffee’s roast level.

As the flavor of coffee depends on the extent to which the beans are roasted, determining the degree of roasting is crucial. Coffee Roast Intelligence displays the roast level along with a percentage score for its class prediction.

The app analyzes images of coffee beans with a CNN model, compares them against an image database, and classifies the beans as unroasted, light, medium, or dark roast. For each of the four categories, the researchers built the image database from 1,200 photos of coffees roasted at JJ Mall Jatujak, using a Laos Typica Bolaven for the light roast, a Doi Chaang for the medium roast, and a Brazil Cerrado for the dark roast.
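For intuition only, a toy nearest-centroid classifier over mean bean brightness hints at how color can map to a roast class. The real app uses a CNN over full photos, and the brightness values below are invented for illustration:

```python
# Hypothetical reference brightness (0-255 scale) per roast class;
# darker roasts reflect less light, so their values are lower.
CENTROIDS = {"unroasted": 200, "light": 150, "medium": 100, "dark": 50}

def classify_roast(mean_brightness):
    # Assign the class whose reference brightness is closest to the input.
    return min(CENTROIDS, key=lambda c: abs(CENTROIDS[c] - mean_brightness))

print(classify_roast(140))  # -> "light"
```

A CNN generalizes this idea: instead of one hand-picked brightness feature, it learns which visual features separate the four classes from the labeled photo database.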

Read More: Calantic AI Marketplace by Bayer analyzes CT and MRI Scans

Although preliminary results are encouraging, experts point out that the software cannot yet take different bean origins, which “may alter the color,” into account. Nor does it account for variation in bean density, roast duration, age, temperature, elevation, and many other factors.

Another possible problem is that the app only determines the roast level based on the coffee bean’s exterior color. However, uneven growth and blistering during the roasting process could result in the exterior becoming more roasted than the interior, resulting in inaccurate results.

Nevertheless, the developers are optimistic that they can close these gaps as the dataset continues to grow. For now, the app offers many benefits, especially for roasters: rapidly snapping a photo of the coffee as it roasts to identify the roast level could be a game-changer.

NVIDIA AI Enterprise Now Available via HPE Greenlake

NVIDIA AI Enterprise, an end-to-end, cloud-native software suite for data analytics and AI, is now available via the HPE GreenLake platform. The suite can be used everywhere, from the data center to the cloud, and is fully supported by NVIDIA. Using its cloud-native AI tools and frameworks, developers can streamline development and deployment and quickly create high-performing AI solutions.

With HPE GreenLake now offering NVIDIA AI Enterprise in a few countries, IT departments are spared the necessity of setting up the infrastructure needed to execute AI workloads. The risk, time, effort, and expense associated with developing, deploying, and maintaining an enterprise AI platform for IT personnel are minimized with HPE’s on-premise cloud service to access the NVIDIA AI Enterprise software package.

With secure, self-service provisioning and monitoring via a unified control plane, the HPE GreenLake platform gives businesses centralized control and analytics to manage resources, costs, and capacity across both on-premises and cloud deployments.

Read More: AI-powered BirdNET App Identifies Birds by Sound Alone

The software suite is deployed on NVIDIA-Certified HPE ProLiant DL380 and DL385 servers running VMware vSphere with Tanzu. HPE GreenLake enables customers to acquire NVIDIA AI Enterprise on a pay-per-use basis, with the flexibility to scale up or down and tailor it to their needs. For training or inference workloads, customers can choose from predefined packages. NVIDIA Ampere architecture GPUs, VMware vSphere with Tanzu, and NVIDIA AI Enterprise software are all included in the packages.

NVIDIA was a platinum sponsor at HPE Discover, held June 28-30, 2022, in Las Vegas, where the company gave more insight into the availability.

AI-powered BirdNET App Identifies Birds by Sound Alone

According to new research at Cornell University, the AI-powered BirdNET app can successfully identify more than 3,000 bird species by sound alone while generating reliable scientific data.

The research contends that because the BirdNET app doesn’t require bird-identification expertise, it lowers the barrier to citizen science. Users can keep an ear out for birds, then tap the app to record them. The app employs artificial intelligence to recognize the species automatically from the sound, documenting the identification for later use in research.

Connor Wood, a researcher at the K. Lisa Yang Center for Conservation Bioacoustics at the university, said, “The most exciting part of this work is how simple it is for people to participate in bird research and conservation.” Wood added that the innovation has led to active participation worldwide.

Read More: Pony.ai sets its ADC made with NVIDIA DRIVE Orin for mass production

Stefan Kahl, the co-author of the study and the technical developer of the app, said, “Our guiding design principles were that we needed an accurate algorithm and a simple user interface. Otherwise, users would not return to the app.” 

To see whether the app could produce reliable scientific data, the authors chose four test scenarios where conventional research had already produced solid results. Their work demonstrates, for instance, that the migratory range of the brown thrasher and the known distribution pattern of song types among white-throated sparrows were successfully replicated using data from the BirdNET app. Validating the app’s reliability was essential to extending its usability to all wildlife and soundscapes in the long term.

The Cornell Lab of Ornithology toolkit includes the BirdNET app and is available for iOS and Android platforms.

Calantic AI Marketplace by Bayer analyzes CT and MRI Scans

The radiology division at drugmaker giant Bayer is expanding its portfolio of MRI and CT scan devices by adding digital and AI-powered imaging apps. The launch of a virtual marketplace is part of this expansion.

The new cloud-based platform, called Calantic Digital Solutions, aims to automate various routine tasks across the radiologist’s workflow in a field that is experiencing a shortage of qualified specialists. Vendor-neutral software for accelerating patient reviews, automatically classifying critical cases, and highlighting scans of dangerous lesions will be available in the marketplace. 

Gerd Kruger, Bayer’s Radiology Head, said that with Calantic Digital Solutions, the company would be a part of the fastest-growing segment in radiology. He also emphasized the company’s mission to provide an ecosystem of third-party products and services integrated with Bayer’s products and services to deliver disease-oriented solutions for radiologists.

Read More: Euclid raises $27M in Series B Funding for its AI-based Heart Disease Diagnosis Software 

Following regulatory clearances, the platform’s initial rollouts are anticipated in the U.S. and European markets. Its offerings will be grouped by body area and diagnostic method, beginning with disorders of the chest and nervous system, including identifying potentially malignant nodules in lung tissue and examining cerebral hemorrhages and strokes. The company said it would eventually introduce more disease-specific solutions.

The introduction of Calantic is intended to bring the company closer to its stated objectives by beating the segment’s average annual growth rate, estimated at 5 percent through 2030.

With technological advancements in the industry, the number of automated systems for processing imaging scans has skyrocketed. Another AI-enabled imaging platform, AIDOC, gained popularity by raising $110M to expand its technology.
