
Nvidia’s MLPerf Benchmarks Show Impressive Results In AI Training


Nvidia announced that its graphics processing unit (GPU) based systems perform 3 to 5 times better at training artificial intelligence models than they did a year ago, according to its latest MLPerf benchmark results.

The MLPerf benchmark is backed by Alibaba, Google, Facebook AI, Nvidia, and Intel and is managed by the MLCommons Association, which keeps the tests transparent and gives buyers a clean source of information before purchasing a product. The benchmarks are based on AI workloads and scenarios such as natural-language processing, computer vision, recommendation systems, and reinforcement learning. The training benchmarks focus on the time needed to train AI models.

NVIDIA has set new performance records by training models in the least amount of time across all eight benchmarks in the commercially available submissions category with its A100 GPU.

Nvidia ran the tests on Selene, the world's fastest AI supercomputer, built on the Nvidia DGX SuperPOD architecture. While scaling is critical to AI training, Nvidia also posted strong results in chip-to-chip comparisons. Overall, the results show that performance has risen 6.5x in 2.5 years and 3 to 5x over the past year.
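
As a rough illustration of how such time-to-train comparisons are read, the short sketch below computes speedup factors from training times; the figures and submission labels are made-up placeholders, not actual MLPerf results.

```python
# Hypothetical time-to-train figures (minutes) for one MLPerf workload.
# These values are illustrative placeholders, not real submission data.
time_to_train = {
    "2018 submission": 88.0,
    "2020 submission": 40.0,
    "2021 submission": 13.5,
}

baseline = time_to_train["2018 submission"]
for name, minutes in time_to_train.items():
    speedup = baseline / minutes  # lower time-to-train => higher speedup
    print(f"{name}: {minutes:5.1f} min  ({speedup:.1f}x vs. 2018)")
```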

Read more: NVIDIA Finalizes GPUDirect Storage 1.0 To Accelerate Artificial Intelligence

The MLPerf results reflect the performance of various NVIDIA-based AI platforms across numerous new and innovative systems, spanning entry-level edge servers to AI supercomputers with thousands of GPUs. “Our ecosystem offers customers choices in a wide range of deployment models — from instances that are rentable by the minute to on-prem servers and managed services — providing the most value per dollar in the industry,” said Nvidia.

All the software used is available from the MLPerf repository, making it accessible for everyone to reproduce the benchmark results. Nvidia will continually add this code into the deep learning frameworks and containers available on NGC, the software hub for GPU applications. 


Seoul Introduces AI-Integrated CCTV To Prevent Suicides On Bridges


The Seoul metropolitan government has rolled out artificial intelligence-integrated CCTV cameras under a new pilot project to prevent suicides on bridges. The cameras are installed on 10 major bridges in and around Seoul, South Korea.

The Seoul metropolitan government has operated a round-the-clock CCTV surveillance and response system to monitor the bridges and proactively respond to suicide attempts since 2012. The recent pilot project monitors the bridges spanning the Han River, the largest waterway bisecting the city.

For the pilot project, the Seoul metropolitan government partnered with the Seoul Institute of Technology and the Seoul Fire and Disaster Headquarters (SFDH), with the main objective of improving the current suicide detection system. The artificial intelligence system is trained on large volumes of data from dispatch reports, CCTV footage, bridge sensors, information from people who had previously attempted suicide, report histories, phone calls, and text messages. Officials said that under this project they have been analyzing data from the SFDH since April 2020.

Read more: Medidata Acorn AI Synthetic Control Arm Named “Best AI-Based Solution For Healthcare” By 2021 AI Breakthrough Awards

The AI system learns suicidal behavioural patterns from this historical data and analyzes the information received from the cameras and sensors. If it detects suspicious behaviour or a dangerous situation, it immediately alerts rescue teams, who can reach the location in under three minutes. “We believe that the new video surveillance system will allow our teams to detect cases a bit faster and help us to answer a call more quickly,” said Kim Hyeong-gil, head of a relief squad.
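
Seoul has not published technical details of the system, but conceptually it pairs a learned behaviour model with an alerting rule. The sketch below is a hypothetical minimal version of that last step: a per-event risk score from some upstream model is compared with a threshold and, if exceeded, a dispatch alert is raised. The class, field names, and threshold are assumptions for illustration.

```python
from dataclasses import dataclass

ALERT_THRESHOLD = 0.85  # assumed risk-score cutoff, purely illustrative

@dataclass
class CameraEvent:
    bridge: str
    camera_id: str
    risk_score: float  # output of a (hypothetical) behaviour model, 0..1

def maybe_dispatch(event: CameraEvent) -> bool:
    """Return True and notify rescue teams when the risk score crosses the threshold."""
    if event.risk_score >= ALERT_THRESHOLD:
        print(f"ALERT: {event.bridge} / {event.camera_id} -> dispatch rescue team")
        return True
    return False

# Example: only the second event triggers a dispatch.
maybe_dispatch(CameraEvent("Mapo Bridge", "cam-07", 0.42))
maybe_dispatch(CameraEvent("Mapo Bridge", "cam-07", 0.91))
```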

The project has gained urgency because South Korea recorded the highest number of suicides in 2019. According to studies, approximately 486 people attempt suicide on bridges spanning the Han River every year, and authorities rescue about 96.63 per cent of them. The government also said that the number of rescue dispatches surged about 30 per cent in 2020 compared with the year before, which was a major impetus for the project.


Cloud-Based Employee Training Software Articulate Raises $1.5 Billion


Articulate, a maker of cloud-based employee training software, has raised $1.5 billion in a Series A funding round today. The company works with A-list enterprises such as Amazon, Oracle, and Morgan Stanley.

The New York-based company was founded in 2002 and now serves 106,000 organizations. Its most in-demand product is Articulate 360, used to author training courses that companies can export to the web or host on their own learning management system (LMS). It provides assets such as photos, videos, templates, and live-supported on-demand training. Rise.com is another offering, designed for small and mid-size businesses that don’t use an LMS.

“Every enterprise has to train employees, every function has its own unique training needs, and the organization as a whole has training needs around employee onboarding, compliance, and employee engagement,” Articulate president Lucy Suros said. She added that many companies have historically run training in a very unscalable, instructor-led way, whereas Articulate lets employees take all of that training online, regardless of skill level or position, and focus only on the areas where they need training.

Read more: DataRobot Raised $250 Million On $6 Billion Valuation

Articulate helps reduce training time by surfacing only the areas employees actually need. The platform can also save resources for large companies that train thousands of employees every year, while its ready-made course components let mid-size and small businesses avoid some of the cost of creating custom training materials.

Articulate is valued at $3.75 billion post-funding. General Atlantic, together with Blackstone Growth and ICONIQ Growth, accounted for a large chunk of the investment. Articulate will use the new capital to win more customers and to triple in size over the coming years.


Olive Raised $400M In A Funding Round For Artificial Intelligence-Based Health Tech


Olive on Thursday announced that it had raised $400 million in a funding round. This fresh funding has increased the company’s valuation to $4 billion. 

The automation company’s funding round was led by Vista Equity Partners, a global leader in software investing, along with Base10 Partners’ Advancement Initiative.

The company will also contribute to scholarship and financial aid awards for America’s historically Black colleges and universities through a program known as The Olive Scholarship.

Read More: Medidata Acorn AI Synthetic Control Arm Named “Best AI-Based Solution For Healthcare” By 2021 AI Breakthrough Awards

Olive’s Chief Medical Officer, Dr. YiDing Yu, said that artificial intelligence could be a boon for healthcare companies because it can cut the time it takes to deliver an efficient flow of information.

“Olive is the leading force for rapid product development to better empower the humans in healthcare, which is being hired at health systems and insurance companies across the country at lightning speed,” said the CEO of Olive, Sean Lane. 

He further added that the widespread adoption of their technology makes it evident that the healthcare industry is now looking forward to adopting artificial intelligence-powered services.

Last year, the company raised $1.5 billion, after which it acquired Empiric Health with the aim of accelerating the provider payments process using the firm’s self-developed platform, ‘Olive Assure.’ 

Olive is an automation company addressing healthcare’s most prominent issues. It helps deliver increased revenue, lower costs, and greater capacity to hospitals, payers, and health systems.

Healthcare workers are essentially working with outdated technology that creates a lack of shared knowledge and accurate data. Olive is driving connections to shine new light on healthcare processes, improving operations today so everyone can benefit from a healthier industry tomorrow.


WHO Issues Guidelines for Ethical Use Of Artificial Intelligence


The World Health Organization (WHO) recently published a report outlining six guidelines for the ethical use of artificial intelligence in the healthcare industry.

The guidance follows two years of intensive research by more than twenty scientists. The report highlights how doctors can use artificial intelligence to treat patients in underdeveloped regions of the world.

But it also points out that the technology is not a quick fix for health challenges, especially in low- and middle-income countries, and that governments and regulators should carefully analyze where and how artificial intelligence is used in healthcare.

Read More: GitHub’s New Copilot Programming Uses GPT-3 To Generate Code

The World Health Organization said that it hopes the six principles can be the foundation for how governments, developers, and regulators approach the technology. 

The six guidelines mentioned by the WHO in the document are:

  • Protect autonomy: Humans should have the final say on all health decisions. Decisions should not be made entirely by machines, and doctors should be able to override them at any time. Artificial intelligence should not be used to guide someone’s medical care without their consent.
  • Promote human safety: Developers should continuously monitor any artificial intelligence tools to ensure they’re working as they’re supposed to and not causing harm.
  • Ensure transparency: Developers should publish information about the design of AI tools. One common criticism of these systems is that they’re “black boxes” that are too hard for researchers and doctors to understand. The WHO wants tools to be transparent enough to be fully audited and understood by users and regulators.
  • Foster accountability: When something goes wrong with an AI technology — like if a decision made by a tool leads to patient harm — there should be mechanisms determining who is responsible (like manufacturers and clinical users).
  • Ensure equity: That means making sure tools are available in multiple languages and that they’re trained on diverse data sets. In the past few years, scrutiny of common health algorithms has found that some have racial bias built in.
  • Promote sustainable artificial intelligence: Developers should regularly update their tools, and institutions should have ways to adjust if a tool seems ineffective. Institutions and companies should also only introduce tools that can be maintained and repaired, even in under-resourced health systems.

There are numerous potential ways artificial intelligence can be used in the healthcare industry. There are applications in development that use artificial intelligence to screen medical images such as mammograms, devices that help people monitor their health, tools that scan patient health records to predict if they might get sicker, and systems that help track disease outbreaks. 

“The appeal of technological solutions and the promise of technology can lead to overestimation of the benefits and dismissal of the challenges and problems that new technologies such as artificial intelligence may introduce,” the report noted.


Google AI To Fuse Artificial Intelligence And Art


Google AI has joined Douglas Coupland to fuse artificial intelligence and slogan writing. The project, called Slogans for the Class of 2030, focuses on the first generation of young people whose lives are fully interconnected with artificial intelligence.

The idea was brought to life by first training the AI model on Coupland’s 30 years of written work (over a million words) so it could absorb the author’s style of writing. Trending topics were then picked from social media posts and added to the training sets to build short-form, topical statements. Once the model was trained, Coupland’s text, along with the curated topics, served as the inspiration for the twenty-five Slogans for the Class of 2030.
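
Google has not released the project’s model or code, but the general recipe of prompting a language model tuned on an author’s corpus with a topical cue can be sketched with off-the-shelf tools. The snippet below uses the Hugging Face transformers text-generation pipeline with a stock GPT-2 model as a stand-in; the model choice, prompt, and sampling settings are assumptions, not the project’s actual setup.

```python
from transformers import pipeline

# Stock GPT-2 as a stand-in; the real project trained on ~1M words of
# Coupland's writing plus curated social-media topics.
generator = pipeline("text-generation", model="gpt2")

prompt = "A slogan for the class of 2030 about living alongside AI:"
outputs = generator(
    prompt,
    max_new_tokens=20,       # keep the statements short-form
    num_return_sequences=3,  # draft several candidates to curate from
    do_sample=True,
    temperature=0.9,
)

for out in outputs:
    print(out["generated_text"])
```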

“I would comb through ‘data dumps’ where characters from one novel were speaking with those in other novels in ways that they might actually do. It felt like I was encountering a parallel universe Doug,” Coupland says. He added that the statements from the project felt like gems: though they weren’t written by him, they were still him, since they wouldn’t have existed without him.

Read more: AI Will Now Judge Artist’s Popularity In Italian Museums

A recurring pattern in Coupland’s work is investigating the human condition through the lens of pop culture, and that is the focus of the Class of 2030 project. The work is aimed at inspiring students in their early teens who will graduate in 2030 as they consider future career paths. Coupland hopes the project will spark conversation about the vast possibilities across different fields and show teens that AI does not have to be strictly scientific; it can be artful.

All 25 thought-provoking, visually rich digital slogans are available on Google Arts & Culture along with behind-the-scenes material. Artificial intelligence is evolving in areas humans could never have imagined, such as art, emotion, and socializing; this is only a small step, with many more routes to explore in the coming years.


DARPA Wants AI That Can Learn From Others’ Experiences


The Defense Advanced Research Projects Agency (DARPA) wants artificial intelligence models capable of lifelong learning from other AI systems’ experience, and it is putting up $1 million per project under its Shared-Experience Lifelong Learning program.

Humans advance quicker when they can learn from others’ experiences, and DARPA researchers want to build that capability into artificial intelligence models. DARPA has opened a new artificial intelligence exploration opportunity to pursue work on “the technical domain of lifelong learning by agents” in AI, according to the SAM.gov website.

The concept of lifelong learning (LL) is not new in AI research. According to the announcement, however, ongoing research focuses on agents learning from their own experience rather than on populations of LL agents that share each other’s experiences. The Shared-Experience Lifelong Learning (ShELL) program extends LL approaches to large numbers of initially identical agents that learn collectively, and sharing experiences in this way could significantly reduce the amount of training required by each individual agent.
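
DARPA’s announcement does not prescribe an algorithm, but the core idea of ShELL, initially identical agents pooling experience so each needs less individual training, can be sketched as agents that periodically merge their replay buffers. The classes and tuple format below are hypothetical illustrations, not anything specified in the solicitation.

```python
import random
from typing import List, Tuple

Experience = Tuple[str, str, float]  # (observation, action, reward) -- assumed format

class ShellAgent:
    """Toy lifelong-learning agent that keeps a local replay buffer."""

    def __init__(self, name: str):
        self.name = name
        self.buffer: List[Experience] = []

    def act_and_record(self, observation: str) -> None:
        action = random.choice(["advance", "hold", "retreat"])  # placeholder policy
        reward = random.random()
        self.buffer.append((observation, action, reward))

    def share(self) -> List[Experience]:
        return list(self.buffer)

    def absorb(self, shared: List[Experience]) -> None:
        # Learning from others' experience without having to collect it locally.
        self.buffer.extend(shared)

agents = [ShellAgent(f"agent-{i}") for i in range(3)]
for agent in agents:
    for step in range(5):
        agent.act_and_record(f"obs-{step}")

# Periodic sharing round: every agent absorbs every other agent's experience.
for receiver in agents:
    for sender in agents:
        if sender is not receiver:
            receiver.absorb(sender.share())

print({a.name: len(a.buffer) for a in agents})  # each buffer now holds 15 experiences
```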

Read more: Google Announces New Cloud TPU VMs For AI Workloads

Selected proposals will run over two phases. Phase I lasts six months with $300,000 in funding: the first two months cover a detailed research plan, month four a preliminary design, and month six a feasibility study of the model. Projects shortlisted for Phase II will develop a proof of concept over 12 months with a $700,000 budget.

Proposers must address three major sections in their bids. Content: a brief description of shareable and non-shareable knowledge, and of which knowledge should be incorporated and which ignored by ShELL agents. Communications: when and how knowledge should be shared. Computation: how the LL algorithms will get enough computing power through a mix of edge and cloud resources.

Proposals must contain the sections above and be uploaded through DARPA’s BAA portal. DARPA will acknowledge receipt of complete submissions via email and assign identifying numbers to be used in all further correspondence about those submissions. Awards will be made on September 24.


Axis Bank Ties Up With AWS To Accelerate Digital Transformation


Axis Bank has tied up with AWS to move 70 per cent of its data to the cloud over the next 24 months. The agreement is meant to accelerate the bank’s digital transformation program by reducing costs and improving agility and customer service.

As part of the agreement, AWS will provide the bank with container, data storage, database, and compute services to build new digital financial services and bring advanced banking experiences to customers. With these, Axis Bank customers will be able to open accounts in under six minutes and make instant digital payments, which the bank expects to lift customer satisfaction by 35 percent.

“Cloud is transforming the financial industry,” said Puneet Chandok, President, Commercial Business, AWS India and South Asia, AISPL. He added that AWS is delighted to help Axis Bank build and grow a digital banking suite that evolves with technology changes, introduces new payment modes, and supports consumer and business needs in India.

Read more: Deutsche Bank Releases Paper On The Usage Of AI In Security Services

With Amazon Elastic Kubernetes Service (EKS), the bank will be able to start, run, and scale Kubernetes applications on AWS or on-premises, using microservices that support any application architecture irrespective of scale, load, or complexity. Using Amazon DocumentDB, Amazon’s document database service, the bank will run financial transactions securely across its digital bank accounts. To scale workloads and support 10 million real-time payments through UPI, Axis Bank will use Amazon Elastic Compute Cloud (Amazon EC2), a web service that provides secure, resizable compute capacity in the cloud, ensuring reliable and consistent performance.
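
Neither Axis Bank nor AWS has shared implementation code, but since Amazon DocumentDB is MongoDB-compatible, an application would typically write records through a standard MongoDB driver. The sketch below uses pymongo with a placeholder endpoint, credentials, and document fields; everything named here is an assumption for illustration only.

```python
from datetime import datetime, timezone

from pymongo import MongoClient

# Placeholder endpoint and credentials -- DocumentDB clusters require TLS and
# run inside a VPC, so a real connection string would differ.
client = MongoClient(
    "mongodb://user:password@docdb-cluster.example.com:27017/?tls=true&retryWrites=false"
)

payments = client["banking"]["upi_payments"]  # hypothetical database/collection names

payments.insert_one(
    {
        "payment_id": "UPI-0001",
        "amount_inr": 1250.00,
        "status": "SETTLED",
        "created_at": datetime.now(timezone.utc),
    }
)
```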

Axis Bank believes that a cloud-native, design-centric engineering capability is critical to its success in digital finance. The bank has assigned over 800 people to its digital projects, including an in-house engineering and design team of 130. Last year, Axis Bank decided to deploy all new customer-facing applications on AWS; today, 15 percent of the bank’s applications run on the cloud, and it aims to have 70 percent of its operations there within the next three years.


NVIDIA Finalizes GPUDirect Storage 1.0 To Accelerate Artificial Intelligence


NVIDIA has finalized the release of GPUDirect Storage after a year of beta testing, aiming to accelerate artificial intelligence in the high performance computing industry.

NVIDIA’s Magnum IO GPUDirect Storage driver lets users bypass the server CPU and exchange data directly between storage and high-performance GPU memory.

NVIDIA officials said the new GPUDirect Storage reduces CPU utilization by up to three times, freeing the CPU to focus on processing-intensive application work rather than shuttling data.
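
As a conceptual sketch, not NVIDIA’s actual API, the snippet below shows the conventional path that GPUDirect Storage removes: data read from disk first lands in a CPU-side buffer and is then copied to the GPU. With GPUDirect Storage, that bounce buffer disappears and the bytes are moved straight into GPU memory by DMA; the direct call itself (the cuFile interface or its Python bindings) is only indicated in a comment.

```python
import numpy as np
import cupy as cp

PATH = "/data/training_shard.bin"  # placeholder file of float32 samples

# Conventional path: storage -> CPU (bounce buffer) -> GPU.
# The CPU spends cycles staging and copying the data.
host_buffer = np.fromfile(PATH, dtype=np.float32)   # read into host memory
device_array = cp.asarray(host_buffer)              # explicit host-to-device copy

# With GPUDirect Storage the bounce buffer disappears: a cuFile-style read
# (e.g. via NVIDIA's kvikio bindings) moves the bytes from NVMe directly into
# GPU memory, freeing the CPU for application work.
print(device_array.shape, device_array.dtype)
```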

Read More: Wekaio Announced Support Of NVIDIA’S Turbocharged HGX™ Supercomputing Platform

The company announced the integration of Magnum IO GPUDirect Storage software with its HGX AI supercomputing platform, along with the new NDR 400G InfiniBand networking and the A100 80 GB PCIe GPU, at the ISC High Performance 2021 digital conference.

NVIDIA collaborated with numerous industry leaders, including IBM, Dell, WekaIO, and Micron, to develop the technology. IBM recently announced that it has updated its storage architecture for the NVIDIA DGX POD and is committed to supporting the next generation of DGX POD with ESS 3200, which would double data transfer speeds to up to 77 GB per second by the end of this year.

Jensen Huang, CEO and founder of NVIDIA, said, “The high performance computing revolution has started in academia and is rapidly extending across a broad range of industries.”

He also mentioned that crucial dynamics are driving exponential advancement that has made high performance computing a valuable tool for several industries. 

Jeff Denworth, co-founder and CMO of VAST Data, said that using GPUDirect Storage in projects like PyTorch has allowed vast amounts of data to be fed into a standard Postgres database about 80 times faster than a conventional network-attached storage system could manage.

“We have been pleasantly surprised by the number of projects that we are being engaged on for this new technology,” he said. 


Google AI Introduces A Dataset To Study Gender Bias In Translation


Google AI has announced a new dataset aimed at curbing gender bias in machine translation. The dataset was built so that gender can be inferred from the context of surrounding sentences and passages.

Although advances in neural machine translation (NMT) have paved the way for natural, fluent translations, the models pick up gender stereotypes from the data they were trained on. Because conventional NMT methods translate sentences individually and do not explicitly include gender information, this bias shows up in the output.

To address the bias, Google AI will use the Translated Wikipedia Biographies dataset to evaluate translation models. “Our intent with this release is to support long-term improvements on ML systems focused on pronouns and gender in translation by providing a benchmark in which translation accuracy can be measured pre and post-model changes,” Google AI wrote in a blog post.

Read more: Measuring Weirdness In AI-Based Language-Translations

The dataset was built from a set of instances with comparable representation across geographies and genders, extracted from Wikipedia biographies based on occupation, profession, and activity. To keep the selection unbiased, occupations were chosen with their gender associations in mind, and to counter geography-based bias, the instances were distributed for geographical diversity. The result is a diverse dataset with entries on individuals from more than 90 countries around the world.

The new dataset provides a new basis for evaluating gender bias reduction in machine translation: because each biography refers to a subject of known gender, translation accuracy can be checked against that gender. The measurement works well for translations into English, since English pronouns are strongly gender-specific, and it has shown a 67% reduction in errors for context-aware models compared with previous models.
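
Google’s evaluation scripts are not part of the announcement, but the underlying measurement, checking whether a translation’s pronouns match the subject’s known gender, is simple to sketch. The example below counts pronoun errors over a few made-up English translations; the sentences, labels, and pronoun sets are illustrative assumptions.

```python
import re

# Expected pronoun sets per recorded gender of the biography subject (assumed labels).
EXPECTED = {
    "female": {"she", "her", "hers"},
    "male": {"he", "him", "his"},
}

# Tiny made-up evaluation set: (known gender, English translation produced by a model).
samples = [
    ("female", "She founded the observatory and published her findings in 1908."),
    ("female", "He founded the observatory and published his findings in 1908."),  # error
    ("male", "He retired from the national team after his third title."),
]

def has_pronoun_error(gender: str, translation: str) -> bool:
    """Flag a translation that uses pronouns belonging to a different gender label."""
    tokens = set(re.findall(r"[a-z']+", translation.lower()))
    wrong = set().union(*(p for g, p in EXPECTED.items() if g != gender))
    return bool(tokens & wrong)

errors = sum(has_pronoun_error(g, t) for g, t in samples)
print(f"pronoun error rate: {errors / len(samples):.2%}")  # 33.33% on this toy set
```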
