
Axis Bank Ties Up With AWS To Accelerate Digital Transformation


Axis Bank has tied up with AWS to move 70 percent of its data to the cloud over the next 24 months. The partnership is meant to accelerate the bank's digital transformation program by reducing costs and improving agility and customer service.

As part of the agreement, AWS will provide the bank with container, storage, database, and compute services to build new digital financial services and bring advanced banking experiences to customers. With this, Axis Bank customers will be able to open accounts in under six minutes with instant digital payments, which the bank expects to lift customer satisfaction by 35 percent.

“Cloud is transforming the financial industry,” said Puneet Chandok, President, Commercial Business, AWS India and South Asia, AISPL. He added that AWS is delighted to help Axis Bank build and grow a suite of digital banking services that evolves with technology, introduces new payment modes, and supports consumer and business needs in India.

Read more: Deutsche Bank Releases Paper On The Usage Of AI In Security Services

With Amazon Elastic Kubernetes Service (EKS), the bank will be able to start, run, and scale Kubernetes applications on AWS or on-premises. Its new services are designed as microservices that support any application architecture, irrespective of scale, load, or complexity. Using Amazon DocumentDB, Amazon's document database service, the bank will run financial transactions securely across its digital bank accounts. To scale workloads and support 10 million real-time payments through UPI, Axis Bank will use Amazon Elastic Compute Cloud (Amazon EC2), a web service that provides secure, resizable compute capacity in the cloud, ensuring reliable and consistent performance.
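For illustration only, and not a detail from the announcement, the sketch below shows how a platform team might provision resizable EC2 capacity with boto3, the AWS SDK for Python; the region, AMI ID, instance type, and tag values are placeholders.

```python
# Hypothetical sketch: launching resizable EC2 compute with boto3.
# The AMI ID, instance type, and tags are placeholders, not Axis Bank's setup.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="m5.large",           # resizable: swap the type as load grows
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "workload", "Value": "upi-payments"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```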

Axis Bank believes that a cloud-native, design-centric engineering capability is critical to succeeding in digital finance. The bank has put more than 800 people on its digital projects, including an in-house engineering and design team of 130. Last year, Axis Bank decided to deploy all new customer-facing applications on AWS. Today, 15 percent of the bank's applications run on the cloud; over the next three years, it aims to have 70 percent of its operations there.


NVIDIA Finalizes GPUDirect Storage 1.0 To Accelerate Artificial Intelligence


NVIDIA has finalized GPUDirect Storage 1.0 after a year of beta testing, aiming to accelerate artificial intelligence workloads in the high-performance computing industry.

NVIDIA’s Magnum IO GPUDirect Storage driver lets users bypass the server CPU and exchange data directly between storage and high-performance GPU memory.

NVIDIA officials said the new GPUDirect Storage reduces CPU utilization by up to three times, freeing the CPU to focus on processing-intensive applications rather than moving data between storage and the GPU.
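As a minimal sketch of what this looks like in practice, and not code from NVIDIA's announcement, the RAPIDS kvikio bindings to the cuFile API can read a file straight into GPU memory; the file path and buffer size below are placeholders.

```python
# Hypothetical sketch: reading a file directly into GPU memory with
# GPUDirect Storage via kvikio (RAPIDS bindings to NVIDIA cuFile).
import cupy as cp
import kvikio

buf = cp.empty(1_000_000, dtype=cp.uint8)   # destination buffer in GPU memory
f = kvikio.CuFile("/data/sample.bin", "r")  # placeholder path
nbytes = f.read(buf)                        # DMA from storage to GPU, bypassing the host CPU
f.close()
print(f"read {nbytes} bytes directly into GPU memory")
```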

Read More: WekaIO Announced Support Of NVIDIA's Turbocharged HGX™ Supercomputing Platform

The company announced the integration of the Magnum IO GPUDirect Storage software with its HGX AI supercomputing platform, alongside the new NDR 400G InfiniBand networking and the A100 80 GB PCIe GPU, at the ISC High Performance 2021 digital conference.

NVIDIA collaborated with numerous industry leaders, including IBM, Dell, WekaIO, and Micron, to develop the technology. IBM recently announced that it has updated its storage architecture for the NVIDIA DGX POD and is committed to supporting the next generation of DGX POD with ESS 3200, which would double data transfer speeds to up to 77 GB per second by the end of this year.

Jensen Huang, CEO and founder of NVIDIA, said, “The high performance computing revolution has started in academia and is rapidly extending across a broad range of industries.”

He added that crucial dynamics are driving exponential advances that have made high-performance computing a valuable tool for several industries.

Jeff Denworth, co-founder and CMO of VAST Data, said that the use of GPUDirect Storage in projects like PyTorch has allowed vast amounts of data to feed a standard Postgres database about 80 times faster than a conventional network-attached storage system could.

“We have been pleasantly surprised by the number of projects that we are being engaged on for this new technology,” he said. 


Google AI Introduces A Dataset To Study Gender Bias In Translation


Google AI has announced a new dataset designed to help curb gender bias in machine translation. The dataset was developed by drawing context from surrounding sentences or passages.

Although advances in Neural Machine Translation (NMT) have paved the way for more natural and fluent translations, gender stereotypes persist because of the data the models were trained on. Because conventional NMT methods translate sentences individually and do not include gender information explicitly, this bias shows up in the output.

To address the bias, Google AI will use the Translated Wikipedia Biographies dataset to evaluate translation models. “Our intent with this release is to support long-term improvements on ML systems focused on pronouns and gender in translation by providing a benchmark in which translation accuracy can be measured pre and post-model changes,” Google AI wrote in a blog post.

Read more: Measuring Weirdness In AI-Based Language-Translations

The dataset was built by selecting instances with equal representation across geographies and genders. The instances were extracted from Wikipedia biographies based on occupation, profession, and activity, with occupations chosen across those commonly associated with each gender to keep the selection unbiased. To avoid geography-based bias, the instances were also chosen for geographical diversity, with entries for individuals from more than 90 countries around the world.

The dataset opens up a new way to evaluate gender-bias reduction in machine translation: each passage refers to a subject whose gender is known. The evaluation works well for translations into English, since English pronouns are strongly gender-specific. Measured this way, context-aware models showed a 67 percent reduction in errors compared with previous models.
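As a purely hypothetical sketch of the kind of pronoun check such a benchmark enables, the snippet below scores whether translated sentences use pronouns matching the subject's known gender; the field names and the translate callable are assumptions, not Google's actual schema or tooling.

```python
# Hypothetical sketch: pronoun-accuracy scoring for translations into English.
# Field names ("source_sentence", "gender") and `translate` are assumptions.
PRONOUNS = {"male": {"he", "him", "his"}, "female": {"she", "her", "hers"}}

def pronoun_accuracy(rows, translate):
    """Fraction of translated sentences whose pronouns match the subject's gender."""
    correct = 0
    for row in rows:
        tokens = set(translate(row["source_sentence"]).lower().split())
        expected = PRONOUNS[row["gender"]]
        other = PRONOUNS["female" if row["gender"] == "male" else "male"]
        if tokens & expected and not tokens & other:
            correct += 1
    return correct / len(rows)

# Error reduction between an old model and a context-aware model:
# reduction = ((1 - acc_old) - (1 - acc_new)) / (1 - acc_old)
```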


Google Joins O-RAN Alliance To Develop Artificial Intelligence-Powered 5G


On Monday, Google joined the O-RAN Alliance to help develop artificial intelligence-powered 5G networks. The alliance aims to drive innovation in the telecommunications industry by enabling hybrid and multi-cloud solutions.

In 2020, Google announced a comprehensive strategy for the telecommunications industry, and it has since worked closely with customers, partners, and industry bodies around the world to help transform the sector; its partnerships with Jio and NVIDIA were also part of this strategy. Now it has joined the O-RAN Alliance, a Radio Access Network (RAN) industry body that aims to enhance networks using artificial intelligence.

5G brings cloud, software, and networking together on one platform, which is pushing communication service providers (CSPs) toward cloud networking. Through this alliance, Google Cloud's solutions will help CSP developers build and scale new applications across any environment, with telecom platforms such as Anthos enabling flexible deployment models across a wide range of RAN use cases.

Read more: Jio Collaborates With Google Cloud To Enable 5G Technology

Beyond cloud networking, the alliance is also focused on moving network operations away from disparate proprietary transmitters and receivers and toward artificial intelligence-powered automation. Google will work with the O-RAN Alliance to enable cloud-native, intelligent networks that are secure, self-driving, and self-healing, applying machine learning, massive data processing, and geospatial analytics to efficiently design, manage, and operate RAN intelligent controllers.

The O-RAN Alliance has more than 200 members. Even so, Google's sign-up marks a significant milestone, as Google operates one of the world's most sophisticated networks, with an extensive array of subsea internet cables, an added advantage for the group in reaching its goals. Google also works with SpaceX to link its network to the Starlink constellation of low-Earth-orbit internet satellites.


Facebook Makes Virtual Environments More Interactive With Habitat 2.0


On Wednesday, Facebook released Habitat 2.0, a virtual environment used to teach robots how to interact with the physical world. It is an improved version of the original Habitat simulator released in 2019.

Training artificial intelligence models in simulation is a great way to teach robots to accomplish tasks in the real world. Virtual environments let an AI practice the same task thousands of times without physically interacting with real space. Facebook designed the original Habitat in 2019 with this aim, but it had a limitation: agents could not interact with objects. An agent would know where the spoons are but couldn't get a spoon to you. With version 2.0, this is now possible.

Habitat 2.0 includes simulations of various virtual spaces, such as offices, two-story homes, and warehouses. To make these virtual environments realistic, objects like chairs, dishwashers, windows, and cabinets were captured with an infrared depth-capture system that records their exact shapes.

Habitat 2.0 uses a new dataset called ReplicaCAD, a reconstruction of Replica, Facebook Reality Labs' dataset of 3D environments. “In ReplicaCAD, previously static 3D scans have been converted to individual 3D models with physical parameters, collision proxy shapes, and semantic annotations that can enable training for movement and manipulation for the first time,” Facebook AI wrote in its blog post.

Read more: Facebook AI Open Sources A New Data Augmentation Library

Habitat 2.0 was built to prioritize speed and performance over a wider range of simulation capabilities. For faster simulation, the platform uses a navigation mesh instead of full ground contact for the robot's movement, so it does not support non-rigid dynamics that cannot be meshed, such as liquids, deformable cloths, films, and ropes. This makes the Habitat 2.0 simulator twice as fast as most 3D simulators; according to Facebook AI researchers, Habitat can perform a six-month simulation in just two days.
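As a rough sketch of how such a simulation is typically driven from Python, paraphrasing the habitat-sim tutorials rather than Facebook's exact code, loading a scene and stepping an agent looks roughly like this; the scene path is a placeholder and class names can differ between habitat-sim versions.

```python
# Rough sketch of driving habitat-sim; the ReplicaCAD scene path is a placeholder.
import habitat_sim

sim_cfg = habitat_sim.SimulatorConfiguration()
sim_cfg.scene_id = "data/replica_cad/configs/scenes/apt_0.scene_instance.json"  # placeholder

agent_cfg = habitat_sim.agent.AgentConfiguration()  # default agent with move/turn actions

sim = habitat_sim.Simulator(habitat_sim.Configuration(sim_cfg, [agent_cfg]))
observations = sim.step("move_forward")  # advance the agent one step along the navigation mesh
sim.close()
```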

For AI researchers working on virtual environments, Facebook also developed the Habitat-Matterport 3D Research Dataset (HM3D) in close collaboration with Matterport Inc., a creator of 3D virtual spaces. HM3D consists of 1,000 open-source, Habitat-compatible 3D scans of apartments, homes, offices, and more.


Lollipop AI Launched Online Grocery Marketplace Where Users Can Build Their Own Recipes


Lollipop AI, a United Kingdom-based online grocery marketplace, has launched its public beta, which lets users build their own recipes. The artificial intelligence-powered platform enables users to prepare a meal plan from recipes.

The application automatically gathers the ingredients required for a recipe and adds them to the basket. According to experts, the platform can help reduce food waste and improve health and culinary skills.

The company also plans to partner with BBC Good Food and Sainsbury's. The website will be free to use, with an optional premium subscription, and Lollipop AI says the first ten thousand users of the beta version will receive a lifetime free subscription to the premium tier.

Read More: Artificial Intelligence Startup LLENA(AI) Partnered With Southern University For Food Research Initiative

Tom Foster-Carter, founder and CEO of Lollipop AI, got the idea for an artificial intelligence-powered grocery shopping platform while caring for his newborn child and spending hours sourcing ingredients for meals online.

In a statement, he said the intelligent approach would save a typical household several hours a week. Initially, the website will target health-conscious individuals, offering healthy meal options to help them achieve their weight-loss goals.

Recently, Foster-Carter said, “This is just the beginning. The plan is to have all your food requirements in one place. We will allow you to order your restaurant kits deliveries from us.” He also added that they would launch a cooking companion app to assist customers while cooking with the ingredients they ordered from Lollipop. 

In the United States, Jupiter.co is planning to launch a similar artificial intelligence-powered platform, which it bills as ‘groceries on autopilot.’


Tableau Adds New AI-Powered Tools For Business Intelligence


Tableau today added new artificial intelligence-powered tools that extend its augmented analytics features. The company says the additions are meant to empower more people with the right technology, regardless of their skill or role.

Researchers have found that a strong data culture creates many benefits, irrespective of industry or sector, including increased collaboration, innovation, and measurable value such as trust and accountability. Tableau took this step to put data visualization in everyone's hands. Francois Ajenstat, chief product officer at Tableau, said, “Tableau’s mission has always been to help people to see and understand data. We started by introducing the self-service revolution in business intelligence, and with each product release, we want to make it even easier and more intuitive for people to solve problems with data and for organizations to become data-driven.”

The new augmented analytics features include Ask Data, which helps people answer business questions using natural language, regardless of role or skill, guides users toward the most relevant questions, and offers dashboard integration for a simple, personalized experience. Explain Data, another feature, helps answer the ‘why’ behind data, using robust statistical methods and machine learning to surface understandable explanations for data points. Both features are available for Tableau Server and Tableau Online.


The new Ask Data for Salesforce feature lets Salesforce users ask any business question in Tableau CRM using natural language and semantic search, with instant answers tailored to their context in the form of insights, reports, and recommended dashboards. To surface the most critical insights from reports, users can turn to Einstein Discovery for Salesforce Reports, which lowers the barrier to machine learning by providing direct access to the associated Einstein Discovery story, where users can analyze further.

The Ask Data and Explain Data features are available now in Tableau 2021.2. Einstein Discovery for Reports is available only with the Salesforce Summer ’21 release, and Ask Data for Salesforce will be available early next year.


Symphony Partners With Artificial Intelligence-Powered Fintech Saphyre


Financial markets infrastructure platform Symphony has announced a partnership with fintech Saphyre to integrate Saphyre's patented artificial intelligence technology into pre- and post-trade workflows for front-, middle-, and back-office staff.

The partnership will create an integrated platform with real-time notifications and updates for all onboarding and maintenance-related tasks, along with ready-to-trade statuses for allocations, trades, funds, and settlements.

Symphony's platform has built-in end-to-end security and compliance features that centralize communication for both internal and external workflow participants, while in-chat notifications drive transparency throughout the process.

Read More: IBM Broadens 5G Deals With Telefonica And Verizon With Cloud And Artificial Intelligence

More than 500 market participants have already shown trust in this platform. Brad Levy, CEO of Symphony, said, “We are committed to tackle pain points in financial services workflows. This partnership with Saphyre will allow us to innovate with the front, middle and back office staff as the industry looks to revolutionize pre and post trade workflows to create a golden source for the democratization of that data.” 

He further mentioned that making such a program successful requires a diversified team and interoperability between technology partners and industry participants, and that Symphony is very excited to partner with Saphyre to accelerate digital adoption in market infrastructure.

The integration of the two companies' platforms provides a scalable framework for developing a complete suite of ready-to-use applications. Applications already on the market include ‘Ready To Trade’ (RTT), which provides real-time status per fund, per broker, per trading instrument, and per market, and net asset value (NAV) termination event alerts, which calculate NAV terminations automatically.

The companies say other new applications will launch very soon. Gabino Roche, CEO of Saphyre, said, “The partnership and production integration with Symphony achieves a demand that the industry has been crying for like interoperability of technology and data. The commitment, camaraderie, and excitement by both companies show in the final delivered product.”


Medidata Acorn AI Synthetic Control Arm Named “Best AI-Based Solution For Healthcare” By 2021 AI Breakthrough Awards


Medidata recently announced that its Acorn AI Synthetic Control Arm has been named the best artificial intelligence-based solution for healthcare in the 2021 AI Breakthrough Awards.

The AI Breakthrough Awards honor excellence and recognize innovation across a range of artificial intelligence and machine learning categories, including artificial intelligence platforms, deep learning, health tech, natural language processing, smart robotics, business intelligence, and industry-specific artificial intelligence applications, among others.

In a crowded artificial intelligence market shaped by the COVID-19 pandemic, Medidata has revolutionized clinical trials through its unique approach to external controls with its Synthetic Control Arm, a type of external control generated from patient data collected during clinical trials.

Read More: Xactly Forecasting Wins 2021 Artificial Intelligence Breakthrough Award

The main reason for the solution's success is its vast data pool of more than seven million patients, which has helped improve Medidata's Synthetic Control Arm.

Arnaud Chatterjee, Senior Vice President of Medidata Acorn AI products, said, “Because patients who are seeking treatment through clinical trials have often already considered standard of care treatments and found them unpleasing, many patients view an investigational drug as an opportunity for something better, especially with rare and life-threatening diseases.”

He also mentioned that the possibility of being placed in a control group can discourage patients from participating in clinical trials. Acorn AI, however, gives patients broader access to potentially life-saving therapies by using its data and analytics capabilities.

The United States Food and Drug Administration (FDA) recently gave Medidata permission to conduct a registration-based phase 3 clinical trial using its groundbreaking artificial intelligence-powered technology.

Medidata is a New York-based health tech company founded in 1999. It is a subsidiary of Dassault Systèmes and uses the 3DEXPERIENCE platform to help transform the healthcare industry.


GitHub's New Copilot Programming Assistant Uses GPT-3 To Generate Code


GitHub has released a new programming assistant tool called Copilot that generates software code automatically using a model derived from GPT-3. It uses context cues to suggest code; if users are not satisfied with a suggestion, they can flip through alternatives or write the code manually.

Copilot detects when developers start writing a snippet of code and then completes the rest automatically. GitHub says the tool can produce half a dozen lines of code to complete a snippet. Copilot understands various programming languages and is compatible with any project.
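As a purely illustrative sketch, not output captured from Copilot, the flow looks like this: the developer types a signature and docstring, and the assistant proposes the body beneath it.

```python
# Illustrative only: the developer writes the signature and docstring,
# and a Copilot-style assistant suggests the function body.
from datetime import date

def days_between(start: str, end: str) -> int:
    """Return the number of whole days between two ISO dates, e.g. '2021-06-29'."""
    # --- suggested completion ---
    return abs((date.fromisoformat(end) - date.fromisoformat(start)).days)
```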

Copilot not only completes snippets but also helps with component integration and testing. Integration itself is often only a few lines of work, yet understanding the component being integrated takes time, and Copilot saves that time by handling the task automatically. Copilot can also help check an application's reliability by analyzing the code and automatically finding a suitable open-source testing module, rather than requiring developers to add one to the code base by hand.

Read more: Microsoft’s Power Apps Will Allow You To Generate Code With GPT-3

Copilot generates code recommendations using OpenAI's Codex, an artificial intelligence model based on GPT-3. Codex was trained on billions of lines of open-source code from GitHub, allowing it to learn common code patterns that it harnesses to generate programming suggestions.

Initially, Copilot will be available at no charge as an extension to the open-source Visual Studio Code editor, for both cloud and local use. Later, Microsoft and OpenAI plan to open access to the Codex model so other enterprises can incorporate code-generation features into their products.
