
Cloud-Based Employee Training Software Company Articulate Raises $1.5 Billion


Articulate, the cloud-based employee training software company, has raised $1.5 billion in a Series A funding round announced today. The company counts many blue-chip enterprises, including Amazon, Oracle, and Morgan Stanley, among its customers.

The New York-based company was founded in 2002 and now serves 106,000 companies. Its most in-demand product is Articulate 360, used for authoring training courses that companies can export to the web or host on their learning management system (LMS). It provides assets such as photos, videos, and templates, plus on-demand training with live support. Rise.com is another product, designed for small and mid-size businesses that don't use an LMS.

“Every enterprise has to train employees, every function has its own unique training needs, and the organization as a whole has training needs around employee onboarding, compliance, and employee engagement,” Articulate president Lucy Suros said. She added that many companies have historically delivered training in unscalable, instructor-led formats. Articulate enables employees to take all of that training online, regardless of their skill level or position, focusing only on the areas where each employee needs training.

Read more: DataRobot Raised $250 Million On $6 Billion Valuation

Articulate helps reduce training time by covering only the essential areas. The platform can also save resources for massive companies that need to train thousands of employees every year, while its ready-made course components let small and mid-sized businesses avoid some of the cost of creating custom training materials.

The round values Articulate at $3.75 billion post-funding. General Atlantic led the investment, joined by Blackstone Growth and ICONIQ Growth. Articulate will use the new capital to gain more customers and expand threefold over the coming years.


Olive Raised $400M In A Funding Round For Artificial Intelligence-Based Health Tech


Olive on Thursday announced that it had raised $400 million in a funding round. This fresh funding has increased the company’s valuation to $4 billion. 

The automation company's funding round was led by Vista Equity Partners, a global leader in software investment, with participation from Base10 Partners' Advancement Initiative.

The company will also contribute to scholarship and financial aid awards for America's historically black colleges and universities through a program that will be known as The Olive Scholarship.

Read More: Medidata Acorn AI Synthetic Control Arm Named “Best AI-Based Solution For Healthcare” By 2021 AI Breakthrough Awards

Olive's Chief Medical Officer, Dr. YiDing Yu, said that artificial intelligence could be a boon for healthcare companies because of its ability to cut the time it takes to deliver an efficient flow of information.

“Olive is the leading force for rapid product development to better empower the humans in healthcare, which is being hired at health systems and insurance companies across the country at lightning speed,” said the CEO of Olive, Sean Lane. 

He further added that the widespread adoption of their technology makes it evident that the healthcare industry is now looking forward to adopting artificial intelligence-powered services.

Last year, the company raised $1.5 billion, after which it acquired Empiric Health with the aim of accelerating the provider payments process using the firm’s self-developed platform, ‘Olive Assure.’ 

Olive is an automation company addressing healthcare's most prominent issues. It helps deliver increased revenue, lower costs, and greater capacity to hospitals, payers, and health systems.

Healthcare workers are essentially working with outdated technology that creates a lack of shared knowledge and accurate data. Olive is driving connections to shine new light on healthcare processes, improving operations today so everyone can benefit from a healthier industry tomorrow.


WHO Issues Guidelines for Ethical Use Of Artificial Intelligence


The World Health Organization (WHO) recently published a document outlining six guiding principles for the ethical use of artificial intelligence in the healthcare industry.

The guidelines follow two years of intensive research by more than twenty scientists. The report highlights how doctors can use artificial intelligence to treat patients in underdeveloped regions of the world.

But it also points out that technology is not a quick solution for health challenges, especially in low and middle-income countries. Governments and regulators should carefully analyze where and how artificial intelligence is used in healthcare. 

Read More: GitHub’s New Copilot Programming Uses GPT-3 To Generate Code

The World Health Organization said that it hopes the six principles can be the foundation for how governments, developers, and regulators approach the technology. 

The six principles mentioned by WHO in the document are:

  • Protect autonomy: Humans should have the final say on all health decisions. Decisions should not be made entirely by machines, and doctors should be able to override them at any time. Artificial intelligence should not be used to guide someone's medical care without their consent.
  • Promote human safety: Developers should continuously monitor any artificial intelligence tools to ensure they’re working as they’re supposed to and not causing harm.
  • Ensure transparency: Developers should publish information about the design of AI tools. One common criticism of these systems is that they're "black boxes": it's too hard for researchers and doctors to know how they make decisions. The WHO wants enough transparency for the systems to be fully audited and understood by users and regulators.
  • Foster accountability: When something goes wrong with an AI technology — like if a decision made by a tool leads to patient harm — there should be mechanisms determining who is responsible (like manufacturers and clinical users).
  • Ensure equity: That means making sure tools are available in multiple languages and that they're trained on diverse datasets. In the past few years, scrutiny of common health algorithms has found that some have racial bias built in.
  • Promote sustainable artificial intelligence: Developers should regularly update their tools, and institutions should have ways to adjust if a tool seems ineffective. Institutions or companies should also only introduce mechanisms that can be repaired, even in under-resourced health systems.
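As a concrete (and entirely hypothetical) illustration of the first principle, a clinical decision-support system can be structured so the model only queues suggestions and a clinician's decision always finalizes the outcome. The Python sketch below shows that pattern; the class and field names are invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    patient_id: str
    suggestion: str
    confidence: float

class DecisionSupport:
    """Wraps a model so its output is only ever advisory:
    nothing is finalized without an explicit clinician sign-off."""

    def __init__(self):
        self.pending = {}

    def propose(self, rec: Recommendation) -> str:
        # The model may only queue a suggestion, never act on it.
        self.pending[rec.patient_id] = rec
        return f"Awaiting clinician review for {rec.patient_id}"

    def clinician_decide(self, patient_id: str, accept: bool,
                         override: Optional[str] = None) -> str:
        rec = self.pending.pop(patient_id)
        if accept:
            return rec.suggestion
        # The human decision always wins, even against a confident model.
        return override or "no action"

support = DecisionSupport()
support.propose(Recommendation("p1", "order chest X-ray", 0.97))
final = support.clinician_decide("p1", accept=False, override="refer to specialist")
print(final)  # the clinician's override, not the model's suggestion
```

The key design point is that the model has no code path that acts on a patient directly; every path routes through the human decision.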

There are numerous potential ways artificial intelligence can be used in the healthcare industry. There are applications in development that use artificial intelligence to screen medical images such as mammograms, devices that help people monitor their health, tools that scan patient health records to predict if they might get sicker, and systems that help track disease outbreaks. 

“The appeal of technological solutions and the promise of technology can lead to overestimation of the benefits and dismissal of the challenges and problems that new technologies such as artificial intelligence may introduce,” the report notes.


Google AI To Fuse Artificial Intelligence And Art


Google AI has joined author Douglas Coupland to fuse artificial intelligence and slogan writing. The project, called Slogans for the Class of 2030, focuses on the first generation of young people whose lives are fully interconnected with artificial intelligence.

The idea was brought to life by first training an AI model on 30 years of Coupland's written work (over a million words) so the model could absorb the author's style of writing. Trending topics were then picked from social media posts and added to the training set to build short-form, topical statements. Once the model was trained, Coupland's text, together with the curated topics, served as the inspiration for twenty-five slogans for the Class of 2030.

“I would comb through ‘data dumps’ where characters from one novel were speaking with those in other novels in ways that they might actually do. It felt like I was encountering a parallel universe Doug,” Coupland says. He added that the statements from the project appeared like gems. Though they weren't written by him, they were still him, since they wouldn't have existed without him.

Read more: AI Will Now Judge Artist’s Popularity In Italian Museums

A recurring pattern in Coupland's work is investigating the human condition through the lens of pop culture, and that is also the focus of the Class of 2030 project. The work is aimed at inspiring students in their early teens who will be graduating in 2030 as they consider future career paths. Coupland hopes the project will trigger conversation about the vast possibilities across fields and show teens that AI does not have to be strictly scientific; it can be artful.

All 25 thought-provoking, visually rich digital slogans are available on Google Arts & Culture, along with behind-the-scenes material. Artificial intelligence is evolving into areas humans could never have imagined, like art, emotion, and socializing. This is only a small step in that evolution, with many more routes to explore in the years ahead.


DARPA Wants AI That Can Learn From Other’s Experiences


The Defense Advanced Research Projects Agency (DARPA) wants artificial intelligence models capable of lifelong learning from other AI systems' experience, and is putting up $1 million per project under its Shared-Experience Lifelong Learning program.

Humans advance quicker when they learn from others' experiences, and DARPA researchers want to build that capability into artificial intelligence models. DARPA announced a new artificial intelligence exploration opportunity to pursue work on “the technical domain of lifelong learning by agents” in AI, according to a notice on SAM.gov.

The concept of lifelong learning (LL) is not new in AI research. According to the announcement, however, ongoing research focuses on learning within a single system rather than on populations of LL agents that share each other's experiences. The Shared-Experience Lifelong Learning (ShELL) program extends LL approaches to large numbers of initially identical agents that learn from shared experience. Sharing experiences could significantly reduce the amount of training required by each individual agent.
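The core idea can be sketched in a few lines. The toy Python example below (hypothetical, not from the DARPA announcement) shows a fleet of initially identical agents pooling their (action, reward) experience, so each agent ends up learning from far more data than it gathered on its own:

```python
import random

random.seed(0)

class Agent:
    """Toy lifelong learner: estimates the value of two actions
    by averaging the rewards it has observed for each."""
    def __init__(self):
        self.totals = {"a": 0.0, "b": 0.0}
        self.counts = {"a": 0, "b": 0}

    def record(self, action, reward):
        self.totals[action] += reward
        self.counts[action] += 1

    def best_action(self):
        means = {a: self.totals[a] / max(self.counts[a], 1) for a in self.totals}
        return max(means, key=means.get)

def environment(action):
    # Hidden truth of this toy world: action "b" pays more on average.
    return random.gauss(1.0 if action == "b" else 0.5, 0.1)

fleet = [Agent() for _ in range(5)]  # initially identical agents

# Phase 1: each agent explores briefly and contributes to a shared pool.
shared = []
for _ in fleet:
    for action in ("a", "b"):
        for _ in range(3):
            shared.append((action, environment(action)))

# Phase 2: every agent learns from the pooled experience -- five times
# the data any single agent collected on its own.
for agent in fleet:
    for action, reward in shared:
        agent.record(action, reward)

print(all(agent.best_action() == "b" for agent in fleet))
```

With only its own three samples per action, an individual agent could easily misjudge the better action; after replaying the shared pool, the whole fleet converges on it.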

Read more: Google Announces New Cloud TPU VMs For AI Workloads

Selected proposals will run over two phases. Phase I lasts six months with $300,000 in funding: the first two months for a detailed research plan, month four for a preliminary design, and month six for a feasibility study of the model. Projects shortlisted for Phase II will develop a proof of concept over 12 months with a $700,000 budget.

Proposers have to address three major sections in their bids. Content: a brief description of shareable and non-shareable knowledge, and of which knowledge should be incorporated and which ignored by ShELL agents. Communication: when and how knowledge should be shared. Computation: ensuring the LL algorithms have enough computing power through a mix of edge and cloud resources.

Proposals must contain the sections above and be uploaded to DARPA's BAA portal. DARPA will acknowledge receipt of complete submissions via email and assign identifying numbers to be used in all further correspondence regarding those submissions. Awards will be made on September 24.


Axis Bank Ties Up With AWS To Accelerate Digital Transformation


Axis Bank has tied up with AWS to move 70% of its data to the cloud over the next 24 months, accelerating its digital transformation program to reduce costs and improve agility and customer service.

As part of the agreement, AWS will provide the bank with container, storage, database, and compute services to build new digital financial services and bring advanced banking experiences to customers. With this, Axis Bank customers will be able to open accounts in under six minutes with instant digital payments, which the bank expects to lift customer satisfaction by 35 percent.

“Cloud is transforming the financial industry,” said Puneet Chandok, President, Commercial Business, AWS India and South Asia, AISPL. He added that AWS is delighted to help Axis Bank build and grow a suite of digital banking services that will evolve with technology changes, introduce new payment modes, and support consumer and business needs in India.

Read more: Deutsche Bank Releases Paper On The Usage Of AI In Security Services

With Amazon Elastic Kubernetes Service (EKS), the bank will be able to start, run, and scale Kubernetes applications on AWS or on-premises, using microservices that support any application architecture irrespective of scale, load, or complexity. Using Amazon DocumentDB, Amazon's document database service, the bank will run its financial transactions securely across its digital bank accounts. To scale workloads and support 10 million real-time payments through UPI, Axis Bank will use Amazon Elastic Compute Cloud (Amazon EC2), a web service that provides secure, resizable compute capacity in the cloud, ensuring reliable and consistent performance.

Axis Bank believes that a cloud-native, design-centric engineering capability is critical to leading in digital finance. The bank has put over 800 people on its digital projects, with an in-house engineering and design team of 130. Last year, Axis Bank decided to deploy all new customer-facing applications on AWS. Today, 15 percent of the bank's applications run on the cloud; within the next three years, the bank aims to have 70% of its operations there.


NVIDIA Finalizes GPU Direct Storage 1.0 To Accelerate Artificial Intelligence


NVIDIA recently finalized its GPUDirect Storage 1.0 release after a year of beta testing, aiming to accelerate artificial intelligence workloads in the high-performance computing industry.

NVIDIA's Magnum IO GPUDirect Storage driver lets users bypass the server CPU and exchange data directly between storage and high-performance GPU memory.

NVIDIA officials said the new GPUDirect Storage reduces CPU utilization by up to three times, freeing the CPU to focus on processing-intensive applications instead of staging data for the GPU.

Read More: Wekaio Announced Support Of NVIDIA’S Turbocharged HGX™ Supercomputing Platform

The company announced the integration of Magnum IO GPUDirect Storage software with its HGX AI supercomputing platform, alongside new NDR 400G InfiniBand networking and the A100 80 GB PCIe GPU, at the ISC High Performance 2021 digital conference.

NVIDIA collaborated with numerous industry leaders, including IBM, Dell, WekaIO, and Micron, to develop the new technology. IBM recently announced that it has updated its storage architecture for the NVIDIA DGX POD and is committed to supporting the next generation of DGX POD with ESS 3200, which would double data transfer speeds to 77 GB per second by the end of this year.

Jensen Huang, CEO and founder of NVIDIA, said, “The high performance computing revolution has started in academia and is rapidly extending across a broad range of industries.”

He also mentioned that crucial dynamics are driving exponential advancement that has made high performance computing a valuable tool for several industries. 

Jeff Denworth, Co-founder and CMO of VAST Data, said that using GPUDirect Storage in projects like PyTorch has allowed data to be fed to a standard Postgres database about 80 times faster than a conventional network-attached storage system could manage.

“We have been pleasantly surprised by the number of projects that we are being engaged on for this new technology,” he said. 


Google AI Introduces A Dataset To Study Gender Bias Translation


Google AI announced the development of a new dataset to curb gender bias in machine translation, built by drawing context from surrounding sentences and passages.

Although advances in Neural Machine Translation (NMT) have paved the way for more natural and fluent translation, gender stereotypes persist because of the data the models were trained on. Because conventional NMT methods translate sentences individually and do not explicitly include gender information, bias creeps in.

To overcome the bias, Google AI will be using Translated Wikipedia Biographies dataset to evaluate training models. “Our intent with this release is to support long-term improvements on ML systems focused on pronouns and gender in translation by providing a benchmark in which translation accuracy can be measured pre and post-model changes,” mentioned Google AI in a blog post. 

Read more: Measuring Weirdness In AI-Based Language-Translations

The dataset was built by selecting a group of instances with balanced representation across geographies and genders, extracted from Wikipedia biographies based on occupation, profession, and activity. Occupations were chosen with their gender associations in mind, and to counter geography-based bias, instances were distributed for geographical diversity. The result is a diverse dataset, with entries from individuals in more than 90 countries across the world.

The new dataset opens a new basis for evaluating gender-bias reduction in machine translation by referring to a subject of known gender. This evaluation works naturally for translation into English, since English pronouns are strongly gender-specific. Using this method, context-aware models showed a 67% reduction in errors compared with previous models.
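As an illustration of what such a benchmark measures, the toy Python scorer below (entirely hypothetical data and logic, not Google's actual evaluation code) flags a translation as an error when it uses a pronoun of the opposite gender to the biography's known subject:

```python
# Each benchmark entry pairs a subject of known gender with the English
# translation a model produced for a sentence about that subject.
benchmark = [
    {"subject_gender": "female", "translated": "She is an engineer."},
    {"subject_gender": "male",   "translated": "He directed the film."},
    {"subject_gender": "female", "translated": "He won the award."},  # misgendered
]

EXPECTED = {"female": {"she", "her", "hers"}, "male": {"he", "him", "his"}}

def pronoun_error_rate(entries):
    errors = 0
    for e in entries:
        words = {w.strip(".,").lower() for w in e["translated"].split()}
        expected = EXPECTED[e["subject_gender"]]
        wrong = EXPECTED["male" if e["subject_gender"] == "female" else "female"]
        # Count an error when the translation uses only pronouns of the
        # opposite gender to the known subject.
        if words & wrong and not words & expected:
            errors += 1
    return errors / len(entries)

rate = pronoun_error_rate(benchmark)
print(round(rate, 2))  # 0.33: one of the three translations misgenders its subject
```

Scoring the same benchmark before and after a model change gives exactly the kind of pre/post comparison the blog post describes.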


Google Joins O-RAN Alliance To Develop Artificial Intelligence-Powered 5G


On Monday, Google joined the O-RAN Alliance to help develop artificial intelligence-powered 5G networks. The alliance aims to drive innovative change in the telecommunication industry by enabling hybrid and multi-cloud solutions.

In 2020, Google announced a comprehensive strategy for the telecommunications industry. Since then, it has been working closely with customers, partners, and industry bodies globally to help transform the industry; partnerships with Jio and Nvidia were also part of this strategy. Now it has joined the O-RAN Alliance, a Radio Access Network (RAN) industry body, aiming to enhance networks using artificial intelligence.

5G brings cloud, software, and network together on one platform, which is pushing communication service providers (CSPs) toward cloud networking. With this alliance, Google Cloud's solutions will help CSP developers build and scale new applications across any environment, with telecom platforms like Anthos providing flexible deployment models across a wide range of RAN use cases.

Read more: Jio Collaborates With Google Cloud To Enable 5G Technology

Beyond cloud networking, the alliance is also focused on replacing proprietary transmitters and receivers in network operations with artificial intelligence-powered automation. Google will work with the O-RAN Alliance to enable cloud-native intelligent networks that are secure, self-driving, and self-healing, applying machine learning, massive data processing, and geospatial analytics to efficiently design, manage, and operate RAN intelligent controllers.

The O-RAN Alliance has more than 200 members, including SpaceX. Even so, Google's sign-up marks a significant milestone: Google operates one of the world's most sophisticated networks, with an extensive array of subsea internet cables, which would be an added advantage in helping the group reach its goals. The O-RAN Alliance also works with SpaceX to link its network to the Starlink constellation of low-Earth-orbit internet satellites.


Facebook Makes Virtual Environments More Interactive With Habitat 2.0


On Wednesday, Facebook released Habitat 2.0, a virtual environment used to teach robots how to interact with the physical world. It is an improved version of the original Habitat simulator released in 2019.

Training artificial intelligence models in simulation is a great way to teach robots to accomplish tasks in the real world: virtual environments let an AI practice the same task thousands of times without physically touching the real space. Facebook designed the original Habitat in 2019 with this aim, but it had a limitation: agents could not interact with objects. An agent would know where the spoons were but couldn't get a spoon to you. Version 2.0 makes that possible.

Habitat 2.0 includes simulations of various virtual spaces, such as offices, two-story homes, and warehouses. To make these environments realistic, an infrared depth capture system was used to record the exact shape of every object, including chairs, dishwashers, windows, and cabinets.

Habitat 2.0 uses a new dataset called ReplicaCAD, a reconstruction of Replica, Facebook Reality Labs' dataset of 3D environments. "In ReplicaCAD, previously static 3D scans have been converted to individual 3D models with physical parameters, collision proxy shapes, and semantic annotations that can enable training for movement and manipulation for the first time," Facebook AI mentioned in its blog.

Read more: Facebook AI Open Sources A New Data Augmentation Library

Habitat 2.0 was built to prioritize speed and performance over a broader range of simulation capabilities. For faster simulation, the platform uses a navigation mesh instead of full ground-contact physics for the robot's movement. As a consequence, it does not support non-rigid dynamics that cannot be meshed, such as deformable objects, liquids, films, cloths, and ropes. This trade-off makes Habitat 2.0 twice as fast as most 3D simulators; according to Facebook AI researchers, Habitat can perform a 6-month simulation in just 2 days.
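To see why navmesh-based movement is so much cheaper than contact physics, consider this deliberately simplified Python sketch (not Habitat's actual API): once walkable space is precomputed, moving an agent reduces to a constant-time membership check instead of a physics solve against every object in the scene.

```python
# A navigation "mesh" reduced to its essence: a precomputed set of
# walkable cells on a 10x10 grid, with a few cells blocked by furniture.
WALKABLE = {(x, y) for x in range(10) for y in range(10)} - {(4, 4), (4, 5), (5, 4)}

def try_move(pos, delta):
    """Move the agent only if the target cell is on the navmesh;
    otherwise the move is rejected and the agent stays put."""
    target = (pos[0] + delta[0], pos[1] + delta[1])
    return target if target in WALKABLE else pos

pos = (3, 4)
pos = try_move(pos, (1, 0))   # blocked: (4, 4) is not walkable
pos = try_move(pos, (0, 1))   # allowed: (3, 5) is free
print(pos)  # (3, 5)
```

The same rejection logic that keeps this toy agent off blocked cells is what rules out meshing-resistant dynamics like cloth or liquids: anything that can't be baked into the walkable set ahead of time can't participate.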

To help AI researchers work on virtual environments, Facebook also developed the Habitat-Matterport 3D Research Dataset (HM3D) in close collaboration with Matterport Inc., a creator of 3D virtual spaces. HM3D consists of 1,000 open-source, Habitat-compatible 3D scans of apartments, homes, offices, and more.
