Amazon Web Services (AWS), a leading cloud service provider, has launched its EC2 Trn1 instances, built for high-performance training and powered by AWS-designed Trainium chips. For workloads such as natural language processing (NLP), semantic search, recommendation engines, and fraud detection, Trn1 instances offer the fastest time to train popular machine learning models on AWS.
Many companies have developed, trained, and deployed machine learning models to power applications that transform their operations and customer experiences. To improve accuracy, these models are growing more complex and consuming ever-larger amounts of training data.
These models are trained across thousands of accelerators, which drives up training costs. The new Trn1 instances, powered by Trainium chips, deliver the best price performance and the fastest machine learning model training on AWS, at up to 50% lower cost to train deep learning models than the latest GPU-based P4d instances.
Trn1 instances are built on the AWS Nitro System, a combination of AWS-designed hardware and software that streamlines the delivery of isolated multi-tenancy, private networking, and fast local storage.
David Brown, VP of Amazon EC2, said the company is looking forward to building on AWS Inferentia, its first machine learning chip, with AWS Trainium, its second-generation machine learning chip. He added, “Trn1 instances powered by AWS Trainium will help our customers reduce their training time from months to days while being more cost efficient.”