AWS announced new machine learning services accessible to every developer looking to expand their skills. [shutterstock: 1058815598, Phonlamai Photo]

AWS Announces New Machine Learning Services

Amazon Web Services announced 13 new machine learning capabilities and services, across all layers in the machine learning stack, to help put machine learning in the hands of even more developers.

AWS introduced new Amazon SageMaker capabilities that make it easier for developers to train and deploy machine learning models, including automatic data labeling and reinforcement learning.

AWS revealed new services, framework enhancements, and a custom chip to speed up machine learning training and inference, while reducing cost. Furthermore, it announced new artificial intelligence (AI) services that can extract text from virtually any document, read medical information, and provide customized personalization, recommendations, and forecasts using the same technology used by Amazon.

Last but certainly not least, AWS will help developers get started with machine learning through AWS DeepRacer, a new 1/18th-scale autonomous model race car driven by reinforcement learning.

These announcements continue the drumbeat of machine learning innovation from AWS. Customers already using these new services and capabilities include Adobe, BMW, and Formula 1.

“We want to help all of our customers embrace machine learning, no matter their size, budget, experience, or skill level,” said Swami Sivasubramanian, Vice President of Machine Learning at AWS. “These announcements remove significant barriers to the successful adoption of machine learning by reducing the cost of machine learning training and inference, introducing new Amazon SageMaker capabilities that make it easier for developers to build, train, and deploy machine learning models in the cloud and at the edge, and delivering new AI services based on our years of experience at Amazon.”

Improvements

Most machine learning models are trained by an algorithm that finds patterns in large amounts of data. The model can then make predictions on new data in a process called ‘inference’. Developers use machine learning frameworks to define these algorithms, train models, and run inference. Frameworks such as TensorFlow, Apache MXNet, and PyTorch allow developers to design and train sophisticated models, often using multiple GPUs to reduce training times.
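To make the train-then-infer split concrete, here is a minimal sketch (not taken from the announcement) using PyTorch as one example framework; the synthetic data and tiny linear model are purely illustrative.

```python
# Minimal illustration of the training/inference split described above,
# using PyTorch. Data and model are synthetic placeholders.
import torch
import torch.nn as nn

# Synthetic data: noisy samples of y = 2x + 1.
x = torch.linspace(-1, 1, 100).unsqueeze(1)
y = 2 * x + 1 + 0.1 * torch.randn_like(x)

model = nn.Linear(1, 1)                                   # model to be trained
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# Training: the algorithm finds patterns (here, slope and intercept) in the data.
for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Inference: the trained model makes predictions on new, unseen data.
with torch.no_grad():
    print(model(torch.tensor([[0.5]])))                   # close to 2 * 0.5 + 1 = 2.0
```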

Most developers use more than one of these frameworks in their day-to-day work. AWS announced significant improvements for developers building with all of these popular frameworks, improving performance and reducing cost for both training and inference.

  • New Amazon Elastic Compute Cloud (EC2) GPU instances. The new P3dn.24xlarge instances are among the most powerful machine learning training instances available in the cloud, allowing developers to train models with more data in less time.
  • AWS-Optimized TensorFlow framework. When training with large amounts of data, developers who use TensorFlow have found it challenging to scale across many GPUs, which often results in low GPU utilization and longer training times for large jobs. The AWS-optimized build improves how TensorFlow distributes training across GPUs, raising utilization and shortening training times.
  • Amazon Elastic Inference. Training rightfully receives a lot of attention, but inference actually accounts for the majority of the cost and complexity of running machine learning in production. Amazon Elastic Inference allows developers to dramatically decrease inference costs by attaching GPU-powered acceleration to standard Amazon EC2 or Amazon SageMaker instances, rather than provisioning a full GPU instance (see the sketch after this list).
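As a rough illustration of the Elastic Inference pattern, the hedged sketch below uses the SageMaker Python SDK to deploy a trained model on a CPU instance with an attached accelerator. The S3 artifact path, IAM role, and framework version are placeholders, not values from the announcement.

```python
# Hedged sketch: deploying a trained model with an Elastic Inference
# accelerator attached, via the SageMaker Python SDK.
# The model artifact, IAM role, and framework version are placeholders.
from sagemaker.tensorflow import TensorFlowModel

model = TensorFlowModel(
    model_data="s3://my-bucket/model/model.tar.gz",            # placeholder artifact
    role="arn:aws:iam::123456789012:role/SageMakerRole",       # placeholder role
    framework_version="1.12",
)

# A CPU instance plus an Elastic Inference accelerator, instead of a full
# GPU instance, is the cost-reduction pattern the service targets.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    accelerator_type="ml.eia1.medium",
)
```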

Source:
Amazon

About the author

E-3 Magazine

Articles published through E-3 Magazine International. This includes press releases by our partners as well as articles and reports from the E-3 team of journalists.
