AWS Celebrates 5 Years of Innovation with Amazon SageMaker

In just 5 years, tens of thousands of customers have tapped Amazon SageMaker to create millions of models, train models with billions of parameters, and generate hundreds of billions of monthly predictions.

The seeds of a machine learning (ML) paradigm shift were there for decades, but with the ready availability of virtually infinite compute capacity, a massive proliferation of data, and the rapid advancement of ML technologies, customers across industries now have access to its transformational benefits. To harness this opportunity and take ML out of the research lab and into the hands of organizations, AWS created Amazon SageMaker. This year, we celebrate the 5-year anniversary of Amazon SageMaker, our flagship fully managed ML service, which was launched at AWS re:Invent 2017 and went on to become one of the fastest-growing services in AWS history.

AWS launched Amazon SageMaker to break down barriers to ML and democratize access to cutting-edge technology. Today, that success might seem inevitable, but in 2017, ML still required specialized skills typically possessed by a limited group of developers, researchers, PhDs, or companies that built their business around ML. At the time, developers and data scientists first had to visualize, transform, and preprocess data into formats that algorithms could use to train models, which required massive amounts of compute power, lengthy training periods, and dedicated teams to manage environments that often spanned multiple GPU-enabled servers, plus a healthy amount of manual performance tuning. Additionally, deploying a trained model within an application required a different set of specialized skills in application design and distributed systems. As datasets and variables grew, companies had to repeat this process to learn and evolve from new information as older models became outdated. These challenges and barriers put ML out of reach for all but well-funded organizations and research institutions.

The dawn of a new era in machine learning

That’s why we introduced Amazon SageMaker, our flagship ML managed service that enables developers, data scientists, and business analysts to quickly and easily prepare data, and build, train, and deploy high-quality ML models at scale. In the past 5 years, we’ve added more than 250 new features and capabilities, including the world’s first integrated development environment (IDE) for ML, debuggers, model monitors, profilers, AutoML, a feature store, no-code capabilities, and the first purpose-built continuous integration and continuous delivery (CI/CD) tool to make ML less complex and more scalable in the cloud and on edge devices.

In 2021, we pushed democratization even further to put ML within reach of more users. Amazon SageMaker now enables more groups of people to create ML models, with Amazon SageMaker Canvas providing a no-code environment for business analysts without ML experience, and Amazon SageMaker Studio Lab offering a no-setup, no-charge environment for students to learn and experiment with ML faster.

Today, customers can innovate with Amazon SageMaker through a choice of tools: IDEs for data scientists and a no-code interface for business analysts. They can access, label, and process large amounts of structured data (tabular data) and unstructured data (photo, video, and audio) for ML. With Amazon SageMaker, customers can reduce training times from hours to minutes with optimized infrastructure. Finally, customers can automate and standardize machine learning operations (MLOps) practices across their organization to build, train, deploy, and manage models at scale.
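For readers unfamiliar with that workflow, the sketch below shows what building, training, and deploying a model with the SageMaker Python SDK can look like. The training script, S3 paths, and IAM role are hypothetical placeholders rather than details from any customer mentioned in this article.

```python
# Minimal sketch of the build/train/deploy workflow with the SageMaker Python SDK.
# The script name, S3 paths, and IAM role below are hypothetical placeholders.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical role

# Configure a managed training job that runs a local script on AWS-managed infrastructure.
estimator = SKLearn(
    entry_point="train.py",           # hypothetical training script
    framework_version="1.2-1",
    instance_type="ml.m5.xlarge",
    instance_count=1,
    role=role,
    sagemaker_session=session,
)

# Launch training against data staged in Amazon S3 (hypothetical path).
estimator.fit({"train": "s3://example-bucket/train/"})

# Deploy the trained model to a managed real-time endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
print(predictor.endpoint_name)
```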

New features for the next generation of innovation

Moving forward, AWS continues to aggressively develop new features that can help customers take ML further. For example, Amazon SageMaker multi-model endpoints (MMEs) allow customers to deploy thousands of ML models on a single Amazon SageMaker endpoint and lower costs by sharing the instances provisioned behind that endpoint across all the models. Until recently, MMEs were supported only on CPUs, but they now support GPUs as well. Customers can use Amazon SageMaker MMEs to deploy deep learning models on GPU instances and save up to 90% in cost by consolidating thousands of deep learning models onto a single multi-model endpoint. Amazon SageMaker has also expanded support for compute-optimized Amazon Elastic Compute Cloud (Amazon EC2) instances powered by AWS Graviton2 and Graviton3 processors, which are well suited for CPU-based ML inference, so customers can deploy models on the optimal instance type for their workloads.
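As a rough illustration of the multi-model endpoint pattern, the sketch below hosts every model artifact stored under one S3 prefix behind a single endpoint using the SageMaker Python SDK. The bucket, prefix, model file names, and role are hypothetical placeholders.

```python
# Minimal sketch of a SageMaker multi-model endpoint (MME).
# Bucket, prefix, model names, and role below are hypothetical placeholders.
import sagemaker
from sagemaker.multidatamodel import MultiDataModel
from sagemaker.serializers import CSVSerializer

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical role

# Every model artifact (.tar.gz) under this prefix becomes invokable from one endpoint.
mme = MultiDataModel(
    name="example-multi-model",
    model_data_prefix="s3://example-bucket/models/",  # hypothetical prefix
    image_uri=sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, "1.5-1"),
    role=role,
    sagemaker_session=session,
)

# A single set of instances is provisioned and shared across all the models.
predictor = mme.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
    serializer=CSVSerializer(),
)

# Each request names the specific model artifact it should be routed to.
print(predictor.predict("0.5,1.2,3.4", target_model="customer-042.tar.gz"))
```

Models are loaded from S3 and cached on the instances the first time they are invoked, which is what allows large numbers of infrequently used models to share the same hardware.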

Amazon SageMaker customers are unleashing the power of machine learning

Every day, customers of all sizes and across all industries are turning to Amazon SageMaker to experiment, innovate, and deploy ML models in less time and at lower cost than ever. As a result, conversations are now shifting from the art of the possible to unleashing new levels of productivity with ML. Today, customers such as Capital One and Fannie Mae in financial services, Philips and AstraZeneca in healthcare and life sciences, Condé Nast and Thomson Reuters in media, the NFL and Formula 1 in sports, Amazon and Mercado Libre in retail, and Siemens and Bayer in the industrial sector use ML services on AWS to accelerate business innovation. They join tens of thousands of other Amazon SageMaker customers using the service to manage millions of models, train models with billions of parameters, and make hundreds of billions of predictions every month.

More innovations await. But in the meantime, we pause to toast the many successes our customers have achieved.

Thomson Reuters

Thomson Reuters, a leading provider of business information services, taps the power of Amazon SageMaker to create more intuitive services for its customers.

“We’re continually seeking solid AI-based solutions that deliver a long-term positive return on investment,” said Danilo Tommasina, Director of Engineering at Thomson Reuters Labs. “Amazon SageMaker is central to our AI R&D work. It allows us to effectively bring research into mature and highly automated solutions. With Amazon SageMaker Studio, researchers and engineers can focus on solving business problems with all the tools needed for their ML workflow in a single IDE. We perform all of our ML development activities, including notebooks, experiment management, ML pipeline automation, and debugging right from within Amazon SageMaker Studio.”

Salesforce

Salesforce, the world’s leading CRM platform, recently announced new integrations that will enable customers to use Amazon SageMaker alongside Einstein, Salesforce’s AI technology.

“Salesforce Einstein is the first comprehensive AI for CRM and enables every company to get smarter and more predictive about their customers through an integrated set of AI technologies for sales, marketing, commerce, service, and IT,” said Rahul Auradkar, EVP of Einstein and Unified Data Services at Salesforce. “One of the biggest challenges companies face today is that their data is siloed. It is difficult to bring data together to deliver customer engagement in real time across all touch points and glean meaningful business insights. Powered by Genie, Salesforce’s real-time customer data platform, the Salesforce and Amazon SageMaker integration enables data teams with seamless access to unified and harmonized customer data for building and training ML models in Amazon SageMaker. And once deployed, these Amazon SageMaker models can be used with Einstein to power predictions and insights across the Salesforce Platform. As AI evolves, we continue to enhance Einstein with bring-your-own-modeling (BYOM) to meet developers and data scientists where they work.”

Meta AI

Meta AI is the artificial intelligence research laboratory of Meta Platforms, Inc.

“Meta AI has collaborated with AWS to enhance torch.distributed to help developers scale their training using Amazon SageMaker and Trainium-based instances,” said Geeta Chauhan, Applied AI Engineering Manager at Meta AI. “With these enhancements, we’ve seen a reduction in training time for large models based on our tests. We are excited to see Amazon SageMaker support PyTorch distributed training to accelerate ML innovation.”

Tyson Foods Inc.

Tyson Foods Inc., one of the world’s largest meat processors and marketers, relies on Amazon SageMaker, Amazon SageMaker Ground Truth, and AWS Panorama to improve efficiencies.

“Operational excellence is a key priority at Tyson Foods,” said Barret Miller, Senior Manager of Emerging Technology at Tyson Foods Inc. “We use computer vision powered by ML on AWS to improve production efficiency, automate processes, and improve time-consuming or error-prone tasks. We collaborated with the Amazon Machine Learning Solutions Lab to create a state-of-the-art object detection model using Amazon SageMaker Ground Truth and AWS Panorama. With this solution, we receive near-real-time insights that help us produce the inventory we need while minimizing waste.”

Autodesk

AutoCAD, a commercial computer-aided design and drafting application from Autodesk, relies on Amazon SageMaker to optimize its generative design process.

“We wanted to empower AutoCAD customers to be more efficient by providing personalized, in-the-moment usage tips and insights, ensuring the time they spend in AutoCAD is as productive as possible,” said Dania El Hassan, Director of Product Management for AutoCAD, at Autodesk. “Amazon SageMaker was an essential tool that helped us provide proactive command and shortcut recommendations to our users, allowing them to achieve powerful new design outcomes.”

Torc.ai

With the help of Amazon SageMaker and the Amazon SageMaker distributed data parallel (SMDDP) library, Torc.ai, an autonomous vehicle leader since 2005, is commercializing self-driving trucks for safe, sustained, long-haul transit in the freight industry.

“My team is now able to easily run large-scale distributed training jobs using Amazon SageMaker model training and the Amazon SageMaker distributed data parallel (SMDDP) library, involving terabytes of training data and models with millions of parameters,” said Derek Johnson, Vice President of Engineering at Torc.ai. “Amazon SageMaker distributed model training and the SMDDP have helped us scale seamlessly without having to manage training infrastructure. It reduced our time to train models from several days to a few hours, enabling us to compress our design cycle and bring new autonomous vehicle capabilities to our fleet faster than ever.”
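For context, enabling the SMDDP library on a SageMaker training job is largely a matter of configuration. The sketch below shows one way to request it through the SageMaker Python SDK; the training script, instance choices, data path, and role are hypothetical and do not reflect Torc.ai’s actual setup.

```python
# Minimal sketch of enabling SageMaker distributed data parallel (SMDDP) training.
# Script, data path, role, and instance choices are hypothetical placeholders.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",            # hypothetical PyTorch training script
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # hypothetical role
    framework_version="1.12",
    py_version="py38",
    instance_type="ml.p4d.24xlarge",   # SMDDP targets multi-GPU instances such as p4d/p3dn
    instance_count=4,
    # Turn on the SMDDP library; the training script uses the standard PyTorch DDP APIs.
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)

estimator.fit({"train": "s3://example-bucket/training-data/"})  # hypothetical data path
```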

LG AI Research

LG AI Research aims to lead the next era of AI by using Amazon SageMaker to train and deploy ML models faster.

“We recently debuted Tilda, the AI artist powered by EXAONE, a super giant AI system that can process 250 million high-definition image-text pair datasets,” said Seung Hwan Kim, Vice President and Vision Lab Leader at LG AI Research. “The multi-modality AI allows Tilda to create a new image by itself, with its ability to explore beyond the language it perceives. Amazon SageMaker was essential in developing EXAONE, because of its scaling and distributed training capabilities. Specifically, due to the massive computation required to train this super giant AI, efficient parallel processing is very important. We also needed to continuously manage large-scale data and be flexible to respond to newly acquired data. Using Amazon SageMaker model training and distributed training libraries, we optimized distributed training and trained the model 59% faster—without major modifications to our training code.”

Mueller Water Products

Mueller Water Products manufactures engineered valves, fire hydrants, pipe connection and repair products, metering products, leak detection solutions, and more. It used Amazon SageMaker to develop an innovative ML solution to detect water leaks faster.

“We are on a mission to save 7.7 billion gallons of water loss by 2027,” said Dave Johnston, Director of Smart Infrastructure at Mueller Water Products. “Thanks to ML models built on Amazon SageMaker, we have improved the precision of EchoShore-DX, our acoustic-based anomaly detection system. As a result, we can inform utility customers faster when a leak is occurring. This solution has saved an estimated 675 million gallons of water in 2021. We are excited to continue to use AWS ML services to further enhance our technology portfolio and continue driving efficiency and sustainability with our utility customers.”

Canva

Canva, maker of the popular online design and publishing tool, relies on the power of Amazon SageMaker for rapid implementation.

“For Canva to grow at scale, we needed a tool to help us launch new features without any delays or issues,” said Greg Roodt, Head of Data Platforms at Canva. “Amazon SageMaker’s adaptability allowed us to manage more tasks with fewer resources, resulting in a faster, more efficient workload. It gave our engineering team confidence that the features they launch will scale to their use case. With Amazon SageMaker, we deployed our text-to-image model in 2 weeks using powerful managed infrastructure, and we look forward to expanding this feature to our millions of users in the near future.”

Inspire

Inspire, a consumer-centric healthcare information service, relies on Amazon SageMaker to deliver actionable insights for better care, treatments, and outcomes.

“Our content recommendation engine is a major driver of our value proposition,” said Brian Loew, Chief Executive Officer and founder of Inspire. “We use it to direct our users (who live with particular conditions) to relevant and specific posts or articles. With Amazon SageMaker, we can easily build, train, and deploy deep learning models. Our sophisticated ML solution—based on Amazon SageMaker—helps us improve our content recommendation engine’s ability to suggest relevant content to 2 million registered users, pulling from our library of 1.5 billion words on 3,600 conditions. Amazon SageMaker has enabled us to accurately connect patients and caregivers with more personalized content and resources—including rare disease information and treatment pathways.”

ResMed

ResMed is a leading provider of cloud-connected solutions for people with sleep apnea, COPD, asthma, and other chronic conditions. In 2014, ResMed launched MyAir, a personalized therapy management platform and application, for patients to track sleep therapy.

“Prior to Amazon SageMaker, all MyAir users received the same messages from the app at the same time, regardless of their condition,” said Badri Raghavan, Vice President of Data Science at ResMed. “Amazon SageMaker has enabled us to interact with patients through MyAir based on the specific ResMed device they use, their waking hours, and other contextual data. We take advantage of several Amazon SageMaker features to train model pipelines and choose deployment types, including near-real-time and batch inferences, to deliver tailored content. Amazon SageMaker has enabled us to achieve our goal of embedding ML capabilities worldwide by deploying models in days or weeks, instead of months.”

Verisk

Verisk provides expert data-driven analytic insights that help businesses, people, and societies become stronger, more resilient, and more sustainable. It uses Amazon SageMaker to streamline ML workflows.

“Verisk and Vexcel are working closely together to store and process immense amounts of data on AWS, including Vexcel’s ultra-high resolution aerial imagery data that is captured in 26 countries across the globe,” said Jeffrey C. Taylor, President at Verisk 3D Visual Intelligence. “Amazon SageMaker helps us streamline the work that the ML and MLOps teams do, allowing us to focus on serving the needs of our customers, including real property stakeholders in insurance, real estate, construction, and beyond.”

Smartocto BV

With the help of Amazon SageMaker, Smartocto BV provides content analytics driven by ML to 350 newsrooms and media companies around the world.

“As the business was scaling, we needed to simplify the deployment of our ML models, reduce time to market, and expand our product offering,” said Ilija Susa, Chief Data Officer at Smartocto. “However, the combination of open-source and cloud solutions to self-host our ML workloads was increasingly time-consuming to manage. We migrated our ML models to Amazon SageMaker endpoints and, in less than 3 months, launched Smartify, a new AWS-native solution. Smartify uses Amazon SageMaker to provide predictive editorial analytics in near real time, which helps customers improve their content and expand their audiences.”

Visualfabriq

Visualfabriq offers a revenue management solution with applied artificial intelligence capabilities to some of the world’s leading consumer packaged goods companies. It uses Amazon SageMaker to improve the performance and accuracy of ML models at scale.

“We wanted to adapt our technology stack to improve performance and scalability and make models easier to add, update, and retrain,” said Jelle Verstraaten, Team Lead for Demand Forecast, Artificial Intelligence, and Revenue Growth Management at Visualfabriq. “The biggest impact of the migration to Amazon SageMaker has been a significant performance improvement for our solution. By running inferences on dedicated servers, instead of web servers, our solution is more efficient, and the costs are consistent and transparent. We improved the response time of our demand forecast service—which predicts the impact of a promotional action on a retailer’s sales volume—by 200%, and deployed a scalable solution that requires less manual intervention and accelerates new customer onboarding.”

Sophos

Sophos, a worldwide leader in next-generation cybersecurity solutions and services, uses Amazon SageMaker to train its ML models more efficiently.

“Our powerful technology detects and eliminates files cunningly laced with malware,” said Konstantin Berlin, Head of Artificial Intelligence at Sophos. “Employing XGBoost models to process multiple-terabyte-sized datasets, however, was extremely time-consuming—and sometimes simply not possible with limited memory space. With Amazon SageMaker distributed training, we can successfully train a lightweight XGBoost model that is much smaller on disk (up to 25 times smaller) and in memory (up to five times smaller) than its predecessor. Using Amazon SageMaker automatic model tuning and distributed training on Spot Instances, we can quickly and more effectively modify and retrain models without adjusting the underlying training infrastructure required to scale out to such large datasets.”
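As a generic sketch (not Sophos’ actual configuration), the snippet below combines SageMaker automatic model tuning with Spot Instance training for a built-in XGBoost estimator; the S3 paths, role, metric choice, and hyperparameter ranges are hypothetical.

```python
# Minimal sketch: automatic model tuning for XGBoost, trained on Spot Instances.
# Paths, role, metric choice, and ranges are hypothetical placeholders.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.tuner import ContinuousParameter, HyperparameterTuner, IntegerParameter

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical role

xgb = Estimator(
    image_uri=sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, "1.5-1"),
    role=role,
    instance_count=1,
    instance_type="ml.m5.2xlarge",
    use_spot_instances=True,   # train on Spot capacity to cut cost
    max_run=3600,              # seconds of training allowed
    max_wait=7200,             # must be >= max_run when Spot is enabled
    output_path="s3://example-bucket/output/",  # hypothetical path
    sagemaker_session=session,
)
xgb.set_hyperparameters(objective="binary:logistic", num_round=200)

# Search a small hyperparameter space; each trial runs as its own managed training job.
tuner = HyperparameterTuner(
    estimator=xgb,
    objective_metric_name="validation:auc",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
    },
    max_jobs=20,
    max_parallel_jobs=4,
)
tuner.fit({
    "train": "s3://example-bucket/train/",            # hypothetical channels
    "validation": "s3://example-bucket/validation/",
})
```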

Northwestern University

Students from Northwestern University in the Master of Science in Artificial Intelligence (MSAI) program were given a tour of Amazon SageMaker Studio Lab before using it during a hackathon.

“Amazon SageMaker Studio Lab’s ease of use enabled students to quickly apply their learnings to build creative solutions,” said Mohammed Alam, Deputy Director of the MSAI program. “We expected students to naturally hit some obstacles during the short 5-hour competition. Instead, they exceeded our expectations by not only completing all the projects but also giving impressive presentations in which they applied complex ML concepts to important real-world problems.”

Rensselaer Polytechnic Institute

Rensselaer Polytechnic Institute (RPI), a New York technological research university, uses Amazon SageMaker Studio Lab to help students quickly learn ML concepts.

“RPI owns one of the most powerful supercomputers in the world, but AI has a steep learning curve,” said Mohammed J. Zaki, Professor of Computer Science. “We needed a way for students to start cost-effectively. Amazon SageMaker Studio Lab’s intuitive interface enabled our students to get started quickly and provided a powerful GPU, enabling them to work with complex deep learning models for their capstone projects.”

Hong Kong Institute of Vocational Education

The IT department of the Hong Kong Institute of Vocational Education (Lee Wai Lee) uses Amazon SageMaker Studio Lab to offer students opportunities to work on real-world ML projects.

“We use Amazon SageMaker Studio Lab in basic ML and Python-related courses that give students a solid foundation in many cloud technologies,” said Cyrus Wong, Senior Lecturer. “Amazon SageMaker Studio Lab enables our students to get hands-on experience with real-world data science projects, without getting bogged down in setups or configurations. Unlike other vendors, this is a Linux machine for students, enabling them to do many more coding exercises.”

MapmyIndia

MapmyIndia, India’s leading provider of digital maps, geospatial software, and location-based Internet of Things (IoT) technologies, uses Amazon SageMaker to build, train, and deploy its ML models.

“MapmyIndia and our global platform, Mappls, offer robust, highly accurate, and worldwide AI and computer-vision-driven satellite- and street-imagery-based analytics for a host of use cases, such as measuring economic development, population growth, agricultural output, construction activity, street sign detection, land segmentation, and road change detection,” said Rohan Verma, Chief Executive Officer and Executive Director at MapmyIndia. “Our ability to create, train, and deploy models with speed and accuracy sets us apart. We are glad to partner with AWS for our AI/ML offerings and are excited about Amazon SageMaker’s ability to scale this rapidly.”

SatSure

SatSure, an India-based leader in decision intelligence solutions that uses Earth observation data to generate insights, relies on Amazon SageMaker to prepare petabytes of data and train ML models.

“We use Amazon SageMaker to crunch petabytes of EO, GIS, financial, textual, and business datasets, using its AI/ML capabilities to innovate and scale our models quickly,” said Prateep Basu, Chief Executive Officer at SatSure. “We have been using AWS since 2017, and we have helped financial institutions lend to more than 2 million farmers across India, Nigeria, and the Philippines, while monitoring 1 million square kilometers on a weekly basis.”
