23: Let’s Talk AWS SageMaker for ML Model Deployment

Released Wednesday, 17th June 2020

In this episode, we talk about Amazon SageMaker and how it can help with ML model development, including model building, training, and deployment. We cover three advantages in each of these three areas, touching on points such as:

1. Hosting ML endpoints to deploy models to thousands or millions of users.
2. Saving costs on model training using SageMaker.
3. Using CloudWatch logs with SageMaker endpoints to debug ML models.
4. Using preconfigured environments and models provided by AWS.
5. Automatically saving model artifacts to AWS S3 as you train in SageMaker.
6. Using version control for SageMaker notebooks with GitHub.

…and more. Please rate, subscribe, and share this episode with anyone who might find SageMaker useful in their work. I feel that SageMaker is a great tool and want to share it with data scientists. For comments, feedback, or questions, or if you think I have missed something in the episode, please reach out to me on LinkedIn: https://www.linkedin.com/in/sanketgupta107/
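The points above about training, automatic artifact storage in S3, and hosted endpoints map roughly onto the SageMaker Python SDK. As a companion to these notes (not code from the episode itself), here is a minimal sketch assuming the v2 Python SDK, with a placeholder training script and S3 path:

```python
# Minimal sketch of the train -> S3 artifact -> hosted endpoint flow described
# above, assuming SageMaker Python SDK v2. "train.py", the S3 path, and the
# instance types are placeholders, not values from the episode.
import sagemaker
from sagemaker.sklearn import SKLearn

# IAM role for the training job (works inside a SageMaker notebook instance).
role = sagemaker.get_execution_role()

# Training job: SageMaker provisions the instance, runs train.py, and uploads
# the resulting model.tar.gz artifact to S3 automatically (point 5 above).
estimator = SKLearn(
    entry_point="train.py",            # placeholder training script
    role=role,
    instance_type="ml.m5.large",
    instance_count=1,
    framework_version="0.23-1",        # example scikit-learn container version
)
estimator.fit({"train": "s3://my-bucket/train/"})  # placeholder S3 path

# Deployment: a managed HTTPS endpoint that can serve many users (point 1).
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
print(predictor.predict([[5.1, 3.5, 1.4, 0.2]]))

# Delete the endpoint when done so you are not billed for idle instances.
predictor.delete_endpoint()
```

Invocations and errors from the deployed endpoint also show up in CloudWatch Logs, which is where the debugging workflow from point 3 happens.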

---

Send in a voice message: https://podcasters.spotify.com/pod/show/the-data-life-podcast/message
Support this podcast: https://podcasters.spotify.com/pod/show/the-data-life-podcast/support
