This is a practical workshop with the goal of learning the following concepts:

- How to set up MLflow, a tool for ML experiment tracking and model deployment, from zero to hero.
- How to track ML experiments with MLflow.
- How to put models into production with MLflow.
- How to deploy models to production on AWS SageMaker with just a couple of lines of code.
- How to set up Apache Airflow, a powerful tool to design, schedule, and monitor workflows.
- How to create workflows that take advantage of deployed models.

Some great points made. In my experience, screwups tend to start when:

- the "prototype" / minimum viable product can't scale. N^2 algorithms have a cost, and it can be high!
- the data source is not repeatable, keeps changing, or lacks lineage

Yeah, so many models made from a data source that is a .csv that someone updates manually twice a year, no way to put that to production