For small orgs this makes sense, but it really depends on how big the data sets you're training against are and how your ML Ops / Data Ops is set up.
GPUs are better run close to your data; otherwise the data pipeline, not the GPUs, becomes the bottleneck. If you're training on-prem, then your data needs to be on-prem too.
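To see why locality matters, here's a rough back-of-envelope sketch. All the numbers (dataset size, link speeds) are made-up assumptions for illustration, not measurements:

    # Hours to move a training set to remote GPUs vs. reading it
    # over a local fabric. Illustrative numbers only.
    dataset_gb = 10_000   # assumed 10 TB training set
    wan_gbps = 1.0        # assumed 1 Gbit/s cross-site link
    lan_gbps = 100.0      # assumed 100 Gbit/s on-prem fabric

    def transfer_hours(size_gb: float, gbps: float) -> float:
        """Hours to move size_gb gigabytes over a gbps-gigabit/s link."""
        return (size_gb * 8) / gbps / 3600

    print(f"over WAN: {transfer_hours(dataset_gb, wan_gbps):.1f} h")  # ~22.2 h
    print(f"over LAN: {transfer_hours(dataset_gb, lan_gbps):.1f} h")  # ~0.2 h

At those (assumed) speeds you'd spend nearly a day just staging data before an expensive GPU does any useful work, which is the whole argument for keeping compute next to storage.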