Making your ML workflow seamless even when you do not have a local GPU.
This is my go-to development setup for the moment; I may add support for additional backends as needed going forward.
Ideally, switching between providers should be easy: cheaper compute on older GPUs for prototyping, or heavier hardware for a full training run. The goal is to support all of these use cases while keeping costs under control.
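As a rough illustration, here is a minimal Python sketch of the kind of provider abstraction this implies. Every name in it (`ComputeProvider`, `CheapProvider`, `TrainingProvider`, `get_provider`, the GPU types) is hypothetical, not an existing API; the point is only that the rest of the workflow calls `run()` and never needs to know which backend is behind it.

```python
# Hypothetical sketch of a provider-agnostic compute layer.
# None of these classes correspond to a real library; they only
# illustrate how a single switch could select the backend.
from dataclasses import dataclass
from typing import Protocol


class ComputeProvider(Protocol):
    """Anything that can run a command on a remote GPU."""

    def run(self, command: str) -> None: ...


@dataclass
class CheapProvider:
    """Older, cheaper GPUs -- good enough for prototyping."""

    gpu_type: str = "T4"

    def run(self, command: str) -> None:
        print(f"[{self.gpu_type}] {command}")


@dataclass
class TrainingProvider:
    """Heavier hardware for a full training run."""

    gpu_type: str = "A100"

    def run(self, command: str) -> None:
        print(f"[{self.gpu_type}] {command}")


def get_provider(stage: str) -> ComputeProvider:
    # One switch picks the backend, so the workflow code that
    # follows stays identical across providers.
    return CheapProvider() if stage == "prototype" else TrainingProvider()


if __name__ == "__main__":
    get_provider("prototype").run("python train.py --epochs 1")
    get_provider("train").run("python train.py --epochs 50")
```

With something like this, keeping costs controlled becomes a matter of choosing the right `stage` per task rather than rewriting any workflow code.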