This repository has been archived by the owner on Nov 29, 2023. It is now read-only.
Description
As a Machine Learning Engineer, I would like to be able to request GPU resources for stages in a workflow, so that tensor-based machine learning frameworks (e.g. PyTorch) can benefit from hardware-based acceleration for training and serving.
Tasks
Build a manual PoC based on the resources listed below.
Add a gpu_request config parameter for all stage types.
Extend bodywork.k8s.batch_jobs and bodywork.k8s.service_deployments to request GPU resources.
Devise a functional test - e.g. run a deployment that uses PyTorch, explicitly check for GPU availability, and print the result to stdout.
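To make the second and third tasks concrete, here is a minimal sketch of how a GPU request could be injected into a Kubernetes container spec. In Kubernetes, GPUs are exposed as the extended resource `nvidia.com/gpu` (via the NVIDIA device plugin) and are set under `limits`. The function name and dict-based spec are hypothetical illustrations, not Bodywork's actual internals.

```python
def add_gpu_request(container_spec: dict, gpu_request: int) -> dict:
    """Add an NVIDIA GPU limit to a Kubernetes container spec (as a dict).

    GPUs are requested via the extended resource name `nvidia.com/gpu`,
    which must appear under `limits` (Kubernetes does not allow GPU
    requests to differ from limits).
    """
    if gpu_request > 0:
        resources = container_spec.setdefault("resources", {})
        limits = resources.setdefault("limits", {})
        limits["nvidia.com/gpu"] = str(gpu_request)
    return container_spec


# Usage: a hypothetical batch-job container spec requesting one GPU.
spec = add_gpu_request({"name": "train-stage", "image": "my-stage:latest"}, 1)
```

The same helper would apply to both batch jobs and service deployments, since both ultimately build a container spec.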
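The functional test could be sketched as a stage script that reports GPU availability to stdout, so the workflow logs confirm whether acceleration is active. This version guards the PyTorch import so the sketch runs even where PyTorch is absent; the function name is a hypothetical illustration.

```python
import importlib.util


def gpu_check_message() -> str:
    """Describe GPU availability as seen by PyTorch, if it is installed."""
    if importlib.util.find_spec("torch") is None:
        return "PyTorch not installed"
    import torch

    if torch.cuda.is_available():
        return f"GPU available: {torch.cuda.get_device_name(0)}"
    return "No GPU available - falling back to CPU"


# The test would run this as a deployment stage and inspect its stdout.
print(gpu_check_message())
```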