
feat(ai): create custom device map for FLUX pipeline #298

Open · wants to merge 1 commit into main
Conversation

ad-astra-video (Collaborator)

Adds a 2-GPU device map to run the FLUX model, which is too large to run in FP16 on GPUs with under 32 GB of VRAM. The 2-GPU device map is enabled by including a DEVICE_MAP field in optimization_flags in aimodels.json. The transformer is placed on GPU 0 and the remaining pipeline components on GPU 1.
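The split described above can be sketched as a simple component-to-device mapping. This is an illustrative sketch only; the map name mirrors the DEVICE_MAP value below, but the component keys and helper are assumptions, not the PR's actual code.

```python
# Hypothetical 2-GPU device map for the FLUX pipeline:
# the transformer goes on GPU 0, everything else on GPU 1.
FLUX_DEVICE_MAP_2_GPU = {
    "transformer": "cuda:0",
    "text_encoder": "cuda:1",
    "text_encoder_2": "cuda:1",
    "vae": "cuda:1",
}

def device_for(component: str, device_map: dict) -> str:
    """Return the target device for a pipeline component.

    Components not listed explicitly default to GPU 1 (an assumption).
    """
    return device_map.get(component, "cuda:1")
```

For example, `device_for("transformer", FLUX_DEVICE_MAP_2_GPU)` yields `"cuda:0"`, while every other component lands on `"cuda:1"`.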

This could be enhanced to select the best configuration automatically by querying the hardware information available to the runner container and picking a device map based on the available GPUs.

Question:
Should setting this device map automatically place the SafetyChecker on device 1? I think it should, since that is necessary on GPUs with 24 GB of VRAM or less, but I wanted to get thoughts on this.

Device map setting and inference logs:

Add to aimodels.json (GPU 0: 24 GB VRAM, GPU 1: 16 GB VRAM):

"optimization_flags": {
    "DEVICE_MAP": "FLUX_DEVICE_MAP_2_GPU",
    "SAFETY_CHECKER_DEVICE": "cuda:1"
}
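A minimal sketch of how these two flags might be read when the pipeline loads. The flag names match the snippet above; the parsing function and the cuda:0 default are assumptions, not the PR's code.

```python
def parse_optimization_flags(flags: dict) -> tuple:
    """Pick the device map name and SafetyChecker device from optimization_flags.

    DEVICE_MAP is optional (None means no custom map); the SafetyChecker
    defaults to GPU 0 unless overridden (default is an assumption).
    """
    device_map = flags.get("DEVICE_MAP")  # e.g. "FLUX_DEVICE_MAP_2_GPU"
    checker_device = flags.get("SAFETY_CHECKER_DEVICE", "cuda:0")
    return device_map, checker_device
```

With the aimodels.json snippet above, this would return `("FLUX_DEVICE_MAP_2_GPU", "cuda:1")`.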

Logs with timing of each step:

2024-12-02 13:35:00,482 - app.pipelines.device_maps.flux - INFO - encode_prompt took: 0.6097350120544434 seconds
2024-12-02 13:35:00,492 - app.pipelines.device_maps.flux - INFO - prompt embeds conversion took: 0.009104251861572266 seconds
100%|██████████| 25/25 [00:17<00:00,  1.46it/s]
2024-12-02 13:35:17,620 - app.pipelines.device_maps.flux - INFO - transformer took: 17.128628969192505 seconds
2024-12-02 13:35:17,623 - app.pipelines.device_maps.flux - INFO - latents conversion took: 0.0027382373809814453 seconds
2024-12-02 13:35:18,473 - app.pipelines.device_maps.flux - INFO - vae decode took: 0.8499293327331543 seconds
2024-12-02 13:35:18,886 INFO:     172.17.0.1:38002 - "POST /text-to-image HTTP/1.1" 200 OK
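Per-step timings like those above can be captured with a small context manager. This is a generic sketch in the spirit of the log lines, not the PR's actual logging helper.

```python
import logging
import time
from contextlib import contextmanager

logger = logging.getLogger("app.pipelines.device_maps.flux")

@contextmanager
def timed(step: str):
    """Log how long a named pipeline step took, in the format of the logs above."""
    start = time.time()
    try:
        yield
    finally:
        logger.info("%s took: %s seconds", step, time.time() - start)
```

Each stage (prompt encoding, transformer denoising loop, VAE decode) would then be wrapped in `with timed("..."):` to produce the per-step entries shown.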
