
openai/clip-vit-large-patch14 not detected #6321

Closed
alexblattner opened this issue Dec 25, 2023 · 13 comments
Labels
bug (Something isn't working), stale (Issues that haven't received updates)

Comments

@alexblattner

alexblattner commented Dec 25, 2023

Describe the bug

In my cog environment, for some reason, it refuses to detect the already downloaded clip.

Reproduction

from transformers import AutoConfig, CLIPModel, CLIPTokenizer
from diffusers import StableDiffusionPipeline

clip_model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14", local_files_only=True, cache_dir="model_cache")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14", local_files_only=True, cache_dir="model_cache")
config = AutoConfig.from_pretrained("config.json", local_files_only=True, cache_dir="model_cache")
self.pipe = StableDiffusionPipeline.from_single_file(
    "./poselabs.safetensors",
    load_safety_checker=False,
    cache_dir="model_cache",
    clip_model=clip_model,
    tokenizer=tokenizer,
    safety_checker=None,
    local_files_only=True,
)
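One way to sidestep the hub lookup entirely (a sketch that assumes only the standard hub cache layout of `<cache_dir>/models--<org>--<name>/snapshots/<revision>/`, not any diffusers API; `latest_snapshot` is a hypothetical helper) is to resolve the cached snapshot directory yourself and pass that local path to `from_pretrained`:

```python
from pathlib import Path
from typing import Optional


def latest_snapshot(cache_dir: str, repo_id: str) -> Optional[Path]:
    """Return the most recently written cached snapshot directory for repo_id.

    The Hugging Face hub cache stores files under
    <cache_dir>/models--<org>--<name>/snapshots/<revision>/; passing that
    directory to from_pretrained() requires no hub lookup at all.
    """
    root = Path(cache_dir) / ("models--" + repo_id.replace("/", "--"))
    snapshots = sorted((root / "snapshots").glob("*"), key=lambda p: p.stat().st_mtime)
    return snapshots[-1] if snapshots else None
```

For example, `CLIPTokenizer.from_pretrained(latest_snapshot("model_cache", "openai/clip-vit-large-patch14"))` loads from the resolved directory instead of the repo id.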

Logs

Traceback (most recent call last):
File "/root/.pyenv/versions/3.11.7/lib/python3.11/site-packages/transformers/utils/hub.py", line 430, in cached_file
resolved_file = hf_hub_download(
^^^^^^^^^^^^^^^^
File "/root/.pyenv/versions/3.11.7/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/root/.pyenv/versions/3.11.7/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1362, in hf_hub_download
raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: Cannot find the requested files in the disk cache and outgoing traffic has been disabled. To enable hf.co look-ups and downloads online, set 'local_files_only' to False.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/.pyenv/versions/3.11.7/lib/python3.11/site-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 792, in convert_ldm_clip_checkpoint
config = CLIPTextConfig.from_pretrained(config_name, local_files_only=local_files_only)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.pyenv/versions/3.11.7/lib/python3.11/site-packages/transformers/models/clip/configuration_clip.py", line 141, in from_pretrained
config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.pyenv/versions/3.11.7/lib/python3.11/site-packages/transformers/configuration_utils.py", line 622, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.pyenv/versions/3.11.7/lib/python3.11/site-packages/transformers/configuration_utils.py", line 677, in _get_config_dict
resolved_config_file = cached_file(
^^^^^^^^^^^^
File "/root/.pyenv/versions/3.11.7/lib/python3.11/site-packages/transformers/utils/hub.py", line 470, in cached_file
raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like openai/clip-vit-large-patch14 is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/.pyenv/versions/3.11.7/lib/python3.11/site-packages/cog/server/worker.py", line 185, in _setup
run_setup(self._predictor)
File "/root/.pyenv/versions/3.11.7/lib/python3.11/site-packages/cog/predictor.py", line 66, in run_setup
predictor.setup()
File "/src/predict.py", line 59, in setup
self.pipe = StableDiffusionRubberPipeline.from_single_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.pyenv/versions/3.11.7/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/root/.pyenv/versions/3.11.7/lib/python3.11/site-packages/diffusers/loaders/single_file.py", line 263, in from_single_file
pipe = download_from_original_stable_diffusion_ckpt(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.pyenv/versions/3.11.7/lib/python3.11/site-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 1650, in download_from_original_stable_diffusion_ckpt
text_model = convert_ldm_clip_checkpoint(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.pyenv/versions/3.11.7/lib/python3.11/site-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 794, in convert_ldm_clip_checkpoint
raise ValueError(
ValueError: With local_files_only set to True, you must first locally save the configuration in the following path: 'openai/clip-vit-large-patch14'.
✗ Model setup failed

System Info

  • diffusers version: 0.25.0.dev0
  • Platform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
  • Python version: 3.10.12
  • PyTorch version (GPU?): 2.1.2+cu121 (True)
  • Huggingface_hub version: 0.19.4
  • Transformers version: 4.36.2
  • Accelerate version: 0.25.0
  • xFormers version: not installed
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Who can help?

@yiyixuxu @DN6 @sayakpaul @patrickvonplaten

@alexblattner alexblattner added the bug Something isn't working label Dec 25, 2023
@alexblattner alexblattner reopened this Dec 25, 2023
@shixinlishixinli

I'm hitting the same problem. Where should I save the configuration file?

@sayakpaul
Member

A more minimal reproduction is preferred; we don't know what StableDiffusionRubberPipeline is.

Can we keep the reproduction complete, minimal, and within the scope of diffusers?

@alexblattner
Author

@sayakpaul I updated it. There's virtually no difference between StableDiffusionRubberPipeline and StableDiffusionPipeline; I forgot to remove one word. Also, I may have found a solution, but I'm unsure: I just removed local_files_only=True from the from_single_file call in the pipeline. It worked, but that doesn't seem right to me.

@sayakpaul
Member

Cc: @DN6

@shixinlishixinli

shixinlishixinli commented Dec 25, 2023

I fixed this bug. I put the SD model in the diffusers/scripts/stabilityai/stable-diffusion-2 directory and renamed it with "mv stable-diffusion-2-1-base/ stable-diffusion-2". After that I could successfully run convert_original_stable_diffusion_to_diffusers.py.

@DN6
Collaborator

DN6 commented Jan 2, 2024

@alexblattner I'm unable to reproduce this. Would you be able to share how you're setting your cache directory and can you confirm the CLIP model is downloaded there? If you run tree -L 2 in your cache directory you should see the following structure.

.
└── models--openai--clip-vit-large-patch14
    ├── blobs
    ├── refs
    └── snapshots
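That layout check can also be scripted. A small sketch (it assumes only the standard cache layout shown above; `cache_layout_ok` is a hypothetical helper, not a diffusers or huggingface_hub API):

```python
from pathlib import Path


def cache_layout_ok(cache_dir: str, repo_id: str = "openai/clip-vit-large-patch14") -> bool:
    """Check that the hub cache folder for repo_id has the expected subfolders.

    from_pretrained(..., local_files_only=True) can only succeed when the
    snapshots/ folder holds a revision containing the files the loader needs.
    """
    root = Path(cache_dir) / ("models--" + repo_id.replace("/", "--"))
    return all((root / sub).is_dir() for sub in ("blobs", "refs", "snapshots"))
```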

@alexblattner
Author

@DN6 are you using local_files_only?

model_cache
└── models--openai--clip-vit-large-patch14
    ├── .no_exist
    ├── blobs
    ├── refs
    └── snapshots


This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

@github-actions github-actions bot added the stale Issues that haven't received updates label Jan 26, 2024
@DN6
Collaborator

DN6 commented Jan 26, 2024

@alexblattner from_single_file has been updated. Could you try again to see if you're facing the issue with local_files_only?

@alexblattner
Author

@DN6 I ended up manually downloading the model and it worked. It's odd that it never worked before, despite all the previous runs. I hope the newest version handles this, though.

@alexisrolland
Contributor

Hello @alexblattner

I am facing a similar issue when loading with from_single_file and the argument local_files_only=True:

OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.

I can set local_files_only=False to download the missing components, but it's not ideal. Here is what gets downloaded:

tokenizer_config.json: 100%|██████████| 905/905 [00:00<00:00, 2.44MB/s]
vocab.json: 100%|██████████| 961k/961k [00:00<00:00, 3.28MB/s]
merges.txt: 100%|██████████| 525k/525k [00:00<00:00, 6.13MB/s]
special_tokens_map.json: 100%|██████████| 389/389 [00:00<00:00, 720kB/s]
tokenizer.json: 100%|██████████| 2.22M/2.22M [00:00<00:00, 7.26MB/s]
config.json: 100%|██████████| 4.52k/4.52k [00:00<00:00, 7.44MB/s]
tokenizer_config.json: 100%|██████████| 904/904 [00:00<00:00, 1.81MB/s]
vocab.json: 100%|██████████| 862k/862k [00:00<00:00, 1.74MB/s]
merges.txt: 100%|██████████| 525k/525k [00:00<00:00, 6.27MB/s]
special_tokens_map.json: 100%|██████████| 389/389 [00:00<00:00, 894kB/s]
tokenizer.json: 100%|██████████| 2.22M/2.22M [00:00<00:00, 7.82MB/s]
config.json: 100%|██████████| 4.88k/4.88k [00:00<00:00, 8.67MB/s]

I am running diffusers in a Linux Docker container and I would like to avoid downloading anything. Could you please tell me:

  • Where are the missing components downloaded? What is the full path?
  • Is it possible to place the components in a custom path and load them from there? How?

Thank you

@DN6
Collaborator

DN6 commented Feb 5, 2024

@alexisrolland Would you mind creating a new issue for this question since this one is closed?

@alexisrolland
Contributor

Thank you for your message! Actually I found a solution through this thread:

#6836

The trick was to place the openai and laion CLIP models at the root, alongside the main Python script that uses diffusers.
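For containerized setups like the one described above, the cache location can also be pinned with Hugging Face's documented environment variables (by default the hub cache lives under ~/.cache/huggingface/hub). A sketch, where /opt/hf-cache is a hypothetical path baked into the image; these must be set before transformers/diffusers are imported:

```python
import os

# HF_HOME relocates the whole Hugging Face cache; the hub cache then lives
# under $HF_HOME/hub. HF_HUB_OFFLINE=1 forbids any network lookup, so a
# missing file fails immediately instead of attempting a download.
os.environ["HF_HOME"] = "/opt/hf-cache"  # hypothetical path baked into the image
os.environ["HF_HUB_OFFLINE"] = "1"

hub_cache = os.path.join(os.environ["HF_HOME"], "hub")
print(hub_cache)
```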
