
Add Emu2 and Emu2_chat #47

Merged
merged 12 commits into open-compass:main, Jan 13, 2024
Conversation

SparksJoe (Contributor)

No description provided.

@@ -17,6 +17,11 @@
'llava_v1_7b': 'Please set your local path to LLaVA-7B-v1.1 here, the model weight is obtained by merging LLaVA delta weight based on vicuna-7b-v1.1 in https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md with vicuna-7b-v1.1. '
}

emu_model_path_map={
Member
You can remove this and place the model map in emu.py; there is no need to add a model_path_map argument.
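The suggestion above could look roughly like this (a minimal sketch, not the actual VLMEvalKit code; the map contents, repo ids, and class shape are assumptions for illustration):

```python
# emu.py -- hypothetical sketch: keep the checkpoint map as a module-level
# constant inside emu.py instead of threading a model_path_map argument
# through the constructor.
EMU_MODEL_PATH_MAP = {
    'emu2': 'BAAI/Emu2',            # assumed HF repo ids, for illustration
    'emu2_chat': 'BAAI/Emu2-Chat',
}

class Emu:
    def __init__(self, name='emu2'):
        # Callers only pass the variant name; the path is resolved here.
        assert name in EMU_MODEL_PATH_MAP, f'unknown Emu variant: {name}'
        self.model_path = EMU_MODEL_PATH_MAP[name]
```

The config side would then shrink to `partial(Emu, name='emu2')` with no extra map argument.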

@@ -43,6 +48,8 @@
'cogvlm-grounding-generalist':partial(CogVlm, name='cogvlm-grounding-generalist',tokenizer_name ='lmsys/vicuna-7b-v1.5'),
'cogvlm-chat':partial(CogVlm, name='cogvlm-chat',tokenizer_name ='lmsys/vicuna-7b-v1.5'),
'sharedcaptioner':partial(SharedCaptioner, model_path='Lin-Chen/ShareCaptioner'),
'emu2':partial(Emu, name='emu2', model_path_map=emu_model_path_map),
'emu_chat':partial(Emu, name='emu_chat', model_path_map=emu_model_path_map),
Member

Should this be emu2_chat?

self.kwargs = {'max_length': 128}


def generate(self, image_path, prompt, dataset=None):
Member

You can use interleave_generate to implement generate; it seems they can be formulated in the same way.
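As a rough illustration of that refactor (a hedged sketch only; the method names follow the review comment, and the echoing body is a stand-in for the real model call):

```python
class EmuSketch:
    """Illustrative only: shows generate() delegating to interleave_generate()."""

    def interleave_generate(self, ti_list, dataset=None):
        # Stand-in for the real interleaved image/text generation call;
        # here it simply echoes its inputs.
        return ' '.join(str(x) for x in ti_list)

    def generate(self, image_path, prompt, dataset=None):
        # A single (image, prompt) pair is just a two-element interleaved
        # sequence, so the single-image path can reuse the same entry point.
        return self.interleave_generate([image_path, prompt], dataset=dataset)
```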

low_cpu_mem_usage=True,
trust_remote_code=True).to(device).eval()
self.model = model
self.kwargs = {'max_length': 128}
Member

Do you mean max_new_tokens? This arg is not used during generation.

By the way, you can pass generation kwargs via **kwargs in __init__, so remember to add them. Also, remember to use self.kwargs during generation.
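A minimal sketch of that pattern (assumed names, not the repo's actual code): merge caller-supplied generation kwargs over defaults in __init__, then forward self.kwargs at generation time.

```python
class EmuInitSketch:
    def __init__(self, **kwargs):
        # Defaults first; caller-supplied generation kwargs override them.
        self.kwargs = {'max_new_tokens': 128}
        self.kwargs.update(kwargs)

    def generate(self, image_path, prompt):
        # The real code would call self.model.generate(..., **self.kwargs);
        # returning the merged config keeps this sketch self-contained.
        return dict(self.kwargs)
```

Note that `max_new_tokens` (not `max_length`) is what HuggingFace `generate` uses to bound newly produced tokens, which is the point of the review comment above.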

@kennymckormick kennymckormick merged commit 86732dc into open-compass:main Jan 13, 2024
@SparksJoe SparksJoe deleted the Emu2 branch January 15, 2024 05:42
shan23chen pushed a commit to shan23chen/VLMEvalKit that referenced this pull request Oct 3, 2024
* Add Emu2

* update

* update

* -update

* update

* update

* update

* update

* update

---------

Co-authored-by: llllIlllll <[email protected]>