Add Emu2 and Emu2_chat #47
Conversation
vlmeval/config.py
Outdated
@@ -17,6 +17,11 @@
    'llava_v1_7b': 'Please set your local path to LLaVA-7B-v1.1 here, the model weight is obtained by merging LLaVA delta weight based on vicuna-7b-v1.1 in https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md with vicuna-7b-v1.1. '
}

emu_model_path_map = {
You can remove this and place the model map in emu.py; there's no need to add a model_path_map arg.
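A minimal sketch of the suggested refactor, assuming `Emu` takes a `name` argument (the dict keys follow the config; the paths and the class body are placeholders, not the repository's actual loading code):

```python
# Sketch: keep the name-to-path mapping as a module-level dict inside emu.py,
# so entries in config.py only need to pass the model name.
# The paths below are illustrative placeholders.
EMU_MODEL_PATH_MAP = {
    'emu2': '/path/to/Emu2',
    'emu2_chat': '/path/to/Emu2-Chat',
}

class Emu:
    def __init__(self, name):
        # Resolve the checkpoint path internally instead of accepting a
        # model_path_map argument from config.py.
        assert name in EMU_MODEL_PATH_MAP, f'unknown Emu model: {name}'
        self.model_path = EMU_MODEL_PATH_MAP[name]
```

With this in place, the config entry reduces to `partial(Emu, name='emu2')`.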
vlmeval/config.py
Outdated
@@ -43,6 +48,8 @@
    'cogvlm-grounding-generalist':partial(CogVlm, name='cogvlm-grounding-generalist',tokenizer_name ='lmsys/vicuna-7b-v1.5'),
    'cogvlm-chat':partial(CogVlm, name='cogvlm-chat',tokenizer_name ='lmsys/vicuna-7b-v1.5'),
    'sharedcaptioner':partial(SharedCaptioner, model_path='Lin-Chen/ShareCaptioner'),
    'emu2':partial(Emu, name='emu2', model_path_map=emu_model_path_map),
    'emu_chat':partial(Emu, name='emu_chat', model_path_map=emu_model_path_map),
Should this be emu2_chat?
vlmeval/vlm/emu.py
Outdated
        self.kwargs = {'max_length': 128}

    def generate(self, image_path, prompt, dataset=None):
You can use interleave_generate to implement generate; it seems the two can be formulated in the same form.
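One way to unify the two methods, sketched under the assumption that interleave_generate accepts a flat list of image paths and text segments (the class body is a hypothetical stand-in, not the wrapper's real model call):

```python
class EmuSketch:
    # Hypothetical stand-in for the Emu wrapper: only the dispatch logic
    # is shown; the real interleave_generate would tokenize the list and
    # run the model. Here it just echoes the inputs for illustration.
    def interleave_generate(self, ti_list, dataset=None):
        return ' '.join(str(x) for x in ti_list)

    def generate(self, image_path, prompt, dataset=None):
        # A single-image query is just a two-element interleaved input,
        # so generate can delegate to interleave_generate.
        return self.interleave_generate([image_path, prompt], dataset=dataset)
```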
vlmeval/vlm/emu.py
Outdated
            low_cpu_mem_usage=True,
            trust_remote_code=True).to(device).eval()
        self.model = model
        self.kwargs = {'max_length': 128}
Do you mean max_new_tokens? This arg is not used during generation. By the way, you can pass generation kwargs via **kwargs in __init__, so remember to add them. Besides, remember to use self.kwargs during generation.
* Add Emu2
* update
* update
* -update
* update
* update
* update
* update
* update

Co-authored-by: llllIlllll <“[email protected]”>
No description provided.