Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
xlg-go commented on [Bug]: Qwen2-VL grounding results with vllm are worse than transformers inference (Issue #11254):
Did you install flash-attn?
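As a follow-up to that question, here is a quick way to check whether flash-attn is installed in the environment. This is only a minimal sketch; whether flash-attn is actually related to the grounding inaccuracy is just the hypothesis raised in the comment above.

```python
# Check whether the flash_attn package is importable and report its version.
try:
    import flash_attn
    print("flash_attn version:", flash_attn.__version__)
except ImportError:
    print("flash_attn is not installed")
```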
Your current environment
The output of `python collect_env.py`
Inference with transformers works fine, but when the model is deployed with vllm, the predicted positions on grounding tasks are very inaccurate.
I suspect it is a problem similar to this one: huggingface/transformers#33487
transformers version: 4.45.2 (after applying the fix described there)
vllm version: 0.6.4.post1
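For reference, a minimal offline-inference sketch of the vllm side of the comparison. Assumptions: the vllm multi-modal `LLM.generate` API that takes a prompt dict with `multi_modal_data` (as in the vllm vision-language examples), a placeholder image path and grounding question, and a hand-written Qwen2-VL chat template; for an apples-to-apples comparison the template should match what the transformers processor produces.

```python
from vllm import LLM, SamplingParams
from PIL import Image

# Placeholders (assumptions): local image and grounding question.
image = Image.open("demo.jpg").convert("RGB")
question = "Locate the cup in the image and output its bounding box."

# Qwen2-VL chat-style prompt with the vision placeholder tokens.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>"
    f"{question}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

llm = LLM(model="Qwen/Qwen2-VL-7B-Instruct", max_model_len=4096)
sampling_params = SamplingParams(temperature=0.0, max_tokens=256)

# Pass the image alongside the prompt via multi_modal_data.
outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    sampling_params=sampling_params,
)
print(outputs[0].outputs[0].text)
```

Running the same image and question through the transformers pipeline and diffing the returned boxes should make the discrepancy concrete.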
Model Input Dumps
None
🐛 Describe the bug
None
Before submitting a new issue...