[Bug]: Issues with vLLM tool call functionality leading to abnormal requests #11284

yumc2573 opened this issue Dec 18, 2024 · 0 comments
Labels: bug (Something isn't working)
Your current environment

The output of `python collect_env.py`
Your output of `python collect_env.py` here

Model Input Dumps

No response

🐛 Describe the bug

I am encountering issues while using vLLM's tool-call functionality. Some requests behave abnormally, and the log shows the following error:
(screenshot of the error log, attached as 20241218-135723)

Additionally, my startup script is as follows:
docker run -d vllm/vllm-openai:v0.6.3.post1 \
    --host 0.0.0.0 --port 30000 \
    --model /llm/models/Qwen2.5-32B-Instruct \
    --served-model-name qwen2.5-32b-instruct \
    --dtype auto \
    --tensor-parallel-size 2 \
    --gpu-memory-utilization 0.90 \
    --enable-prefix-caching \
    --enable-auto-tool-choice \
    --tool-call-parser hermes

Currently, I am using vLLM version vllm/vllm-openai:v0.6.3.post1.
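For reference, the failing requests are ordinary OpenAI-style chat completions with tools enabled. The sketch below shows their general shape against the server configured above; the `get_weather` tool and the prompt are placeholders for illustration, not the exact payload that failed:

```python
# Minimal sketch of a tool-call request against the OpenAI-compatible
# endpoint started above. The get_weather tool is hypothetical.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # placeholder tool for illustration
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="qwen2.5-32b-instruct",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
)
print(response.choices[0].message.tool_calls)
```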

Could you please provide guidance on how to resolve this issue? Any help would be greatly appreciated.

Thank you!

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.