[Bug]: Function calling with Qwen & Streaming ('NoneType' object has no attribute 'get') #9874
Comments
cc @K-Mistele
Thanks for the ping @DarkLight1337
Please check #9908 :)
With streamed output, if the function has no parameters, an error is reported immediately.
Yeah, this is what I'm thinking too. #9908 (comment)
Thanks for the answer, and sorry for the delay. I answered in the PR here #9908 (comment), but basically yes: if the arguments object is blank, it doesn't work.
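To make the failure mode concrete: a tool whose JSON schema declares no parameters, combined with `stream=True`, is enough to trigger it. A minimal sketch of such a request payload for vLLM's OpenAI-compatible endpoint (the tool name, description, and message are illustrative, not taken from the report):

```python
# Minimal chat-completion payload that exercises the reported case:
# streaming enabled plus a tool with an empty "properties" object.
payload = {
    "model": "Qwen/Qwen2.5-7B-Instruct",
    "stream": True,  # the error only occurs on the streaming path
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_time",  # hypothetical parameterless tool
                "description": "Return the current server time.",
                "parameters": {"type": "object", "properties": {}},  # no arguments
            },
        }
    ],
    "messages": [{"role": "user", "content": "What time is it?"}],
}
```

Sending the same payload with `stream=False`, or giving the tool at least one parameter, sidesteps the crash according to the discussion above.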
Your current environment
The output of `python collect_env.py`
Model Input Dumps
No response
🐛 Describe the bug
vLLM Version
v0.6.3.post1
Model
Qwen2.5-7B-Instruct
Docker command for vLLM
command: --host 0.0.0.0 --model /hf/Qwen-Qwen2.5-7B-Instruct --max-model-len 32768 --gpu_memory_utilization 0.9 --enable-auto-tool-choice --tool-call-parser hermes
I parse the stream from my own FastAPI service.
vLLM error
```
vllm | ERROR 10-30 14:55:01 hermes_tool_parser.py:337] Error trying to handle streaming tool call.
vllm | ERROR 10-30 14:55:01 hermes_tool_parser.py:337] Traceback (most recent call last):
vllm | ERROR 10-30 14:55:01 hermes_tool_parser.py:337]   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/tool_parsers/hermes_tool_parser.py", line 226, in extract_tool_calls_streaming
vllm | ERROR 10-30 14:55:01 hermes_tool_parser.py:337]     function_name: Union[str, None] = current_tool_call.get("name")
vllm | ERROR 10-30 14:55:01 hermes_tool_parser.py:337]                                       ^^^^^^^^^^^^^^^^^^^^^
vllm | ERROR 10-30 14:55:01 hermes_tool_parser.py:337] AttributeError: 'NoneType' object has no attribute 'get'
```
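The traceback shows `current_tool_call.get("name")` being called while `current_tool_call` is still `None` — during streaming, the partial-JSON parse can yield nothing before any of the tool call's arguments have been emitted. A defensive guard, sketched here outside the real parser as an illustration of the pattern (not the actual fix proposed in #9908):

```python
from typing import Optional, Union


def safe_function_name(current_tool_call: Optional[dict]) -> Union[str, None]:
    """Return the tool call's function name, or None if it isn't parsed yet.

    During streaming, the incrementally parsed tool call may be None (or
    otherwise not a dict) before enough tokens have arrived, so guard
    before calling .get() to avoid the AttributeError in the traceback.
    """
    if not isinstance(current_tool_call, dict):
        return None
    return current_tool_call.get("name")
```

With this guard, a not-yet-parsed tool call simply yields `None` instead of crashing the stream.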
Please note that everything works if
Any guidance ?
Thanks in advance everyone
PS: I have seen the posts from #9693 but my issue seems different since i actually use a "supported" model.