
Unexpected token : in JSON at position 426 #12

Closed
AtzaMan opened this issue Apr 4, 2024 · 6 comments
Labels
bug Something isn't working

Comments


AtzaMan commented Apr 4, 2024

After running the following command:

npm start -- --key="sk-_your-token_" -o deobfuscated.js obfuscated.js

I get the following error:

SyntaxError: Unexpected token : in JSON at position 426
    at JSON.parse (<anonymous>)
    at codeToVariableRenames (file:///C:/Users/Alexander/Documents/GitHub/humanify/src/openai/openai.ts:76:59)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async file:///C:/Users/Alexander/Documents/GitHub/humanify/src/openai/openai.ts:21:23
    at async Promise.all (index 4)
    at async mapPromisesParallel (file:///C:/Users/Alexander/Documents/GitHub/humanify/src/openai/run-promises-in-parallel.ts:17:5)
    at async client.createChatCompletion.model (file:///C:/Users/Alexander/Documents/GitHub/humanify/src/openai/openai.ts:20:5)
    at async file:///C:/Users/Alexander/Documents/GitHub/humanify/src/index.ts:68:25


jehna commented Jun 19, 2024

I'm pretty sure this is because OpenAI does not guarantee that function-call arguments are valid JSON. We should probably implement quick retry logic for it 🤔
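
A minimal sketch of what that retry logic could look like, assuming the completion call is wrapped in a `fetchCompletion` callback (a hypothetical stand-in for the project's actual OpenAI call, not its real API):

```typescript
// Hypothetical sketch: re-request the completion until the model returns
// parseable JSON, giving up after a few attempts.
async function parseWithRetry<T>(
  fetchCompletion: () => Promise<string>,
  maxAttempts = 3
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const raw = await fetchCompletion();
    try {
      return JSON.parse(raw) as T; // success: valid JSON from the model
    } catch (e) {
      lastError = e; // invalid JSON; fall through and try again
    }
  }
  throw lastError; // all attempts produced invalid JSON
}
```

Since the model's output is non-deterministic, a fresh request often succeeds where the previous one produced truncated or malformed JSON.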

@jehna jehna added the bug Something isn't working label Jun 19, 2024

0xdevalias commented Jun 20, 2024

Can you use the new JSON mode or tool choice or similar to force it?

  • https://platform.openai.com/docs/guides/text-generation/json-mode
    • A common way to use Chat Completions is to instruct the model to always return a JSON object that makes sense for your use case, by specifying this in the system message. While this does work in some cases, occasionally the models may generate output that does not parse to valid JSON objects.

      To prevent these errors and improve model performance, when using gpt-4o, gpt-4-turbo, or gpt-3.5-turbo, you can set response_format to { "type": "json_object" } to enable JSON mode. When JSON mode is enabled, the model is constrained to only generate strings that parse into valid JSON objects.

    • https://platform.openai.com/docs/api-reference/chat/create#chat-create-response_format
      • response_format
        An object specifying the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106.

        Setting to { "type": "json_object" } enables JSON mode, which guarantees the message the model generates is valid JSON.

        Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

  • https://platform.openai.com/docs/guides/function-calling
    • https://platform.openai.com/docs/guides/function-calling/function-calling-behavior
      • Function calling behavior
        The default behavior for tool_choice is tool_choice: "auto". This lets the model decide whether to call functions and, if so, which functions to call.

        We offer three ways to customize the default behavior depending on your use case:

        • To force the model to always call one or more functions, you can set tool_choice: "required". The model will then select which function(s) to call.
        • To force the model to call only one specific function, you can set tool_choice: {"type": "function", "function": {"name": "my_function"}}.
        • To disable function calling and force the model to only generate a user-facing message, you can set tool_choice: "none".
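
To make the two options concrete, here are sketches of the request bodies the quoted docs describe (openai SDK v4 shape; the model, prompt text, and the `suggest_renames` function are illustrative, not the project's actual API):

```typescript
// Option A (JSON mode): constrain output to parse as a JSON object.
// Note the docs' requirement to also ask for JSON in a message.
const jsonModeParams = {
  model: "gpt-4o",
  response_format: { type: "json_object" },
  messages: [
    { role: "system", content: "Reply with a JSON object mapping old variable names to new names." },
    { role: "user", content: "Rename the variables a, b, c." },
  ],
};

// Option B (forced tool choice): make the model call one specific function,
// whose parameters schema then shapes the arguments it generates.
const toolChoiceParams = {
  model: "gpt-4o",
  tool_choice: { type: "function", function: { name: "suggest_renames" } },
  tools: [
    {
      type: "function",
      function: {
        name: "suggest_renames",
        description: "Suggest a readable name for a minified variable",
        parameters: {
          type: "object",
          properties: { newName: { type: "string" } },
          required: ["newName"],
        },
      },
    },
  ],
  messages: [{ role: "user", content: "Rename the variable `a` in this code." }],
};
```

Either approach would push the "is this valid JSON?" problem onto the API rather than handling parse failures client-side.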


0xdevalias commented Jul 3, 2024

Can you use the new JSON mode or tool choice or similar to force it?

I'm not sure which version of the SDK response_format: { "type": "json_object" } / tool_choice became available in. This project currently seems to use openai 3.3.0, whereas the latest version is 4.52.3 (at time of writing). I created a more specific issue about upgrading the library, which may end up being a prerequisite to using tool_choice or similar.


jehna commented Aug 7, 2024

OpenAI API now guarantees structured output:

https://openai.com/index/introducing-structured-outputs-in-the-api/

(should fix this issue properly)
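
For reference, a sketch of the kind of request the announcement describes: a json_schema response_format with strict: true, which makes the API guarantee schema-conforming output. The model snapshot and schema below are illustrative assumptions, not this project's code:

```typescript
// Structured Outputs sketch: with strict: true, the API guarantees the
// response parses as JSON matching this schema exactly.
const structuredParams = {
  model: "gpt-4o-2024-08-06",
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "variable_rename",
      strict: true,
      schema: {
        type: "object",
        properties: { newName: { type: "string" } },
        required: ["newName"],
        additionalProperties: false, // required by strict mode
      },
    },
  },
  messages: [{ role: "user", content: "Suggest a readable name for the variable `a`." }],
};
```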


0xdevalias commented Aug 12, 2024

This should now be fixed in v2, since there's the long-awaited JSON mode with the new structured outputs. Please take a look and reopen if anything comes up.

Originally posted by @jehna in #22 (comment)


@jehna jehna closed this as completed Aug 12, 2024