Commit bcc96ea: update readme and strings

ej52 committed Nov 4, 2023 (1 parent: c90c94e)
Showing 2 changed files with 37 additions and 4 deletions.
8 changes: 4 additions & 4 deletions README.md
@@ -39,13 +39,13 @@ Options for Ollama Conversation can be set via the user interface, by taking the

 | Option                   | Description |
 | ------------------------ | ----------- |
-| Prompt Template          | The starting text for the AI language model to generate new text from. This text can include information<br> about your Home Assistant instance, devices, and areas and is written using Home Assistant Templating. |
+| Prompt Template          | The starting text for the AI language model to generate new text from. This text can include information about your Home Assistant instance, devices, and areas and is written using Home Assistant Templating. |
 | Completion Model         | The model used to generate response. |
 | Context Size             | Sets the size of the context window used to generate the next token. |
 | Maximum Tokens           | The maximum number of words or “tokens” that the AI model should generate in its completion of the prompt. |
-| Temperature              | The temperature of the model. A higher value (e.g., 0.95) will lead to more unexpected results, while a <br>lower value (e.g. 0.5) will be more deterministic results. |
-| Top K                    | Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers,<br> while a lower value (e.g. 10) will be more conservative. |
-| Top P                    | Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5)<br> will generate more focused and conservative text. |
+| Temperature              | The temperature of the model. A higher value (e.g., 0.95) will lead to more unexpected results, while a lower value (e.g. 0.5) will be more deterministic results. |
+| Top K                    | Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. |
+| Top P                    | Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. |


## Contributions are welcome!
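A note on how options like those in the table above typically reach the model: Ollama's REST API accepts generation options whose names closely mirror the table (num_ctx, num_predict, temperature, top_k, top_p). The sketch below is illustrative only, not this integration's actual code; the server URL, model name, and option values are placeholder assumptions.

```python
# Illustrative sketch only -- not this integration's code. Shows how the
# README options plausibly map onto Ollama's /api/generate request body.
import json
import urllib.request

payload = {
    "model": "llama2",        # "Completion Model" (placeholder name)
    "prompt": "Hello",        # rendered "Prompt Template" plus user input
    "stream": False,
    "options": {
        "num_ctx": 2048,      # "Context Size"
        "num_predict": 128,   # "Maximum Tokens"
        "temperature": 0.8,   # "Temperature"
        "top_k": 40,          # "Top K"
        "top_p": 0.9,         # "Top P"
    },
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # placeholder base_url
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```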
33 changes: 33 additions & 0 deletions custom_components/ollama_conversation/strings.json
@@ -0,0 +1,33 @@
+{
+  "config": {
+    "step": {
+      "user": {
+        "data": {
+          "base_url": "[%key:common::config_flow::data::url%]"
+        }
+      }
+    },
+    "error": {
+      "cannot_connect": "[%key:common::config_flow::error::cannot_connect%]",
+      "invalid_auth": "[%key:common::config_flow::error::invalid_auth%]",
+      "invalid_url": "Invalid URL",
+      "unknown": "[%key:common::config_flow::error::unknown%]"
+    }
+  },
+  "options": {
+    "step": {
+      "init": {
+        "title": "Ollama configuration",
+        "data": {
+          "prompt": "Prompt Template",
+          "chat_model": "Completion Model",
+          "ctx_size": "Context Size",
+          "max_tokens": "Maximum Tokens",
+          "temperature": "Temperature",
+          "top_p": "Top P",
+          "top_k": "Top K"
+        }
+      }
+    }
+  }
+}
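The [%key:...] values in strings.json are references to Home Assistant's shared "common" translation strings, which Home Assistant's tooling resolves when the final translation files are built. Below is a minimal sketch of that resolution step; the COMMON values are assumptions for illustration, since the authoritative strings live in Home Assistant core.

```python
# Minimal sketch of resolving "[%key:...]" references in strings.json.
# Home Assistant's own build tooling performs this step; the COMMON values
# below are assumptions for illustration, not authoritative translations.
import re

COMMON = {
    "common::config_flow::data::url": "URL",
    "common::config_flow::error::cannot_connect": "Failed to connect",
    "common::config_flow::error::invalid_auth": "Invalid authentication",
    "common::config_flow::error::unknown": "Unexpected error",
}

def resolve(value: str) -> str:
    """Replace each [%key:path%] reference with its common-string value."""
    return re.sub(
        r"\[%key:(.+?)%\]",
        lambda match: COMMON.get(match.group(1), match.group(0)),
        value,
    )

print(resolve("[%key:common::config_flow::data::url%]"))  # -> URL
```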
