
Logs output to console if we're not on speak mode #3715

Merged · 4 commits merged into Significant-Gravitas:master on May 17, 2023
Conversation

@Zorinik Zorinik (Contributor) commented May 2, 2023

I had a PR approved for this a few weeks ago, but I think it got lost that very same night against another PR that did a major refactor. I think it is an important point, so I am proposing it again; feel free to approve or discard it, of course.

Background

I noticed that if we're not in speak mode, what the assistant wants to say gets lost because it is never logged to the console, which sometimes means losing useful information.

Changes

I added an "else" so that if the assistant doesn't speak directly (because we're not in speak mode), at least it is printed to console like the rest of the output.

PR Quality Checklist

  • My pull request is atomic and focuses on a single change.
  • I have thoroughly tested my changes with multiple different prompts.
  • I have considered potential risks and mitigations for my changes.
  • I have documented my changes clearly and comprehensively.
  • I have not snuck in any "extra" small tweaks or unrelated changes.

vercel bot commented May 2, 2023

The latest updates on your projects:

docs: ✅ Ready, updated May 17, 2023 7:08pm (UTC)

@github-actions github-actions bot added the size/s label May 2, 2023
codecov bot commented May 2, 2023

Codecov Report

Patch coverage: 25.00%; project coverage change: -0.01% ⚠️

Comparison: base (42a5a0c) at 62.68% vs. head (f7cedd4) at 62.67%.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #3715      +/-   ##
==========================================
- Coverage   62.68%   62.67%   -0.01%     
==========================================
  Files          74       74              
  Lines        3398     3400       +2     
  Branches      494      495       +1     
==========================================
+ Hits         2130     2131       +1     
  Misses       1120     1120              
- Partials      148      149       +1     
Impacted Files Coverage Δ
autogpt/logs.py 82.71% <25.00%> (-0.41%) ⬇️


k-boikov (Contributor) commented May 2, 2023

Do we have an example where SPEAK actually carries more informational value than THOUGHTS? I think that if we print THOUGHTS, we don't really need to print SPEAK as well. The idea in the prompt is that SPEAK should be used only for say_text, imo.

collijk (Contributor) commented May 3, 2023

You don't need this if/else. Just log the thoughts; the typewriter log already has internal speak logic: https://github.com/Significant-Gravitas/Auto-GPT/blob/master/autogpt/logs.py#L90
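
For reference, a rough sketch of the kind of internal speak handling collijk is pointing at: a typewriter-style log call that speaks the message itself when speak mode is on, and prints it to the console either way. This is a simplified assumption based on the linked autogpt/logs.py, not a verbatim copy:

```python
# Simplified, hypothetical sketch of a typewriter-style logger with built-in
# speak handling; names (speak_mode, say_text) are assumed from this thread.

def say_text(text: str) -> None:
    """Stand-in for the project's text-to-speech helper."""
    print(f"[TTS] {text}")

class TypewriterLogger:
    def __init__(self, speak_mode: bool = False) -> None:
        self.speak_mode = speak_mode

    def typewriter_log(self, title: str = "", content: str = "",
                       speak_text: bool = False) -> None:
        # When the caller opts in and speak mode is enabled, speak the message...
        if speak_text and self.speak_mode:
            say_text(f"{title} {content}")
        # ...and always print it to the console, so nothing is lost either way.
        print(f"{title} {content}")
```

In other words, routing the SPEAK text through a call like this would already cover both modes without an extra if/else at the call site.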

Zorinik (Contributor, Author) commented May 4, 2023

> Do we have an example where SPEAK actually carries more informational value than THOUGHTS? I think that if we print THOUGHTS, we don't really need to print SPEAK as well. The idea in the prompt is that SPEAK should be used only for say_text, imo.

The LLM doesn't really know whether speak mode is enabled, so it often puts relevant info in that section, because that is the section that communicates with the user. I have actually had the LLM ask me things across several tasks, or clarify what it was about to do better than the preceding THOUGHTS did. That's why I think that info should not be lost; there is no reason to throw away part of the LLM's output.

p-i- (Contributor) commented May 5, 2023

This is a mass message from the AutoGPT core team.
Our apologies for the ongoing delay in processing PRs.
This is because we are re-architecting the AutoGPT core!

For more details (and for info on joining our Discord), please refer to:
https://github.com/Significant-Gravitas/Auto-GPT/wiki/Architecting

@k-boikov k-boikov added this to the v0.3.2-release milestone May 14, 2023
@k-boikov k-boikov self-assigned this May 15, 2023
@k-boikov k-boikov merged commit 19767ca into Significant-Gravitas:master May 17, 2023
ppetermann pushed a commit to ppetermann/Auto-GPT that referenced this pull request May 22, 2023