
Create/improve examples for implementing evaluation procedures #137

Open
HeleNoir opened this issue Nov 4, 2022 · 1 comment

HeleNoir commented Nov 4, 2022

To properly develop the evaluation, suitable examples would be helpful. I'm thinking of using examples\coco.rs as a common template and adding an example that will also be used in the Tech Report.
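For reference, a rough sketch of the shape such an example could take, independent of the actual framework API. Everything below (`ToyProblem`, `random_search`) is a hypothetical placeholder used to illustrate the evaluation loop, not an existing MAHF or COCO type:

```rust
/// Hypothetical minimal problem: sphere function with known optimum 0.0.
struct ToyProblem {
    dimension: usize,
}

impl ToyProblem {
    fn evaluate(&self, x: &[f64]) -> f64 {
        x.iter().map(|xi| xi * xi).sum()
    }
}

/// Hypothetical stand-in for the optimizer under evaluation: pure random search.
fn random_search(problem: &ToyProblem, evaluations: usize) -> f64 {
    let mut best = f64::INFINITY;
    let mut seed: u64 = 42;
    for _ in 0..evaluations {
        // Simple xorshift PRNG to keep the sketch dependency-free.
        let x: Vec<f64> = (0..problem.dimension)
            .map(|_| {
                seed ^= seed << 13;
                seed ^= seed >> 7;
                seed ^= seed << 17;
                (seed % 10_000) as f64 / 1_000.0 - 5.0
            })
            .collect();
        best = best.min(problem.evaluate(&x));
    }
    best
}

fn main() {
    let problem = ToyProblem { dimension: 10 };
    let best = random_search(&problem, 10_000);
    println!("best fitness after 10000 evaluations: {best}");
}
```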

@HeleNoir HeleNoir added the example Add example of runnable algorithm label Nov 4, 2022
@HeleNoir HeleNoir self-assigned this Nov 4, 2022
HeleNoir (Collaborator, Author) commented
I came across an issue: the COCO bindings do not seem to include the final_target_fvalue1 function, which returns the known optimum. While the optimizer should not know this value, and its absence is not a problem for benchmarking, we still need it for our experimental analysis, which includes additional comparisons.

@luleyleo Could you please look into it?
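For context, the underlying COCO C API exposes this value via `coco_problem_get_final_target_fvalue1` in coco.h (to my knowledge). A minimal sketch of how the bindings could surface it, assuming a hypothetical `Problem` wrapper that holds the raw problem pointer (not the actual binding structure):

```rust
use std::os::raw::c_double;

// Opaque handle to the C-side problem (layout assumed, as in typical bindings).
#[allow(non_camel_case_types)]
#[repr(C)]
pub struct coco_problem_t {
    _private: [u8; 0],
}

extern "C" {
    // Declared in coco.h; returns the final target f-value for the problem.
    fn coco_problem_get_final_target_fvalue1(problem: *const coco_problem_t) -> c_double;
}

// Hypothetical safe wrapper, mirroring how other accessors could be exposed.
pub struct Problem {
    raw: *mut coco_problem_t,
}

impl Problem {
    /// Returns the final target f-value of this problem instance.
    pub fn final_target_value(&self) -> f64 {
        unsafe { coco_problem_get_final_target_fvalue1(self.raw) }
    }
}
```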

In addition, there is #138, and I would like to log problem-related values, e.g. the known optimum or whether the target has been hit. I will try to solve this myself, but I might need advice at some point.
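A minimal sketch of what such a log record could look like; all names below (`ProblemLogEntry`, `target_hit`) are placeholders rather than existing MAHF types, and the values in `main` are illustrative only:

```rust
/// Hypothetical record of problem-related values for one run.
#[derive(Debug, Clone)]
struct ProblemLogEntry {
    problem: String,
    known_optimum: f64,
    best_found: f64,
    target_precision: f64,
}

impl ProblemLogEntry {
    /// The target counts as hit if the best found value lies within
    /// `target_precision` of the known optimum.
    fn target_hit(&self) -> bool {
        (self.best_found - self.known_optimum).abs() <= self.target_precision
    }
}

fn main() {
    // Illustrative values only.
    let entry = ProblemLogEntry {
        problem: "bbob_f001_i01_d10".to_string(),
        known_optimum: 79.48,
        best_found: 79.48,
        target_precision: 1e-8,
    };
    println!(
        "{}: known_optimum={}, best_found={}, target_hit={}",
        entry.problem, entry.known_optimum, entry.best_found, entry.target_hit()
    );
}
```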
