Abnormal phenomenon of reservoir state and 'balanced_accuracy_score' in tutorial.py #44
I'll do a verification run to test, but on my current version of the code I haven't seen the issue. Can you let me know which version of the code you are running, and have you checked that the connectivity data is correctly formatted and passed to the right function?
I downloaded the code based on the README file last week, so I think it is the newest version:

```shell
git clone https://github.com/netneurolab/conn2res.git
cd conn2res
pip install .
cd ..
git clone -b v0.0.1 https://github.com/neurogym/neurogym.git
cd neurogym
pip install -e .
```

The connectivity data is a very sparse and symmetric matrix. I downloaded data.zip from https://zenodo.org/record/4776453#.Yd9AuS_72N8, selected the files from the data.zip archive, and put them under the path:
In addition to this, I didn't change anything else in tutorial.py:

```python
# load connectivity data of one subject
conn = Conn(subj_id=0)

# scale connectivity weights between [0, 1] and normalize by its
# spectral radius
conn.scale_and_normalize()

# instantiate an Echo State Network object
esn = EchoStateNetwork(w=conn.w, activation_function=activation)
```
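For intuition, the scale-and-normalize step can be sketched in plain NumPy. This is a minimal sketch of the typical operation (map weights to [0, 1], then divide by the spectral radius so the largest absolute eigenvalue becomes 1), not conn2res's actual implementation; the `scale_and_normalize` helper below is hypothetical:

```python
import numpy as np

def scale_and_normalize(w):
    """Hypothetical sketch: rescale weights to [0, 1], then divide by
    the spectral radius so max |eigenvalue| of the result is 1."""
    w = (w - w.min()) / (w.max() - w.min())      # scale to [0, 1]
    rho = np.max(np.abs(np.linalg.eigvals(w)))   # spectral radius
    return w / rho

rng = np.random.default_rng(0)
w = rng.random((50, 50))
w = (w + w.T) / 2                 # symmetric, like the connectome data
w_norm = scale_and_normalize(w)

# after normalization the spectral radius is 1 and weights stay nonnegative
rho = np.max(np.abs(np.linalg.eigvals(w_norm)))
print(round(float(rho), 6))  # → 1.0
```

Note that because the input matrix is nonnegative, this kind of normalization keeps all weights nonnegative, which is relevant to the sign discussion later in this thread.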
Thank you for clarifying. I have also run the same simulation on my local machine and got the same results as yours. I tried graphing with more finely spaced alpha values, and it turned out that different alpha values give different reservoir dynamics; the dynamics you got correspond to one such regime. For your information, you can refer to the ESN's "echo-state property" for an explanation of the reservoir's chaotic dynamics. Let me know if you have any questions!
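The echo-state property mentioned here can be illustrated with a toy reservoir: when the effective spectral radius (alpha times the normalized radius) is below 1, activity seeded by the initial state fades away; above 1, it persists indefinitely. This uses a simplified update rule, x ← tanh(alpha · W x) with no external input, not conn2res's exact dynamics:

```python
import numpy as np

def free_run(w, alpha, x0, steps=200):
    """Iterate a simplified input-free reservoir map x <- tanh(alpha * W @ x).
    (Illustrative only; not conn2res's actual update equation.)"""
    x = x0.copy()
    for _ in range(steps):
        x = np.tanh(alpha * w @ x)
    return x

rng = np.random.default_rng(1)
w = rng.random((50, 50))
w = (w + w.T) / 2                          # symmetric, nonnegative weights
w /= np.max(np.abs(np.linalg.eigvals(w)))  # normalize spectral radius to 1

x0 = rng.standard_normal(50)
fading = free_run(w, alpha=0.5, x0=x0)      # effective radius < 1: echoes fade
persistent = free_run(w, alpha=1.5, x0=x0)  # effective radius > 1: state persists

print(np.linalg.norm(fading))      # ~0: the echo-state property holds
print(np.linalg.norm(persistent))  # stays large: the initial state is never forgotten
```

This is why runs near alpha ~ 1.0 are so sensitive: the reservoir sits right at the transition between the two regimes.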
Hello, for me the behavior initially described by @YuZe-01 also happens. When I run it with alpha=0.95 I get similar reservoir states. In my case the scores are similar to the ones in the repository, sometimes almost the same, but perhaps that is normal. For alpha=0.95, for example: @bachnguyenTE, have you used a seed? If so, could you tell us which one?
I would assume that what you have seen in the paper is only a sample run of the repository. I have experimented extensively with the models in the repository, and I can say that the model is highly unstable. Unless you have tight control over your RNGs and air-tight data sampling, it is hard to get reproducible results.

@YuZe-01, the nature of echo-state models, and of reservoir models in general, is that they depend almost entirely on the reservoir. In particular, you have to maintain the "echo-state property"; you might have seen that from @Deskt0r — the graph that you got is what I mentioned, so double-check your readout and reservoir dynamics as well. I didn't use any seed to reproduce, and my performance is also a bit different from what you see in the paper. They might have a different reservoir producing those results, or it might purely be a difference in hardware configuration or in the parameters of the readout ridge-regression layer.

I attached an image of over 3,200 runs that shows how much variance you'd get. The variance of the ridge readout is generally huge, so if any of you have improvements in mind, I'd love to hear them!
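The run-to-run variance described above is easy to reproduce with a toy echo-state setup: train a closed-form ridge readout on a simple memory task across many seeds and look at the spread of scores. All names and parameters here are illustrative; this is not the repository's pipeline:

```python
import numpy as np

def ridge_fit(states, targets, reg=1e-2):
    """Closed-form ridge-regression readout (generic sketch,
    not the repo's exact readout module)."""
    n = states.shape[1]
    return np.linalg.solve(states.T @ states + reg * np.eye(n),
                           states.T @ targets)

scores = []
for seed in range(30):                        # many runs, different RNG seeds
    rng = np.random.default_rng(seed)
    w_res = rng.standard_normal((30, 30)) / np.sqrt(30)
    u = rng.standard_normal(300)              # toy 1-D input stream
    x = np.zeros(30)
    states = []
    for t in range(300):
        x = np.tanh(0.9 * w_res @ x + u[t])   # alpha = 0.9 reservoir update
        states.append(x)
    states = np.array(states)
    target = np.roll(u, 1)                    # task: recall the previous input
    w_out = ridge_fit(states[50:], target[50:])   # discard transient, fit readout
    pred = states[50:] @ w_out
    scores.append(np.corrcoef(pred, target[50:])[0, 1])

print(f"mean={np.mean(scores):.3f}  std={np.std(scores):.3f}")
```

Even with everything else held fixed, changing only the seed moves the score, which is why a distribution over runs is more informative than any single number.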
Hey guys, thanks @bachnguyenTE for your prompt reply to this issue! I'm not sure whether the differences you are observing might be due to the data. The data that I used for the figures in the paper are specifically the ones in this repo on Zenodo: Maybe try with this data and see how it goes. Download the folder in the Zenodo repo and place it inside the folder of the code repo. In any case, as @bachnguyenTE mentioned, reservoir dynamics tend to vary a lot, especially around alpha~1.0. It is normal and expected that reservoir dynamics vary for different values of alpha. In reservoir computing it is always good practice to run simulations several times to get a distribution of values, rather than a single value. @YuZe-01 and @Deskt0r, I would suggest trying with this new data and making sure that the arguments of all functions are exactly the same as in the tutorial! Good luck :)
@estefanysuarez Sorry for replying to you so late. I have downloaded the new data from the link you gave. I ran the 'PerceptualDecisionMaking' task 100 times, and the final performance is shown below. One of the reservoir states, with alpha=0.9, is shown below as well. The reservoir's state actually differs on each run, because the model is highly unstable, as @bachnguyenTE mentioned before. I also noticed something interesting: as far as I know, the connectivity matrix contains only positive numbers, and the 'conn.scale_and_normalize()' function won't turn them into negative numbers; in 'esn.simulate()', the code just adds like this
Hi @YuZe-01, I apologize for my late reply too!
Hi,
When I ran tutorial.py with 'PerceptualDecisionMaking', the output reservoir state was quite strange: it seems the value of each node doesn't change.
Meanwhile, the 'balanced_accuracy_score' performance also differed from the paper, with a drop of more than 10%.
I only changed 'adjusted' in 'balanced_accuracy_score' to 'True' and changed the file paths of 'connectivity.npy', 'cortical.npy' and 'rsn_mapping.npy' to 'examples\data\human'; these files were downloaded following the guidance of the closed issue 'Where can I get the three files connectivity.npy, cortical.npy, rsn_mapping.npy?'
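Note that setting `adjusted=True` by itself rescales the score so that chance performance maps to 0 instead of 0.5 (for a balanced binary task), which alone produces a large apparent drop relative to unadjusted scores. A minimal example with scikit-learn's `balanced_accuracy_score`:

```python
from sklearn.metrics import balanced_accuracy_score

y_true = [0, 0, 0, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 0]   # one mistake per class -> recall 2/3 for each

raw = balanced_accuracy_score(y_true, y_pred)                 # mean per-class recall
adj = balanced_accuracy_score(y_true, y_pred, adjusted=True)  # rescaled: chance -> 0

print(raw)  # ≈ 0.667
print(adj)  # (0.667 - 0.5) / (1 - 0.5) ≈ 0.333
```

So before comparing against the paper's numbers, it is worth re-running with `adjusted=False` (the default) to rule out this rescaling as the source of the gap.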