
Fix data model error in 3Box integration #5955

Merged
merged 6 commits into from
Feb 11, 2020

Conversation

think-in-universe
Contributor

@think-in-universe think-in-universe commented Feb 6, 2020

Description

Fix the testing failure in #5947, when backing up profile data to 3Box.

Refers/Fixes

Fix the testing failure in #5947

Testing

Tested locally with the Gitcoin development environment.

However, I failed to fund issues in the local environment, so this needs extra testing from anyone who can mock the data in their dev environment, especially for the active_work and portfolio fields in the backup profile data.

@codecov

codecov bot commented Feb 6, 2020

Codecov Report

Merging #5955 into master will decrease coverage by 1.28%.
The diff coverage is 29.02%.


@@            Coverage Diff             @@
##           master    #5955      +/-   ##
==========================================
- Coverage   30.22%   28.94%   -1.29%     
==========================================
  Files         248      271      +23     
  Lines       21186    23678    +2492     
  Branches     3068     3441     +373     
==========================================
+ Hits         6403     6853     +450     
- Misses      14507    16552    +2045     
+ Partials      276      273       -3
| Impacted Files | Coverage Δ |
|---|---|
| app/app/thumbnail_processors.py | 33.33% <ø> (ø) ⬆️ |
| app/dashboard/management/commands/calc_profile.py | 0% <ø> (ø) ⬆️ |
| .../dashboard/management/commands/migrate_profiles.py | 0% <ø> (ø) ⬆️ |
| app/bounty_requests/admin.py | 100% <ø> (ø) ⬆️ |
| ...ard/management/commands/create_activity_records.py | 0% <ø> (ø) ⬆️ |
| app/app/log_filters.py | 75% <ø> (ø) ⬆️ |
| app/avatar/admin.py | 77.14% <ø> (ø) ⬆️ |
| app/dashboard/signals.py | 60% <ø> (ø) ⬆️ |
| app/app/wsgi.py | 0% <ø> (ø) ⬆️ |
| app/avatar/views.py | 14.08% <ø> (ø) ⬆️ |

... and 100 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d040197...eaf3c75. Read the comment docs.

@octavioamu
Contributor

nice, worked on my local

@loganbek

loganbek commented Feb 7, 2020

Does this still require testing? If so, I can do it.

@think-in-universe
Contributor Author

@octavioamu cool.

@loganbek thanks! I'd prefer we do more local testing if possible, especially to confirm the active_work and portfolio fields are stored correctly.

I'm still trying to create some bounties in my local env...

@loganbek

loganbek commented Feb 7, 2020

I'll do some local testing and get back to you.

@think-in-universe
Contributor Author

@loganbek awesome. thanks

we may need to change the active_work data from

web/app/dashboard/views.py

Lines 2432 to 2436 in 460c466

    active_work = []
    interests = profile.active_bounties
    for interest in interests:
        bounties = Bounty.objects.filter(interested=interest, current_bounty=True)
        active_work.extend(bounties)

to the following, if it doesn't work:

web/app/dashboard/views.py

    active_work = Bounty.objects.none()
    interests = profile.active_bounties
    for interest in interests:
        active_work = active_work | Bounty.objects.filter(interested=interest, current_bounty=True)
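
As a plain-Python illustration (not the Django ORM; `rows` is a hypothetical stand-in for Bounty records), the practical difference between the two versions above is that `list.extend` keeps a bounty once per matching interest, while the queryset union yields each bounty only once:

```python
# Hypothetical stand-in for Bounty rows as (bounty_id, interested_id) pairs
rows = [(1, "a"), (2, "a"), (2, "b"), (3, "b")]

def active_work_list(interests):
    # Mirrors the list/extend version: a bounty appears once per
    # matching interest, so duplicates are possible
    work = []
    for i in interests:
        work.extend(r[0] for r in rows if r[1] == i)
    return work

def active_work_union(interests):
    # Mirrors Bounty.objects.none() | filter(...): a set-style union,
    # so each bounty appears once
    work = set()
    for i in interests:
        work |= {r[0] for r in rows if r[1] == i}
    return sorted(work)
```

A single-query form such as `Bounty.objects.filter(interested__in=interests, current_bounty=True)` might also work here, assuming the field names are as shown above.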

@loganbek

loganbek commented Feb 7, 2020

@think-in-universe heads up, I'm hitting a pip build issue; will retry the local test tomorrow.

@think-in-universe
Contributor Author

@loganbek sure. no problem.

not sure why the pip build fails. I set up the environment again yesterday, and the pip build completed in a few minutes.

@octavioamu
Contributor

@loganbek are you using docker?

@octavioamu
Contributor

@think-in-universe not sure how the data should look, but here is mine; please check whether it's right: https://share.vidyard.com/watch/hCn7h5myiyNupHvYSGSPK8?

@think-in-universe
Contributor Author

@octavioamu cool, thanks! I have watched the recorded video, and I think most of the data is correct.

But I noticed that portfolio and active_work are empty lists. Could you please create some bounties locally, completing at least one and starting at least one, and then check whether the two fields are correct?

I have been trying to create bounties locally, but I've been blocked connecting to IPFS for quite some time, so I haven't been able to proceed yet...

@think-in-universe
Contributor Author

@octavioamu maybe the command in the docs is helpful for creating bounties locally, but it failed for me again with IPFS connection issues: https://docs.gitcoin.co/mk_setup/#optional-import-bounty-data-from-web3-to-your-database

@think-in-universe
Contributor Author

think-in-universe commented Feb 7, 2020

@octavioamu I'm able to create bounties locally now after reconfiguring the IPFS daemon, but not able to complete bounties due to some unknown contract error...

active_work works well with the latest commit, and portfolio should probably work as well, though it isn't tested here. It would be awesome if you could help with local testing of the portfolio field, but I think it should generally be OK to merge now.

@thelostone-mc thelostone-mc changed the base branch from master to stable February 11, 2020 14:24
@octavioamu octavioamu merged commit 8be78f0 into gitcoinco:stable Feb 11, 2020
@octavioamu
Contributor

Seems it's not working again. It's now flagged on production as alpha, so you will be able to see it if you are an alpha tester: https://gitcoin.co/settings/account?cb=alphatester

@owocki
Contributor

owocki commented Feb 11, 2020

from our internal debugging thread

octavioamu  11 minutes ago
not sure why it worked on my local. I think it's related to the serializer, as production has more data than local
Owocki  11 minutes ago
seems that the backup HTTP request takes too long https://bits.owocki.com/E0uq5rqZ


Owocki  11 minutes ago
i vote that we move it to a celery job if it's a long-running job

octavioamu  11 minutes ago
yes, could be that too

Owocki  11 minutes ago
otherwise it'll 502 in prod (this looks like what's happening, actually)

octavioamu  10 minutes ago
not sure about it being a long-running job, since it's all on the front-end side

octavioamu  10 minutes ago
sorry, you mean the serialization

octavioamu  9 minutes ago
yep, not sure if it's about serialization time or just a problem in the way it's serializing. anyway, moving it to a celery job will be a problem, since the user client is the one sending the data

Owocki  4 minutes ago
hmm.. well, we can either chunk it and manage a request/response cycle in the frontend from there, or move it to a celery job

Owocki  4 minutes ago
or maybe we disable it for users of a certain power-user ness… it might work for the average web3/gitcoin user

Owocki  2 minutes ago
or just fix the serializer speed :slightly_smiling_face:

the issue is the size of the data i think
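
The "chunk it and manage a request/response cycle" option from the thread above could be sketched as below (a plain-Python sketch; `chunk_backup_payload` and the size limit are hypothetical, not part of the Gitcoin codebase):

```python
import json

def chunk_backup_payload(profile_data, max_bytes=500_000):
    """Split a profile-backup dict into per-field chunks so that no
    single POST body exceeds max_bytes (a hypothetical limit)."""
    chunks, current, size = [], {}, 0
    for field, value in profile_data.items():
        field_size = len(json.dumps({field: value}))
        # start a new chunk when adding this field would exceed the limit
        if current and size + field_size > max_bytes:
            chunks.append(current)
            current, size = {}, 0
        current[field] = value
        size += field_size
    if current:
        chunks.append(current)
    return chunks
```

Each chunk could then be POSTed separately, avoiding one long-running backup request that times out or 502s.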

@octavioamu
Contributor

My guess is that the data on production is bigger and the serializer is failing, or that because it is bigger it is taking too long to serialize.

@think-in-universe
Contributor Author

think-in-universe commented Feb 12, 2020

@octavioamu @owocki

we received a 500 error when exporting data:

POST https://gitcoin.co/api/v0.1/profile/backup 500

so it's probably still a serialization / data-export error in production.

could anyone please send me the error message from the server, so I can take a look first?

so far I don't think it's related to the size of the data, at least in my tests.

If we want to move this into a celery job, we may need a Python binding/client library, or call the 3Box sync process with 3box.js via node.js on the server; that could be done as a next step. I discussed this with the 3Box team earlier.

@think-in-universe
Contributor Author

It might also be faster to test, and would accelerate the feedback loop, if we could efficiently mock or import profile data in the local dev env. I think we still haven't tested all cases locally, for lack of high-quality test data.

@think-in-universe
Contributor Author

think-in-universe commented Feb 12, 2020

please find me on Discord at robertyan#8312 if more direct communication is needed to resolve this issue faster.

And if anyone could send me the error message from the Django server, we should be able to fix the production issue sooner.

@think-in-universe
Contributor Author

think-in-universe commented Feb 12, 2020

oh sorry, I didn't check the screenshot https://bits.owocki.com/E0uq5rqZ from @owocki. According to the screenshot, it's probably a data-size issue for these active users when exporting data from the database. We could either move the 3Box backup process into a celery job, or split the call to /api/v0.1/profile/backup into several parallel calls with a model parameter. To implement the latter, it would be nice if we could log the timing of the serialization process, and then make a small change in dashboard/views.py (profile_backup) for that.
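
To get that timing data, a minimal approach is a decorator that logs how long each serialization step takes (a sketch; `log_duration` is a hypothetical helper, not existing Gitcoin code):

```python
import logging
import time
from functools import wraps

logger = logging.getLogger(__name__)

def log_duration(label):
    """Log the wall-clock duration of the wrapped function, e.g. around
    each model's serialization step in profile_backup."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                return fn(*args, **kwargs)
            finally:
                logger.info("%s took %.2fs", label, time.monotonic() - start)
        return wrapper
    return decorator
```

Wrapping each per-model export with this would show which field (e.g. portfolio or active_work) dominates the request time.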

In the meantime, there are other serialization issues in my profile that we may need to fix.

@owocki
Contributor

owocki commented Feb 12, 2020 via email

@octavioamu
Contributor

@think-in-universe check the new Owocki screenshot https://bits.owocki.com/7KuRBgA6

@think-in-universe
Contributor Author

think-in-universe commented Feb 12, 2020

thanks @octavioamu @owocki

with the help of the screenshot, I submitted another PR for fixing: #5981

but the data-size issue is not tackled yet. It would help to collect more detailed data on the query timeout. As for solutions, an easy one might be to split the request/response and execute the parts in parallel, but it's still possible that serialization takes too long for a particular data field in this case.
