
handling memory allocation limit for query eval #1506

Open
r3v4s opened this issue Jan 9, 2024 · 5 comments

@r3v4s
Contributor

r3v4s commented Jan 9, 2024

Description

In PR #267, the max memory cap was set to 1.5 GB for qeval, with the comment // higher limit for queries.

I'm aware that increasing this limit can break the RPC service (similar research has been done against Ethereum => paper & video).

One countermeasure suggested by the authors is performance anomaly detection plus a security deposit: for a client (or dApp) to use the RPC, it has to deposit a certain amount of money with the RPC provider, and if abnormal behavior is detected, the deposit is confiscated.

However, the current limit may not be enough for certain use cases.

For example, to get the best result in a DEX (DeFi), it needs to search over all existing positions. To give the user an estimated result, the interface calls DrySwap over RPC using qeval. This is where the current limit can be insufficient: if positions are spread sparsely across the whole range, a single qeval request can perform many iterations, which can result in panic: allocation limit reached.
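The failure mode above can be sketched with a toy allocator in Go (the gnovm's implementation language). This is purely an illustration, not the actual gnovm allocator API: the counter only ever grows, so a loop whose per-iteration temporaries would individually fit still trips the cap once enough iterations run.

```go
package main

import "fmt"

// Allocator is a toy model (not the gnovm API) of a VM allocator that
// counts every allocation cumulatively against a fixed cap, without
// ever crediting back memory that is no longer reachable.
type Allocator struct {
	allocated int64 // cumulative bytes allocated, never decremented
	limit     int64 // the cap, analogous to qeval's 1.5 GB limit
}

// Allocate charges n bytes against the cap and errors once it is exceeded.
func (a *Allocator) Allocate(n int64) error {
	a.allocated += n
	if a.allocated > a.limit {
		return fmt.Errorf("allocation limit reached: %d > %d", a.allocated, a.limit)
	}
	return nil
}

func main() {
	// With a 1000-byte cap, a loop that allocates 100 bytes of
	// temporaries per "position" fails on the 11th iteration, even
	// though each iteration's memory is dead by the next one.
	a := &Allocator{limit: 1000}
	iters := 0
	for {
		if err := a.Allocate(100); err != nil {
			fmt.Printf("stopped after %d iterations: %v\n", iters, err)
			break
		}
		iters++
	}
}
```

The number of iterations before the panic depends only on the cap divided by the per-iteration allocation, not on how much memory is actually live, which is why sparse-position scans are sensitive to it.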

Question

  1. Is the current limit (1.5 GB) a well-known number, or was it calculated from a certain formula?
  2. Does it have to be a static value? Can't it be dynamic?

cc @mconcat @notJoon @dongwon8247 @zivkovicmilos

@zivkovicmilos
Member

Based on discussions from our call today, the suggestions were:

  • optimize the SC itself so it minimizes gas usage
  • run a separate node that you can query, where the limit on this is much higher (since qevals are temporary, not committed to a transaction, and local to the node executing them)

I'm curious if you can provide us with some more information:

  • The gas usage limits you're hitting, even after increasing this limit
  • The types of operations
  • The contracts (if this is public), and what calls you're doing -- this one is primarily for tracing and seeing what operations are eating gas usage the most for your use-case

@thehowl
Member

thehowl commented Jan 18, 2024

Aside from what Milos said, I should point out that the real solution is not necessarily increasing the allocation limit, but rather addressing synchronous GC: #266.

This way, the "allocation" of a contract reflects how much memory it has actually stored, rather than a sum of all of the times it has attempted to store memory.
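As a rough illustration of the difference (again a toy model, not the gnovm implementation): with synchronous GC, the allocator's counter tracks live bytes and is decremented when dead objects are collected, so long iterations stop accumulating toward the cap.

```go
package main

import "fmt"

// GCAllocator is a sketch (not the gnovm API) of an allocator whose
// counter reflects memory actually stored: Collect gives bytes back,
// modeling what synchronous GC (#266) would enable.
type GCAllocator struct {
	live  int64 // bytes currently reachable
	limit int64
}

// Allocate charges n bytes against the cap, based on live memory only.
func (a *GCAllocator) Allocate(n int64) error {
	if a.live+n > a.limit {
		return fmt.Errorf("allocation limit reached")
	}
	a.live += n
	return nil
}

// Collect models a synchronous GC pass reclaiming dead bytes.
func (a *GCAllocator) Collect(dead int64) {
	a.live -= dead
}

func main() {
	// Under live-byte accounting, the same loop that failed after 10
	// iterations with a cumulative counter now runs indefinitely:
	// each iteration's 100 bytes are reclaimed before the next one.
	a := &GCAllocator{limit: 1000}
	for i := 0; i < 1_000_000; i++ {
		if err := a.Allocate(100); err != nil {
			panic(err)
		}
		a.Collect(100)
	}
	fmt.Println("1,000,000 iterations completed within a 1000-byte limit")
}
```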

@notJoon
Member

notJoon commented Jan 19, 2024

I should point out that the real solution is not necessarily increasing the allocation limit, but rather addressing synchronous GC

I also believe it's more generalizable in the long term to address this problem directly at the language level.

While 'external solutions' (I'm not sure if this is the right expression), such as optimizing the SC itself, are still practical and valuable and should be considered in the development process, some approaches may be cumbersome or necessitate distinct strategies for each project, which can be restrictive.

BTW, has there been any further discussion about GC-related things?

@zivkovicmilos
Member

BTW, has there been any further discussion about GC-related things?

@petar-dambovaliev
Do you know what the latest status of the GC efforts is? I vaguely remember us discussing it and that it was temporarily tabled.

@moul
Member

moul commented Oct 15, 2024

This should be addressed by sync GC, which should have a reasonable default. We can also expect some users to run special RPC nodes with higher values. In other words, it makes sense to make it configurable while providing a reasonable default.
