Problem
Currently, it is only possible to test a full PyTEAL smart contract (by using a sandbox/private network or a light runtime such as https://github.com/scale-it/algo-builder/tree/master/packages/runtime).
This makes it possible to test each of the smart contract's ABI methods, but not its internal subroutines/functions.
However, as smart contracts become more and more complex, testing individual subroutines/functions becomes more and more important.
This is currently not possible.
Solution
Currently, PyTEAL expressions essentially build an AST that is then used to generate TEAL code via compileTeal.
We could add a function that takes this same AST and instead evaluates it in a specific context.
Technically, this would mainly consist of adding a member function __eval__ to each opcode.
Some functions are pure and do not even need a context, such as:
The evaluation function could start by handling only such pure functions.
This alone would already significantly increase the test coverage achievable for PyTEAL smart contracts.
Ideally, though, there should be a way to provide additional context, such as Txn/Gtxn values, Global values, account balances, ...
It is unclear to me what the best way to set this context would be: one option is to reuse the normal Python SDK and specify a transaction group, but this may be too heavy.
Another is simply to provide each value manually, with evaluation failing when a required value is not provided.
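The second option could look something like the following sketch: the context is an explicit collection of user-supplied values, and any lookup of a value that was not provided fails loudly. EvalContext, txn_field, and MissingContextError are illustrative names, not PyTEAL API.

```python
# Hypothetical sketch of the "provide each value manually" option:
# evaluation reads from explicit dicts and raises when a required
# value was not supplied. All names here are illustrative.

class MissingContextError(Exception):
    pass


class EvalContext:
    def __init__(self, txn_fields=None, globals_=None, balances=None):
        self.txn_fields = txn_fields or {}
        self.globals_ = globals_ or {}
        self.balances = balances or {}

    def txn_field(self, name: str):
        try:
            return self.txn_fields[name]
        except KeyError:
            raise MissingContextError(f"Txn field {name!r} not provided") from None

    def global_value(self, name: str):
        try:
            return self.globals_[name]
        except KeyError:
            raise MissingContextError(f"Global {name!r} not provided") from None


ctx = EvalContext(txn_fields={"amount": 1_000_000})
print(ctx.txn_field("amount"))  # 1000000
# ctx.txn_field("sender") would raise MissingContextError
```

A nice property of failing on missing values is that a test only needs to specify the handful of fields its subroutine actually reads, while any accidental dependency on an unspecified field surfaces as an explicit error instead of a silent default.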
There is also the question of how evaluation should handle side effects such as logs and inner transactions.
Perhaps they can simply be treated as additional outputs.
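Treating side effects as outputs could mean evaluation returns a result object bundling the computed value with the logs and inner transactions it produced, rather than mutating any shared state. EvalResult is an illustrative name, not PyTEAL API.

```python
# Hypothetical sketch of "side effects as additional outputs":
# evaluation returns the value together with accumulated logs and
# inner transactions. EvalResult is an illustrative name.
from dataclasses import dataclass, field


@dataclass
class EvalResult:
    value: object
    logs: list = field(default_factory=list)
    inner_txns: list = field(default_factory=list)


def eval_log_then_return(message: bytes, n: int) -> EvalResult:
    # Toy evaluation of a program that logs a message and returns n;
    # a real evaluator would accumulate these while walking the AST.
    return EvalResult(value=n, logs=[message])


res = eval_log_then_return(b"hello", 42)
print(res.value, res.logs)  # 42 [b'hello']
```

This keeps evaluation itself a pure function of (AST, context), which makes assertions on side effects as easy as assertions on return values.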
As a third step, it would be great to have a unit testing framework built around this evaluation system.
This fine-grained unit testing is meant to complement, not replace, more coarse-grained testing done using a sandbox/private network or a light runtime.
The former can be seen as what is usually called unit testing in web2 development, while the latter can be seen as directly testing the API of a web2 backend service.
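To make the analogy concrete, here is a sketch of what such a fine-grained test could look like: a single subroutine is evaluated directly with plain assertions, without deploying anything to a network. The fee-clamping subroutine below is a made-up example standing in for a PyTEAL @Subroutine body.

```python
# Hypothetical sketch of a fine-grained unit test: the subroutine body
# is evaluated directly, no sandbox or private network involved.
# clamp_fee is an illustrative stand-in for a PyTEAL subroutine.

def clamp_fee(base_fee: int, congestion: int) -> int:
    """Fee grows with congestion but is capped at 10x the base fee."""
    fee = base_fee * (1 + congestion)
    return min(fee, 10 * base_fee)


def test_clamp_fee():
    assert clamp_fee(1000, 0) == 1000
    assert clamp_fee(1000, 4) == 5000
    assert clamp_fee(1000, 100) == 10000  # capped at 10x


test_clamp_fee()
print("ok")
```

Edge cases like the cap above are exactly what is awkward to exercise through full ABI calls but trivial to cover when the subroutine can be evaluated in isolation.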
Dependencies
n/a
Urgency
Nice to have
Hey there, in case it's helpful to you I worked on something similar here, though it's really just a wrapper around graviton. I didn't have time to make it ABI compliant, but I'm still using it for my own testing and it works pretty well for me.