TSAdjoint #1
Hi Ivan, this Firedrake extension is awesome. Thanks for putting it together. I am interested in using TSAdjoint. I wonder if you have short-term plans to add it (less than two weeks or so). Otherwise, I'd be interested in making a contribution. I had the idea of writing a pyadjoint block so it can interact with pyadjoint as well. Although it is going to be tricky because TS/TSAdjoint also integrates the cost functions (somehow there are two blocks in one: the solve plus the cost function evaluation). Let me know what you think.
Hello Miguel! I had plans to make TSAdjoint and TSForward work with pyadjoint, but I don't think I could do it in less than two weeks. Modifying petsc4py's heat example a bit also does not work, so the problem is probably not on the Firedrake-PETSc interface side. Do you have experience with implicit solvers and adjoints in petsc4py?
Ivan, I ran into the same issue last week, and switching to Crank-Nicolson (CN) seemed to work. In your example, I also had to change the time step to something closer to the CFL condition.
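For reference, assuming the standard explicit (forward-Euler) finite-difference discretization of the 1D heat equation with diffusivity $\nu$ and mesh spacing $\Delta x$ (the example's exact discretization isn't spelled out here), the usual stability bound is

$$\Delta t \le \frac{\Delta x^2}{2\nu}.$$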
The stability issues are in the adjoint solve. Crank-Nicolson is unconditionally stable. The adjoint solve goes backwards in time, so something in the stability might be different. I am going to share your code and these results with the PETSc mailing list, if that is ok. I have experience with adjoint solvers, but not so much using them in petsc4py.
Could you please drop a link here to the mailing list thread once you send the message?
I have found the bug in the TS implementation. Now the future of a pyadjoint block for TSAdjoint looks brighter to me.
That's awesome! What are your thoughts on creating a pyadjoint block with TSAdjoint? Do you think it is ok to have one single block that both solves the transient problem and computes the cost function? I do not see another way.
I think two separate cases need to be considered:

1. The cost function depends only on the solution at the final time (a terminal cost).
2. The cost function contains an integral of the solution over time (an objective integral).

The second case requires additional wrappers for QuadratureTS to easily use expressions defined using Firedrake; see the sketch below.
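Roughly, the second case would look something like this in petsc4py (a sketch only; it assumes the `createQuadratureTS` wrapper is available, and the `eval_*` callbacks and matrices are hypothetical names):

```python
# hedged sketch: register an integral cost J = \int j(u, t) dt with TSAdjoint
quad_ts = ts.createQuadratureTS(forward=False)  # backward quadrature for the adjoint
quad_ts.setRHSFunction(eval_integrand)          # computes j(u, t)
quad_ts.setRHSJacobian(eval_djdu, J=djdu_mat)   # dj/du, needed by the adjoint
quad_ts.setRHSJacobianP(eval_djdp, djdp_mat)    # dj/dp, parameter sensitivities
```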
I have been thinking about how to implement the case for objective integrals. Do these matrices have to be exactly of type dense, though? I think it can be any matrix type. One can use a Real function space so that assembling derivatives gives matrices instead of vectors:

```python
from firedrake import *  # mesh and the state Function u assumed defined elsewhere

R = FunctionSpace(mesh, "R", 0)
vr = TestFunction(R)
i = u*vr*dx  # now it is a 1-form
# so we will get matrices when we assemble derivatives now
didu = assemble(derivative(i, u))
```

I've started writing an example of using QuadratureTS, but there is a bug somewhere now and I get an error. Could you take a look? Maybe we'll get it working together.
I am not getting this error. Which Python version are you using? I have only seen assertion errors from Python when using a Python debug build. I am getting another error, though.
I think it is because of the matrix type. Previous times that I have used a sparse matrix, I ran into an error caused by line 303 in https://www.mcs.anl.gov/petsc/petsc-current/src/ts/impls/implicit/theta/theta.c.html#TSAdjointStepBEuler_Private
In PETSc, a vector can be placed into a dense matrix without copying the values explicitly.
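The original snippet here was lost; a minimal petsc4py sketch of the same idea (my example, exploiting the fact that dense storage is Fortran-ordered, so each column is contiguous):

```python
from petsc4py import PETSc

# a dense matrix with one column
A = PETSc.Mat().createDense([4, 1])
A.setUp()

col = A.getDenseArray()[:, 0]          # NumPy view of the storage, no copy
v = PETSc.Vec().createWithArray(col)   # Vec sharing the matrix memory
v.set(1.0)                             # writes land directly in A
A.assemble()
```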
In your PR to petsc4py, you might want to test if
You are right that QuadratureTS does not inherit everything from the original TS. It is supposed to be a lightweight container for holding the callback functions. Can you elaborate a bit on the problem with dmhooks? I can add them to QuadratureTS if needed.
What do you think of using
I am new to the DM management that Firedrake does, to be honest. I think that they are attaching a context data structure so that the callbacks can retrieve it later (see `dmhooks.get_appctx` in the snippet further down).
For dense matrices, `MatDenseGetColumn()` is just a convenience function that does `MatDenseGetArray` and a bit of pointer manipulation.
Yes, it is not difficult to inherit any context from the original TS.
We can simply set the DM for the QuadratureTS with the one from the original TS. I now hit another error.
https://gitlab.com/petsc/petsc/-/blob/master/src/ts/impls/implicit/theta/theta.c#L942
It seems that calling `setDM` with the original DM clashes with the callbacks: PETSc saves the RHS function in the DM rather than in the TS object, so both TS objects end up sharing the same RHS function.
Thank you, that was very useful! I didn't know that the RHS function is saved in the DM and not in the TS object. I've updated the example. Now it seems to work, and something is computed. The correctness still needs to be verified, though. And it doesn't work in parallel. The idea of using a Real function space to get the matrices currently fails, as @salazardetroya mentioned (when assembled, the matrices are not of the dense type PETSc expects).
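For the DM wiring, something along these lines seems consistent with the discussion (a sketch only: `ctx` stands for the Firedrake solving context, and the `createQuadratureTS`/`push_appctx` calls are assumptions, not code from this thread). The QuadratureTS gets its own DM clone so its callbacks don't clash with the main RHS function, and the context is attached so the callbacks can recover it the same way `form_djdf` below does:

```python
from firedrake import dmhooks

quad_ts = ts.createQuadratureTS(forward=False)  # assumed petsc4py wrapper
quad_dm = ts.getDM().clone()                    # separate DM, so RHS callbacks don't clash
quad_ts.setDM(quad_dm)
# make the Firedrake solving context available to the quadrature callbacks
dmhooks.push_appctx(quad_dm, ctx)
```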
Also, petsc4py is missing a wrapper for
Awesome! I am going to play with it and try to verify it. I coded up a `taylor_test` like the one in pyadjoint for that.
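For reference, a minimal self-contained sketch of such a test (my code, not the thread's; `J` is any callable returning the cost and `dJdm` the gradient being checked). The Taylor remainder should shrink at second order if the gradient is correct:

```python
import numpy as np

def taylor_test(J, m, dJdm, dm, hs=(1e-2, 5e-3, 2.5e-3, 1.25e-3)):
    """Check that |J(m + h*dm) - J(m) - h*dJdm.dm| converges at rate ~2."""
    Jm = J(m)
    residuals = [abs(J(m + h*dm) - Jm - h*np.dot(dJdm, dm)) for h in hs]
    rates = [np.log(residuals[i] / residuals[i+1]) / np.log(hs[i] / hs[i+1])
             for i in range(len(residuals) - 1)]
    return rates  # values close to 2.0 indicate a correct gradient
```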
One can use pyadjoint's `taylor_test` directly for that.
Hi Ivan, I have worked out an analytical example for which I can obtain the correct numerical sensitivities. It is in https://gist.github.com/salazardetroya/a1e7b834ab6386a0ee6c9206e4549d7f. The problem is just a one-DOF ODE. In the PETSc manual, they explain an additional step to calculate the sensitivities after the adjoint solve (https://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf, page 153). This additional step seems to be necessary. I decided to work out this analytical example to narrow down the ways my code could fail. Once we understand the correct way to calculate the final sensitivities, I will expand to PDEs with a Taylor test for derivative checking.
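In the same spirit, here is a sketch of such a one-DOF check (my code, not the gist's; it assumes a petsc4py build with the `setRHSJacobianP` wrapper mentioned below). For du/dt = p*u with terminal cost J = u(T), the exact sensitivities are dJ/du0 = exp(pT) and dJ/dp = u0*T*exp(pT):

```python
import numpy as np
from petsc4py import PETSc

p, u0, T = 0.5, 1.0, 1.0

def rhs(ts, t, u, f):
    f[0] = p * u[0]
    f.assemble()

def rhs_jac(ts, t, u, A, B):
    B[0, 0] = p          # d(rhs)/du
    B.assemble()
    if A != B:
        A.assemble()

def rhs_jacp(ts, t, u, Jp):
    Jp[0, 0] = u[0]      # d(rhs)/dp
    Jp.assemble()

u = PETSc.Vec().createSeq(1); u[0] = u0
A = PETSc.Mat().createDense([1, 1]); A.setUp()
Jp = PETSc.Mat().createDense([1, 1]); Jp.setUp()

ts = PETSc.TS().create()
ts.setType(PETSc.TS.Type.CN)     # Crank-Nicolson, as discussed above
ts.setRHSFunction(rhs, u.duplicate())
ts.setRHSJacobian(rhs_jac, A)
ts.setRHSJacobianP(rhs_jacp, Jp)
ts.setTime(0.0)
ts.setTimeStep(0.001)
ts.setMaxTime(T)
ts.setExactFinalTime(PETSc.TS.ExactFinalTime.MATCHSTEP)
ts.setSaveTrajectory()           # required by the adjoint solve
ts.solve(u)

# terminal cost J = u(T): seed lambda with dJ/du(T) = 1 and mu with 0
lam = PETSc.Vec().createSeq(1); lam[0] = 1.0
mu = PETSc.Vec().createSeq(1); mu[0] = 0.0
ts.setCostGradients([lam], [mu])
ts.adjointSolve()

print(lam[0], "vs exact", np.exp(p * T))          # dJ/du0
print(mu[0], "vs exact", u0 * T * np.exp(p * T))  # dJ/dp
```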
This makes sense to me now, because those terminal conditions do not depend on the integrand, but only on a scalar function of the solution at the final time step. This is clear in the manual.
I also understand this better now, in light of the additional step described in the manual (the equation for assembling the final sensitivities).
Ok, given that, the question now is to see why the additional step is needed with some TS methods and not others.
It is generally needed regardless of the TS method you use, but for certain cases (actually many cases in practice) you can skip it. Note that dJdf (lambda) and dJdp (mu) calculated by `TSAdjointSolve` are sensitivities with respect to the initial state and the parameters; they do not include any dependence of the initial state on the parameters.
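In formulas (my reading of the manual's additional step, using the thread's names): if the initial state $u_0$ depends on the control $p$, the total sensitivity is

$$\frac{dJ}{dp} = \mu(t_0) + \left(\frac{\partial u_0}{\partial p}\right)^{T} \lambda(t_0),$$

so $\mu$ alone is the partial derivative, and the correction vanishes whenever $u_0$ is independent of $p$, which is why it can often be skipped.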
@IvanYashchuk, looks like we're in good shape now. I understand what TSAdjoint is doing in terms of calculating the sensitivities, and @caidao22 has pushed a PR to fix the issue.
petsc4py's master branch now has a wrapper for it. I think for the pyadjoint Block, partial derivatives are what we actually need. If there is a dependence of the initial condition on the control parameter, other pyadjoint blocks will take care of applying the chain rule.
Hi @IvanYashchuk, I have updated the example. Now to refactor everything inside the DAESolver and think of ways to interface with pyadjoint. Would it be possible for me to push to this repo directly (but not to master)? It will be easier for me.
Great progress! Sure, I will give you access if you prefer that. You can always just open a pull request from your fork to this repository. |
Thanks! Would you mind if I am the one taking a crack at implementing the pyadjoint block? I want to gain experience with software development. I am pretty familiar with pyadjoint and its inner workings: I have dug through the code many times, so it should not take me too much time. Then I can open a PR and we can review it.
Sure, go ahead then. |
Hi @IvanYashchuk, I refactored the adjoint functionality into the DAESolver.
I am actually not sure whether pyadjoint supports multiple output Blocks. About the single vector parameter: you don't need to calculate the sensitivities separately; TSAdjoint can handle them in one backward solve.
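Concretely, something like this sketch (my code; `ts`, `u`, and the `local_size`/`global_size` names come from the snippet further down): a single mu vector with the stacked parameter layout picks up both sensitivities at once:

```python
from petsc4py import PETSc

# mu sized for the stacked parameter vector (two copies of f here),
# matching the row layout of djdf_transposed_mat in the snippet below
mu = PETSc.Vec().createMPI((local_size + local_size, global_size + global_size))
mu.set(0.0)
lam = u.duplicate()   # seeded with dJ/du(T) before the adjoint solve
ts.setCostGradients([lam], [mu])
ts.adjointSolve()
```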
The problem with doing the assembly manually at the PETSc level is how to do the parallel partitioning. This is taken care of automatically by Firedrake when using a `Function`.
I mean we still have to convert the assembled derivative functions to a dense matrix for PETSc Adjoint.

```python
import firedrake as fd
from firedrake import dmhooks
from petsc4py import PETSc

# let's just stack the same djdf as for two different parameters
_djdf = fd.assemble(fd.derivative(j, f))
local_size = _djdf.dat.data.shape[0]
global_size = _djdf.ufl_function_space().dim()

djdf_transposed_mat = PETSc.Mat().createDense(
    [[local_size + local_size, global_size + global_size], [1, 1]]
)
djdf_transposed_mat.setUp()

def form_djdf(ts, t, X, Amat):
    # recover the Firedrake context from the DM and update the state and time
    dm = ts.getDM()
    ctx = dmhooks.get_appctx(dm)
    with ctx._x.dat.vec_wo as v:
        X.copy(v)
    ctx._time.assign(t)
    # reassemble dj/df at the current state
    fd.assemble(fd.derivative(j, f), tensor=_djdf)
    Amat_array = Amat.getDenseArray()
    with _djdf.dat.vec_ro as v:
        # here we assign local parts to the dense matrix
        Amat_array[0:local_size, 0] = v.array[:]
        Amat_array[local_size:(local_size + local_size), 0] = v.array[:]
    Amat.assemble()
```

With this approach, the user would need to bother with creating the dense matrix and its parallel layout by hand.
This sounds like a plan. What about
In Firedrake, the assembled derivative of a functional with respect to a `Function` is itself a `Function`, not a PETSc matrix.
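A minimal illustration of that point (my example, not from the thread):

```python
import firedrake as fd

mesh = fd.UnitIntervalMesh(8)
V = fd.FunctionSpace(mesh, "CG", 1)
u = fd.Function(V)

J = u * u * fd.dx                        # a functional (0-form)
dJdu = fd.assemble(fd.derivative(J, u))  # assembles to a coefficient vector
with dJdu.dat.vec_ro as v:               # its PETSc Vec, already partitioned
    print(v.getSizes())
```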