WIP: Some improvements for Hermitian operators. #1
Conversation
Codecov Report
```
@@            Coverage Diff             @@
##           master   JuliaLang/julia#1   +/-   ##
=================================================
+ Coverage   96.85%              96.87%   +0.01%
=================================================
  Files           4                   4
  Lines         191                 192       +1
=================================================
+ Hits          185                 186       +1
  Misses          6                   6
```

Continue to review the full report at Codecov.
Before we do this, we should start benchmarking and profiling to know whether this is actually the problem. I think it would be best if each of these points were a separate PR, just to keep things clean. Definitely worth considering.
Note: please do not use Project and Manifest TOMLs for now; these are not compatible with the current METADATA repository.
Changing the spacing should be a different PR. From a quick check of the diff, I have no idea what code was actually changed.
OK! I will split it into a couple of PRs, starting with JuliaLang/julia#2.
Thanks for the work! Some comments:
This would be very nice to have indeed. My original plan was to add support for skew-Hermitian operators.
Having a switch between this and happy breakdown would be great. If I remember correctly, Niesen & Wright also use happy breakdown in their MATLAB code, so keeping the old way allows a more meaningful comparison/benchmarking.
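For context, "happy breakdown" means the Krylov subspace has become invariant under the operator, so the iteration can stop early with an (almost) exact result. A minimal sketch, written here in Python/NumPy for brevity rather than Julia; the `arnoldi` helper is hypothetical, not this package's API:

```python
import numpy as np

def arnoldi(A, v, m, tol=1e-12):
    """Arnoldi iteration with a happy-breakdown check (illustrative sketch)."""
    n = len(v)
    V = np.zeros((n, m + 1), dtype=complex)   # orthonormal basis vectors
    H = np.zeros((m + 1, m), dtype=complex)   # upper-Hessenberg projection
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                # modified Gram-Schmidt
            H[i, j] = np.vdot(V[:, i], w)
            w -= H[i, j] * V[:, i]
        beta = np.linalg.norm(w)
        H[j + 1, j] = beta
        if beta < tol:                        # happy breakdown: subspace is invariant
            return V[:, :j + 1], H[:j + 1, :j + 1], True
        V[:, j + 1] = w / beta
    return V[:, :m], H[:m, :m], False

# e1 is an eigenvector of a diagonal A, so the Krylov space is 1-dimensional
A = np.diag([1.0, 2.0, 3.0])
V, H, happy = arnoldi(A, np.array([1.0, 0.0, 0.0], dtype=complex), 5)
print(happy)   # True: breakdown after one step
```

With a breakdown, the projected exponential is exact on the subspace, which is why one may want to keep this code path alongside any fixed-size variant.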
The other side of this issue is that for non-Hermitian cases,
When we started out this summer, vector and operator norms were not yet separated in Julia; back then I used a custom norm. I agree with your comment that this can be inconvenient, and since the operator doesn't really change, we're better off passing in its norm as a value. I think in the kwargs we can default it to
Yeah, so I will make a separate PR for this, once we are finished with JuliaLang/julia#4.
My latest idea about this is that we should have two separate
Well, for this one I have less of an idea of what we can do to improve caching, but keeping the Hermitian case non-allocating is vital to me.
If one uses the stopping criterion proposed by Saad, the norm of the operator is not so important; what matters is the norm of the subspace vectors (the betas). That is why I prefer that stopping criterion.
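Saad's a-posteriori error estimate indeed involves only the norm of the starting vector and entries of the small projected matrix, never the operator norm of `A`. A hedged sketch in Python/NumPy (the `krylov_expv` helper is illustrative, not this package's API):

```python
import numpy as np
from scipy.linalg import expm

def krylov_expv(A, v, m):
    """Krylov approximation of expm(A) @ v with Saad's error estimate (sketch)."""
    n = len(v)
    beta = np.linalg.norm(v)                  # only the vector norm is needed
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):                        # Arnoldi process
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    E = expm(H[:m, :m])                       # small m-by-m exponential
    approx = beta * V[:, :m] @ E[:, 0]
    # Saad's estimate: beta * h_{m+1,m} * |last entry of expm(H_m) e1|
    err_est = beta * H[m, m - 1] * abs(E[m - 1, 0])
    return approx, err_est

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) / 10
v = rng.standard_normal(50)
approx, est = krylov_expv(A, v, 15)
true_err = np.linalg.norm(expm(A) @ v - approx)
```

The estimate is cheap to evaluate once per iteration, which makes it a natural stopping criterion when `opnorm` is unavailable or expensive.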
I have made some improvements to the exponentiation for the (skew-)Hermitian case:

- `KrylovSubspace` can now have real coefficients, even if the basis vectors are complex. Useful for Hermitian matrices.
- Support for computing `exp(-im*dt*H)*psi`, where `H` is Hermitian. We do this by building the Krylov space using the Lanczos method for `H`, diagonalizing the resulting real symmetric tridiagonal matrix, and then scaling the eigenvalues by `-im*dt`.

To consider:

- Supporting more than `AbstractVector`. If one could leave it up to the user to provide functions for operator–vector multiplication, dot products, etc., I think it would be very useful.
- In the Hermitian case, a `SymTridiagonal` matrix is diagonalized, which under the hood calls `dstegr` from LAPACK. Looking into the Julia standard library, every call to the `dstegr` wrapper calls `dstegr` twice, the first time to query for the necessary workspace size and allocate it. I hacked around this in Magnus.jl's sub_exp.jl, but this should be tackled in a nicer way. I did open a ticket, but I never did much about it 😳
- Not requiring `opnorm`, since it can be tricky to implement for complicated linear maps and costly to calculate, especially if you want to repeatedly exponentiate, as you do in time-stepping. Admittedly, one could send in a bogus function as a kwarg that always returns a precomputed norm, but I don't really like it. This is in conjunction with the stopping criterion in 1.

Comments and ideas are welcome.
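The Lanczos-based Hermitian path described above can be sketched as follows. This is an illustrative Python/NumPy translation, not this package's code; the `lanczos_expv` name and structure are assumptions:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal, expm

def lanczos_expv(H, psi, dt, m):
    """Approximate exp(-1j*dt*H) @ psi for Hermitian H via Lanczos (sketch)."""
    n = len(psi)
    alpha = np.zeros(m)                        # real diagonal of tridiagonal T
    beta = np.zeros(m - 1)                     # real off-diagonal of T
    V = np.zeros((n, m), dtype=complex)        # complex basis, real coefficients
    V[:, 0] = psi / np.linalg.norm(psi)
    for j in range(m):                         # three-term Lanczos recurrence
        w = H @ V[:, j]
        alpha[j] = np.vdot(V[:, j], w).real    # real for Hermitian H
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    # Diagonalize the real symmetric tridiagonal T, then scale its (real)
    # eigenvalues by -1j*dt:  exp(-1j*dt*T) e1 = S diag(exp(-1j*dt*lam)) S^T e1
    lam, S = eigh_tridiagonal(alpha, beta)
    small = S @ (np.exp(-1j * dt * lam) * S[0, :])
    return np.linalg.norm(psi) * V @ small

# Hermitian test problem
rng = np.random.default_rng(1)
M = rng.standard_normal((40, 40)) + 1j * rng.standard_normal((40, 40))
H = (M + M.conj().T) / 2
psi = rng.standard_normal(40) + 1j * rng.standard_normal(40)
dt, m = 0.02, 12
approx = lanczos_expv(H, psi, dt, m)
err = np.linalg.norm(expm(-1j * dt * H) @ psi - approx)
```

The key point of the approach is that all of `alpha`, `beta`, and the eigenproblem stay real; the complex factor `-im*dt` enters only when scaling the eigenvalues at the end.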