# Normalize #192

base: main
## Conversation
Co-authored-by: Matt Fishman <[email protected]>
**src/normalize.jl** (outdated):
```julia
function LinearAlgebra.normalize(
  alg::Algorithm"bp",
  tn::AbstractITensorNetwork;
  (cache!)=nothing,
  update_cache=isnothing(cache!),
  cache_update_kwargs=default_cache_update_kwargs(cache!),
)
```
So it seems like a basic design question here is whether normalizing should refer to treating `tn` as a state that should be normalized to 1, or as something where you want the result of `contract(tn)` to be 1.

It seems reasonable to define it such that `tn` is a state where you want `contract(norm_network(tn))` to be 1, as you do here. However, it may be good to write it in terms of an inner function that takes a tensor network and returns a new one whose tensors are scaled such that contracting it gives 1. I can't think of a good name for that right now, but for the time being I'll refer to it as `rescale(tn::AbstractITensorNetwork)`, so `scalar(rescale(tn)) == 1` for any input `tn`, where the input has to be a closed network that evaluates to a scalar. Then we can just define `normalize(tn) = ket_network(rescale(norm_network(tn)))` or something like that.

The current implementation feels a bit too "in the weeds" dealing with quadratic forms, bras, kets, etc., and seems like something that could be abstracted and generalized.
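As a rough illustration of that proposed decomposition, here is a minimal, hypothetical sketch on a toy model where a closed network is just a vector of positive scalar "tensors" and contraction is their product; `rescale`, `norm_network`, and `ket_network` are the names floated above, not the package API.

```julia
# Toy stand-in: a "state" is a vector of positive scalars; contracting the
# closed norm network <tn|tn> corresponds to prod(factors .^ 2).
norm_network(ket) = ket .^ 2        # build the closed <tn|tn> network
ket_network(nn) = sqrt.(nn)         # pull the ket back out of it

# Make a closed network contract to 1 by spreading the correction evenly.
function rescale(closed)
    z = prod(closed)                # the "contraction" of the closed network
    return closed ./ z^(1 / length(closed))
end

# The composition proposed in the comment above.
normalize_via_rescale(ket) = ket_network(rescale(norm_network(ket)))

ket = [2.0, 0.5, 3.0]
nket = normalize_via_rescale(ket)
@assert isapprox(prod(norm_network(nket)), 1.0)   # <tn|tn> == 1
```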
Also, defining a function like `rescale` would then be relevant for other kinds of networks, like partition functions: if you track the normalization factors, then those give you the evaluation of the partition function.
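On the same toy model, here is a hedged sketch of that idea: a variant of `rescale` (hypothetically named `rescale_tracked` here) that also reports the scale it removed, from which the partition function value can be recovered (in log form, which avoids overflow on large networks).

```julia
# Rescale the closed network to contract to 1, but keep the removed scale.
function rescale_tracked(closed)
    z = prod(closed)                 # value of the partition function Z
    s = z^(1 / length(closed))       # per-tensor scale factor
    return closed ./ s, log(z)       # rescaled network and tracked log(Z)
end

tn, logZ = rescale_tracked([2.0, 0.5, 3.0])
@assert isapprox(prod(tn), 1.0)      # rescaled network contracts to 1
@assert isapprox(exp(logZ), 3.0)     # Z recovered from the tracked factors
```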
Relatedly, `rescale(tn::AbstractITensorNetwork)` could be defined in two steps: one that computes the local scale factors (I think there is already a function for that?), and a next step that just divides the factors of the network by those scale factors. That way the implementation could be a bit simpler, being divided into multiple generic steps.
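A minimal sketch of that two-step split, again on the toy scalar model (the helper names are hypothetical; in the real network the local scale factors would come from something like the BP environments):

```julia
# Step 1: one scale factor per tensor, chosen so their product is prod(closed).
local_scale_factors(closed) = fill(prod(closed)^(1 / length(closed)), length(closed))

# Step 2: divide each tensor by its local scale factor.
divide_by_factors(closed, s) = closed ./ s

# rescale is then just the composition of the two generic steps.
rescale_two_step(closed) = divide_by_factors(closed, local_scale_factors(closed))

@assert isapprox(prod(rescale_two_step([4.0, 0.25, 9.0])), 1.0)
```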
Yeah, I see what you mean; that's a nice idea to split it apart like that. Will change it to do that.
Okay, I split it apart based on a `rescale` function.

@mtfishman Sorry, I am only just getting back to this, but I'm realizing that, with Antonio and @emstoudenmire doing a lot of loop correction work, this is useful functionality.
This PR adds support for normalizing tensor networks with either a BP backend or an exact backend. Specifically, given an `ITensorNetwork` `tn`, we can call `tn_normalized = normalize(tn; alg)` to enforce `tn_normalized * dag(tn_normalized) == 1` within the framework of the desired algorithm.

This is particularly useful in the context of `alg = "bp"`, as it stabilizes the fixed point of belief propagation such that the norm of the message tensors is stable when running subsequent BP iterations on `tn_normalized`.

@mtfishman this is a routine that I am calling frequently in `bp_alternating_update`, so I thought I would add it. I also think it is generally useful when doing things like TEBD to keep the `bp_norm` more stable.
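For reference, a hedged usage sketch of the interface described above. The `normalize(tn; alg)` call and the invariant are taken from this PR; the network construction (`named_grid`, `siteinds`, `random_tensornetwork`, `link_space`) is an assumption that may differ across package versions.

```julia
using ITensorNetworks
using NamedGraphs: named_grid        # assumed import path
using LinearAlgebra: normalize       # extended by ITensorNetworks per this PR

g = named_grid((3, 3))                       # 3x3 lattice (assumed constructor)
s = siteinds("S=1/2", g)                     # physical indices (assumed)
tn = random_tensornetwork(s; link_space=2)   # random state (assumed constructor)

tn_bp = normalize(tn; alg="bp")        # BP backend added in this PR
tn_exact = normalize(tn; alg="exact")  # exact backend added in this PR
# Afterwards, tn_normalized * dag(tn_normalized) == 1 within the chosen algorithm.
```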