
Normalize #192

Open · JoeyT1994 wants to merge 59 commits into main
Conversation

JoeyT1994 (Contributor)
This PR adds support for normalizing tensor networks with either a BP backend or an exact backend.

Specifically, given an ITensorNetwork tn, we can call tn_normalized = normalize(tn; alg) to enforce tn_normalized * dag(tn_normalized) == 1 within the framework of the desired algorithm.

This is particularly useful in the context of alg = "bp", as it stabilizes the fixed point of belief propagation so that the norms of the message tensors remain stable when running subsequent BP iterations on tn_normalized.
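
For illustration, here is a minimal usage sketch along those lines. The graph and random-network constructors shown (named_grid, siteinds, random_tensornetwork) are assumptions about the surrounding package API rather than anything introduced by this PR, and the "exact" keyword value is likewise assumed:

```julia
using ITensorNetworks
using NamedGraphs.NamedGraphGenerators: named_grid

# Build a small random tensor network state (constructor names are assumptions,
# not part of this PR).
g = named_grid((3, 3))
s = siteinds("S=1/2", g)
tn = random_tensornetwork(s; link_space=2)

# Normalize with the belief-propagation backend, so that
# tn_bp * dag(tn_bp) ≈ 1 within the BP approximation:
tn_bp = normalize(tn; alg="bp")

# Or with exact contraction of the norm network
# (the "exact" keyword value is assumed here):
tn_exact = normalize(tn; alg="exact")
```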

@mtfishman this is a routine that I am calling frequently in bp_alternating_update, so I thought I would add it. I also think it is generally useful for keeping the bp_norm more stable when doing things like TEBD.

Review comments on src/normalize.jl (outdated), lines 19 to 25:
```julia
function LinearAlgebra.normalize(
  alg::Algorithm"bp",
  tn::AbstractITensorNetwork;
  (cache!)=nothing,
  update_cache=isnothing(cache!),
  cache_update_kwargs=default_cache_update_kwargs(cache!),
)
```
mtfishman (Member):
So it seems like a basic design question here is whether normalizing should refer to treating tn as a state that should be normalized to 1, or to treating it as something where you want the result of contract(tn) to be 1.

It seems reasonable to define it such that tn is a state where you want contract(norm_network(tn)) to be 1, as you do here. However, it may be good to write it in terms of an inner function that takes a tensor network and returns a new one where the tensors are scaled such that contracting it gives 1. I can't think of a good name for that right now, but for the time being I'll refer to it as rescale(tn::AbstractITensorNetwork), so that scalar(rescale(tn)) == 1 for any input tn, where the input has to be a closed network that evaluates to a scalar. Then we can just define normalize(tn) = ket_network(rescale(norm_network(tn))) or something like that.

The current implementation feels a bit too "in the weeds" dealing with quadratic forms, bras, kets, etc. and seems like something that could be abstracted and generalized.
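
A minimal sketch of that proposed composition, purely to spell out the invariant; rescale, norm_network, and ket_network are the hypothetical names used in this comment, not an existing API, so this is not runnable as-is:

```julia
# Hypothetical names taken from the comment above; none of these exist as an API here.
# norm_network(tn): the closed network tn * dag(tn);
# rescale(tn): scale a closed network so it contracts to 1;
# ket_network(tn): pull the ket layer back out of a norm network.
normalize_sketch(tn) = ket_network(rescale(norm_network(tn)))

# Intended invariant: scalar(norm_network(normalize_sketch(tn))) == 1.
```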

mtfishman (Member):

Also, defining a function like rescale would then be relevant for other kinds of networks, like partition functions, since tracking the normalization factors gives the value of the partition function.

mtfishman (Member):

Relatedly, rescale(tn::AbstractITensorNetwork) could be defined in two steps: one that computes the local scale factors (I think there is already a function for that?), and a second that just divides the factors by those scale factors. That way the implementation could be a bit simpler, being split into multiple generic steps.
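
As a rough illustration of that two-step structure (not the implementation in this PR): in the sketch below the "local scale factors" are just an even split of the exact contraction value over the vertices, whereas with alg = "bp" they would instead come from the BP cache. The contraction and vertex-access calls (contract, scalar, nv, vertices, tn[v]) are assumptions about the surrounding API:

```julia
using ITensors, ITensorNetworks
using ITensorNetworks: AbstractITensorNetwork
using Graphs: nv, vertices

# Step 1: one scale factor per tensor. Here this is an even split of the exact
# contraction value z of the closed network (assumed real and positive, e.g. a
# norm network). A BP version would read these factors off the BP cache instead.
function local_scale_factors_sketch(tn::AbstractITensorNetwork)
  z = scalar(contract(tn))  # assumed way to contract the closed network to a number
  return Dict(v => z^(1 / nv(tn)) for v in vertices(tn))
end

# Step 2: divide each tensor by its scale factor.
function rescale_sketch(tn::AbstractITensorNetwork)
  tn = copy(tn)
  for (v, c) in local_scale_factors_sketch(tn)
    tn[v] = tn[v] / c
  end
  return tn  # the rescaled network now contracts to approximately 1
end
```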

JoeyT1994 (Contributor, Author):

Yeah, I see what you mean, that's a nice idea to split it apart like that. Will change it to do that.

JoeyT1994 (Contributor, Author) · Dec 10, 2024:

Okay I split it apart based on a rescale function.

JoeyT1994 (Contributor, Author) commented Dec 10, 2024

@mtfishman Sorry, I am only just getting back to this, but with Antonio and @emstoudenmire doing a lot of loop-correction work I realized this is useful functionality.
It allows a state (whether a flat tensor network or a tensor network state) to be rescaled based on a bp_cache so that every BP free energy term is 1, and hence the total BP free energy is also 1, which is a useful starting point for doing loop corrections.
