Autocorrelation tutorial weights query #570
-
Hi (and Happy New Year!), I had a (hopefully quick) question about one of the spaghetti tutorials I've been working through to check I understand correctly, specifically the autocorrelation one [1].

I looked at the weights being passed to the Moran's I calculation (`ntw.w_network` or `ntw.w_graph` in the notebook), and it looks like this matrix has values based on the degree of the nodes at the ends of each edge in the network, so an edge that connects to 5 other edges has weights of 0.2 to each of those connecting edges. That means the weights matrix is not symmetric: e.g. edge (0, 1) has weight 0.25 to edge (1, 110), but (1, 110) has weight 0.2 back to (0, 1). If I understand correctly, this means that Moran's I will treat connected edges as more distant (less effect on the I value) if they have a higher number of other connecting edges. What would be the rationale for this?

Apologies if I'm asking a stupid question or have just misunderstood! I'm a computer scientist rather than a geographer, and a Scala programmer more than a Python one, so there's plenty of scope for me to misunderstand something basic...

thanks,

[1] https://pysal.org/spaghetti/notebooks/network-spatial-autocorrelation.html
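The asymmetry described above can be reproduced in a few lines of plain Python (the edge labels `"a"`/`"b"` below are made up for illustration; spaghetti itself is not involved): if each edge's weight to a neighbour is 1 divided by its own number of neighbours, an edge with 4 neighbours sends 0.25 while an edge with 5 neighbours sends only 0.2 back.

```python
# Toy sketch: hypothetical edges and their neighbouring edges.
neighbors = {
    "a": ["b", "c", "d", "e"],       # edge "a" touches 4 other edges
    "b": ["a", "f", "g", "h", "i"],  # edge "b" touches 5 other edges
}

def row_standardized_weight(focal, neighbor):
    """Weight from focal to neighbor when each row is scaled to sum to 1."""
    assert neighbor in neighbors[focal]
    return 1.0 / len(neighbors[focal])

print(row_standardized_weight("a", "b"))  # 0.25
print(row_standardized_weight("b", "a"))  # 0.2
```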
-
Hi, Simon! No question is a stupid question! Thanks for your interest in PySAL and making use of the tutorials! Here is what I see:

```python
In [1]: ntw.w_network.weights[(0, 1)]
Out[1]: [1.0, 1.0, 1.0, 1.0]

In [2]: ntw.w_network.neighbors[(0, 1)]
Out[2]: [(0, 2), (1, 110), (1, 127), (1, 213)]

In [3]: adjlist = ntw.w_network.to_adjlist()

In [4]: print(adjlist[adjlist["focal"] == (0, 1)])
    focal  neighbor  weight
0  (0, 1)    (0, 2)     1.0
1  (0, 1)  (1, 110)     1.0
2  (0, 1)  (1, 127)     1.0
3  (0, 1)  (1, 213)     1.0
```

Are you seeing these values or something else? Which attribute of `ntw` are you querying?
-
Hi James, thanks very much for the quick response! The results you are seeing are what I expected, which reassures me that I'm interpreting the notebook correctly. I've played around a bit more and I can see why we see different things, but I'm still not sure of the consequences.

If I run `ntw.w_network.weights[(0, 1)]` directly after creating the network (line [5] in the online notebook), I get the same results as you. But if I run the same command directly after running the Moran's I calculation (line [10] in the notebook), which is what I was doing before, I get `[0.25, 0.25, 0.25, 0.25]`. Specifically, the line `moran = esda.moran.Moran(y, w, permutations=9999)` alters the network weights. I've tried restarting the kernel to check it's not some caching effect. I've also tried with a completely different network obtained via osmnx, and I see the same. I'm using the same versions of spaghetti and esda as on the online notebook page.

I guess this doesn't matter if you are only using the weights once, and possibly I was just making wrong assumptions about immutability. If this is expected behaviour, is there an easy way to reset the weights should you want to use them again?

thanks,
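A minimal sketch of the mutation being observed (`moran_like` is a made-up stand-in, not esda's actual code): Python passes object references, so a function that row-standardizes a weights mapping in place changes the object the caller still holds.

```python
# Hypothetical stand-in for what happens inside the statistic:
# the weights object handed in is row-standardized in place.
def moran_like(w):
    for key, vals in w.items():
        total = sum(vals)
        w[key] = [v / total for v in vals]  # in-place row-standardization

weights = {(0, 1): [1.0, 1.0, 1.0, 1.0]}
moran_like(weights)
print(weights[(0, 1)])  # [0.25, 0.25, 0.25, 0.25] -- the caller's object changed
```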
-
Moran's I will row-standardize the weights by default. This is for ease of interpretation of the spatial lag inside the logic of the statistic. After calling Moran, you can reset the weights back to binary with `w.transform = 'b'`. Alternatively, you could set the `transformation='B'` argument in the call to Moran, and no further transformation would be required if you needed binary weights.
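To illustrate the idea, here is a simplified stand-in (`ToyW` is made up for this sketch; it is NOT libpysal's `W` class): row-standardizing overwrites the visible weights, but the binary originals are kept and can be restored.

```python
class ToyW:
    """Toy weights object mimicking the transform behaviour described above."""

    def __init__(self, weights):
        self._binary = {k: list(v) for k, v in weights.items()}
        self.weights = {k: list(v) for k, v in weights.items()}
        self._transform = "b"

    @property
    def transform(self):
        return self._transform

    @transform.setter
    def transform(self, value):
        self._transform = value.lower()
        if self._transform == "r":    # row-standardize: each row sums to 1
            self.weights = {k: [x / sum(v) for x in v]
                            for k, v in self._binary.items()}
        elif self._transform == "b":  # restore the original binary weights
            self.weights = {k: list(v) for k, v in self._binary.items()}

w = ToyW({(0, 1): [1.0, 1.0, 1.0, 1.0]})
w.transform = "r"           # what Moran does by default
print(w.weights[(0, 1)])    # [0.25, 0.25, 0.25, 0.25]
w.transform = "b"           # reset back to binary afterwards
print(w.weights[(0, 1)])    # [1.0, 1.0, 1.0, 1.0]
```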
-
Simon, this was my oversight for not explaining precisely. The results are as expected, because the weights are row-transformed by default when calculating the Moran's I statistic. Also, the original weights values are retained and can be accessed via …