Apologies for the delay in responding. I unfortunately don't have the bandwidth to maintain this codebase the way I would like. Am I correct in assuming that your model training worked successfully and the error occurs only when you try to do prediction? If so, it looks like you have tensors whose shapes don't match up. You may want to check that you are using the same (older) package versions as the paper, since newer versions of TensorFlow may have different defaults for the shapes.
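A quick sanity check is whether the two shapes reported in the error are even broadcast-compatible. Below is a minimal pure-Python sketch of the NumPy/TensorFlow broadcasting rule (the helper `broadcast_compatible` is my own, not part of the NSF codebase), applied to the shapes from the traceback. The mismatch in the trailing dimension (2000 vs 32285) suggests Y may still contain all genes while the model expects only the feature subset it was fit with, though that is only a guess from the shapes.

```python
def broadcast_compatible(shape_a, shape_b):
    """True if two shapes are compatible under NumPy/TF broadcasting:
    align the shapes at their trailing dimensions; each aligned pair
    must be equal or contain a 1."""
    for a, b in zip(reversed(shape_a), reversed(shape_b)):
        if a != b and a != 1 and b != 1:
            return False
    return True

# Shapes taken from the error message: Mu (predicted rate) vs Y (counts).
mu_shape = (1, 2560, 2000)
y_shape = (2560, 32285)
print(broadcast_compatible(mu_shape, y_shape))  # False: 2000 vs 32285
```

Comparing `tensorflow.__version__` and `tensorflow_probability.__version__` against the versions pinned in the repo's requirements would also help rule out a default-shape change in newer releases.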
Hi there,
I wanted to use your method to analyse some Visium data I have, but to make sure I could get it working, I first ran it on the sagittal data you used in your paper.
I ran all the code in the Data Loading notebook and was running the exploratory notebook when I got this error:
InvalidArgumentError Traceback (most recent call last)
Cell In[68], line 9
7 except FileNotFoundError:
8 fit = sf.SpatialFactorization(J,L,Z,psd_kernel=ker,nonneg=True,lik="poi")
----> 9 fit.elbo_avg(Xtr,Dtr["Y"],sz=Dtr["sz"])
10 fit.init_loadings(Dtr["Y"],X=Xtr,sz=Dtr["sz"])
11 fit.elbo_avg(Xtr,Dtr["Y"],sz=Dtr["sz"])
File ~/nsf-paper/models/sf.py:307, in SpatialFactorization.elbo_avg(self, X, Y, sz, S, Ntot, chol)
305 kl_term = tf.reduce_sum(self.eval_kl_term(mu_z, Kuu_chol))
306 Mu = self.sample_predictive_mean(X, sz=sz, S=S, kernel=ker, mu_z=mu_z, Kuu_chol=Kuu_chol)
--> 307 eloglik = likelihoods.lik_to_distr(self.lik, Mu, self.disp).log_prob(Y)
308 return J*tf.reduce_mean(eloglik) - kl_term/Ntot
File ~/.local/lib/python3.10/site-packages/tensorflow_probability/python/distributions/distribution.py:1287, in Distribution.log_prob(self, value, name, **kwargs)
1275 def log_prob(self, value, name='log_prob', **kwargs):
1276 """Log probability density/mass function.
1277
1278 Args:
(...)
1285 values of type `self.dtype`.
1286 """
-> 1287 return self._call_log_prob(value, name, **kwargs)
File ~/.local/lib/python3.10/site-packages/tensorflow_probability/python/distributions/distribution.py:1269, in Distribution._call_log_prob(self, value, name, **kwargs)
1267 with self._name_and_control_scope(name, value, kwargs):
1268 if hasattr(self, '_log_prob'):
-> 1269 return self._log_prob(value, **kwargs)
1270 if hasattr(self, '_prob'):
1271 return tf.math.log(self._prob(value, **kwargs))
File ~/.local/lib/python3.10/site-packages/tensorflow_probability/python/distributions/poisson.py:256, in Poisson._log_prob(self, x)
254 def _log_prob(self, x):
255 log_rate = self._log_rate_parameter_no_checks()
--> 256 log_probs = (self._log_unnormalized_prob(x, log_rate) -
257 self._log_normalization(log_rate))
258 if self.force_probs_to_zero_outside_support:
259 # Ensure the gradient wrt `rate` is zero at non-integer points.
260 log_probs = tf.where(
261 tf.math.is_inf(log_probs),
262 dtype_util.as_numpy_dtype(log_probs.dtype)(-np.inf),
263 log_probs)
File ~/.local/lib/python3.10/site-packages/tensorflow_probability/python/distributions/poisson.py:296, in Poisson._log_unnormalized_prob(self, x, log_rate)
291 def _log_unnormalized_prob(self, x, log_rate):
292 # The log-probability at negative points is always -inf.
293 # Catch such x's and set the output value accordingly.
294 safe_x = tf.maximum(
295 tf.floor(x) if self.force_probs_to_zero_outside_support else x, 0.)
--> 296 y = tf.math.multiply_no_nan(log_rate, safe_x) - tf.math.lgamma(1. + safe_x)
297 return tf.where(
298 tf.equal(x, safe_x), y, dtype_util.as_numpy_dtype(y.dtype)(-np.inf))
File ~/.local/lib/python3.10/site-packages/tensorflow/python/ops/weak_tensor_ops.py:142, in weak_tensor_binary_op_wrapper.<locals>.wrapper(*args, **kwargs)
140 def wrapper(*args, **kwargs):
141 if not ops.is_auto_dtype_conversion_enabled():
--> 142 return op(*args, **kwargs)
143 bound_arguments = signature.bind(*args, **kwargs)
144 bound_arguments.apply_defaults()
File ~/.local/lib/python3.10/site-packages/tensorflow/python/util/traceback_utils.py:153, in filter_traceback.<locals>.error_handler(*args, **kwargs)
151 except Exception as e:
152 filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153 raise e.with_traceback(filtered_tb) from None
154 finally:
155 del filtered_tb
File ~/.local/lib/python3.10/site-packages/tensorflow/python/framework/ops.py:5883, in raise_from_not_ok_status(e, name)
5881 def raise_from_not_ok_status(e, name) -> NoReturn:
5882 e.message += (" name: " + str(name if name is not None else ""))
-> 5883 raise core._status_to_exception(e) from None
InvalidArgumentError: {{function_node _wrapped__MulNoNan_device/job:localhost/replica:0/task:0/device:CPU:0}} Incompatible shapes: [1,2560,2000] vs. [2560,32285] [Op:MulNoNan] name:
I also tried it on my data, and got a similar error.
Thanks for your time,
Ross