Currently, it is possible to run batch inference using the ImageToBatch process object.
However, if we want to take advantage of the temporal dimension, for instance when the model includes a TimeDistributed layer in Keras, some extra work is needed.
To get this working with FAST, we need a simple trick in the model: transpose the batch and temporal dimensions. That way, when FAST generates a batch of 20, the network actually receives a single bag (sequence) of 20 patches, as in the sketch below.
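For illustration, here is a minimal Keras sketch of that wrapper. The patch shape and the inner sequence model are hypothetical stand-ins, not the actual model:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Hypothetical patch shape; adjust to your own data.
patch_shape = (256, 256, 3)

# Toy stand-in for the real sequence model: it expects (batch, time, H, W, C)
# and applies TimeDistributed over the patches. The temporal axis is None,
# so variable-length bags are accepted.
seq_in = layers.Input(shape=(None, *patch_shape))
x = layers.TimeDistributed(layers.Conv2D(8, 3, activation="relu"))(seq_in)
x = layers.TimeDistributed(layers.GlobalAveragePooling2D())(x)
x = layers.LSTM(16)(x)
sequence_model = Model(seq_in, layers.Dense(1, activation="sigmoid")(x))

# FAST delivers a plain batch of patches: (batch, H, W, C). The trick is to
# insert a new batch axis of size 1 in front, so the old batch axis becomes
# the temporal axis: a batch of 20 patches is then seen by the network as
# one bag/sequence of 20 time steps.
batch_in = layers.Input(shape=patch_shape)
as_bag = layers.Lambda(lambda t: tf.expand_dims(t, axis=0))(batch_in)
wrapped_model = Model(batch_in, sequence_model(as_bag))

# Sanity check with a fake batch of 20 patches. predict_on_batch keeps the
# whole batch as one bag (plain predict would split it into sub-batches).
dummy = tf.random.uniform((20, *patch_shape))
print(wrapped_model.predict_on_batch(dummy).shape)  # -> (1, 1)
```

The wrapped model takes plain batches of patches on its input, so it can be saved and deployed like any other batch model.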
However, this might fail if the model has a fixed size on the temporal dimension. I think this is only relevant for the last batch, which can be incomplete, and I believe FAST still sends it to the network. To account for this, it might be a good idea to have an option to disregard the last batch (or any incomplete batch) when it occurs.
In my scenario this did not raise any errors, since I had trained my model with shape None for the temporal axis. But if a model is trained on a bag size of 100 patches, you probably don't want to give it only 6 patches, as the results might be poor. A sketch of such a guard is shown below.
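Until such an option exists in FAST itself, a minimal sketch of the guard on the Python side; `EXPECTED_BAG_SIZE` and `predict_bag` are hypothetical names, and the batch is assumed to arrive as a NumPy array:

```python
import numpy as np

EXPECTED_BAG_SIZE = 100  # hypothetical: the bag size the model was trained on

def predict_bag(model, batch, min_size=EXPECTED_BAG_SIZE):
    """Run one batch through the wrapped model, skipping incomplete batches.

    `batch` is one batch from the generator, shape (n_patches, H, W, C).
    Returning None for undersized batches mimics the proposed
    "disregard incomplete batches" option, implemented here outside FAST.
    """
    if batch.shape[0] < min_size:
        return None  # e.g. a final batch with only 6 of 100 patches
    # predict_on_batch keeps the whole batch as one bag; plain predict
    # would split it into sub-batches and produce several predictions.
    return model.predict_on_batch(batch)

# Example, using the wrapped_model from the sketch above:
full = np.random.rand(100, 256, 256, 3).astype("float32")
partial = np.random.rand(6, 256, 256, 3).astype("float32")
predict_bag(wrapped_model, full)     # -> prediction of shape (1, 1)
predict_bag(wrapped_model, partial)  # -> None, incomplete batch disregarded
```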