[Question]: Does vertical_slash_sparse_attention support concatenating all batches into a single row, as in flash_attn_2_cuda.varlen_fwd? #46
Labels
question
Further information is requested
Describe the issue
Do vertical_slash_sparse_attention, block_sparse_attention, and streaming_forward support concatenating all batches into a single row (variable-length "packed" input), the way flash_attn_2_cuda.varlen_fwd does?
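For context, here is a minimal sketch (not the MInference API) of the "varlen" packing convention used by flash_attn_2_cuda.varlen_fwd: all batches are concatenated along the sequence dimension into a single row, and a cumulative sequence-length array (cu_seqlens) marks the batch boundaries. The helper name pack_varlen is hypothetical, for illustration only.

```python
import numpy as np

def pack_varlen(seqs):
    """Concatenate a list of (seq_len, head_dim) arrays into one
    (total_len, head_dim) array plus cu_seqlens boundary offsets.
    This mirrors the packed layout expected by varlen-style kernels;
    the function itself is a hypothetical sketch, not library code."""
    lens = [s.shape[0] for s in seqs]
    cu_seqlens = np.cumsum([0] + lens)     # batch i spans [cu[i], cu[i+1])
    packed = np.concatenate(seqs, axis=0)  # shape: (sum(lens), head_dim)
    return packed, cu_seqlens

# Example: three batches of lengths 3, 5, 2 with head_dim 4
seqs = [np.random.randn(n, 4) for n in (3, 5, 2)]
packed, cu = pack_varlen(seqs)
print(packed.shape)       # (10, 4)
print(cu.tolist())        # [0, 3, 8, 10]
```

The question is whether the sparse-attention kernels above can consume this packed layout directly, rather than requiring a padded (batch, seq_len, ...) tensor.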