Ok, interesting!
Might be a change in the API that we'd need to account for. @gaya3dk2490, if you don't mind, could you skim the Spark changelogs to see if there's something in there regarding predicate push-down?
Maybe you can also find a corresponding change in the CSV reader (from which a lot of the code was taken).
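For whoever picks this up: one quick, standard Spark diagnostic (not from this thread) is to print the extended plan for the reporter's `df` and check what the scan node reports for partition and pushed filters:

```scala
import org.apache.spark.sql.functions.col

// If the scan node shows empty partition/pushed filters for this predicate,
// the filter is only being applied (or lost) above the scan.
df.filter(col("version") === 1).explain(true)
```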
Is there an existing issue for this?
Current Behavior
There is some weird behaviour when filtering columns on a dataframe produced by the Excel reader.
I have some Excel files partitioned in an Azure Storage account, and I am trying to fire a simple read from Databricks (Runtime 12.1, Spark 3.3.1).
Example path on the storage account: `/landing/excel/version=x/day=x`, where `version` and `day` become partition columns on read. I have `version=1`, `version=2`, and `day=1` as sample partitions.

The read below stores 2 rows into dataframe `df`, with the schema inferred.
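A minimal sketch of such a read (the exact snippet and options are not preserved above), assuming the spark-excel `excel` data source with header and schema inference enabled:

```scala
// Hedged reconstruction: format and options are assumptions, not the reporter's exact code.
val df = spark.read
  .format("excel")                // spark-excel V2 data source
  .option("header", "true")
  .option("inferSchema", "true")  // "schema inferred" per the report
  .load("/landing/excel/")        // version=x/day=x partition dirs live underneath
```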
Now, if you filter the `df` for `version=1`, it always returns all results: `df.filter(col("version") === 1)` returns 2 rows (`version=1` and `version=2`). Also tried the following variants: `df.filter(col("version") === lit(1))` and `df.filter($"version" === 1)`.

Filtering on a value of `version` that doesn't exist also returns all rows: `df.filter(col("version") === 100)` returns 2 rows.

Note: filters on other, normal columns work fine, so there seems to be something wrong with predicate pushdown.
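For reproduction, the variants above collected in one place, with expected vs. observed counts (assuming one row per `version` partition, as the 2-row result suggests):

```scala
import org.apache.spark.sql.functions.{col, lit}
import spark.implicits._  // for the $"..." column syntax

df.filter(col("version") === 1).count()       // expected: 1, observed: 2
df.filter(col("version") === lit(1)).count()  // expected: 1, observed: 2
df.filter($"version" === 1).count()           // expected: 1, observed: 2
df.filter(col("version") === 100).count()     // expected: 0, observed: 2
```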
Expected Behavior
A filter on a dataframe's partition columns should return only rows from that partition.
Steps To Reproduce
Environment
Anything else?
No response