Introduce @flatten annotation #642
base: main
Conversation
@nox213 this looks great. Could you update the PR description with a summary of the implemented semantics, with examples, as well as a summary of the implementation strategy? That would make the PR much easier to review and merge.
Sure, I'll tag you when I'm ready.
@lihaoyi I polished the code a little and updated the PR.
@nox213 at a high level, would it be possible to implement this without the
I guess more generally I'd like to see whether we can land this without breaking binary compatibility. Breaking bincompat and releasing Mill 5.0.0 is an option, but it would be better if we didn't need to do that.
I think this could work:
For the Scala 2 vs Scala 3 approach, could we use the Scala 3 approach in Scala 2 as well? It's best to keep the two implementations mostly consistent, so future changes can be done more easily across both versions |
Can this be fixed by making it
How hard would it be to generalize this to other collection types and key types?
Left some high level comments. It's a pretty invasive change, so I expect it will take some time to discuss the changes, first at a high level and then down to the lines of code being changed, just to set expectations. But overall the approach looks solid.
@lihaoyi Thanks for your review.
I'm not sure whether it's possible. I will investigate it further.
I believe this is doable. For other collection types, I could store the type of the collection that is flattened and then use that type to construct it from the Reader (see the sketch after this comment). For keys, I didn't do it because I thought that wasn't the use case. I will try this.
Sure, I will use the Scala 3 approach in Scala 2 (no
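A minimal sketch of that idea, purely illustrative and not part of this PR: if the concrete collection type of the flattened field is known, a `scala.collection.Factory` can rebuild it from the accumulated pairs (`rebuild` is a made-up helper name).

```scala
import scala.collection.Factory

// Hypothetical helper: rebuild a collection of a known concrete type from the
// (key, value) pairs accumulated while reading.
def rebuild[Elem, Coll](pairs: List[Elem])(implicit factory: Factory[Elem, Coll]): Coll =
  pairs.to(factory)

val asMap: Map[String, Int]         = rebuild(List("a" -> 1, "b" -> 2))
val asVector: Vector[(String, Int)] = rebuild(List("a" -> 1, "b" -> 2))
```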
The challenging part seems to be bincompat. Would the best approach be to implement @flatten without changing the existing function signatures, then? And if changing the function signatures is unavoidable, the next best option would be to keep the existing functions (marking them as deprecated) and implement separate macros (e.g., a new Macro class in Scala 2). In that case, would it make sense to introduce a conditional branch in specific functions to decide whether to call the deprecated macro or the new one, for example based on the presence of the @flatten annotation? Did I understand this correctly?

One thing I'm curious about: as I understand it, macro code is only called internally, so I assumed it would be safe in terms of bincompat. However, can even functions that are not called externally impact bincompat?
For bincompat, we could leave the existing
I think what you would need to do is take the type of the collection, resolve the
For keys, I think you should be able to do the same thing: take the type, resolve the
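A minimal sketch of the key-handling idea, under the assumption that a Writer is resolved for the key type and used to render the key to a string (`keyToString` is a made-up helper, not upickle's API; String keys are special-cased, as in the Scala 2 code further down the thread):

```scala
import upickle.default._

// Hypothetical helper: resolve a Writer for the key type and render the key to a
// string; String keys are passed through unchanged.
def keyToString[K](key: K)(implicit keyWriter: Writer[K]): String = key match {
  case s: String => s          // String keys can be used directly as JSON object keys
  case _         => write(key) // other key types go through their resolved Writer
}

keyToString("id") // "id"
keyToString(42)   // "42"
```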
Thanks for your explanation and tips. To summarize the direction I need to take for the changes:
Is this the right direction to proceed? Lastly, for Macros2.scala, would it be better to apply private or package-private as much as possible, where those restrictions can be applied?
@nox213 yes, that's right. Making the new macros package-private would indeed help, though some of the supporting runtime APIs may need to remain public, so we won't be able to privatize everything.
Okay, I will work on it.
@lihaoyi
Making a copy should work.
Yes, leaving the old methods deprecated should work as well.
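To illustrate the pattern being discussed, here is a purely hypothetical sketch (the names are made up and are not upickle's actual methods): the old entry point keeps its exact signature for binary compatibility, is marked deprecated, and forwards to a new overload that carries the extra flatten-related parameter.

```scala
trait WriterHelpers {
  // Old signature kept unchanged so existing bytecode keeps linking against it.
  @deprecated("use the overload that takes a `flatten` flag", "next release")
  def writeField(name: String, value: String): String =
    writeField(name, value, flatten = false)

  // New entry point used by the regenerated macros.
  def writeField(name: String, value: String, flatten: Boolean): String =
    if (flatten) value else name + ": " + value
}
```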
@lihaoyi
@nox213 I don't quite understand what you are asking.
If I define
Defining it separately for Scala 2 and Scala 3 is fine, I think.
Okay, I will keep it as it is.
@lihaoyi First, for deriving a Writer of a collection with a non-String key, I did it like below for Scala 2:

```scala
val writeKey =
  if (keyType =:= typeOf[String]) q"key"
  else q"upickle.default.write(key)(implicitly[${c.prefix}.Writer[$keyType]])"

q"""
$select.foreach { case (key, value) =>
  this.writeSnippetMappedName[R, $valueType](
    ctx,
    $writeKey,
    implicitly[${c.prefix}.Writer[$valueType]],
    value
  )
}
"""
```

Basically, I summoned Writer[Key] and called upickle.default.write with it. The equivalent in Scala 3, however, doesn't compile:

```scala
'{
  ${select.asExprOf[Iterable[(keyTpe, valueTpe)]]}.foreach { (k, v) =>
    ${self}.writeSnippetMappedName[R, valueTpe](
      ${ctx},
      upickle.default.write(key)(summonInline[keyWriterTpe]), // this doesn't compile
      summonInline[writerTpe],
      v,
    )
  }
}
```
What I tried was:

```scala
val writeRef = Ref(Symbol.requiredMethod("upickle.default.write"))

(keyTpe0.asType, valueTpe0.asType, keyWriterTpe0.asType, valueWriterTpe0.asType) match {
  case ('[keyTpe], '[valueTpe], '[keyWriterTpe], '[valueWriterTpe]) =>
    val typeAppliedWrite = TypeApply(writeRef, List(TypeTree.of[keyTpe]))
    val snippet =
      if (keyTpe0 =:= TypeRepr.of[String])
        ... // omitted
      else
        '{
          ${select.asExprOf[Iterable[(keyTpe, valueTpe)]]}.foreach { (k, v) =>
            ${self}.writeSnippetMappedName[R, valueTpe](
              ${ctx},
              ${Apply(
                Apply(
                  typeAppliedWrite,
                  List('{k}.asTerm, '{-1}.asTerm, '{false}.asTerm, '{false}.asTerm)
                ),
                List(
                  '{summonInline[keyWriterTpe]}.asTerm
                )
              ).asExprOf[String]},
              summonInline[valueWriterTpe],
              v,
            )
          }
        }
}
```

I used
Those two types are treated as different types because their paths are different. I got stuck at this point.
I misunderstood your comment. I tested with your fix but it still doesn't work.

```scala
object FlattenTestWithType {
  implicit def rw[T: RW]: RW[FlattenTestWithType[T]] = upickle.default.macroRW // compile error
}
```

I've pushed the changes to support collections other than Map (iterable flattening is now possible) so far. a07839d
I just saw your ping now. Let me know if there's still something I can help with.
closes #623
This PR introduces the `@flatten` annotation.

Usage

The `@flatten` annotation can only be applied to:

- case classes: flatten the fields of a nested case class into the parent structure.
- Iterable: flatten the key-value pairs of an `Iterable[(String, _)]` into the parent structure.

Nested flattening allows you to apply the `@flatten` annotation recursively to fields within nested case classes. The Reader also recognizes the `@flatten` annotation.

For collections, during deserialization, all key-value pairs in the JSON that do not map directly to a specific field in the case class are stored in the Map:

- If a key in the JSON does not correspond to any field in the case class, it is stored in the collection.
- If there are no keys in the JSON that can be stored in the collection, it is treated as an empty collection.
- If a key's value in the JSON cannot be converted to the Map's value type (e.g. String), deserialization fails.
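A minimal sketch of these semantics (illustrative only: the field names, the expected JSON, and the assumption that `@flatten` is already in scope are not taken from the PR's tests):

```scala
import upickle.default._

case class Address(street: String, city: String)
object Address { implicit val rw: ReadWriter[Address] = macroRW }

case class Person(
  name: String,
  @flatten address: Address,           // Address's fields appear at Person's top level
  @flatten extras: Map[String, String] // JSON keys matching no declared field land here
)
object Person { implicit val rw: ReadWriter[Person] = macroRW }

// Expected behaviour per the rules above (illustrative):
// write(Person("Alice", Address("Main St", "Springfield"), Map("note" -> "vip")))
//   == """{"name":"Alice","street":"Main St","city":"Springfield","note":"vip"}"""
// Reading that JSON back puts "note" -> "vip" into `extras`; if no leftover keys
// exist, `extras` is read as an empty Map.
```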
Limitations

When applying the `@flatten` annotation to an `Iterable`, the key type must be String.

Implementation Strategy
Writer Derivation
From my understanding, deriving a Writer for a case class involves implementing a dispatch function that iterates through each field of the case class. In the existing implementation, the Writer is generated by processing the information of each field in the type T being derived.
When deriving a Writer, a visit function is invoked for each field, passing the corresponding field's Writer as an argument. If the field is a Map or a case class and needs to be flattened, additional processing is required (a conceptual sketch follows the list below):
- For a flattened `Map`, iterate through all its key-value pairs, calling visit for each pair.
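A conceptual, non-macro sketch of this write-side strategy, reusing the hypothetical Person/Address example from the usage section above and a made-up `visit(key, value)` callback in place of the generated writer calls:

```scala
// Ordinary fields produce one visit call each; a flattened case class contributes a
// visit call per nested field; a flattened Map contributes a visit call per entry.
def writePerson(p: Person)(visit: (String, String) => Unit): Unit = {
  visit("name", p.name)
  visit("street", p.address.street)
  visit("city", p.address.city)
  p.extras.foreach { case (k, v) => visit(k, v) }
}
```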
Reader Derivation

Deriving a Reader is more complex, especially with support for Map, as it introduces several edge cases. A Reader is essentially a Visitor for JSON, as I understand it. The process involves three main steps:
To support flattening, additional steps were required in these stages:
Flattened fields must be accounted for in the key-to-index mapping. For example:
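The following is an illustrative mapping, continuing the hypothetical Person/Address example (the real mapping is generated by the macro):

```scala
// Flattened fields get their own slots, as if they were declared directly on Person.
val keyToIndex = Map(
  "name"   -> 0, // Person.name
  "street" -> 1, // Address.street, pulled up by @flatten
  "city"   -> 2  // Address.city, pulled up by @flatten
)
```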
Special Case for Maps:
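A conceptual sketch of this special case (illustrative only, not the generated Reader): keys that resolve to a known index fill the corresponding field slot, while unmatched keys are buffered and materialized as the flattened Map once reading finishes.

```scala
import scala.collection.mutable

class FlattenedReadState(keyToIndex: Map[String, Int], fieldCount: Int) {
  private val fields = new Array[Any](fieldCount)
  private val leftover = mutable.ListBuffer.empty[(String, String)]

  def visitKeyValue(key: String, value: String): Unit =
    keyToIndex.get(key) match {
      case Some(i) => fields(i) = value             // known field: store by index
      case None    => leftover.append(key -> value) // unknown key: belongs to the flattened Map
    }

  // Empty if no leftover keys were seen; value types are simplified to String here.
  def flattenedMap: Map[String, String] = leftover.toMap
}
```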