Object Factories #23
I love the examples heh. In Clojure we have a function called `assoc` for that purpose: https://clojuredocs.org/clojure.core/assoc. Everything is a list in Clojure, like in other Lisp languages; maps are lists with keys at even indexes and values at odd indexes, so duplicated keys are allowed. Optimizations aside (Clojure maps are closer to maps than to lists in the generated JVM code), there are functions to handle the operations over maps.

This problem can be solved both syntactically and idiomatically. I'd prefer to first solve it with an idiom (a function) and then create the sugar syntax on top of it once the semantics of the function are battle tested. Regarding the sugar syntax: why not copy the JS approach? My proposal is to have an `assoc` function that deduplicates the keys (keeping the last occurrence of each). Typing this function will be interesting. Then:

```dataweave
%dw 2.0
fun assoc<T1, T2>(keyValuePairs: T1, newKeyValuePairs: T2): ??? = ???

// macro: { ...[A] } => assoc({}, A)
// macro: { [HEAD], ...[A] } => assoc(HEAD, A)
// macro: { [HEAD], ...[A], [~TAIL] } => assoc(HEAD, A) ++ TAIL
// macro: { ...[A], [~TAIL] } => assoc({}, A) ++ TAIL

{ ...agus, ...spouse }
// => assoc(assoc({}, agus), spouse)

fun toJsonObject(value) = { ...value }
// => assoc({}, value)

{ x: 'a', ...objA }
// => assoc({ x: 'a' }, objA)

{ x: 'a', ...objA, test: 1, asd: 2 }
// => assoc({ x: 'a' }, objA) ++ { test: 1, asd: 2 }
```

Side note: as far as I remember, the fact that |
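To make the proposed semantics concrete, here is a minimal TypeScript sketch of the `assoc` idea (the pair-list representation and names are my assumption, used only because DataWeave objects, unlike JS objects, allow duplicate keys):

```typescript
// Hypothetical sketch: model DataWeave objects (which allow duplicate keys)
// as arrays of [key, value] pairs, and implement an assoc that merges new
// pairs in, deduplicating by keeping the last occurrence of each key.
type Pair = [string, unknown];

function assoc(keyValuePairs: Pair[], newKeyValuePairs: Pair[]): Pair[] {
  const merged = [...keyValuePairs, ...newKeyValuePairs];
  // Remember where the last occurrence of each key sits...
  const lastIndex = new Map<string, number>();
  merged.forEach(([k], i) => lastIndex.set(k, i));
  // ...and keep only those occurrences, preserving their relative order.
  return merged.filter(([k], i) => lastIndex.get(k) === i);
}

const agus: Pair[] = [["name", "Agustin"], ["married", false]];
const spouse: Pair[] = [["spouse", "Viky"], ["married", true]];
console.log(assoc(agus, spouse));
// => [["name","Agustin"],["spouse","Viky"],["married",true]]
```

Note that with this policy the surviving occurrence keeps its later position, which matters for formats where field order is significant.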
I keep hitting this idiom over and over, so I would like to keep working on it until we find a proper solution ;). The other day I found a user doing:

```dataweave
{
  (user - "description"),
  (account - "description"),
  "description": "Some description taken from another variable"
}
```

All this in the context of JSON. I see a few problems with this idiom.
This is a common idiom and we need to improve it. One of the problems we have detected with the previous solutions: suppose that `user` and `account` have duplicated keys. This operation has two effects. So it is still not very declarative about the intent of what we want to do.

A very declarative approach:

```dataweave
{
  append user,
  overwriteWith account
}
```

This looks a little too verbose, but it is very clear about what it does.

A very short way:

```dataweave
{
  ++ user,
  << account
}
```

We could use this; it is not very declarative, but each operator would have one concrete, consistent meaning. What do we think? |
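To pin down what the two proposed operations would do, here is a hedged TypeScript sketch (the function names come from the proposal above; the pair-list representation is my assumption, since DataWeave objects allow duplicate keys):

```typescript
// Hypothetical model of the proposed { append a, overwriteWith b } form,
// over arrays of [key, value] pairs that may contain duplicate keys.
type Pair = [string, unknown];

// append: keep every pair from both objects, duplicates included.
function append(base: Pair[], extra: Pair[]): Pair[] {
  return [...base, ...extra];
}

// overwriteWith: replace the value of every existing occurrence of a key,
// and add pairs whose key was not present at the end.
function overwriteWith(base: Pair[], extra: Pair[]): Pair[] {
  const updates = new Map(extra);
  const replaced = base.map(([k, v]): Pair =>
    updates.has(k) ? [k, updates.get(k)] : [k, v]
  );
  const existing = new Set(base.map(([k]) => k));
  const added = extra.filter(([k]) => !existing.has(k));
  return [...replaced, ...added];
}

const user: Pair[] = [["name", "Mariano"], ["role", "dev"]];
const account: Pair[] = [["role", "admin"], ["id", 1]];
console.log(overwriteWith(user, account));
// => [["name","Mariano"],["role","admin"],["id",1]]
```

One design question this sketch surfaces: when `base` has a duplicated key, this `overwriteWith` replaces every occurrence, which is itself a choice that would need to be agreed on.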
I've been giving this a lot of thought.

Would be the same as this: you are using the same operators in different contexts and expecting them to do exactly the same thing. In the second one I can clearly see the steps, I can divide the problem into pieces in my mind, and I get a clear path of execution. In the first one I feel like there's a lot of magic involved. I think I would prefer something like this:

Expressing something like append or merge with the contents of this object (key values). I know what we've discussed about it, but why?

Here you are still building an object with duplicate keys; I would assume that only the second duplicated key would be replaced.

Today this produces:

Let's say now we use the new

This would produce:

And I think it's easier to wrap your head around, and to think about how this works step by step: first you have all the fields in your object, added in the order in which you specify each object expansion or field, and then you eliminate the duplicates, leaving the last one. If you are in the JS world you know that eventually you will have to eliminate your duplicates, and you know that using
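For reference, this is the JS/TS spread behaviour being described: plain objects cannot hold duplicate keys, so a later key silently overwrites an earlier one, i.e. the last occurrence wins.

```typescript
// TypeScript/JS spread: later keys overwrite earlier ones in place,
// so the key keeps its original position but takes the last value.
const agus = { name: "Agustin", spouse: null as string | null, married: false };
const spouse = { spouse: "Viky", married: true };

const merged = { ...agus, ...spouse };
console.log(merged);
// => { name: "Agustin", spouse: "Viky", married: true }
```

This is the precedent the `{{ }}` proposal leans on: people coming from JS already expect "bottom wins" when composing objects.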
Are those really the effects, or is it just one: |
Let's take a look at your example:

```dataweave
%dw 2.0
output application/json
var input1 = {"foo": "bar", "foo": "bar2"}
var input2 = {"foo": "bar3", "bar": "test"}
---
{{
  (input1),
  (input2),
  "foo": "lala"
}}
```

This would produce:

```json
{
  "bar": "test",
  "foo": "lala"
}
```

If instead we keep the first occurrence of each key, we get:

```json
{
  "foo": "bar",
  "bar": "test"
}
```

So if your intention is to get merge, extend or override kinds of semantics, one needs to flip the order, and for me that is very weird. For me the option of keeping the first one has two reasons.
For me the second reason is quite strong, though that doesn't mean we should ignore the others ;). But for me there are two questions.

We have to explore the two options: fix only the first, fix only the second, or fix the first and let that give us a semantic for the second one. |
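The two dedup policies under discussion can be contrasted directly on the field list from the example above (a TypeScript sketch; representing the object as a pair list is my assumption):

```typescript
// Contrast of the two dedup policies over a pair list with duplicate keys,
// using the fields from input1, input2 and "foo": "lala" above.
type Pair = [string, unknown];

// Keep the first occurrence of each key, at its original position.
function dedupKeepFirst(pairs: Pair[]): Pair[] {
  const seen = new Set<string>();
  return pairs.filter(([k]) => !seen.has(k) && (seen.add(k), true));
}

// Keep the last occurrence of each key, at its later position.
function dedupKeepLast(pairs: Pair[]): Pair[] {
  const last = new Map<string, number>();
  pairs.forEach(([k], i) => last.set(k, i));
  return pairs.filter(([k], i) => last.get(k) === i);
}

const fields: Pair[] = [
  ["foo", "bar"], ["foo", "bar2"],   // input1
  ["foo", "bar3"], ["bar", "test"],  // input2
  ["foo", "lala"],
];
console.log(dedupKeepLast(fields));  // => [["bar","test"],["foo","lala"]]
console.log(dedupKeepFirst(fields)); // => [["foo","bar"],["bar","test"]]
```

Keep-last matches the JS spread precedent and makes "the override goes at the bottom" natural; keep-first matches the selector semantics but forces the flipped order described above.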
I think those are valid points as well, and I agree with the consistency issue between the selector and the merge; we might have to make that explicit somehow (might be worthwhile researching a bit). I don't see how the option with
Going back to this case: what happens if `user` has two repeated keys and you are overriding that key with one inside the `account` object? Which one will `account` be replacing? The first one, the second one, or both? You still have to make that choice implicitly, right?

Is `account` extending the empty object and then `user` adding its fields to it? The order also matters here, doesn't it? Or is this some kind of indicator of which one is the important object you are extending? If we look at JS style, the order also matters. There's another thing to discuss as well: how much control over the construction of the object do we want to give the user? The first two options you would think of are to let the first or the last occurrence prevail, if we are talking about whole objects, but maybe you also want to control which one is going to prevail per key-value. (This might be out of scope; I'm just mentioning it, you can disregard it.)

I agree with the problems of making the choice ourselves. Thinking a bit further, whether the most important field is the first that appears or the last depends on the semantics of that data and on where and how that duplication originated. |
I'll just leave some thoughts on this.
|
100% agreed. It seems like we are building those things only for this particular case; it feels similar to the feeling I get from

I like this approach the best, really, but I don't know if it follows the same line of thought as the first presented approach, where you are kind of describing the structure of your object rather than operating with different objects (?). Maybe we should just let people build their objects with `()` and `{}` and use a dedup afterwards. In the case of the dedup, should we also make explicit which key should prevail? Another example to think about:

In this case, which `bar` would you replace? Today if you use `mergeWith` you get:
|
The more I read symbols, the more I am convinced we should create functions with clear names inside the

```dataweave
// this one could be an alias to shallowMergeWith(obj, obj)
{ a: 1, b: { value: true } } mergeWith { a: 2, b: { } }
// => { a: 2, b: { } }

{ a: 1, b: { value: true } } deepMergeWith { a: 2, b: { test: 1 } }
// => { a: 2, b: { value: true, test: 1 } }

pickLast({ a: 1, a: 2, a: 3 })
// => { a: 3 }

// objectToJson: returns a json object, recursively. Selects the last value
// of duplicated keys, removes namespaces.
objectToJson({ a: 1, c: 1 } ++ { a: 2, b: 2 })
// => { a: 2, b: 2, c: 1 }
```

This makes the functions easier to document and develop, results in less maintenance of the grammar, parser, and compilation phases, and makes the code more portable across different versions of the compiler. |
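A sketch of how `shallowMergeWith` and `deepMergeWith` could behave, written in TypeScript over plain objects (so without duplicate keys); the function names follow the proposal above, the implementation is my assumption:

```typescript
type Obj = { [key: string]: unknown };

function isObj(v: unknown): v is Obj {
  return typeof v === "object" && v !== null && !Array.isArray(v);
}

// Shallow merge: top-level keys from b replace those of a wholesale,
// including nested objects.
function shallowMergeWith(a: Obj, b: Obj): Obj {
  return { ...a, ...b };
}

// Deep merge: when both sides hold an object under the same key,
// merge them recursively instead of replacing.
function deepMergeWith(a: Obj, b: Obj): Obj {
  const out: Obj = { ...a };
  for (const [k, v] of Object.entries(b)) {
    const current = out[k];
    out[k] = isObj(current) && isObj(v) ? deepMergeWith(current, v) : v;
  }
  return out;
}

console.log(shallowMergeWith({ a: 1, b: { value: true } }, { a: 2, b: {} }));
// => { a: 2, b: {} }
console.log(deepMergeWith({ a: 1, b: { value: true } }, { a: 2, b: { test: 1 } }));
// => { a: 2, b: { value: true, test: 1 } }
```

The two outputs reproduce the `mergeWith` / `deepMergeWith` examples above, which is the behaviour difference the named functions would make explicit.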
Object Factories
Currently an object value can be constructed by using `{}`. This way of creating an object simply collects all the fields inside it, so repeated fields are kept as they are. This is very useful for formats where repeated fields are supported, like XML, query params, etc., but not so much for things like JSON.

The problem we found is that patterns like trying to override an object's fields with another object's fields, or composing an object out of two other objects, started generating problems. So we started to see patterns like

```dataweave
{name: "Mariano", lastName: "Mendez"} - "name" ++ {name: "Agustin"}
```

in order to replace a field. For these cases we added the update operator. But this doesn't fix the case where you just want to override one object with another object's fields:

```dataweave
var agus = {name: "Agustin", lastName: "Mendez", spouse: null, married: false}
var spouse = {spouse: "Viky", married: true}
```

In order to merge these two objects we added the function `dw::core::Objects::mergeWith`, and `agus mergeWith spouse` does the job. But to be honest I don't think this is very declarative.

The last couple of years I've been working in TypeScript on some small projects, and there people use the spread operator for things like this. I started to like this way of doing things, and we have a similar kind of `spread` operator. Now the problem is that our objects support duplicated keys, so instead of ending up with just one `spouse` field you end up with two.

So I propose to add a new object constructor that creates an object without duplicated keys, where keys at the bottom override keys at the top, like a Map or Dictionary in other languages. The suggested syntax is

```dataweave
{{
}}
```

Now this starts to work just like in TypeScript. Other kinds of things, like conditional fields, also work. It is just a different object factory that guarantees that key names are unique. I like this because it makes working with JSON much cleaner and uses a pattern that is already tested and plays well with the rest of the language.