Should makeLenses generate code using record syntax by default? #986
Copying my comment from the PR: is it faster to compile, or does it produce smaller Core? IIRC record updates are still compiled into a Core `case`? AFAIK big records just lead to quadratic code size unless you get very creative, as in https://well-typed.com/blog/2021/08/large-records/
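To make the two shapes concrete, here is a hedged, hand-written sketch of a lens in each style (not the actual Template Haskell output), with `Lens'` expanded so the snippet stands alone:

```haskell
{-# LANGUAGE RankNTypes #-}

type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

data Big = Big { _a0 :: Int, _a1 :: Int }

-- Explicit-constructor style (roughly what makeLenses generates today):
-- both the pattern and the rebuild spell out every field, so each lens is
-- linear in the record size and n lenses are quadratic in total.
a0 :: Lens' Big Int
a0 f (Big x0 x1) = fmap (\y -> Big y x1) (f x0)

-- Record-update style (the proposed alternative): constant-size source per
-- lens, though GHC still desugars the update into a case on the constructor.
a0' :: Lens' Big Int
a0' f big = fmap (\y -> big { _a0 = y }) (f (_a0 big))
```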
The resulting Core is slightly different. I changed the example to just two fields for brevity:

```haskell
data Big = Big
  { _a0 :: Int
  , _a1 :: Int
  }
```

Then both Cores for `a0` begin the same way:

```
a0 [InlPrag=INLINE (sat-args=2)] :: Lens' Big Int
[GblId, Arity=3, Caf=NoCafRefs, Unf=OtherCon []]
a0
  = \ (@ (x0 :: * -> *))
      ($dFunctor_a0 :: Functor x0)
      (eta_B2 :: Int -> x0 Int)
      (eta1_B1 :: Big) ->
```

But then the two differ. The old variant continues with:

```
      case eta1_B1 of { Big x1 x2 ->
      fmap
        @ x0
        $dFunctor_a0
        @ Int
        @ Big
        (\ (y0 :: Int) -> BigRecord.Big y0 x2)
        (eta_B2 x1)
      }
```

while the new variant produces the following Core:

```
      fmap
        @ x0
        $dFunctor_a0
        @ Int
        @ Big
        (\ (y :: Int) ->
           case eta1_B1 of { Big x1 x2 ->
           BigRecord.Big y x2
           })
        (eta_B2 (case eta1_B1 of { Big x1 x2 -> x1 }))
```

The difference being that the regular one matches on the record once, up front, while the record-syntax one pushes the match inside the arguments to `fmap` and ends up scrutinizing `eta1_B1` twice.
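(Not from the thread, but for anyone wanting to reproduce these dumps: GHC's simplified Core can be inspected with its standard dump flag, assuming the example module is called `BigRecord.hs` as the qualified names above suggest.)

```
ghc -O -ddump-simpl BigRecord.hs
```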
At a wild guess, using record syntax might save time in the type-checker, because you're type-checking an expression that is constant-size rather than linear in the number of record fields? You could test this hypothesis by compiling with […]. Given that the Core is larger with the new variant, I wonder whether optimizing use sites might end up taking longer. And is there a semantic or runtime performance difference arising from the change?
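One concrete way to check this (my suggestion; the exact flag the commenter named is lost in the gap above) is GHC's per-stage timing output, which reports type-checking separately from simplification:

```
ghc -O -ddump-timings BigRecord.hs
```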
For `over l modifier big` expanded with each variant:

```
-- makeLenses
case eta1_B1 of { Big x1 x2 ->
fmap
  @ x0
  $dFunctor_a0
  @ Int
  @ Big
  (\ (y0 :: Int) -> Big y0 x2)
  (eta_B2 x1)
}

over l modifier big
  = case big of { Big x1 x2 ->
      (\ (y0 :: Int) -> Big y0 x2)
        (modifier x1)
    }
=<beta-reduce>
    case big of { Big x1 x2 ->
      let y0 = modifier x1
      in Big y0 x2
    }
=<inline>
    case big of { Big x1 x2 ->
      Big (modifier x1) x2
    }
```

```
-- using records
fmap
  @ x0
  $dFunctor_a0
  @ Int
  @ Big
  (\ (y :: Int) ->
     case eta1_B1 of { Big x1 x2 ->
     Big y x2
     })
  (eta_B2 (case eta1_B1 of { Big x1 x2 -> x1 }))

over l modifier big
  = (\ (y :: Int) ->
       case big of { Big x1 x2 ->
       Big y x2
       })
      (modifier (case big of { Big x1 x2 -> x1 }))
=<beta-reduce>
    let y = modifier (case big of { Big x1 x2 -> x1 })
    in case big of { Big x1 x2 ->
       Big y x2
       }
=<inline>
    case big of { Big x1 x2 ->
      Big (modifier (case big of { Big x1 x2 -> x1 })) x2
    }
=<notice that `big` is cased on twice>
    case big of { Big x1 x2 ->
      Big (modifier x1) x2
    }
```

I think GHC is smart enough to figure out the double `case big of` on the same scrutinee and collapse it, as in the last step.
The results of the last steps are identical.
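For readers following along, the expansion above assumes `over` runs the lens at the `Identity` functor, which is why `fmap` disappears after inlining. A minimal sketch of that specialization, simplified from lens's actual `ASetter`-based definition:

```haskell
import Data.Functor.Identity (Identity (..))

-- over for simple lenses: instantiate the lens at Identity, whose fmap just
-- applies the function under a coercible newtype, leaving the pure record
-- rebuild shown in the derivation above.
over :: ((a -> Identity a) -> s -> Identity s) -> (a -> a) -> s -> s
over l modifier = runIdentity . l (Identity . modifier)
```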
Generate lenses using record syntax (#986)
#987 adds an option to generate lenses using record syntax, but this option is disabled by default. I'll leave this issue open to discuss whether we should change the default.
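For anyone wanting to opt in once #987 lands, usage would presumably look like the sketch below; `generateRecordSyntax` is my guess at the name of the `LensRules` field, so check the PR for the actual API:

```haskell
{-# LANGUAGE TemplateHaskell #-}
module Opted where

import Control.Lens ((&), (.~))
-- generateRecordSyntax is a hypothetical name, not confirmed against #987.
import Control.Lens.TH (generateRecordSyntax, lensRules, makeLensesWith)

data Big = Big { _a0 :: Int, _a1 :: Int }

makeLensesWith (lensRules & generateRecordSyntax .~ True) ''Big
```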
`makeLenses` produces code whose length is quadratic in the number of fields, and this appears to result in long compilation times. Here on my M1 MacBook Pro with GHC 8.10.7 (at commit f76e271), a module with large records that compiles in 2.35 seconds takes 20.15 seconds once `makeLenses` is added for these records. It appears that using record syntax makes this slightly faster, at 15.98 seconds (see the accompanying PR): a 21% reduction in compilation time, still pretty slow, but at least it's an improvement. I guess the reason it is still slow is that the code translates to much the same Core later? (I looked into this because a large record in real code is slowing my build times.)
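The module being timed is presumably of roughly this shape (a hedged reconstruction, not the author's actual benchmark); the field count is what drives the quadratic growth:

```haskell
{-# LANGUAGE TemplateHaskell #-}
module LargeRecord where

import Control.Lens.TH (makeLenses)

-- Only three fields shown; the timings above come from much larger records.
-- Every generated lens mentions every field, hence quadratic code size.
data Big = Big
  { _f0 :: Int
  , _f1 :: Int
  , _f2 :: Int
  }

makeLenses ''Big
```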