CSS 2020 queries #1281
Conversation
Thank you for the summary Rick. It would be good to get some further input on these helpers from you or other analysts, since APIs designed in a vacuum tend to not be very good APIs. I've spent some time today documenting them (just JSDoc with default settings, since time is of the essence). Playground: https://projects.verou.me/rework-utils/

Please try using these utils to write JS for the issues tagged Needs JS. Commit your code in the

Currently I'm writing these without imports, since it sounds like the BigQuery JS doesn't really understand ESM and we'll need to bundle them up. Is my assumption correct?

If, while using them, you realize the utils API needs improvement, please open an issue or PR in the corresponding repo (rework-utils or parsel).
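To make the discussion concrete, here is a minimal sketch of what a countDeclarations-style helper does, assuming the Rework CSS AST shape (rules carrying `declarations`, at-rules nesting further `rules`). The option handling is illustrative, not the actual rework-utils API:

```javascript
// Hypothetical sketch of a declaration-counting util over a Rework CSS AST.
// `properties` and `values` may each be a string or a RegExp; when omitted,
// any property/value matches. This mirrors the spirit, not the exact API,
// of rework-utils.
function countDeclarations(rules, {properties, values} = {}) {
  const matches = (test, str) =>
    test === undefined ||
    (test instanceof RegExp ? test.test(str) : test === str);

  let count = 0;
  for (const rule of rules || []) {
    for (const decl of rule.declarations || []) {
      if (matches(properties, decl.property) && matches(values, decl.value)) {
        count++;
      }
    }
    // At-rules like @media nest further rules; recurse into them.
    if (rule.rules) {
      count += countDeclarations(rule.rules, {properties, values});
    }
  }
  return count;
}
```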
Except the ones that depend on custom metrics.
+1, we should bundle the JS and access the globals via SQL. After implementing a simple metric in LeaVerou/css-almanac#47, my only question is how the SQL that uses the scripts would look. Do you see us doing much work in the BigQuery UDF, basically the body of the
@rviscomi Excellent point. I was expecting the former, but the latter works too. My experience with BigQuery query authoring is minimal, so I'll defer to you on what is preferable. If we are to follow the latter approach, we should start giving the functions meaningful names instead of just
Yeah, I think having more of the metric logic in the SQL would better separate the concerns of the util and the analysis. So, for example:

CREATE TEMPORARY FUNCTION countBorderBoxDeclarations(ast STRING) RETURNS NUMERIC LANGUAGE js AS '''
try {
  // The input arrives as a JSON string, so parse it before walking the rules.
  const parsed = JSON.parse(ast);
  return countDeclarationsByProperty(parsed.stylesheet.rules, {properties: 'box-sizing', values: 'border-box'});
} catch (e) {
  return null;
}
'''
OPTIONS (library="gs://httparchive/lib/css-almanac-utils.js");
SELECT
client,
APPROX_QUANTILES(declarations, 1000)[OFFSET(500)] AS median_declarations
FROM (
SELECT
client,
countBorderBoxDeclarations(css) AS declarations
FROM
`httparchive.almanac.parsed_css`
WHERE
date = '2020-08-01')
GROUP BY
client
Sure, though keep in mind not all are this small. E.g. the one about color formats is at least 89 LOC.
The size of the code shouldn't be an issue for BigQuery. For maintainability, if there are any functions that can be reused in other queries, we should bake those into the
@LeaVerou could you provide me with a JS file containing all of the available JS utils we would need in the queries? We can continue iterating on it but it'd be good to test out what we have so far. I'll start with
Sure, do you have any ideas on how to produce it? Rollup doesn't really do that; they'd be under a namespace. I'm thinking probably a custom gulp utility that uses
Wait, I thought we decided the computation for each stat would be in the SQL unless it's reusable?
I forked the repo and manually extracted each function into

One issue is the bleeding-edge optional chaining syntax (?.).

Here's a proof of concept query:

#standardSQL
# - Distribution of the number of occurrences of box-sizing:border-box per page.
# - Percent of pages with that style.
CREATE TEMPORARY FUNCTION countBorderBoxDeclarations(css STRING) RETURNS NUMERIC LANGUAGE js AS '''
try {
const ast = JSON.parse(css);
return countDeclarations(ast.stylesheet.rules, {properties: /^(-(o|moz|webkit|ms)-)?box-sizing$/, values: 'border-box'});
} catch (e) {
return null;
}
'''
OPTIONS (library="gs://httparchive/lib/rework-utils.js");
SELECT
percentile,
client,
COUNT(DISTINCT IF(declarations > 0, page, NULL)) AS pages,
COUNT(DISTINCT page) AS total,
COUNT(DISTINCT IF(declarations > 0, page, NULL)) / COUNT(DISTINCT page) AS pct_pages,
APPROX_QUANTILES(declarations, 1000 IGNORE NULLS)[OFFSET(percentile * 10)] AS declarations_per_page
FROM (
SELECT
client,
page,
SUM(countBorderBoxDeclarations(css)) AS declarations
FROM
`httparchive.almanac.parsed_css`
WHERE
date = '2020-08-01'
GROUP BY
client,
page),
UNNEST([10, 25, 50, 75, 90]) AS percentile
GROUP BY
percentile,
client
ORDER BY
percentile,
client
Each one of these queries processes 9.7 TB, which incurs ~$50 😬. So if you want to write the queries, it's best to use the smaller

One process suggestion: rather than write the metric JS in
Thank you! That's great for a proof of concept, but it's already out of date, so we need to write a script to do it. :)
Optional chaining is not exactly bleeding edge; it's been supported since February. I assumed BQ's JS engine was similar to a recent Chromium. If not, do we know which version of Chrome/V8 it's running? I can just stop using optional chaining (which I've used a lot in queries too, it's not just the utils), but it would be good to know what else might not be supported.
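If the runtime turns out to predate Chrome 80 (where optional chaining and nullish coalescing shipped), the affected expressions can be rewritten mechanically. A small sketch, with an illustrative rule shape:

```javascript
// Pre-Chrome-80 rewrite of a typical optional-chaining access.
// The `rule` shape here is illustrative, not from the actual utils.
function declarationCount(rule) {
  // Chrome 80+: return rule?.declarations?.length ?? 0;
  return (rule && rule.declarations && rule.declarations.length) || 0;
}
```

Note that when the length is 0, `|| 0` and `?? 0` agree, so the rewrite is safe in this particular case.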
Indeed there are. With the current setup, we can test out queries on any CSS (via URL or direct input) via the Rework Utils playground (
Could you submit a PR with any changes against sql/lib/rework-utils.js? I'll sync that file with GCS for testing on BigQuery.
Nothing specific on this in the BigQuery docs AFAICT.
Maybe. This PR will contain 40+ queries regardless of where the JS logic lives, so I think it'd be easier to review if everything is in one place. That also helps Almanac readers down the line if they want to look at an SQL file to scrutinize how a metric was calculated. Iterative testing is also possible in BigQuery, even if a bit forced:

#standardSQL
CREATE TEMP FUNCTION parseCSS(stylesheet STRING)
RETURNS STRING LANGUAGE js AS '''
try {
  var css = parse(stylesheet);
  return JSON.stringify(css);
} catch (e) {
  // Return NULL for unparseable stylesheets.
  return null;
}
'''
OPTIONS (library="gs://httparchive/lib/parse-css.js");
CREATE TEMPORARY FUNCTION countBorderBoxDeclarations(css STRING) RETURNS NUMERIC LANGUAGE js AS '''
try {
const ast = JSON.parse(css);
return countDeclarations(ast.stylesheet.rules, {properties: /^(-(o|moz|webkit|ms)-)?box-sizing$/, values: 'border-box'});
} catch (e) {
return null;
}
'''
OPTIONS (library="gs://httparchive/lib/rework-utils.js");
SELECT
countBorderBoxDeclarations(parseCSS('''
#foo {
color: red;
box-sizing: border-box;
}
.bar:first-child {
color: blue;
box-sizing: border-box;
}
''')) AS declarations

Results:
@LeaVerou interesting discrepancy in custom property adoption between the 2019 approach and your

https://github.com/HTTPArchive/almanac.httparchive.org/blob/main/sql/2019/02_CSS/02_01.sql measures the % of websites with custom properties. I've rerun the query with 2020 data and it's producing ~15% for desktop and ~20% for mobile, which is really interesting growth in itself, up from 5% in 2019. However, I've also been analyzing custom property usage with your

Queries: https://gist.github.com/rviscomi/71328c6b395f377e7d7f6c7be5ab6da7

Which approach would you want to use for the 2020 chapter: the one with a comparable methodology as last year, or the one you specially built to study custom properties?

Aside: the most popular custom property name is
Hi @rviscomi,
Given that I'm still actively iterating on them, I don't think doing this manually is a good use of our time. One of us should write a script to do it. I could, but naturally, this means I'd have less time for actual querying code. Your call.
It's equivalent to Chrome 75, it turns out. OK, I can work with that!
You asked me in your previous message if there's a testing benefit to having the JS separately in the css-almanac repo, and I explained in detail how this helps iterating on them. So I'm very surprised you would then go ahead and suggest having everything in one big PR anyway, almost as if I hadn't responded to your question at all. Yes, we can sort of iterate with BigQuery, but it's much more clunky, and since there's better testing infrastructure in place, I'm not sure why we would do it that way. I can see how large PRs with all queries may work for other, smaller chapters, but I believe it will end up being a mess here. Furthermore, having the JS separate in the css-almanac repo means that:
I can see how you'd like a single source of truth in the almanac repo, but since the queries are written after the JS is finalized, it's unlikely that the SQL will get out of sync with the JS. But there are ways around this if DRYness is a concern: from build tool includes to JS functions that we just call in the queries. Also, you have full commit permissions in the css-almanac repo, so you don't actually need to send PRs for stats you don't need reviewed, you can just commit directly, as Dmitry already did. Lastly, the expected time commitment for analysts is 12 hours total. I've easily donated 2-3x that already, and there's still plenty of work to do. I'm not complaining, as I'm really into this project and I'm enjoying this work, but it would be good if I could contribute without jumping through too many hoops.
That's really interesting!!
We definitely need to use the custom metric, since there's a lot more to report and not all can be determined via the CSS AST, but we should clarify in the text that the methodology is different and how, so that people don't compare it with last year's 5% (and we should also report how that number increased, so they have something to compare).
Fascinating. What were the others?
@rviscomi A few questions:
You hit the nail on the head: we find ourselves in a position where we need to sacrifice time we could be spending on the analysis itself to bridge the infrastructure gaps we've created. Code for this chapter's analysis is now spread across three different repositories:
We do lose unit testing among other things, but I think the agility we gain makes up for that loss. Does that resonate with you? If not, do you feel like the current technical benefits of having query code split up this way are worth the process overhead and possible delays? From the project management perspective, this is one of the largest chapters and it has already started to slip past its analysis milestones, so the delays are becoming a greater threat to getting this chapter released on time. If that happens, the contingency plan would be to release it post-launch, which I'm afraid would cause it to lose out on a lot of early readership.
+1 SGTM. All of the results from queries I've written so far are available in the chapter sheet.
My ideal is to have 1-1 query-to-metric, optimizing for the readers' experience of wanting to scrutinize how the stats were calculated or remix the query themselves. That said, it's ok to have a query with multiple similar/related metrics. If this approach is too expensive for your BigQuery quota, you can develop the queries using
The UDF JS operates on one CSS payload at a time, so one pattern we could use here would be to extract an array of durations in the JS, then
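A hedged sketch of the JS half of that pattern: a hypothetical helper that extracts duration values (normalized to milliseconds) from a declaration value, returning an array the SQL can UNNEST and aggregate. The name and the regex are assumptions, not code from the chapter's utils:

```javascript
// Hypothetical duration extractor for values like "width 2s ease, opacity 150ms linear".
// Returns durations in milliseconds so SQL-side aggregation works on a single unit.
function parseDurations(value) {
  const times = value.match(/(\d*\.?\d+)(ms|s)\b/g) || [];
  return times.map(t => {
    const n = parseFloat(t);
    return t.endsWith('ms') ? n : n * 1000; // normalize seconds to ms
  });
}
```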
Hi @rviscomi,

Thank you for the thoughtful response. In general, I agree with your points. However, I'm not sure the reason for any deadline slippage is infrastructure. Out of its three analysts:
Furthermore, the analysis is not split across three repos, only two: this one and css-almanac. The fact that my work in the Almanac inspired me to write two libraries (Parsel and Rework Utils) isn't a fragmentation of the analysis effort any more than Rework being in its own repo is a fragmentation of effort. The remaining two repos mainly reflect the lack of consensus about where the analysis should happen, not an insufficiently scrappy workflow: I have so far exclusively worked in css-almanac, and you have almost exclusively worked in this one.

So what's the best way to go forward? I propose adopting your proposal, with a few slight tweaks:
Yes, we should definitely avoid that at any cost. What is the next hard deadline? I see the roadmap lists September as analyzing data, so we're still within that and I'm confident we can finish the analysis and results review by the end of the month. I do see we're past the Sep 7 and Sep 14 sub-deadlines, but since I'm also the main author, and I review stats as we go, that's a much tighter, and therefore faster, feedback cycle. I can reach out to the other authors as we go to review stats for their own sections.
Thanks, I will keep that in mind. I think the specific example I asked about definitely falls in the category of "similar/related metrics", so I guess we're good.
If I'm reading this right, wouldn't a CSS file with e.g. 100 durations be weighted 100 times higher than another with only one? Is this desirable? (Not a rhetorical question, I could argue this both ways :) )
Sure, that's better!
I'm happy to try your suggested compromise and adapt if needed. The only hard deadline is the launch date in mid-November. Reviewing during analysis sounds like a good way to keep things moving.
We could deduplicate durations at the page level before aggregating if that would give you the data you're looking for.
I'm just not sure if a website should be weighted higher than others in this aggregate based on how it uses CSS animations. A few more questions about things that have come up:
My advice would be to have the Fonts section with a few of the most relevant/interesting stats, and include a note at the end like "see the Fonts chapter for more data on fonts usage". It would be good to coordinate with the Fonts chapter lead to ensure that the results are all harmonious.
It's appropriate for some metrics, depending on the question being answered. I think readers have an easier time grokking stats that are presented in terms of the number of pages, rather than the number of values. For example, "2% of pages include duration values longer than 1 second" or "among pages that set a duration, the median page sets 7 different values", as opposed to value-aggregated stats like "the most common value is 75ms" or "the median value is 1020ms".
Ok, so these are our fonts-related issues:
I see these options:
Actually, looking at the Fonts queries, I wonder if the overlap is less than I thought, which makes me more inclined to go with option 3. Btw, they may find the Rework Utils useful since they're doing similar things. Thoughts?
I'd go with option 1, in which the focus is more on the CSS than the fonts themselves. The utils may be useful, so it's worth offering, but my hunch is that it'd be easier for them to reuse 2019 SQL.
@rviscomi in which of the issues I linked did you feel the focus was on the fonts and not on the CSS? It seems to me that all of them are about the CSS, yet some overlap anyway.
Ah ok then. I didn't realize they were using last year's SQL. |
LeaVerou/css-almanac#2 and LeaVerou/css-almanac#15 feel more Font-y than CSS-y to me for some of the metrics. For example "How many websites use variable fonts?" and popular font families/stacks.
All a bit beyond me, but I learnt a few things.
Nitpicking: GitHub reports missing newlines at the end of some of the queries.
If you find some queries slow, I found that pre-extracting the custom metric in the SQL worked a lot faster, e.g.:
getCssInJS(JSON_EXTRACT_SCALAR(payload, '$._css')) AS cssInJs
Thanks for reviewing! This is helpful to make sure we're iterating on the queries and keeping the reviews small. There will be more PRs for this chapter! 😅
This is ok with me, personally.
Thanks! These can definitely get slow so I'll keep that tip in mind.
Progress on #898
Usage
Custom Properties
Selectors
Values & Units
Color
Images
Layout
Transitions & Animations
Responsive Design
Browser support
@supports
Internationalization
CSS & JS
Meta