From baac04c408330093f547bedc71a738c76406eb03 Mon Sep 17 00:00:00 2001
From: Claudine
Date: Wed, 30 Oct 2024 20:28:24 +0000
Subject: [PATCH 01/10] add contribution section

---
 .../pr05_2024_final_report_Claudine_Chen.md | 259 ++++++++++++++++++
 1 file changed, 259 insertions(+)
 create mode 100644 2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md

diff --git a/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md b/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md
new file mode 100644
index 0000000..c87aecb
--- /dev/null
+++ b/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md
@@ -0,0 +1,259 @@
+# Final Report for Simplifying the workflow for Processing libraries
+
+## Introduction
+This project's aim was to improve the developer experience for contributing
+libraries to Processing.
+The work from this project manifests as two separate projects, as well as improved
+documentation. These two projects are the Processing library template and the new
+contributions workflow; I will write about them in separate sections.
+
+There were two main guiding principles in this work:
+
+1. automate what can be automated
+2. make the interactions as intuitive and self-explanatory as possible.
+
+## Project Plan
+Below is a broad overview of the process of contributing libraries, tools, and modes, and the initial resources for this. The work accomplished or planned
+is described in the final column.
+
+| Step | Previous workflow | Current workflow |
+|------|------------------|-------------------|
+| The idea | | An invitation to contribute, via a tutorial on the website |
+|The information | Information on how to develop a library can be found in the Github wiki documents for Processing 4, in three separate pages, and then there is a page for modes.

https://github.com/benfry/processing4/wiki/Library-Basics
https://github.com/benfry/processing4/wiki/Library-Guidelines
https://github.com/benfry/processing4/wiki/Library-Overview
https://github.com/benfry/processing4/wiki/Mode-Overview | Update information in tutorial on website, and in repository for library template. | +| The development | There are a series of templates that have been developed. The older ones are using Eclipse and Ant, newer ones with Intellij and Gradle. In addition to the executable, the developer is required to create a website that holds documentation for the library.

https://github.com/processing/processing-library-template
https://github.com/processing/processing-library-template-gradle
https://github.com/processing/processing-templates
https://github.com/processing/processing-tool-template
https://github.com/processing/processing-android-library-template | Repository templates for libraries using the new Gradle templates, keeping in mind the needs of the beginner. Consolidate existing functionality: resolution of Processing core using jitpack and the unofficial Processing repo; automation for building and exporting artifacts, including PDEX/PDEZ bundles.

CI/CD deployment of the reference to GitHub Pages

Simple integration with local Processing | +| The integration - discoverability | The current process is to contact the Processing librarian with a url to your release.

https://forum.processing.org/one/topic/publishing-contributed-libraries-and-tools

The librarian then manually updates the private repository https://github.com/processing/processing-contributions | Submissions of new contributions via an issue template.

Develop an automated pipeline that updates the repository from the properties file, with the objective of reducing the workload on the Processing librarian.

This also includes a refactor of how contributions are stored, storing all information in a database file in the repository. |
+
+
+
+## Processing Library Template
+
+This subproject's target was to create an as-intuitive-as-possible Gradle-based library
+template. The existing `processing-library-template` was based on Ant and Eclipse,
+both technologies that have been superseded by Gradle and IntelliJ/VS Code,
+respectively. The existing `processing-library-template-gradle` seemed to take a
+design approach of eliminating any interactions by the user with Gradle scripts,
+opting instead to have the user input all relevant build parameters into a config
+file. Propagating the information from this config file into Gradle tasks required
+additional code. So while this approach does work, it comes at a cost of readability
+of the Gradle scripts.
+
+We took a different approach: rather than shielding the user from Gradle, we
+let this be an opportunity to learn how to interact with Gradle. We tried to
+simplify the interaction as much as possible, and to instruct and explain as much
+as possible. In our template, the user adds their library name, domain, and
+dependencies to the Gradle file, guided by comments.
+
+The library template is now integrated into the `processing` GitHub account.
+The repository previously named `processing/processing-library-template` is now named
+`processing/processing-library-template-ant`.
+The repository previously named `mingness/processing-library-template` is now named
+`processing/processing-library-template`.
+
+
+### Existing Templates
+There were two existing templates for developing libraries for Processing on
+Github that were used as initial models:
+- https://github.com/processing/processing-library-template-gradle
+- https://github.com/enkatsu/processing-library-template-gradle
+
+Also notable are these two repositories:
+- https://github.com/hamoid/processing-library-template - used enkatsu's template as a model.
used jitpack to resolve Processing
+- https://github.com/jjeongin/creative-machine - library by Jeongin, one of the authors of the processing gradle template
+
+Some specific differences between the two model templates:
+
+- https://github.com/processing/processing-library-template-gradle
+  - The developer interaction for configuring the build is to fill out the
+  /resources/build.properties file with all build and release parameters
+  - It resolves Processing by pointing to the locally installed jar files
+  - The build.gradle file has 141 lines; processing-library.gradle 159 lines
+  - It provides gradle tasks for releasing the library
+  - This template provides fully documented example code, and an example
+- https://github.com/enkatsu/processing-library-template-gradle
+  - The developer interaction for configuring the build is to add dependencies
+  to the build.gradle
+  - It resolves Processing from Maven, using an unofficial version, 3.3.7
+  - The build.gradle file has 56 lines
+  - It does not provide scripts for releasing the library
+  - This template provides example code and an example, but it is not commented
+  extensively
+
+
+The build.gradle + processing-library.gradle files of the processing template
+are large, and aren't easy to digest on first look. That's because they are not
+designed to be edited. Is this a missed opportunity for gaining familiarity with
+Gradle? enkatsu's template is simple, and it invites and requires editing to configure
+it for your library.
+
+### The New Template
+
+The new template is a Gradle build script and an example library, combining the benefits of the model templates:
+
+- used jitpack to resolve Processing
+- asks the user to edit the build file directly, but with helpful comments, to add their own dependencies, library name, domain name, and version.
+- It provides gradle tasks for releasing the library
+- This template provides fully documented example code, and an example library
+- The template can be compiled as is. This provides a working example for the contributor to work from. Many people learn from example.
+- It provides a new mkdocs-powered documentation website example. This framework provides a simple format suitable for documentation websites, based on markdown files.
+- The template has a workflow that includes the necessary release artifacts, aside from the default for Github, which is the source code.
+
+
+
+
+
+## Adding a New Contribution
+
+This project's objective was to refactor how Processing tracks contributions,
+considering the guiding principles. A previously manual process was automated,
+and the contributions data was consolidated into a database file.
+
+### Logic of previous repository
+The previous process required the Processing librarian to add entries to the
+`source.conf` file for the new contribution according to which categories might be
+associated with the contribution. The listing of the source in this file under the
+categories served to override the categories listed in the properties file.
+The new entry included the new id number, and the url of the properties file.
+A comment at the top tracked what the next id number should be, and was manually
+iterated. All of the data about the contributions was distributed across multiple
+files, and no one file contained all of the data about the contributions in plain
+text. The source of truth for the contributions was the properties files in the
+published libraries themselves, distributed across the internet.
+
+The `scripts/build_contribs.py` script would parse the input files, read in the properties
+files from the internet, and output files used by the website and Contribution Manager
+in the PDE. These files also were a part of the repository.
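The properties files read in by this script are plain-text key=value files. As a rough illustration of the parsing involved, the sketch below is a simplified stand-in, not the actual `build_contribs.py` code, and the example values are invented:

```python
# Simplified sketch of parsing a library.properties file into a dict.
# Hypothetical helper for illustration; the real build_contribs.py differs.

def parse_properties(text: str) -> dict:
    """Parse Java-style key=value lines, splitting categories into a list."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    # categories is a comma-separated string in the properties file;
    # the database stores it as a list instead
    if "categories" in props:
        props["categories"] = [c.strip() for c in props["categories"].split(",")]
    return props


example = """\
name = My Library
version = 12
prettyVersion = 1.2.0
categories = Animation, Geometry
"""

print(parse_properties(example)["categories"])  # ['Animation', 'Geometry']
```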
+ +The `scripts/build_contribs.py` script also performed some alterations to the data in +the properties files. + +- if the id was listed in `skipped.conf`, the contribution was not included +- if the id was listed in `broken.conf`, the contribution's maxRevision field was capped at 228, +which is a previous version of Processing. +- the categories the contribution was listed under would overwrite what was listed +in the properties file. + + +### Logic of this project +Considering the guideline, to make processes and the data and code as intuitive +and self-explanatory as possible, the refactor was to create a single database +file that contains all the data related to contributions that we might need. +From this database file, we can then create the output files. Another way of thinking +about it, during the running of the build script of previous process, all this +data is in memory; just the state is never recorded more permanently. Previous +formatting logic that was stored as algorithms in the build script are now stored as data. + +The format of the database file was selected to be a list of objects, in yaml format. +Yaml format allows for ease of understanding any edits in the database over a +tabular format, like csv. + +The data in the database directly reflect the data in the properties files. This makes it +intuitive to compare values in the database against the properties file. The ability to +define if a contribution should be listed is set by a new `status` field. If anything needs +to be overwritten, like the categories, or maxRevision, this is defined in the field +`override`. + +The `status` field reflects if a source url is still serving the library with a +value of `VALID`, and we have two states if the url is not valid. `BROKEN` indicates +that the url did not serve the properties file as expected, but we will continue +to check. This `status` value is automatically set if encountered during an update. 
If we know a library has been deprecated, then we can manually set the `status` to
+`DEPRECATED`, and updates for the library will no longer be attempted, but we still
+have an archive of the library.
+
+The value of the `override` field is an object, where any component field values will
+replace the existing field values. For example, libraries in the `broken.conf` file are
+outdated, and we want to cap the `maxRevision` to `228`. This cap can be applied by setting
+`override` to {`maxRevision`: `228`}
+
+In the new repository, the output files are no longer stored in the repository,
+to make the source of truth - the database file - clear. These output files will instead
+reside as workflow artifacts.
+
+### Implementation
+There were some difficulties in implementing this solution, simply because data is messy, and
+is rarely as theoretically clean as expected. This work needed to convert logic enshrined in
+config files and a script into data. The process of doing this required iterations of trial
+and error.
+
+Two different levels of validation are used, where the validation for new contributions is more
+strict than for existing contributions. This is because some older properties files have
+different field names than currently expected. Currently we look for `authors` and `categories`,
+but in older files they might be `authorList` or `category`.
+
+We implemented validation using Pydantic's built-in validation, simply because considering all
+the cases manually became too much. Pydantic wasn't used from the start, because the original
+intention was to write the scripts in a language in the Java family, like Kotlin. However, they
+are currently in Python.
+
+### User Interaction
+
+The previous process required the new contributor to email the Processing librarian.
+We have replaced this initial contact with an issue form. The issue form makes it
+clear what is required to register the library, and can be used to trigger workflows.
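As a sketch of what such an issue form definition can look like: GitHub issue forms are YAML files kept under `.github/ISSUE_TEMPLATE/`. The field names and labels below are illustrative only, not necessarily those used in the actual `processing-contributions` form:

```yaml
# Hypothetical sketch of a GitHub issue form for submitting a contribution.
# The real form in processing-contributions may differ.
name: New contribution
description: Register a new library, tool, or mode
title: "[New contribution] "
labels: ["new contribution"]
body:
  - type: input
    id: source-url
    attributes:
      label: Properties file URL
      description: Link to the published *.txt properties file for your contribution
    validations:
      required: true
```

Because the form data arrives in a predictable shape, a workflow can parse the submitted URL out of the issue body and run the validation pipeline automatically.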
## Workflows
+The creation of the issue form triggers a Github workflow, which automatically
+retrieves the properties file, parses and validates it, and if valid, creates a
+pull request for the Processing librarian to review. The workflow was designed
+to allow for a rerun, in case of edits to the repository. In order to
+do this, the branch is deleted before being recreated again by the workflow.
+
+Also, to ensure readability and to use preexisting documentation, the use of third-party
+actions was preferred.
+
+The workflow that checks the urls daily needs to merge its changes automatically, without
+human intervention. For this reason, we make direct changes into the default branch `main`.
+This workflow will check the url of every contribution that doesn't have a `status` of
+`DEPRECATED`, and will update the contribution if the `version` changes. If there isn't
+a properties file at the url, the contribution will be marked with a status of `BROKEN`
+until it is live again.
+
+
+# APPENDIX A: The original process for adding contributions
+1. The library developer will email the Processing librarian, with the url of the release artifacts.
+2. The librarian will give feedback and recommendations to the library developer.
+3. The librarian adds the library manually to `sources.conf`. This file lists the categories, under
+which, in each row, is an id number and the url to the properties file (*.txt) for the applicable
+contribution. These entries override the categories listed in the properties file.
+4. The librarian then runs some scripts to add the library into the necessary locations.
+    1. https://github.com/processing/processing-contributions/blob/master/scripts/build_contribs.py
+    is run as a github action, daily. This generates the `pde/contribs.txt` file - used by the contributions
+    manager.
This script visits the url to the properties file (*.txt), and pulls the values from
+    this file, and updates them in the `pde/contribs.txt` file. If any of the ids are in the
+    `skipped.conf` file, the library is skipped. If any of the ids are in the `broken.conf` file,
+    the `maxRevision` is pinned to 228, which is an older version. These two *.conf files are also
+    manually updated.
+    2. Run the `update-contributions.js` script in the `processing-website` repo. This copies files from
+    the `sources` folder in the `processing-contributions` repo, into the `content/contributions`
+    folder in the `processing-website` repo, while removing the `id` key-value pair. The `sources`
+    folder contains json files, one for each contribution.
+
+
+# APPENDIX B: Design of database file
+* The file is named `contributions.yaml`
+* All fields from the properties file will be included directly, except for the categories, which will
+be parsed into a list.
+The fields from the `library.properties` file are: `name`, `version`, `prettyVersion`,
+`minRevision`, `maxRevision`, `authors`, `url`, `type`, `categories`, `sentence`, `paragraph`.
+* Other relevant fields that are represented in the output files are
+    * `id` - a unique integer identifier. This is also used in Processing.
+    * `download` - the url for the zip file that contains the library
+    * `source` - the url that links to the properties file, from the published library.
+* Newly introduced fields are
+    * `status` - Possible values are
+        * `DEPRECATED` - Libraries that seem to be permanently down, or have been deprecated.
+        These are libraries that are commented out of `source.conf`. This is manually set.
+        * `BROKEN` - libraries whose properties file cannot be retrieved, but we will still check.
+        These are libraries listed in `skipped.conf`
+        * `VALID` - libraries that are valid and available
+    * `override` - This is an object, where any component field values will replace the existing field values.
For example, libraries in the `broken.conf` file are
+    outdated, and we want to cap the `maxRevision` to `228`. This cap can be applied by setting
+    `override` to {`maxRevision`: `228`}
+    * `log` - Any notes of explanation, such as why a library was labeled `BROKEN`
+* Other fields to be included are
+    * `previous_versions` - a list of previous `prettyVersion` values
+    * `date_added` - Date library was added to contributions. This is a future facing field
+    * `last_updated` - Date library was last updated in the repo. This is a future facing field

From ed788b3ecc529e82fbe7593a40f38f106f3b47eb Mon Sep 17 00:00:00 2001
From: Claudine
Date: Thu, 31 Oct 2024 21:57:10 +0000
Subject: [PATCH 02/10] update template text

---
 .../pr05_2024_final_report_Claudine_Chen.md | 111 ++++++++++--------
 1 file changed, 62 insertions(+), 49 deletions(-)

diff --git a/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md b/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md
index c87aecb..1ed7f3a 100644
--- a/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md
+++ b/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md
@@ -27,9 +27,9 @@ is in the final column.
 
 ## Processing Library Template
 
-This subproject's target was to create an as-intuitive-as-possible Gradle-based library
+This project's target was to create an as-intuitive-as-possible, Gradle-based library
 template. The existing `processing-library-template` was based on Ant and Eclipse,
-both technologies that have been superseded by Gradle and IntelliJ/VS Code,
+both of which have been superseded by Gradle and IntelliJ/VS Code,
 respectively.
The existing `processing-library-template-gradle` seemed to take a design approach of eliminating any interactions by the user with Gradle scripts, opting instead to have the user input all relevant build parameters into a config @@ -81,33 +81,39 @@ Some specific differences between the two model templates: The build.gradle + processing-library.gradle files of the processing template are large, and aren't easy to digest on first look. That's because they are not -designed to be edited. Is this a missed opportunity for gaining familiarity with -gradle? enkatsu's template is simple, invites and requires editing to configure -to your library. +designed to be edited. enkatsu's template is simple, invites and requires editing +to configure to your library. ### The New Template -The new template is a Gradle build script, an example library, combined the benefits of the model templates, in the following list: - -- used jitpack to resolve Processing -- asks user to edit build file directly, but with helpful comments, to add their own dependencies, library name, domain name, and version. -- It provides gradle tasks for releasing the library -- This template provides fully documented example code, and an example library, -- The template can be compiled as is. This provides a working example for the contributor to work from. Many people learn from example. -- It provides a new mkdocs powered documentation website example. This framework provides a simple format suitable for documentation websites, based on markdown files. -- The template has a workflow that includes the necessary release artifacts, aside from the default for Github, which is the source code -- - - - - +The new template is a Gradle build script, an example library, and an example +documentation website. It combined the benefits of the model templates. +All features are: + +- use of jitpack to resolve Processing, instead of local jar files. 
+Once Processing is offered on Maven, this will be changed to resolve via +official sources. +- A Gradle build file, that + - asks user to edit build file directly, to add their own +dependencies, library name, domain name, and version. + - provides gradle tasks for releasing the library and all required artifacts + - installs the library in a local Processing instance. +- The template can be compiled as is. It includes a working example library, and +defaults that can be compiled. This provides a working example to work from. +- This template provides fully documented example code that can be easily accessed +in Processing from the examples menu. +- It provides a new mkdocs powered documentation website example. This framework +provides a simple format suitable for documentation websites, based on markdown files. +- The template has a workflow that includes the necessary release artifacts in a Github +release. The default release artifacts only include the source code. ## Adding a New Contribution This project's objective was to refactor how Processing tracks contributions, -considering the guiding principles. A previously manual process was automated, -and the contributions data was consolidated into a database file. +considering the guiding principles of automation and intuitiveness. A previously +manual process was automated, and the contributions data was consolidated into +a database file. ### Logic of previous repository The previous process required the Processing librarian to add entries to the @@ -116,17 +122,14 @@ associated with the contribution. The listing of the source in this file under t categories served to override the categories listed in the properties file. The new entry included the new id number, and the url of the properties file. A comment at the top tracked what the next id number should be, and was manually -iterated. 
All of the data about the contributions were distributed across multiple -files, and no one file contained all of the data about the contributions in plain -text. The source of truth for the contributions were the properties files in the -published libraries themselves, distributed across the internet. - -The `scripts/build_contribs.py` script would parse the input files, read in the properties -files from the internet, and output files used by the website and Contribution Manager -in the PDE. These files also were a part of the repository. +iterated. There were also two other configuration files, `skipped.conf` and +`broken.conf`, which contained lists of id numbers. -The `scripts/build_contribs.py` script also performed some alterations to the data in -the properties files. +The `scripts/build_contribs.py` script would parse these config files, read in all +the properties files from the internet, and output files used by the website and +Contribution Manager in the PDE. These files also were a part of the repository. +The `scripts/build_contribs.py` script performed some alterations to the data in +the properties files with the following rules. - if the id was listed in `skipped.conf`, the contribution was not included - if the id was listed in `broken.conf`, the contribution's maxRevision field was capped at 228, @@ -134,6 +137,10 @@ which is a previous version of Processing. - the categories the contribution was listed under would overwrite what was listed in the properties file. +In short, the data about the contributions were distributed across multiple +files, including within algorithms, and across the internet. This does not provide +an intuitive interaction with the data. + ### Logic of this project Considering the guideline, to make processes and the data and code as intuitive @@ -145,22 +152,25 @@ data is in memory; just the state is never recorded more permanently. 
Previous
 formatting logic that was stored as algorithms in the build script is now stored as data.
 
 The format of the database file was selected to be a list of objects, in yaml format.
-Yaml format allows for ease of understanding any edits in the database over a
-tabular format, like csv.
-
-The data in the database directly reflect the data in the properties files. This makes it
-intuitive to compare values in the database against the properties file. The ability to
-define if a contribution should be listed is set by a new `status` field. If anything needs
-to be overwritten, like the categories, or maxRevision, this is defined in the field
-`override`.
-
-The `status` field reflects if a source url is still serving the library with a
-value of `VALID`, and we have two states if the url is not valid. `BROKEN` indicates
-that the url did not serve the properties file as expected, but we will continue
-to check. This `status` value is automatically set if encountered during an update.
+Yaml format allows for ease of understanding any edits in the database. Each edit
+will appear on its own line with the field name, making each edit easy to see in Github's
+interface, compared to tabular formats, like csv. For tabular formats, Github's interface,
+which shows if a row has an edit, would highlight which contribution had an edit, but the
+edit itself would be harder to see, especially since the column label would be on a
+different line.
+
+To make it intuitive to compare values in the database against the properties file,
+the data in the database directly reflect the data in the properties files. The visibility
+of a contribution is set by a new `status` field. If anything needs to be overwritten,
+like the categories, or maxRevision, this is defined in the field `override`.
+
+For the `status` field, a value of `VALID` indicates the library is live.
There are two
+states if the url is not valid; both of these states result in the library not being
+able to be installed. `BROKEN` indicates that the url did not serve the
+properties file as expected, but we will continue to check. This `status` value is
+automatically set if encountered during an update.
 If we know a library has been deprecated, then we can manually set the `status` to
-`DEPRECATED` and updates for library will no longer be attempted, but we still
-have an archive of the library.
+`DEPRECATED`, and updates for the library will no longer be attempted.
 
 The value of the `override` field is an object, where any component field values will
 replace the existing field values. For example, libraries in the `broken.conf` file are
@@ -253,7 +263,10 @@ The fields from the `library.properties` file are: `name`, `version`, `prettyVer
 	`maxRevision` to `228`. This cap can be applied by setting `override` to {`maxRevision`: `228`}
 * `log` - Any notes of explanation, such as why a library was labeled `BROKEN`
 * Other fields to be included are
-    * `previous_versions` - a list of previous `prettyVersion` values
-    * `date_added` - Date library was added to contributions. This is a future facing field
-    * `last_updated` - Date library was last updated in the repo. This is a future facing field
+    * `previous_versions` - a list of previous `prettyVersion` values. This is a future facing field.
+    * `date_added` - Date library was added to contributions. This will be added whenever a new library is
+    added. To have complete data for this field will require some detective work into the archives.
+    * `last_updated` - Date library was last updated in the repo. This will be added whenever a library is
+    updated. To have complete data for this field will require waiting for all libraries to be updated, or
+    will require some detective work into the archives.
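To make the database design concrete, a single entry in `contributions.yaml` could look roughly like the following. All values here are invented for illustration; the real entries follow the field list described in Appendix B:

```yaml
# Hypothetical entry in contributions.yaml; values are invented.
- id: 42
  name: Example Library
  version: 12
  prettyVersion: 1.2.0
  categories:
    - Animation
  url: https://example.org/example-library
  source: https://example.org/example-library/library.properties
  download: https://example.org/example-library/example-library.zip
  status: BROKEN
  override:
    maxRevision: 228
  log: marked BROKEN after the properties file stopped resolving
```

Because each field sits on its own line, an update to, say, `version` shows up as a one-line change in a pull request diff, which is the readability advantage over a csv row.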
From b2c820fc1885feedc7c9f8e3a0d2e6f77cbb4693 Mon Sep 17 00:00:00 2001 From: Claudine Date: Thu, 31 Oct 2024 22:09:09 +0000 Subject: [PATCH 03/10] add summary header --- .../final-reports/pr05_2024_final_report_Claudine_Chen.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md b/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md index 1ed7f3a..61dc12b 100644 --- a/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md +++ b/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md @@ -1,5 +1,11 @@ # Final Report for Simplifying the workflow for Processing libraries +| **Project** | [Simplifying the workflow for Processing libraries, tools, and modes](https://github.com/processing/pr05-grant/wiki/2024-Project-List-for-%60pr05%60-=-Processing-Foundation-Software-Development-Grant#%F0%9F%85%BF%EF%B8%8F-simplify-the-workflow-for-processing-libraries-tools-and-modes) | +| :--- | :--- | +| **Grantee** | [Claudine Chen](https://github.com/mingness) | +| **Mentors** | [Stef Tervelde](https://github.com/Stefterv) | +| **Repos**| https://github.com/processing/processing-library-template
https://github.com/processing/processing-contributions | + ## Introduction This project's aim was to improve the developer experience for contributing libraries to Processing. @@ -12,6 +18,8 @@ There were two main guiding principles in this work: 1. automate what can be automated 2. make the interactions as intuitive and self-explanatory as possible. +The + ## Project Plan Below is a broad overview of the process of contributing to libraries, tools, and modes, and the initial resources for this. The work accomplished or planned to be accomplished is in the final column. From db319da2346465e21446ca0d8bf200c291684280 Mon Sep 17 00:00:00 2001 From: Claudine Date: Thu, 31 Oct 2024 22:16:45 +0000 Subject: [PATCH 04/10] add documentation repo --- .../final-reports/pr05_2024_final_report_Claudine_Chen.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md b/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md index 61dc12b..1988d5c 100644 --- a/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md +++ b/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md @@ -3,8 +3,8 @@ | **Project** | [Simplifying the workflow for Processing libraries, tools, and modes](https://github.com/processing/pr05-grant/wiki/2024-Project-List-for-%60pr05%60-=-Processing-Foundation-Software-Development-Grant#%F0%9F%85%BF%EF%B8%8F-simplify-the-workflow-for-processing-libraries-tools-and-modes) | | :--- | :--- | | **Grantee** | [Claudine Chen](https://github.com/mingness) | -| **Mentors** | [Stef Tervelde](https://github.com/Stefterv) | -| **Repos**| https://github.com/processing/processing-library-template
https://github.com/processing/processing-contributions | +| **Mentor** | [Stef Tervelde](https://github.com/Stefterv) | +| **Repos**| https://github.com/processing/processing-library-template
https://github.com/processing/processing-contributions
documentation repo: https://github.com/mingness/pr05-simplify-workflows| ## Introduction This project's aim was to improve the developer experience for contributing @@ -18,7 +18,6 @@ There were two main guiding principles in this work: 1. automate what can be automated 2. make the interactions as intuitive and self-explanatory as possible. -The ## Project Plan Below is a broad overview of the process of contributing to libraries, tools, and modes, and the initial resources for this. The work accomplished or planned to be accomplished From 480f1bd45ab1ea2e606bde71706394e74af00105 Mon Sep 17 00:00:00 2001 From: Claudine Date: Fri, 1 Nov 2024 09:23:10 +0000 Subject: [PATCH 05/10] convert snake case to camel case --- .../final-reports/pr05_2024_final_report_Claudine_Chen.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md b/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md index 1988d5c..b341416 100644 --- a/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md +++ b/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md @@ -270,10 +270,10 @@ The fields from the `library.properties` file are: `name`, `version`, `prettyVer `maxRevision` to `228`. This cap can be applied by setting `override` to {`maxRevision`: `228`} * `log` - Any notes of explanation, such as why a library was labeled `BROKEN` * Other fields to be included are - * `previous_versions` - a list of previous `prettyVersion` values. This is a future facing field. - * `date_added` - Date library was added to contributions. This will be added whenever a new library is + * `previousVersions` - a list of previous `prettyVersion` values. This is a future facing field. + * `dateAdded` - Date library was added to contributions. This will be added whenever a new library is added. 
Obtaining complete data for this field will require some detective work in the archives.
-### Existing Templates -There were two existing templates for developing libraries for Processing on -Github that were used as initial models: -- https://github.com/processing/processing-library-template-gradle -- https://github.com/enkatsu/processing-library-template-gradle - -Also notable are these two repositories: -- https://github.com/hamoid/processing-library-template - used enkatsu's template as a model. used jitpack to resolve Processing -- https://github.com/jjeongin/creative-machine - library by Jeongin, on of the authors of the processing gradle template - -Some specific differences between the two model templates: - -- https://github.com/processing/processing-library-template-gradle - - The developer interaction for configuring the build is to fill out the - /resources/build.properties file with all build and release parameters - - It resolves Processing by pointing to the locally installed jar files - - The build.gradle file has 141 lines; processing-library.gradle 159 lines - - It provides gradle tasks for releasing the library - - This template provides fully documented example code, and an example -- https://github.com/enkatsu/processing-library-template-gradle - - The developer interaction for configuring the build is to add dependencies - to the build.gradle - - It resolves Processing from Maven, using an unofficial version, 3.3.7 - - The build.gradle file has 56 lines - - It does not provide scripts for releasing the library - - This template provides example code and example, but it is not commented - extensively - - -The build.gradle + processing-library.gradle files of the processing template -are large, and aren't easy to digest on first look. That's because they are not -designed to be edited. enkatsu's template is simple, invites and requires editing -to configure to your library. - -### The New Template - -The new template is a Gradle build script, an example library, and an example -documentation website. 
It combines the benefits of the earlier model templates.
-# APPENDIX B: Design of database file +# APPENDIX B: Design of contributions database file * The file is named `contributions.yaml` * All fields from the properties file will be included directly, except for the categories, which will be parsed into a list. From 558485a6842b6ef5d323db3e814ce4d06974831e Mon Sep 17 00:00:00 2001 From: Claudine Date: Fri, 1 Nov 2024 11:49:00 +0000 Subject: [PATCH 07/10] tweak formatting --- .../final-reports/pr05_2024_final_report_Claudine_Chen.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md b/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md index 821b7f5..40895e8 100644 --- a/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md +++ b/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md @@ -58,7 +58,7 @@ The previously named `mingness/processing-library-template` is now named `processing/processing-library-template`. -All features are: +Feature list: - use of jitpack to resolve Processing, instead of local jar files. Once Processing is offered on Maven, this will be changed to resolve via @@ -201,7 +201,7 @@ a properties file at the url, the contribution will be marked with a status of ` until it is live again. -# APPENDIX A: The original process for adding contributions +## APPENDIX A: The original process for adding contributions 1. The library developer will email the Processing librarian, with the url of the release artifacts. 2. The librarian will feedback on any recommendations for changes to the library developer. 3. The librarian adds the library manually to `sources.conf`. This file lists the categories, under @@ -221,7 +221,7 @@ contribution. These contributions override the categories listed in the properti folder contains json files, one for each contribution. 
-# APPENDIX B: Design of contributions database file +## APPENDIX B: Design of contributions database file * The file is named `contributions.yaml` * All fields from the properties file will be included directly, except for the categories, which will be parsed into a list. From fc8ba6798a28f7786954e05cca2d7fa1776d3362 Mon Sep 17 00:00:00 2001 From: Claudine Date: Fri, 1 Nov 2024 11:50:17 +0000 Subject: [PATCH 08/10] tweak header --- .../final-reports/pr05_2024_final_report_Claudine_Chen.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md b/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md index 40895e8..18185e6 100644 --- a/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md +++ b/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md @@ -183,7 +183,7 @@ The previous process required the new contributor to email the Processing librar We have replaced this initial contact with an issue form. The issue form makes it clear as to what is required to register the library, and can be used to trigger workflows. -## Workflows +### Workflows The creation of the issue form triggers a Github workflow, which automatically retrieves the properties file, parses and validates it, and if valid, creates a pull request for the Processing librarian to review. 
The way the workflow was designed
https://github.com/processing/processing-contributions
documentation repo: https://github.com/mingness/pr05-simplify-workflows| -## Introduction -This project's aim was to improve the developer experience for contributing -libraries to Processing. -The work from this project manifests as 2 separate projects, as well as improved -documentation. These two projects are the processing library template, the new -contributions workflow, and I will write about them in separate sections. -There were two main guiding principles in this work: - -1. automate what can be automated -2. make the interactions as intuitive and self-explanatory as possible. - - -## Project Plan -Below is a broad overview of the process of contributing to libraries, tools, and modes, and the initial resources for this. The work accomplished or planned to be accomplished -is in the final column. +## Technical Decisions + +### Library template +1. The Gradle build file written in Kotlin, for the library template. Kotlin is now the default script language. This decision, however, forced us to put everything in one file, rather than having editable fields in one file, and release tasks in another. This is because Kotlin doesn't allow the build file to be split into multiple files. +2. Gradle is not easy, which is probably why the template `processing-library-template-gradle` seemed to take a +design approach of eliminating any interactions by the user with Gradle scripts, opting instead to have the user input all relevant build parameters into a config file. Propagating the information from this config file into Gradle tasks required +additional code. So while this approach does work, it comes at a cost of readability of the Gradle scripts. We opted to not protect the user from Gradle, but let this be an opportunity to learn how to interact with Gradle. We tried to simplify the interaction as much as possible, and to instruct and explain as much as possible. 
In our template, the user will add their library name, domain, and dependencies into the Gradle file, guided by comments. +3. We had a principle, to define values only once. Therefore, the version is defined in Gradle once, and propagated to the library properties file. Previously the version was also reported back by the library, but all the options available to accomplish this without setting the version twice were unappealing, and we chose to not implement this. +4. We chose to promote the use of MkDocs for creating documentation websites via Github project pages. This provides straightforward documentation websites based on markdown. +5. We chose to have a `release.properties` file, where the user will input values for the library properties text file. All fields are mapped directly, and the `version` in the +Gradle build file is mapped to `prettyVersion` in the properties file. + +### Add contributions workflow +1. We decided to make a new database file, called `contributions.yaml`. The format of the database file was selected to be a list of objects, in yaml format. Yaml format allows for ease of understanding any edits in the database. Each edit will appear per line with the field name, making each edit easy to see using Github's interface, compared to tabular formats, like csv. For tabular formats, Github's interface, which shows if a row has an edit, would highlight more which contribution had and edit, but the edit itself would be harder to see, especially since the column label would be on a different line. +2. In the previous repository, some data from the properties files are overwritten by design, based on the categories the source url was listed under, or if it's a "broken" contribution. To make it intuitive to compare values in the database against the properties file, it was decided that the data in the database directly reflect the data in the properties files. 
To be able to override this data in the output files, we introduced an `override` field. Its value is an object, and any fields it contains replace the corresponding existing field values. For example, libraries in the `broken.conf` file are outdated, and we want to cap `maxRevision` at `228`; this cap can be applied by setting `override` to `{maxRevision: 228}`.
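
A minimal sketch of how such an `override` object could be applied when generating the output files (the function name is illustrative, not the repository's actual script):

```python
def apply_override(contribution):
    """Return a copy of the contribution with any `override` fields applied.

    The database entry mirrors the properties file verbatim; overrides
    are applied only when producing output files, so the stored data
    stays directly comparable to the source properties file.
    """
    out = dict(contribution)
    override = out.pop("override", None) or {}
    out.update(override)
    return out
```

This keeps the override logic in one place, instead of spreading it across config files and the build script as in the previous repository.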
For this reason, the workflow commits directly to the default branch `main`. It checks the url of every contribution whose `status` is not `DEPRECATED`, and updates the contribution if the `version` changes. If no properties file is found at the url, the contribution is marked with a status of `BROKEN` until it is live again.
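
The update rules for a single contribution can be sketched as follows (function and field names are illustrative; the real script may differ):

```python
STATUS_VALID = "VALID"
STATUS_BROKEN = "BROKEN"
STATUS_DEPRECATED = "DEPRECATED"

def update_contribution(contrib, fetch_properties):
    """Apply the daily-check rules to one contribution entry (a dict).

    `fetch_properties` takes a url and returns the parsed properties
    as a dict, or None if the properties file could not be retrieved.
    Deprecated contributions are skipped entirely.
    """
    if contrib.get("status") == STATUS_DEPRECATED:
        return contrib
    props = fetch_properties(contrib["source"])
    if props is None:
        # The url no longer serves a properties file: mark it broken,
        # but keep checking on later runs in case it comes back.
        contrib["status"] = STATUS_BROKEN
        return contrib
    contrib["status"] = STATUS_VALID
    if props.get("version") != contrib.get("version"):
        # New release: refresh the stored fields from the live file.
        contrib.update(props)
    return contrib
```

Because every rule is a pure transformation of the entry, the workflow can safely rerun and commit the result without human review.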

https://github.com/benfry/processing4/wiki/Library-Basics
https://github.com/benfry/processing4/wiki/Library-Guidelines
https://github.com/benfry/processing4/wiki/Library-Overview
https://github.com/benfry/processing4/wiki/Mode-Overview | Update information in tutorial on website, and in repository for library template. | -| The development | There are a series of templates that have been developed. The older ones are using Eclipse and Ant, newer ones with Intellij and Gradle. In addition to the executable, the developer is required to create a website that holds documentation for the library.

https://github.com/processing/processing-library-template
https://github.com/processing/processing-library-template-gradle
https://github.com/processing/processing-templates
https://github.com/processing/processing-tool-template
https://github.com/processing/processing-android-library-template | Repository templates for libraries using the new Gradle templates, keeping in mind the needs of the beginner. Consolidate existing functionality: resolution of Processing core using jitpack and unofficial Processing repo`; automation for building and exporting artifacts, including PDEX/PDEZ bundles.

CI/CD deployment of the reference to GitHub pages

Simple integration with local Processing | -| The integration - discoverability | The current process is to contact the Processing librarian with a url to your release.

https://forum.processing.org/one/topic/publishing-contributed-libraries-and-tools

The librarian then manually updates the private repository https://github.com/processing/processing-contributions | Submissions of new contributions via an issue template.

Develop automated pipeline that updates the repository from the properties file, with the objective to reduce the workload on Processing librarian.

This also includes a refactor on how contributions are stored, storing all information in a database file in the repository. | +## Challenges +- Writing the Gradle build file was a challenge. Gradle is not well documented, and while I had used it before, coding in it is another matter. Also, we chose to write the build file in Kotlin, for which there is even less documentation. +- Gradle does not work well when run from command line. I had an apparent issue with caching when running Gradle from command line. It worked perfectly from the Gradle menus in Intellij and VS Code. +- There were some difficulties in implementing the contributions database file, simply because, data is messy, and is rarely as theoretically clean as expected. This work needed to convert logic enshrined in config files and a script into data. The process of doing this required iterations of trial and error. -## Processing Library Template -This project's target was to create an as-intuitive-as-possible, Gradle-based library -template. The existing `processing-library-template` was based on Ant and Eclipse, -and both technologies that have been superceded by Gradle and Intellij/VS Code, -respectively. The existing `processing-library-template-gradle` seemed to take a -design approach of eliminating any interactions by the user with Gradle scripts, -opting instead to have the user input all relevant build parameters into a config -file. Propagating the information from this config file into Gradle tasks required -additional code. So while this approach does work, it comes at a cost of readability -of the Gradle scripts. +## Tasks Completed -We took a different approach, which is to not protect the user from Gradle, but -let this be an opportunity to learn how to interact with Gradle. We tried to -simplify the interaction as much as possible, and to instruct and explain as much -as possible. 
In our template, the user will add their library name, domain, and -dependencies into the Gradle file, guided by comments. We also had a principle, to -define values only once. +### Library template -The library template is now integrated into the processing account. -The previously named `processing/processing-library-template` is now named +- [x] A Gradle build file, that + - uses jitpack to resolve Processing, instead of local jar files. Once Processing is offered on Maven, this will be changed to resolve via official sources. + - has Gradle tasks for building the library, creating the properties file, creating all release artifacts, and installing the library in a local Processing instance. +- [x] The template contains an example library that uses an external dependency, as an a functioning example to work from. The template can be compiled as is. This template provides an example that can be easily accessed in Processing from the examples menu. +- [x] The template has a workflow that includes the necessary release artifacts in a Github release. The default release artifacts only include the source code. +- [x] It provides a new mkdocs powered documentation website example. This framework provides a simple format suitable for documentation websites, based on markdown files. +- [x] The template includes documentation, including detailed instructions on getting started, using the Gradle build file, and how to release and publish. +- [x] The library template is now integrated into the processing account. + - The previously named `processing/processing-library-template` is now named `processing/processing-library-template-ant`. -The previously named `mingness/processing-library-template` is now named + - The previously named `mingness/processing-library-template` is now named `processing/processing-library-template`. - -Feature list: - -- use of jitpack to resolve Processing, instead of local jar files. 
-Once Processing is offered on Maven, this will be changed to resolve via -official sources. -- A Gradle build file, that - - asks user to edit build file directly, to add their own -dependencies, library name, domain name, and version. - - provides gradle tasks for releasing the library and all required artifacts - - installs the library in a local Processing instance. -- A `release.properties` file, where the user will input values for the library -properties text file. All fields are mapped directly, and the `version` in the -Gradle build file is mapped to `prettyVersion`. -- The template can be compiled as is. It includes a working example library, and -defaults that can be compiled. This provides a working example to work from. -- This template provides fully documented example code that can be easily accessed -in Processing from the examples menu. -- It provides a new mkdocs powered documentation website example. This framework -provides a simple format suitable for documentation websites, based on markdown files. -- The template has a workflow that includes the necessary release artifacts in a Github -release. The default release artifacts only include the source code. - - -## Adding a New Contribution - -This project's objective was to refactor how Processing tracks contributions, -considering the guiding principles of automation and intuitiveness. A previously -manual process was automated, and the contributions data was consolidated into -a database file. - -The processing contributions repository is now integrated into the processing account. -The previously named and private repository `processing/processing-contributions` is now named -`processing/processing-contributions-legacy`. -There is now a repository named `processing/processing-contributions`, which contains +### Add contributions workflow +- [x] Create new database file, called `contributions.yaml`, as designed. 
The design is described in https://github.com/mingness/pr05-simplify-workflows/blob/main/Adding_contribution_workflow_notes.md#design-of-database-file +- [x] Write script for checking online property txt files for each contribution. +- [x] Create scripts to create `pde/contribs.txt`, and `content/contributions/*.json` files. +- [x] Create issue form for new library contributors to fill, which then automatically creates the pull request that changes the database file. +- [x] Test output files with website and contribution manager. +- [x] Create workflow such that, if there is an update, edit the database file, and recreate contribs.txt and json files. The output files are stored as workflow artifacts. +- [x] The processing contributions repository is now integrated into the processing account. + - The previously named and private repository `processing/processing-contributions` is now named `processing/processing-contributions-legacy`. + - There is now a repository named `processing/processing-contributions`, which contains only the important files of `mingness/processing-contributions-new`. -### Logic of previous repository -The previous process required the Processing librarian to add entries to the -`source.conf` file for the new contribution according to which categories might be -associated with the contribution. The listing of the source in this file under the -categories served to override the categories listed in the properties file. -The new entry included the new id number, and the url of the properties file. -A comment at the top tracked what the next id number should be, and was manually -iterated. There were also two other configuration files, `skipped.conf` and -`broken.conf`, which contained lists of id numbers. - -The `scripts/build_contribs.py` script would parse these config files, read in all -the properties files from the internet, and output files used by the website and -Contribution Manager in the PDE. 
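
As an illustration of that design (the actual repository scripts may differ), the fields of a `library.properties` file carry over into the database directly, with only the comma-separated `categories` value parsed into a list:

```python
def parse_properties(text):
    """Parse a library.properties-style text file into a dict,
    splitting the comma-separated categories into a list."""
    entry = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, value = line.split("=", 1)
        entry[key.strip()] = value.strip()
    if "categories" in entry:
        entry["categories"] = [c.strip() for c in entry["categories"].split(",")]
    return entry
```

Keeping every other field verbatim makes a database entry directly comparable against the properties file it was built from.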
These files were also part of the repository.
- -To make it intuitive to compare values in the database against the properties file, -the data in the database directly reflect the data in the properties files. The visibility -of a contribution is set by a new `status` field. If anything needs to be overwritten, -like the categories, or maxRevision, this is defined in the field `override`. - -For the `status` field, a value of `VALID` indicates the library is live. There are two -states if the url is not valid; both of these states result in the library not being -able to be installed. `BROKEN` indicates that the url did not serve the -properties file as expected, but we will continue to check. This `status` value is -automatically set if encountered during an update. -If we know a library has been deprecated, then we can manually set the `status` to -`DEPRECATED` and updates for library will no longer be attempted. - -The value of the `override` field is an object, where any component field values will -replace the existing field values. For example, libraries in the `broken.conf` file are -outdated, and we want to cap the `maxRevision` to `228`. This cap can be applied by setting -`override` to {`maxRevision`: `228`} - -In the new repository, the output files are no longer a part of the repository, -to make source of truth clear - the database file. These output files will instead -reside as workflow artifacts. - -### Implementation -There were some difficulties in implementing this solution, simply because, data is messy, and -is rarely as theoretically clean as expected. This work needed to convert logic enshrined in -config files and a script into data. The process of doing this required iterations of trial -and error. - -Two different levels of validation are used, where the validation for new contributions is more -strict than for existing contributions. This is because some older properties files have -different field names than currently expected. 
Currently we look for `authors` and `categories`, -but in older files they might be `authorList` or `category`. - -We implemented validation using Pydantic's built in validation, simply because considering all -the cases manually became too much. Pydantic wasn't used from the start, because it was -intended to write the scripts in a language in the Java family, like Kotlin. However, they -currently are in Python. - -### User Interaction - -The previous process required the new contributor to email the Processing librarian. -We have replaced this initial contact with an issue form. The issue form makes it -clear as to what is required to register the library, and can be used to trigger workflows. - -### Workflows -The creation of the issue form triggers a Github workflow, which automatically -retrieves the properties file, parses and validates it, and if valid, creates a -pull request for the Processing librarian to review. The way the workflow was designed, -was to allow for a rerun of the workflow, in case of edits to the repository. In order to -do this, the branch is deleted before being recreated again by the workflow. +## Limitations -Also, ensure readability and to use preexisting documentation, use of third-party -actions was preferred. +- The library template at `processing/processing-library-template` has been tested only for a simple case. We will have to wait for a crowd of people to test the template, and hopefully let us know if something is not working well. +- The scripts in `processing/processing-contributions` do not have tests. +- The `contributions.yaml` database file in `processing/processing-contributions` has three fields which do not have comprehensive data. These fields are `dateAdded`, `dateUpdated`, and `previousVersions`. To fill in this data will require delving into the archives the Github commit history. +- Only a template for libraries was created. A template for tools and modes was not investigated. 
+- Originally there was an idea to run the examples from the library template using the IDE, but this was not implemented. However, it is possible to run the examples in Processing, so perhaps this is not a limitation.
-The workflow that checks the urls daily needs to merge its changes automatically, without
-human intervention. For this reason, we make direct changes to the default branch `main`.
-This workflow checks the url of every contribution that doesn't have a `status` of
-`DEPRECATED`, and updates the contribution if the `version` changes. If there isn't
-a properties file at the url, the contribution will be marked with a status of `BROKEN`
-until it is live again.
-
-
-## APPENDIX A: The original process for adding contributions
-1. The library developer emails the Processing librarian with the url of the release artifacts.
-2. The librarian gives feedback to the library developer, recommending any changes.
-3. The librarian adds the library manually to `sources.conf`. This file lists the categories;
-under each category, each row holds an id number and the url of the properties file (*.txt)
-for the applicable contribution. The categories listed here override the categories in the
-properties file.
-4. The librarian then runs some scripts to add the library into the necessary locations.
-    1. https://github.com/processing/processing-contributions/blob/master/scripts/build_contribs.py
-    is run as a Github action, daily. This generates the `pde/contribs.txt` file used by the
-    Contribution Manager. This script visits the url of each properties file (*.txt), pulls the
-    values from it, and updates them in the `pde/contribs.txt` file. If an id is in the
-    `skipped.conf` file, the library is skipped. If an id is in the `broken.conf` file,
-    the `maxRevision` is pinned to 228, which is an older version. These two *.conf files are
-    also manually updated.
-    2.
run the `update-contributions.js` script in the `processing-website` repo. This copies files from
-    the `sources` folder in the `processing-contributions` repo into the `content/contributions`
-    folder in the `processing-website` repo, while removing the `id` key-value pair. The `sources`
-    folder contains json files, one for each contribution.
-
-
-## APPENDIX B: Design of contributions database file
-* The file is named `contributions.yaml`
-* All fields from the properties file are included directly, except for the categories, which
-are parsed into a list.
-The fields from the `library.properties` file are: `name`, `version`, `prettyVersion`,
-`minRevision`, `maxRevision`, `authors`, `url`, `type`, `categories`, `sentence`, `paragraph`.
-* Other relevant fields that are represented in the output files are
-  * `id` - a unique integer identifier. This is also used in Processing.
-  * `download` - the url for the zip file that contains the library
-  * `source` - the url that links to the properties file of the published library
-* Newly introduced fields are
-  * `status` - Possible values are
-    * `DEPRECATED` - Libraries that seem to be permanently down, or have been deprecated.
-    These are libraries that are commented out of `sources.conf`. This is manually set.
-    * `BROKEN` - libraries whose properties file cannot be retrieved, but which we will still check.
-    These are libraries listed in `skipped.conf`.
-    * `VALID` - libraries that are valid and available
-  * `override` - This is an object whose field values replace the existing field values.
-  For example, libraries in the `broken.conf` file are outdated, and we want to cap the
-  `maxRevision` at `228`. This cap can be applied by setting `override` to `{maxRevision: 228}`.
-  * `log` - Any notes of explanation, such as why a library was labeled `BROKEN`
-* Other fields to be included are
-  * `previousVersions` - a list of previous `prettyVersion` values. This is a future-facing field.
-  * `dateAdded` - The date the library was added to contributions. This will be set whenever a
-  new library is added. Completing the data for this field will require some detective work in
-  the archives.
-  * `lastUpdated` - The date the library was last updated in the repo. This will be set whenever
-  a library is updated. Completing the data for this field will require waiting for all libraries
-  to be updated, or some detective work in the archives.
+## Work Remaining & Next Steps
+- [ ] Write the tutorial, and publish it on the Processing website
+- [ ] Compare the output files from the daily update workflow of `processing/processing-contributions` with the output files from `processing/processing-contributions-legacy`
+- [ ] Use the output files from the daily update workflow of `processing/processing-contributions` for the Contribution Manager and the Processing website
+- [ ] Review feedback on the library template at `processing/processing-library-template`, and iterate on improvements
+- [ ] Delve into the archives of contributions, and add data for the fields `dateAdded`, `dateUpdated`, and `previousVersions` in the `contributions.yaml` database file in `processing/processing-contributions`.
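The two-tier validation described in the Implementation section, with legacy field names accepted only for existing contributions, can be sketched in plain Python as follows (the real scripts use Pydantic models; the function and constant names here are hypothetical):

```python
# Map legacy properties-file field names to the current schema. Lenient
# parsing is applied to existing contributions; new contributions are
# validated strictly and may not use legacy names.
LEGACY_ALIASES = {"authorList": "authors", "category": "categories"}

def normalize(props: dict) -> dict:
    """Lenient parsing for existing contributions."""
    out = {LEGACY_ALIASES.get(key, key): value for key, value in props.items()}
    # In the database, categories are stored as a list, parsed from the
    # comma-separated string found in the properties file.
    if isinstance(out.get("categories"), str):
        out["categories"] = [c.strip() for c in out["categories"].split(",")]
    return out

def validate_new(props: dict) -> dict:
    """Strict parsing for new contributions: reject legacy field names."""
    legacy = LEGACY_ALIASES.keys() & props.keys()
    if legacy:
        raise ValueError(f"legacy field names not accepted: {sorted(legacy)}")
    return normalize(props)

old_style = {"name": "Old Lib", "authorList": "Jane", "category": "Data,Geometry"}
print(normalize(old_style))
# A new contribution using the same legacy names would fail validate_new().
```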
From c841ac96b7d2e2ec699469f6d01f93f71c761edc Mon Sep 17 00:00:00 2001
From: Claudine
Date: Mon, 2 Dec 2024 16:30:45 +0000
Subject: [PATCH 10/10] additional comment on documentation requirement

---
 .../final-reports/pr05_2024_final_report_Claudine_Chen.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md b/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md
index 4af8644..24d21a6 100644
--- a/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md
+++ b/2024_NewBeginnings/final-reports/pr05_2024_final_report_Claudine_Chen.md
@@ -15,7 +15,7 @@
 design approach of eliminating any interactions by the user with Gradle scripts, opting instead to have the user input all relevant build parameters into a config file. Propagating the information from this config file into Gradle tasks required additional code. So while this approach does work, it comes at a cost to the readability of the Gradle scripts. We opted not to protect the user from Gradle, but to let this be an opportunity to learn how to interact with Gradle. We tried to simplify the interaction as much as possible, and to instruct and explain as much as possible. In our template, the user adds their library name, domain, and dependencies into the Gradle file, guided by comments.
 3. We had a principle to define values only once. Therefore, the version is defined once in Gradle, and propagated to the library properties file. Previously the version was also reported back by the library, but all the options available to accomplish this without setting the version twice were unappealing, and we chose not to implement this.
-4. We chose to promote the use of MkDocs for creating documentation websites via Github project pages. This provides straightforward documentation websites based on markdown.
+4.
We chose to promote the use of MkDocs for creating documentation websites via Github project pages. This provides straightforward documentation websites based on markdown. Hosting documentation is a requirement for publishing a library in the Contribution Manager. The old documentation recommended that people build and host their own documentation website, which adds a lot of complexity. The main goal of this project was to simplify the process of creating and publishing a library, so it made sense to automate the process of creating/hosting this documentation as much as possible.
 5. We chose to have a `release.properties` file, where the user inputs values for the library properties text file. All fields are mapped directly, and the `version` in the Gradle build file is mapped to `prettyVersion` in the properties file.
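As a sketch of that mapping (illustrative Python, not the template's actual Gradle code; the function name `build_library_properties` is hypothetical): every field from `release.properties` is copied through unchanged, and the Gradle `version` supplies `prettyVersion`, so the version is only ever defined once.

```python
def build_library_properties(release_props: dict, gradle_version: str) -> str:
    """Render library properties text from release.properties fields plus
    the single source of truth for the version (the Gradle build file)."""
    props = dict(release_props)              # all fields are mapped directly
    props["prettyVersion"] = gradle_version  # version defined once, in Gradle
    return "\n".join(f"{key}={value}" for key, value in props.items())

text = build_library_properties(
    {"name": "My Library", "sentence": "Does one thing well."},
    "1.0.2",
)
print(text)
```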