
Proposal: Standardize human-usable error codes for reference & testing #15

ahelwer commented Nov 25, 2024

It is common in language tooling for each possible error to have an associated searchable code. For example, with the Rust compiler:

We generally try to assign each error message a unique code like E0123. These codes are defined in the compiler in the diagnostics.rs files found in each crate, which basically consist of macros. All error codes have an associated explanation: new error codes must include them. Note that not all historical (no longer emitted) error codes have explanations.

This is also the case for the .NET SDK, the MSVC compiler, and probably quite a few other compiler- or interpreter-like projects. This is a distinct concept from process exit codes; the error code is output as part of the human-readable error text sent to stdout or stderr. It would be wonderful for the TLA+ tools to develop & adopt such a standard, starting with the Java tools.

The benefits of this scheme are as follows:

  1. Improved testing: testing error output is often quite difficult. How do you test that the tool does the right thing with more stringency than just checking that it fails? The tool could easily fail for the wrong reason, or future changes could make it do so. On the other hand, if you encode the exact expected error verbiage, the test becomes fragile and the work doubles whenever a message is reworded. Simply checking that an expected error code appears somewhere in the error output is a wonderful compromise (a test along these lines is sketched after this list).
  2. Improved tool usability: documentation websites can have an error code section with expanded explanations to help users understand tricky error cases. In this way we enable ourselves to document outside the well-trodden happy path, and users are more often able to help themselves.
  3. Improved library usability: programs consuming the tools as a library instead of a CLI application can receive a standardized error code from a function call, enabling them to better serve their users if desired.
  4. Standardization & implementation independence: the error codes will form a standardized catalog of erroneous behavior that any future alternative implementation of the TLA+ tools can be expected to satisfy. This turns everyday development of the Java tools into a generator of powerful regression tests that slowly accrue over time, making both the TLA+ standard and any alternative tools stronger.
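
To make benefit 1 concrete, here is a minimal JUnit 4 sketch of such a test. Everything specific in it is hypothetical: the spec name, the code TLC1234, and the convention of invoking tla2tools.jar directly are placeholders for whatever the real test harness and code catalog would use.

```java
import static org.junit.Assert.assertTrue;

import java.io.ByteArrayOutputStream;
import java.io.InputStream;

import org.junit.Test;

public class ErrorCodeTest {

    // Run a command line and return everything it wrote to stdout/stderr.
    private static String run(String... command) throws Exception {
        Process process = new ProcessBuilder(command)
                .redirectErrorStream(true)
                .start();
        ByteArrayOutputStream output = new ByteArrayOutputStream();
        try (InputStream in = process.getInputStream()) {
            in.transferTo(output);
        }
        process.waitFor();
        return output.toString();
    }

    @Test
    public void deadlockReportCarriesErrorCode() throws Exception {
        // "DeadlockingSpec.tla" is a placeholder for a small spec known to
        // deadlock; "TLC1234" is a made-up code standing in for whatever
        // code the standard assigns to that failure.
        String output = run("java", "-jar", "tla2tools.jar", "DeadlockingSpec.tla");

        // Assert only that the code appears somewhere in the output; the
        // exact wording of the message can change without breaking the test.
        assertTrue("expected error code TLC1234 in:\n" + output,
                output.contains("TLC1234"));
    }
}
```

The key point is the single contains assertion: the test pins down why the tool failed without pinning down the wording of the message.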

Happily, the Java tools are already well on their way to starting this process! The file org.lamport.tlatools/src/tlc2/output/EC.java already defines a great number of error codes. The remaining work is thus:

  1. Expose these error codes in the actual TLC error output (one possible output convention is sketched after this list)
  2. Extract these error codes into a table in the tlaplus-standard repository
  3. Create an implementation-independent suite of tests for every error code associating it with a given input, and run these test cases against the Java tools
  4. Expand the set of error codes over time, until every possible error in the entire Java tools codebase has a unique associated error code, and errors newly reported by users are assigned codes as they surface
  5. Create documentation pages on docs.tlapl.us for every single error code, with examples, links to related conceptual content, and quick tips on fixing it
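
As a sketch of what work item 1 could look like, the snippet below shows one possible convention for surfacing a code in the human-readable output, mirroring rustc's error[E0123] prefix. The constant name, its value, and the TLCnnnn formatting are assumptions for illustration, not the actual contents of tlc2.output.EC or an agreed standard.

```java
// Illustrative only: the constant below stands in for one of the integer
// constants defined in org.lamport.tlatools/src/tlc2/output/EC.java, and the
// "TLCnnnn" prefix is just one possible convention for surfacing it.
public final class ErrorCodeFormatter {

    // Stand-in for an EC.java-style constant; name and value are made up here.
    public static final int TLC_DEADLOCK_REACHED = 2101;

    // Prepend a searchable code to the human-readable message.
    public static String format(int code, String message) {
        return String.format("Error TLC%04d: %s", code, message);
    }

    public static void main(String[] args) {
        // Prints: Error TLC2101: Deadlock reached.
        System.err.println(format(TLC_DEADLOCK_REACHED, "Deadlock reached."));
    }
}
```

Emitting the code as plain text like this keeps it greppable by users and matchable by tests, and the same integer value could be returned directly to library consumers (benefit 3).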

This would be a substantial chunk of work, but I believe it would be one of the best forcing functions for greatly improving the reliability of the tools.
