Usually validation errors have some systemic cause (e.g. a mistake in an RDFization process).
In that case you don't get a couple of ValidationResults for a given reason, you get thousands or millions.
It is more practical to be able to set some limit(s) on the report. rdf4j has this ability (total results, and results per constraint): https://rdf4j.org/documentation/programming/shacl/#limiting-the-validation-report
After the data producer fixes the root cause of the errors in this limited report, they can rerun the validator, fix another batch of errors, and so on.
This is similar to how many compilers abort after encountering 1,000 errors: the programmer is unlikely to fix more than 1,000 in one pass, and because one error often precipitates several follow-on errors, it is better to work in digestible chunks.
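To make the idea concrete, here is a minimal, language-neutral sketch of such capping logic (a total cap plus a per-constraint cap over a stream of results). This is an illustration only; the function name and data shape are invented for this example and are not rdf4j's actual API.

```python
from collections import defaultdict

def cap_validation_results(results, limit_total=None, limit_per_constraint=None):
    """Truncate a stream of (constraint_id, result) pairs, keeping at most
    limit_per_constraint results per constraint and limit_total overall."""
    kept = []
    per_constraint = defaultdict(int)
    for constraint_id, result in results:
        # Stop entirely once the overall cap is reached.
        if limit_total is not None and len(kept) >= limit_total:
            break
        # Skip results for constraints that already hit their cap.
        if limit_per_constraint is not None and per_constraint[constraint_id] >= limit_per_constraint:
            continue
        per_constraint[constraint_id] += 1
        kept.append((constraint_id, result))
    return kept

# Hypothetical input: 5 results for a minCount violation, 3 for a datatype violation.
results = [("minCount", i) for i in range(5)] + [("datatype", i) for i in range(3)]
capped = cap_validation_results(results, limit_total=6, limit_per_constraint=4)
# keeps 4 minCount results and 2 datatype results
```

The per-constraint cap is what keeps one systemic mistake from drowning out every other kind of error in the report.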
Yes, TopBraid has the same options. But this may be outside the scope of standardization unless we define some sort of validation API similar to the SPARQL protocol.