tl;dr: One could check whether property `min`/`max` values make sense, depending on the data type (and `offset`/`scale`) that has been defined for the property.

Brought up via CesiumGS/3d-tiles#711:
The validator currently checks whether raw metadata values obey the limits of the component data type. For example, when a property has the type `UINT16`, then a value of 99999 will cause an error for not being in the valid [0, 65535] range.
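For illustration, a minimal sketch of this kind of raw range check could look like the following. The `RANGES` table and the function name are hypothetical, and not the actual validator code:

```js
// Hypothetical sketch of the existing check: raw metadata values
// must lie within the limits of their component data type.
const RANGES = {
  INT8: [-128, 127],
  UINT8: [0, 255],
  INT16: [-32768, 32767],
  UINT16: [0, 65535],
};

function isInComponentTypeRange(componentType, rawValue) {
  const [lo, hi] = RANGES[componentType];
  return rawValue >= lo && rawValue <= hi;
}

console.log(isInComponentTypeRange("UINT16", 99999)); // false -> validation error
console.log(isInComponentTypeRange("UINT16", 12345)); // true
```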
Things become more tricky for `min`/`max` values. The documentation there currently says:
> Maximum allowed value for the property. [ ... ] This is the maximum of all property values, after the transforms based on the `normalized`, `offset`, and `scale` properties have been applied.
There could be some validation for this. For example, for a `UINT8` value, a maximum of `max=-1` does not make sense. What makes this less trivial is that `offset` and `scale` have to be taken into account: for a `UINT8` value with `scale=-1`, a maximum of `max=-1` *does* make sense. Even this could be validated. The pseudocode for this check could roughly be:
```js
// The final value of a property is computed as
//   value = rawValue * scale + offset
// and therefore
//   rawValue = (value - offset) / scale
// So one has to check:
const rawMax = (max - offset) / scale;
const rawMin = (min - offset) / scale;
assert(rawValue <= rawMax);
assert(rawValue >= rawMin);
```
(with some sign hassle for negative `scale` values that is omitted here; see the sketch below)
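A slightly more complete sketch of how the sign hassle could be handled, by swapping the raw bounds when `scale` is negative. The function names here are hypothetical, and this is only an assumption about how such a check could be structured:

```js
// Sketch of the min/max plausibility check, including the handling
// of negative `scale` values (hypothetical, not the validator API).
function computeRawRange(min, max, offset, scale) {
  // Invert value = rawValue * scale + offset
  const a = (min - offset) / scale;
  const b = (max - offset) / scale;
  // A negative scale flips the order of the raw bounds
  return a <= b ? [a, b] : [b, a];
}

function rawValueIsInRange(rawValue, min, max, offset, scale) {
  const [rawMin, rawMax] = computeRawRange(min, max, offset, scale);
  return rawValue >= rawMin && rawValue <= rawMax;
}

// Example: the UINT8 property with scale=-1 and max=-1 from above.
// For min=-255, max=-1, the raw range is [1, 255]:
console.log(rawValueIsInRange(1, -255, -1, 0, -1)); // true
console.log(rawValueIsInRange(0, -255, -1, 0, -1)); // false
```

One could then additionally check whether the computed raw range intersects the valid range of the component type at all, which would be the actual plausibility check for `min`/`max`.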
But even this will find its limits, specifically for "large" values (obviously, but not exclusively, for `UINT64` - cf. #251).
The specification does not make statements about the possible values for `offset` and `scale` that go beyond the general, overarching note that ~"values larger than 2^53 may reduce portability". But for the validation, the results of computations with these values become relevant. For example, there may be a property with
```
offset = 2^25
scale  = 2^25
min    = 2^25
max    = 2^50
```
Computing the valid `rawMin`/`rawMax` values based on this, and checking the validity of a `rawValue`, involves many places where the limited precision could cause false validation errors. One way of addressing that might be to use an arbitrary-precision decimal library. But even if the computations took place with arbitrary precision, one would still have to perform checks against `bigint` values at the right place, to make sure that the validation is correct even for values with an integer component type.
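As a small illustration of why the `bigint` checks matter (with assumed values, not taken from the spec): a JavaScript `number` cannot even represent the `UINT64` maximum exactly, so comparisons near that limit silently lose information, while `BigInt` comparisons stay exact:

```js
// 2^64 - 1 is the largest UINT64 value, but converting it to a
// JS number rounds it up to 2^64:
const u64Max = 2n ** 64n - 1n;
console.log(Number(u64Max) === 2 ** 64); // true - precision is lost

// Comparisons in BigInt space remain exact:
console.log(2n ** 64n - 1n <= u64Max); // true
console.log(2n ** 64n <= u64Max);      // false
```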