Running puppet agent after a VSCode package update, I noticed minimize_access going rather deep down the rabbit hole /usr/bin/X11/X11/X11/...

After some googling it seems that /usr/bin/X11 linking to /usr/bin/ is the regular way things are set up on Ubuntu, but I'm a bit worried about the minimize_access script descending needlessly into this infinite cycle. It would be nice either not to follow the X11 link at all (detecting that you are already going through the link's destination), or to follow it no more than once, i.e. I would expect those permissions to be adjusted on /usr/bin/code or /usr/bin/X11/code.
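For illustration, here is a minimal reproduction of the runaway paths in plain Python (the sandbox layout and the `X11 -> .` link are just stand-ins for the real /usr/bin; the depth cap exists only so the demo terminates):

```python
import os
import tempfile

# Build a sandbox mimicking the Ubuntu layout: bin/X11 is a symlink back into bin/ itself.
root = tempfile.mkdtemp()
bin_dir = os.path.join(root, "bin")
os.mkdir(bin_dir)
open(os.path.join(bin_dir, "code"), "w").close()
os.symlink(".", os.path.join(bin_dir, "X11"))  # X11 -> . (the directory containing it)

def walk(path, depth=0):
    """Naive recursive walk that follows directory symlinks."""
    print(path)
    if depth > 3:                  # cap only so the demo stops; a real walker would not
        return
    for entry in sorted(os.listdir(path)):
        full = os.path.join(path, entry)
        if os.path.isdir(full):    # os.path.isdir() follows symlinks, so X11 counts
            walk(full, depth + 1)

walk(bin_dir)
# Prints bin, bin/X11, bin/X11/X11, bin/X11/X11/X11, ... without the depth cap,
# this never terminates.
```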
This seems to be a regression/change from #64 done in 2016 (I would guess it was an intentional change, and thus following symlinks is a feature?).
In an ideal world, caching the realpath of directories that have already been processed and refusing to process the same one twice would make this more robust (and maybe more efficient too? I guess it depends on how cheaply the real/canonical path can be obtained; in the worst case this might increase the overall load).
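A minimal sketch of that caching idea, in Python rather than the module's actual code (`minimize_walk` and the permission hook are hypothetical names for illustration):

```python
import os

def minimize_walk(path, seen=None):
    """Recurse through path, visiting each real directory at most once.

    `seen` caches os.path.realpath() of every directory already handled,
    so a symlink loop like /usr/bin/X11 -> /usr/bin is entered at most once.
    """
    if seen is None:
        seen = set()
    real = os.path.realpath(path)   # canonical path; resolves all symlinks
    if real in seen:
        return                      # already processed via another route
    seen.add(real)

    for entry in os.listdir(path):
        full = os.path.join(path, entry)
        if os.path.isdir(full):
            minimize_walk(full, seen)
        else:
            pass  # e.g. adjust permissions here, as minimize_access does
```

With the cache, the extra work is bounded at one realpath resolution per directory, instead of the unbounded recursion the loop currently causes.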
(The version used seems to be hardening-os_hardening 2.4.0 - I'm not sure how to verify that; I got it from the listing of dependencies.)
edit: #116 seems to be another related change made in the past; I didn't notice this one before.