Dump optimized thresholds for buildings as a yaml file #109
@@ -0,0 +1,47 @@
import os
import shutil
from pathlib import Path

import hydra

from lidar_prod.tasks.building_validation import thresholds
from lidar_prod.tasks.building_validation_optimization import (
    BuildingValidationOptimizer,
)
from lidar_prod.tasks.utils import BDUniConnectionParams

TMP_DIR = Path("tmp/lidar_prod/tasks/building_validation_optimization")


def setup_module(module):
    try:
        shutil.rmtree(TMP_DIR)
    except FileNotFoundError:
        pass
    TMP_DIR.mkdir(parents=True, exist_ok=True)

def test_BuildingValidationOptimizer_run(hydra_cfg):
    config = hydra_cfg.copy()
    opt_cfg = config.building_validation.optimization
    opt_cfg.paths.input_las_dir = "tests/files/building_optimization_data/preds"
    opt_cfg.paths.results_output_dir = str(TMP_DIR / "run")

    bvo: BuildingValidationOptimizer = hydra.utils.instantiate(
        config.building_validation.optimization
    )

    bd_uni_connection_params: BDUniConnectionParams = hydra.utils.instantiate(
        hydra_cfg.bd_uni_connection_params
    )
    bvo.bv.bd_uni_connection_params = bd_uni_connection_params
    bvo.run()

    th_yaml = opt_cfg.paths.building_validation_thresholds

    assert os.path.isfile(th_yaml)
    assert isinstance(thresholds.load(th_yaml), thresholds)

    for filename in os.listdir(opt_cfg.paths.input_las_dir):
        assert (TMP_DIR / "run" / "prepared" / filename).is_file()
        assert (TMP_DIR / "run" / "updated" / filename).is_file()

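The key new assertion above is that the optimizer run leaves behind a thresholds file that can be loaded back with thresholds.load(). As a rough illustration of the idea only (not the PR's actual implementation: the dump method name, the example field names, and the YAML calls below are all assumptions), a dataclass can be round-tripped through YAML like this:

from dataclasses import asdict, dataclass

import yaml


@dataclass
class ExampleThresholds:
    # Hypothetical fields; the real `thresholds` dataclass in
    # lidar_prod.tasks.building_validation may define different ones.
    min_confidence_multiclass: float = 0.5
    min_frac_confirmation: float = 0.6

    def dump(self, path: str) -> None:
        # Write the dataclass fields to a human-readable YAML file.
        with open(path, "w") as f:
            yaml.safe_dump(asdict(self), f)

    @classmethod
    def load(cls, path: str) -> "ExampleThresholds":
        # Rebuild the dataclass from the YAML mapping.
        with open(path) as f:
            return cls(**yaml.safe_load(f))

Dumping to YAML rather than a binary format keeps the optimized thresholds human-readable and easy to version alongside the configuration.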
Review thread on test_BuildingValidationOptimizer_run:

- This test, while simpler, seems redundant with other tests for the BVO in https://github.com/IGNF/lidar-prod/blob/yaml-thresholds/tests/lidar_prod/test_optimization.py. It's probably a good reason to merge the two, and to assert the existence of output files like you do in this version.
- I don't understand the point of merging both tests: if this test fails, we know it's the save/load function; if the other fails, we know it's something else. What would be the point of merging?
- The point is just to avoid duplicate code, by adding this new assertion on the thresholds file to the already existing test.
- However, I'd rather move the test on BuildingValidationOptimizer to test_building_validation_optimization.py, to ensure consistency between the test files directory and the library files directory.

Review thread on the hardcoded TMP_DIR:

- Could you consider using a temporary dir obtained using tempfile instead? Or is there a reason to hardcode this?
- The reason I decided to hardcode this is to be able to have a look at the generated results (mainly in case the test fails).
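For reference, here is a minimal sketch of the tempfile-style alternative discussed above. It uses pytest's built-in tmp_path fixture rather than the tempfile module directly; the fixture and config fields mirror the test in this diff, and this is only an illustration, not a change that was agreed on:

def test_BuildingValidationOptimizer_run(hydra_cfg, tmp_path):
    # tmp_path is a per-test pathlib.Path created and cleaned up by pytest,
    # so the module-level TMP_DIR and setup_module would no longer be needed.
    config = hydra_cfg.copy()
    opt_cfg = config.building_validation.optimization
    opt_cfg.paths.input_las_dir = "tests/files/building_optimization_data/preds"
    opt_cfg.paths.results_output_dir = str(tmp_path / "run")
    # ... rest of the test unchanged, with TMP_DIR / "run" replaced by tmp_path / "run"

Note that pytest keeps the most recent temporary directories on disk (and --basetemp pins their location), so the generated results can still be inspected after a failed run, which addresses the reason given for hardcoding the path.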