diff --git a/README.md b/README.md
index 4f6f8bb..2729461 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,9 @@
# gref4hsi - a toolchain for the georeferencing and orthorectification of hyperspectral pushbroom data.
-This software was made with a special emphasis on georeferencing and orthorectification of underwater hyperspectral data close-range. However, it is equally simple to use for airborne data, and probably even for satellite imagery (although modified approaches including analytical ellipsoid intersection may be better). A coregistration module might be added in the future to flexibly improve image alignment.
+This software was made with a special emphasis on georeferencing and orthorectification of hyperspectral imagery from drones and underwater robots. However, it is equally simple to use for airborne data, and probably even for satellite imagery (although modified approaches including analytical ellipsoid intersection may be better). There is also a coregistration module (at beta stage) which, given an accurate RGB orthomosaic, can optimize geometric parameters (static or time-varying) to align the data.
+The functionality in bullet form is:
+* georeference.py: The software currently supports direct georeferencing through CPU-accelerated ray tracing of pushbroom measurements onto terrain files, including 3D triangular meshes (\*.ply), 2.5D raster DEMs/DSMs (e.g. \*.tif) and geoid models.
+* orthorectification.py: The software performs orthorectification (a form of image resampling) to any user-specified projection (by EPSG code) and resolution. The default resampling strategy is north-east oriented rasters, but the software also supports memory-optimal raster orientations (smallest bounding rectangle). Resampling essentially amounts to figuring out where to put measurements in a geographic grid. The software resamples the data cube and an RGB composite of the cube, as well as ancillary data like intersection/sun geometries, pixel coordinates (in the spatial dimension of the imager) and timestamps.
+* coregistration.py: Given a reference RGB orthomosaic (e.g. from photo-matching) covering the area of the hyperspectral image, the coregistration module does SIFT-matching with the RGB composite (handling any differences in projection and raster transforms). The module also has an optimization component that uses the matches to minimize the reprojection error. The user can select/toggle which parameters to optimize, including boresight angles, lever arms, camera model parameters or time-varying errors in position/orientation. Notably, the module requires that the pixel coordinates and timestamps are resampled, as done automatically in the orthorectification step.

This README is a bit verbose, but is meant to act as a "tutorial" as well, and I recommend trying to understand as much as possible.
@@ -7,6 +11,7 @@ This README is a bit verbose, but is meant to act as a "tutorial" as well, and I
The easiest approach is to use conda/mamba to get gdal and pip to install the package with the following commands (has been tested for python 3.8-3.10):
```
conda create -n my_env python=3.10 gdal rasterio
+conda activate my_env
pip install gref4hsi
```
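A quick sanity check of the environment (my suggestion, not part of the package) is to confirm that the GDAL-dependent libraries import cleanly before moving on:
```
# Minimal check that the conda environment works (illustrative)
from osgeo import gdal
import rasterio

print("GDAL:", gdal.__version__, "| rasterio:", rasterio.__version__)
```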
Seamless installation for windows is, in my experience, a bit more challenging because of gdal and rasterio. For python 3.10 you could run
```
conda create -n my_env python=3.10
+conda activate my_env
pip install GDAL-3.4.3-cp310-cp310-win_amd64.whl
pip install rasterio-1.2.10-cp310-cp310-win_amd64.whl
pip install gref4hsi
```
@@ -91,22 +97,26 @@ Moreover, the f represent the camera's focal length in pixels, width is the numb
u = np.random.randint(1, width + 1) # Pixel numbers left to right (value from 1 to width)

# Pinhole part
-x_norm_pinhole = (u - u_c) / f
+x_norm_pinhole = (u - cx) / f

# Distortion part
-x_norm_nonlin = -(k1 * ((u - u_c) / 1000) ** 5 + \
-                  k2 * ((u - u_c) / 1000) ** 3 + \
-                  k3 * ((u - u_c) / 1000) ** 2) / f
+x_norm_nonlin = -(k1 * ((u - cx) / 1000) ** 5 + \
+                  k2 * ((u - cx) / 1000) ** 3 + \
+                  k3 * ((u - cx) / 1000) ** 2) / f

x_norm_hsi = x_norm_pinhole + x_norm_nonlin

X_norm_hsi = np.array([x_norm_hsi, 0, 1]) # The pixel ray in the hsi frame
```
-Of course most these parameters are hard to guess for a HSI and there are two simple ways of finding them. The first way is to ignore distortions and if you know the angular field of view (AFOV). Then you can calculate:
+Of course most of these parameters are hard to guess for an HSI and there are two simple ways of finding them. The first way is to ignore distortions and assume you know the angular field of view (AFOV). Then you can calculate:

$$f = \frac{width}{2\tan(AFOV/2)}$$

-Besides that, you set cx=width/2, and remaining k's to zero. The second approach is if you have an array describing the FOV (often from the manufacturer).
+
+
+Besides that, you set cx=width/2, and the remaining k's to zero.
+
+The second approach to get the camera model is if you have an array describing the FOV (often from the manufacturer). In our example above that would amount to a 512-length array, e.g. in degrees
```
from gref4hsi.specim_parsing_utils import Specim
@@ -131,7 +141,6 @@ param_dict['tz'] = tz

# Write dictionary to *.xml file
CalibHSI(file_name_cal_xml= config['Absolute Paths']['hsi_calib_path'],
-         config = config,
         mode = 'w',
         param_dict = param_dict)
```
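To make the AFOV shortcut concrete, here is a minimal sketch (the width and AFOV values are made up, and `config`/`CalibHSI` are assumed from the example above; the param_dict keys mirror the CalibHSI fields):
```
import numpy as np

width = 512                          # Number of spatial pixels (example value)
afov = np.deg2rad(40)                # Angular field of view (example value)

f = width / (2 * np.tan(afov / 2))   # Focal length in pixels

# No-distortion camera model: principal point at the center, k's set to zero
param_dict = {'rx': 0, 'ry': 0, 'rz': 0,   # Boresight angles
              'tx': 0, 'ty': 0, 'tz': 0,   # Lever arms
              'cx': width / 2,
              'f': f,
              'k1': 0, 'k2': 0, 'k3': 0}

CalibHSI(file_name_cal_xml=config['Absolute Paths']['hsi_calib_path'],
         mode='w',
         param_dict=param_dict)
```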
@@ -139,13 +148,12 @@
### Terrain files

-There are three allowable types ("model_export_type" in the configuration file), namely "ply_file" (a.k.a. mesh files), "dem_file" and "geoid". Example geoids are added under data/world/geoids/ including the global egm08 and a norwegian geoid. Your local geoid can probably be found from [geoids](https://www.agisoft.com/downloads/geoids/) or similar. Just ensure you add this path to the 'Absolute Paths' section in the configuration file. Similarly, if you choose "model_export_type = dem_file", you can usa a terrain model from from your area as long as you add the path to the file in the 'Absolute Paths'. Remember that if the local terrain (dem file) is given wrt the geoid, you can plus them together in e.g. QGIS or simply add an approximate offset in python with rasterio. This is because the software expects DEMs giving the ellipsoid height. Lastly, if the "ply_file" is the desired option a path to this file must be added to the config file.
+There are three allowable types ("model_export_type" in the configuration file), namely "ply_file" (a.k.a. mesh files), "dem_file" and "geoid". Example geoids are added under data/world/geoids/, including the global egm08 and a Norwegian geoid. Your local geoid can probably be found at [geoids](https://www.agisoft.com/downloads/geoids/) or similar. Just ensure you add this path to the 'Absolute Paths' section in the configuration file. Similarly, if you choose "model_export_type = dem_file", you can use a terrain model from your area as long as you add the path to the file in the 'Absolute Paths'. Remember that if the local terrain (dem file) is given wrt the geoid, you can add them together in e.g. QGIS or simply add an approximate offset in python with rasterio. This is because the software expects DEMs giving the ellipsoid height of the terrain. Lastly, if the "ply_file", or 3D triangular mesh, is the desired option, a path to this file must be added to the config file. It should be fairly easy to use any triangular mesh format even though the option suggests "\*.ply".

### The h5 files
This format must comply with the h5 paths in the configuration file. All sections starting with "HDF.xxxx" are related to these paths. I realize that it is inconvenient to transform the format if your recorded data is in NETCDF, ENVI etc., but in making this software I chose H5/HDF because of legacy and because of the flexibility of the mini file system. The input structure (only mandatory entries) of the H5 files under mission_dir/Input/H5 is as follows (printed using h5tree in Linux):
```
2022-08-31_0800_HSI_transectnr_0_chunknr_0.h5
-2022-08-31_0800_HSI_transectnr_0_chunknr_0.h5
├── processed
│   └── radiance
│       ├── calibration
@@ -266,6 +274,16 @@ parsing_utils.export_model(config_file_mission)

# Georeference the line scans of the hyperspectral imager. Utilizes parsed data
georeference.main(config_file_mission)

+# Orthorectify/resample datacube, RGB-composite and ancillary data
orthorectification.main(config_file_mission)
+
+# Optional: coregistration
+# Match the RGB composite to the reference and, for each matched feature pair, assemble a ground control point (gcp) list containing:
+# the 3D reference point (from the reference), the position/orientation of the vehicle (using the resampled time) and the pixel coordinate (using the resampled pixel coordinates)
+coregistration.main(config_file_mission, mode='compare')
+
+# The gcp list allows reprojecting reference points and evaluating the reprojection error,
+# which is used to optimize static geometric parameters (e.g. boresight...) or dynamic geometric parameters (time-varying nav errors).
+coregistration.main(config_file_mission, mode='calibrate')
```
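After mode='calibrate' has run, the adjusted poses are written to the HDF.coregistration paths configured in the diff below. A minimal sketch for inspecting them (assuming h5py is installed; the file name is illustrative):
```
import h5py

# Hypothetical transect file; use one of your own H5 files under mission_dir/Input/H5
with h5py.File('2022-08-31_0800_HSI_transectnr_0_chunknr_0.h5', 'r') as f:
    position = f['processed/coreg/position_ecef'][()]      # ECEF positions, shape (n, 3)
    quaternion = f['processed/coreg/quaternion_ecef'][()]  # ECEF orientations, shape (n, 4)

print(position.shape, quaternion.shape)
```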
diff --git a/data/config_examples/configuration_specim.ini b/data/config_examples/configuration_specim.ini
index 301fc88..2bee305 100644
--- a/data/config_examples/configuration_specim.ini
+++ b/data/config_examples/configuration_specim.ini
@@ -3,9 +3,6 @@ mission_dir = D:/HyperspectralDataAll/HI/2020-07-01-14-34-57-NewYorkCity/
pose_export_type = h5_embedded
model_export_type = dem_file
sensor_type = Specim FX10
-offset_x = -1
-offset_y = -1
-offset_z = -1
max_ray_length = 200
blue_wave_length = 460
green_wave_length = 530
@@ -55,7 +52,6 @@ is_calibrated = True
folder = processed/nav/
quaternion_ecef = processed/nav/quaternion_ref_ecef
position_ecef = processed/nav/position_ref_ecef
-pos0 = processed/nav/position_offset
timestamp = processed/nav/timestamp_hsi

[HDF.calibration]
diff --git a/data/config_examples/configuration_template.ini b/data/config_examples/configuration_template.ini
index 50a0a05..d0f552f 100755
--- a/data/config_examples/configuration_template.ini
+++ b/data/config_examples/configuration_template.ini
@@ -5,9 +5,6 @@ mission_dir = D:/Specim/Missions/2022-08-31-Remøy/2022-08-31_0800_HSI/processe
pose_export_type = h5_embedded
model_export_type = geoid # Change to dem_file if you have a local terrain model (a.k.a. 2.5D), or ply_file if you have a *.ply mesh model of the terrain (a.k.a. 3D)
sensor_type = Specim FX10 # Change to sensor model name
-offset_x = 2951833.126067576
-offset_y = 293161.63041865156
-offset_z = 5627566.333684149
max_ray_length = 200 # Change to roughly 3x flying height for your platform (better too large than too small)
blue_wave_length = 460 # Wavelength for making rgb composites
green_wave_length = 530 # Wavelength for making rgb composites
@@ -52,7 +49,6 @@ is_calibrated = True # Recommended option
folder = processed/nav/
quaternion_ecef = processed/nav/quaternion_ref_ecef
position_ecef = processed/nav/position_ref_ecef
-pos0 = processed/nav/position_offset
timestamp = processed/nav/timestamp_hsi

[HDF.calibration] # Optionally edit
@@ -90,6 +86,10 @@ ancillary_suffix = _anc
nodata = -9999
raster_transform_method = north_east # Can be set to minimal_rectangle, giving the memory-optimal raster transform, but these rotated rasters are unfortunately not well supported by downstream tools

+[HDF.coregistration]
+position_ecef = processed/coreg/position_ecef # The modified position after coregistration
+quaternion_ecef = processed/coreg/quaternion_ecef # The modified orientation after coregistration
+
[Absolute Paths] # Edit these according to need.
# For this example, the top folder is D:/Specim/Missions/2022-08-31-Remøy/2022-08-31_0800_HSI/processed/
calib_folder = D:/Specim/Missions/2022-08-31-Remøy/2022-08-31_0800_HSI/processed/Input/Calib/
diff --git a/data/config_examples/configuration_uhi.ini b/data/config_examples/configuration_uhi.ini
index 1f471ea..47ccb7c 100644
--- a/data/config_examples/configuration_uhi.ini
+++ b/data/config_examples/configuration_uhi.ini
@@ -5,9 +5,6 @@ model_export_type = altimeter
sensor_type = UHI-4
tex_path = None
modeltype = DEM
-offset_x = -1
-offset_y = -1
-offset_z = -1
max_ray_length = 20
blue_wave_length = 460
green_wave_length = 530
@@ -63,7 +60,6 @@ is_calibrated = True
folder = processed/nav/
quaternion_ecef = processed/nav/quaternion_ref_ecef
position_ecef = processed/nav/position_ref_ecef
-pos0 = processed/nav/pos0
timestamp = processed/nav/timestamp_hsi

[HDF.calibration]
diff --git a/gref4hsi/scripts/coregistration.py b/gref4hsi/scripts/coregistration.py
new file mode 100644
index 0000000..f1625fe
--- /dev/null
+++ b/gref4hsi/scripts/coregistration.py
@@ -0,0 +1,823 @@
+import configparser
+import os
+import time
+
+import numpy as np
+import spectral as sp
+
+from gref4hsi.utils.gis_tools import GeoSpatialAbstractionHSI
+from gref4hsi.utils.parsing_utils import Hyperspectral
+import gref4hsi.utils.geometry_utils as geom_utils
+from gref4hsi.utils.geometry_utils import CalibHSI, GeoPose
+
+from scipy.spatial.transform import Rotation as RotLib
+from scipy.optimize import least_squares
+from scipy.interpolate import interp1d
+import pandas as pd
+from scipy.sparse import lil_matrix
+
+
+def numerical_jacobian(fun, param, eps=1e-6, **kwargs):
+    """
+    Computes the Jacobian of a function using central differences.
+
+    Args:
+        fun: The function for which to compute the Jacobian.
+        param: The parameters of the function.
+        eps: A small epsilon value for numerical differentiation.
+
+    Returns:
+        A numpy array representing the Jacobian.
+    """
+    n_param = len(param)
+    n_fun = len(fun(param, **kwargs))
+    jacobian = np.zeros((n_fun, n_param))
+    for i in range(n_param):
+        param_pos = param.copy()
+        param_neg = param.copy()
+        param_pos[i] += eps
+        param_neg[i] -= eps
+        jacobian[:, i] = (fun(param_pos, **kwargs) - fun(param_neg, **kwargs)) / (2 * eps)
+    return jacobian
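To illustrate what numerical_jacobian returns, here is a toy call (not part of the module) on a two-residual function of three parameters:
```
import numpy as np

def residuals(param):
    # Two residuals in three parameters
    return np.array([param[0] * param[1], param[2] ** 2])

param = np.array([1.0, 2.0, 3.0])
J = numerical_jacobian(residuals, param)

print(J.shape)      # (2, 3)
print(np.round(J))  # [[2. 1. 0.], [0. 0. 6.]]
```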
+def interpolate_time_nodes(time_from, value, time_to, method = 'linear'):
+    """Interpolates the parameters (values) at the time nodes (time_from) to any queryable array of times (time_to)
+
+    :param time_from: The time of the time_nodes (error parameters)
+    :type time_from: ndarray (n,)
+    :param value: The values of the position and orientation errors at the time_nodes
+    :type value: ndarray (6, n)
+    :param time_to: The queried time points for interpolation
+    :type time_to: ndarray (m,)
+    :param method: ['nearest', 'linear', 'slinear', 'quadratic', 'cubic'], defaults to 'linear'
+    :type method: str, optional
+    :return: The interpolated values at the queried time points
+    :rtype: ndarray (6, m)
+    """
+    # One simple option is to use Scipy's assortment of interpolation options. The sparsity pattern should be included somehow..
+    if method in ['nearest', 'linear', 'slinear', 'quadratic', 'cubic']:
+        return interp1d(time_from, value, kind=method)(time_to)
+
+
+def compose_pose_errors(param_pose_tot, time_nodes, unix_time_features, rot_body_ned, rot_ned_ecef, pos_body, time_interpolation_method):
+    """Takes a (6, n_node) array of errors, interpolates and composes (adds) them to the pose from the navigation data"""
+    n_features = len(unix_time_features)
+
+    # Interpolate to the right time
+    err_interpolated = interpolate_time_nodes(time_nodes,
+                                              param_pose_tot,
+                                              time_to = unix_time_features,
+                                              method=time_interpolation_method).transpose()
+
+    # The errors in the interpolated form (columns ordered as posx, posy, posz, roll, pitch, yaw)
+    param_err_pos = err_interpolated[:, 0:3]
+    param_err_eul_ZYX = np.vstack((err_interpolated[:, 5:6].flatten(),
+                                   np.zeros(n_features),
+                                   np.zeros(n_features))).transpose()
+
+    # To be left multiplied with attitude
+    rot_err_NED = RotLib.from_euler('ZYX', param_err_eul_ZYX, degrees=True)
+
+    # Modify the poses with the parametrically interpolated values:
+    pos_body += param_err_pos
+
+    # Recall that we only estimate yaw errors, i.e. column 5, which enters as the Z rotation above
+    rot_body_ned_corr = rot_err_NED * rot_body_ned
+
+    # Convert orientation to ECEF
+    rot_body_ecef = rot_ned_ecef * rot_body_ned_corr
+
+    return pos_body, rot_body_ecef
+
+
+def calculate_intrinsic_param(is_variab_param_intr, param, param0, as_calib_obj = False):
+    """Based on which parameters were toggled for adjustment, fills the 11-element intrinsic camera parameter vector
+
+    :param is_variab_param_intr: Whether each of the 11 intrinsic parameters is adjustable
+    :type is_variab_param_intr: ndarray (11,) bool
+    :param param: The current parameter vector (variable entries only, in order)
+    :type param: ndarray
+    :param param0: The initial/prior 11-element intrinsic parameter vector
+    :type param0: ndarray (11,)
+    :return: The filled 11-element parameter vector, or a calibration dictionary if as_calib_obj is True
+    :rtype: ndarray (11,) or dict
+    """
+    n_param_tot_static = len(is_variab_param_intr)
+    param_vec_total = np.zeros(n_param_tot_static)
+
+    # Static parameters (not time-varying)
+    param_count = 0
+    for i in range(n_param_tot_static):
+        if bool(is_variab_param_intr[i]):
+            # take from the parameter vector (assuming that they are ordered the same way)
+            param_vec_total[i] = param[param_count]
+            param_count += 1
+        else:
+            # Take fixed parameters from param0 - the initial parameter guess
+            param_vec_total[i] = param0[i]
+
+    if as_calib_obj:
+        # If data is to be returned as dictionary
+        calib_dict = {'rx': param_vec_total[0],
+                      'ry': param_vec_total[1],
+                      'rz': param_vec_total[2],
+                      'cx': param_vec_total[3],
+                      'f': param_vec_total[4],
+                      'k1': param_vec_total[5],
+                      'k2': param_vec_total[6],
+                      'k3': param_vec_total[7],
+                      'tx': param_vec_total[8],
+                      'ty': param_vec_total[9],
+                      'tz': param_vec_total[10]}
+
+        return calib_dict
+    else:
+        # If data is to be returned as vector
+        return param_vec_total
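A toy call of calculate_intrinsic_param (not part of the module), showing how a single toggled parameter, here the focal length at index 4, is taken from the optimizer's vector while the rest fall back to param0:
```
import numpy as np

is_variab = np.zeros(11, dtype=np.int64)
is_variab[4] = 1                       # Only the focal length f is adjustable

param0 = np.arange(11, dtype=float)    # Prior values 0..10 (placeholders)
param = np.array([999.0])              # The optimizer's current value for f

vec = calculate_intrinsic_param(is_variab, param, param0)
print(vec[4])   # 999.0; all other entries equal param0
```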
+def calculate_pose_param(is_variab_param_extr, is_variab_param_intr, param):
+    """Calculate a (6, n) pose error vector from the parameter vector
+
+    :param is_variab_param_extr: Boolean array saying which pose degrees of freedom are variable (to be adjusted)
+    :type is_variab_param_extr: ndarray (6,) bool
+    :param is_variab_param_intr: Boolean array saying which intrinsic/static parameters are variable (to be adjusted)
+    :type is_variab_param_intr: ndarray (11,) bool
+    :param param: parameter vector
+    :type param: ndarray (n_variab_param_static + 6*n,)
+    :return: The (6, n) pose error vector where n is the number of time nodes
+    :rtype: ndarray (6, n)
+    """
+    n_variab_param_static = sum(is_variab_param_intr==1)
+
+    # The number of pose degrees of freedom to estimate. Estimating posX, posY and yaw would give n_pose_dofs = 3
+    n_pose_dofs = is_variab_param_extr.sum()
+    param_time_var = param[n_variab_param_static:].reshape((n_pose_dofs, -1))
+
+    # The errors default to zero
+    param_pose_tot = np.zeros((6, param_time_var.shape[1]))
+
+    var_dof_count = 0
+    for i in range(6): # Iterate 3 positions and 3 orientations
+        # i=0 is posx, i=1 is posy, i=2 is posz, i=3 is roll, i=4 is pitch, and i=5 is yaw
+        if is_variab_param_extr[i]:
+            # If a degree of freedom is to be tuned, we insert the error parameters from the current vector
+            param_pose_tot[i,:] = param_time_var[var_dof_count,:]
+            var_dof_count += 1
+
+    return param_pose_tot
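The two helpers above combine as follows; a toy sketch (not part of the module) with yaw-only errors at three time nodes, interpolated to two feature times:
```
import numpy as np

time_nodes = np.array([0.0, 10.0, 20.0])
param_pose_tot = np.zeros((6, 3))          # Rows: posx, posy, posz, roll, pitch, yaw
param_pose_tot[5, :] = [0.0, 0.5, 1.0]     # Yaw errors in degrees at the nodes

time_features = np.array([5.0, 15.0])
err = interpolate_time_nodes(time_nodes, param_pose_tot, time_features, method='linear')

print(err[5, :])  # [0.25 0.75] - linearly interpolated yaw errors
```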
+def objective_fun_reprojection_error(param, features_df, param0, is_variab_param_intr, is_variab_param_extr, time_nodes, time_interpolation_method):
+    """Computes the reprojection-error residuals for a given parameter vector
+
+    :param param: The vector of adjustable parameters (static parameters first, then time-varying ones)
+    :type param: ndarray
+    :param features_df: The dataframe of matched features with poses and reference points
+    :type features_df: pandas.DataFrame
+    :param param0: The initial/prior 11-element intrinsic parameter vector
+    :type param0: ndarray (11,)
+    :param is_variab_param_intr: Array describing which of the 11 intrinsic parameters to calibrate (boresight, lever arm, cam calib)
+    :type is_variab_param_intr: ndarray (11,) bool
+    :param is_variab_param_extr: Array describing which of the 6 extrinsic pose time series to calibrate (posX, posY, posZ, roll, pitch, yaw)
+    :type is_variab_param_extr: ndarray (6,) bool
+    :param time_nodes: The times of the error nodes (None if no time-varying errors are estimated)
+    :type time_nodes: ndarray (n,) or None
+    :param time_interpolation_method: The interpolation method between time nodes, e.g. 'linear'
+    :type time_interpolation_method: str
+    :return: The reprojection-error residuals (x errors stacked on y errors)
+    :rtype: ndarray (2*n_features,)
+    """
+    param_vec_intr = calculate_intrinsic_param(is_variab_param_intr, param, param0)
+
+    # Boresight
+    rot_x = param_vec_intr[0]
+    rot_y = param_vec_intr[1]
+    rot_z = param_vec_intr[2]
+
+    # Principal point
+    cx = param_vec_intr[3]
+
+    # Focal length
+    f = param_vec_intr[4]
+
+    # Distortions
+    k1 = param_vec_intr[5]
+    k2 = param_vec_intr[6]
+    k3 = param_vec_intr[7]
+
+    # Lever arm
+    trans_x = param_vec_intr[8]
+    trans_y = param_vec_intr[9]
+    trans_z = param_vec_intr[10]
+
+    # Lever arm
+    trans_hsi_body = np.array([trans_z, trans_y, trans_x])
+
+    # Boresight vector
+    eul_ZYX_hsi_body = np.array([rot_z, rot_y, rot_x]) * 180 / np.pi
+
+    # Convert to rotation object for convenience
+    rot_hsi_body = RotLib.from_euler('ZYX', eul_ZYX_hsi_body, degrees=True)
+
+    # The position of the vehicle body wrt ECEF
+    pos_body = np.vstack((features_df['position_x'],
+                          features_df['position_y'],
+                          features_df['position_z'])).transpose()
+
+    # The quaternion of the body with respect to NED
+    quat_body_ned = np.vstack((features_df['quaternion_b_n_x'],
+                               features_df['quaternion_b_n_y'],
+                               features_df['quaternion_b_n_z'],
+                               features_df['quaternion_b_n_w'])).transpose()
+
+    # The rotation from NED to ECEF
+    quat_ned_ecef = np.vstack((features_df['quaternion_n_e_x'],
+                               features_df['quaternion_n_e_y'],
+                               features_df['quaternion_n_e_z'],
+                               features_df['quaternion_n_e_w'])).transpose()
+
+    # Convert to rotation object for convenience
+    rot_body_ned = RotLib.from_quat(quat_body_ned)
+    rot_ned_ecef = RotLib.from_quat(quat_ned_ecef)
+
+    # Whether to estimate time-varying errors
+    if time_nodes is None:
+        # Assumes that navigation-based poses are correct
+        rot_body_ecef = rot_ned_ecef * rot_body_ned
+    else:
+        # Calculate the 6 dof pose error parameters (rows of non-adjustable parameters are zero)
+        param_pose_tot = calculate_pose_param(is_variab_param_extr, is_variab_param_intr, param)
+
+        # The parametric errors represent a handful of nodes and must be interpolated to the feature times
+        unix_time_features = features_df['unix_time']
+
+        # We compose them with (add them to) the position/orientation estimates from the nav system
+        pos_body, rot_body_ecef = compose_pose_errors(param_pose_tot, time_nodes, unix_time_features, rot_body_ned, rot_ned_ecef, pos_body, time_interpolation_method)
+
+    # The reference points in ECEF (obtained from the reference orthomosaic)
+    points_world_reference = np.vstack((features_df['reference_points_x'],
+                                        features_df['reference_points_y'],
+                                        features_df['reference_points_z'])).transpose()
+
+    # We reproject the reference points to the normalized HSI image plane
+    X_norm = geom_utils.reproject_world_points_to_hsi_plane(trans_hsi_body,
+                                                            rot_hsi_body,
+                                                            pos_body,
+                                                            rot_body_ecef,
+                                                            points_world_reference) # reprojects to the same frame
+
+    # Reprojected observations in image plane
+    x_norm_reproj = X_norm[:, 0]
+    y_norm_reproj = X_norm[:, 1]
+
+    # The observation is by definition in the scanline where y_norm = 0.
+    # Using the pixel numbers corresponding to the features,
+    # we convert the pixel number to an x-coordinate on the normalized HSI image plane
+    pixel_nr = features_df['pixel_nr']
+    x_norm = geom_utils.compute_camera_rays_from_parameters(pixel_nr=pixel_nr,
+                                                            cx=cx,
+                                                            f=f,
+                                                            k1=k1,
+                                                            k2=k2,
+                                                            k3=k3)
+
+    # At the time (features_df["unix_time"]) of an observation, the feature satisfies y_norm=0 (not the case for y_norm_reproj).
+    y_norm = np.zeros(x_norm.shape)
+
+    # The difference is expressed as
+    errorx = np.array(x_norm - x_norm_reproj)
+    errory = np.array(y_norm - y_norm_reproj)
+
+    # Least squares expects a 1D function evaluation vector
+    return np.concatenate((errorx.reshape(-1), errory.reshape(-1)))
+def filter_gcp_by_registration_error(u_err, v_err, method = 'iqr'):
+    """Filters the matched points based on the registration error in pixels in x/east (u_err) and y/north (v_err).
+    Returns a binary mask of the inliers.
+
+    :param u_err: error in pixels in x/east (u_err)
+    :type u_err: ndarray (n,)
+    :param v_err: error in pixels in y/north (v_err)
+    :type v_err: ndarray (n,)
+    :param method: The filtration method, defaults to 'iqr'
+    :type method: str, optional
+    :return: Binary mask that can be used for masking e.g. a dataframe or numpy array
+    :rtype: ndarray (n,)
+    """
+    if method == 'iqr':
+        # Errors in the x direction. We define the inlier interval by use of the interquartile rule
+        Q1_u = np.percentile(u_err, 25)
+        Q3_u = np.percentile(u_err, 75)
+        IQR_u = Q3_u-Q1_u
+        lower_bound_u = Q1_u - 1.5*IQR_u
+        upper_bound_u = Q3_u + 1.5*IQR_u
+
+        # Errors in the y direction. We define the inlier interval by use of the interquartile rule
+        Q1_v = np.percentile(v_err, 25)
+        Q3_v = np.percentile(v_err, 75)
+        IQR_v = Q3_v-Q1_v
+        lower_bound_v = Q1_v - 1.5*IQR_v
+        upper_bound_v = Q3_v + 1.5*IQR_v
+
+        # Simply mask by the interquartile rule for the north-east registration errors (unit is in pixels)
+        feature_mask = np.all([u_err > lower_bound_u,
+                               u_err < upper_bound_u,
+                               v_err > lower_bound_v,
+                               v_err < upper_bound_v], axis=0)
+    else:
+        raise NotImplementedError(f"The filtration method '{method}' does not exist")
+    return feature_mask
+
+
+def calculate_cam_and_pose_from_param(h5_filename, param, features_df, param0, is_variab_param_intr, is_variab_param_extr, time_nodes, time_interpolation_method, h5_paths):
+    """Converts the optimized parameter vector into a calibration dictionary and corrected poses"""
+
+    # Return in a dictionary allowing for simple dumping to XML file (except width)
+    if is_variab_param_intr.sum() != 0:
+        # Only compute if one or more parameters have been calibrated
+        calib_param_intr = calculate_intrinsic_param(is_variab_param_intr, param, param0, as_calib_obj = True)
+    else:
+        calib_param_intr = None
+
+    if time_nodes is not None:
+        # Calculate the pose error vector
+        param_pose_tot = calculate_pose_param(is_variab_param_extr, is_variab_param_intr, param)
+
+        # Read the original poses and time stamps from h5
+        position_ecef = Hyperspectral.get_dataset(h5_filename=h5_filename,
+                                                  dataset_name=h5_paths['h5_folder_position_ecef'])
+        # Extract the ecef orientations for each frame
+        quaternion_ecef = Hyperspectral.get_dataset(h5_filename=h5_filename,
+                                                    dataset_name=h5_paths['h5_folder_quaternion_ecef'])
+        # Extract the timestamps for each frame
+        time_pose = Hyperspectral.get_dataset(h5_filename=h5_filename,
+                                              dataset_name=h5_paths['h5_folder_time_pose'])
+
+        # Minor code block for decomposing orientation into BODY-NED and NED-ECEF rotations
+        rot_body = RotLib.from_quat(quaternion_ecef)
+        geo_pose = GeoPose(timestamps=time_pose,
+                           rot_obj=rot_body,
+                           rot_ref='ECEF',
+                           pos=position_ecef,
+                           pos_epsg=4978)
+        rot_body_ned = geo_pose.rot_obj_ned
+        rot_ned_ecef = geo_pose.rot_obj_ned_2_ecef
+
+        # Compose the error parameter vector into the original pose estimates
+        pos_body_corrected, rot_body_ecef_corrected = compose_pose_errors(param_pose_tot,
+                                                                          time_nodes,
+                                                                          time_pose,
+                                                                          rot_body_ned,
+                                                                          rot_ned_ecef,
+                                                                          position_ecef,
+                                                                          time_interpolation_method)
+
+        return calib_param_intr, pos_body_corrected, rot_body_ecef_corrected.as_quat()
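A toy run of the IQR filter above (not part of the module), with a few gross outliers injected:
```
import numpy as np

rng = np.random.default_rng(0)
u_err = rng.normal(0, 1, 200)   # Registration errors in pixels, east
v_err = rng.normal(0, 1, 200)   # Registration errors in pixels, north
u_err[:3] = 25.0                # Inject gross outliers

mask = filter_gcp_by_registration_error(u_err, v_err, method='iqr')
print(int(mask.sum()), 'inliers of', mask.size)  # The injected outliers are rejected
```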
+# Function called to apply standard processing on a folder of files
+def main(config_path, mode):
+    config = configparser.ConfigParser()
+    config.read(config_path)
+
+    # Set the coordinate reference systems
+    epsg_proj = int(config['Coordinate Reference Systems']['proj_epsg'])
+    epsg_geocsc = int(config['Coordinate Reference Systems']['geocsc_epsg_export'])
+
+    # Establish reference data
+    path_orthomosaic_reference_folder = config['Absolute Paths']['orthomosaic_reference_folder']
+    orthomosaic_reference_fn = os.listdir(path_orthomosaic_reference_folder)[0] # Grab the only file in folder
+    ref_ortho_path = os.path.join(path_orthomosaic_reference_folder, orthomosaic_reference_fn)
+
+    dem_path = config['Absolute Paths']['dem_path']
+
+    # Establish match data (HSI), including composite and anc data
+    path_composites_match = config['Absolute Paths']['rgb_composite_folder']
+    path_anc_match = config['Absolute Paths']['anc_folder']
+
+    # Create a temporary folder for resampled reference orthomosaics and DEMs
+    ref_resampled_gis_path = config['Absolute Paths']['ref_ortho_reshaped']
+    if not os.path.exists(ref_resampled_gis_path):
+        os.mkdir(ref_resampled_gis_path)
+
+    ref_gcp_path = config['Absolute Paths']['ref_gcp_path']
+
+    # The necessary data from the H5 file for getting the positions and orientations.
+
+    # Position is stored here in the H5 file
+    h5_folder_position_ecef = config['HDF.processed_nav']['position_ecef']
+
+    # Quaternion is stored here in the H5 file
+    h5_folder_quaternion_ecef = config['HDF.processed_nav']['quaternion_ecef']
+
+    # Positions modified in coregistration are written here
+    h5_folder_position_ecef_coreg = config['HDF.coregistration']['position_ecef']
+
+    # Quaternions modified in coregistration are written here
+    h5_folder_quaternion_ecef_coreg = config['HDF.coregistration']['quaternion_ecef']
+
+    # Timestamps here
+    h5_folder_time_pose = config['HDF.processed_nav']['timestamp']
+
+    h5_paths = {'h5_folder_position_ecef': h5_folder_position_ecef,
+                'h5_folder_quaternion_ecef': h5_folder_quaternion_ecef,
+                'h5_folder_time_pose': h5_folder_time_pose,
+                'h5_folder_position_ecef_coreg': h5_folder_position_ecef_coreg,
+                'h5_folder_quaternion_ecef_coreg': h5_folder_quaternion_ecef_coreg}
+
+    print("\n################ Coregistering: ################")
+
+    # Iterate the RGB composites
+    hsi_composite_files = sorted(os.listdir(path_composites_match))
+    file_count = 0
+
+    if mode == 'compare':
+        print("\n################ Comparing to reference: ################")
+        for file_count, hsi_composite_file in enumerate(hsi_composite_files):
+
+            file_base_name = hsi_composite_file.split('.')[0]
+
+            # The match data (hyperspectral)
+            hsi_composite_path = os.path.join(path_composites_match, hsi_composite_file)
+            print(hsi_composite_path)
+
+            # Prior to matching, the files are cropped to the image grid of hsi_composite_path
+            ref_ortho_reshaped_path = os.path.join(ref_resampled_gis_path, hsi_composite_file)
+            # The DEM is also cropped to the grid for easy extraction of data
+            dem_reshaped = os.path.join(ref_resampled_gis_path, file_base_name + '_dem.tif')
+
+            GeoSpatialAbstractionHSI.resample_rgb_ortho_to_hsi_ortho(ref_ortho_path, hsi_composite_path, ref_ortho_reshaped_path)
+
+            GeoSpatialAbstractionHSI.resample_dem_to_hsi_ortho(dem_path, hsi_composite_path, dem_reshaped)
+
+            # By comparing the hsi_composite with the reference rgb mosaic we get two feature vectors in the pixel grid and
+            # the absolute registration error in meters in global coordinates
+            uv_vec_hsi, uv_vec_ref, diff_AE_meters, transform_pixel_projected = GeoSpatialAbstractionHSI.compare_hsi_composite_with_rgb_mosaic(hsi_composite_path,
+                                                                                                                                              ref_ortho_reshaped_path)
+
+            # At first, the reference observations must be converted to a true 3D system, namely ecef
+            ref_points_ecef = GeoSpatialAbstractionHSI.compute_reference_points_ecef(uv_vec_ref,
+                                                                                     transform_pixel_projected,
+                                                                                     dem_reshaped,
+                                                                                     epsg_proj,
+                                                                                     epsg_geocsc)
+
+            # Next up, we need to get the associated pixel number and frame number. Luckily they are in the same grid as the pixel observations
+            anc_file_path = os.path.join(path_anc_match, file_base_name + '.hdr')
+            anc_image_object = sp.io.envi.open(anc_file_path)
+            anc_band_list = anc_image_object.metadata['band names']
+            anc_nodata = float(anc_image_object.metadata['data ignore value'])
+
+            pixel_nr_grid = anc_image_object[:,:, anc_band_list.index('pixel_nr_grid')].squeeze()
+            unix_time_grid = anc_image_object[:,:, anc_band_list.index('unix_time_grid')].squeeze()
+
+            # Remove the suffixes added in the orthorectification
+            file_base_name_h5 = file_base_name # Default if no suffix matches
+            suffixes = ["_north_east", "_minimal_rectangle"]
+            for suffix in suffixes:
+                if file_base_name.endswith(suffix):
+                    file_base_name_h5 = file_base_name[:-len(suffix)]
+
+            # Read the ecef position, quaternion and timestamp
+            h5_filename = os.path.join(config['Absolute Paths']['h5_folder'], file_base_name_h5 + '.h5')
+
+            # Extract the ecef positions for each frame
+            position_ecef = Hyperspectral.get_dataset(h5_filename=h5_filename,
+                                                      dataset_name=h5_folder_position_ecef)
+            # Extract the ecef orientations for each frame
+            quaternion_ecef = Hyperspectral.get_dataset(h5_filename=h5_filename,
+                                                        dataset_name=h5_folder_quaternion_ecef)
+            # Extract the timestamps for each frame
+            time_pose = Hyperspectral.get_dataset(h5_filename=h5_filename,
+                                                  dataset_name=h5_folder_time_pose)
+
+            pixel_nr_vec, unix_time_vec, position_vec, quaternion_vec, feature_mask = GeoSpatialAbstractionHSI.compute_position_orientation_features(uv_vec_hsi,
+                                                                                                                                                    pixel_nr_grid,
+                                                                                                                                                    unix_time_grid,
+                                                                                                                                                    position_ecef,
+                                                                                                                                                    quaternion_ecef,
+                                                                                                                                                    time_pose,
+                                                                                                                                                    nodata = anc_nodata)
+
+            rot_body = RotLib.from_quat(quaternion_vec)
+            geo_pose = GeoPose(timestamps=unix_time_vec,
+                               rot_obj=rot_body,
+                               rot_ref='ECEF',
+                               pos=position_vec,
+                               pos_epsg=4978)
+
+            # Divide into two linked rotations
+            quat_body_to_ned = geo_pose.rot_obj_ned.as_quat()
+            quat_ned_to_ecef = geo_pose.rot_obj_ned_2_ecef.as_quat()
+
+            # Mask the reference points accordingly and the difference vector
+            ref_points_vec = ref_points_ecef[feature_mask, :]
+            diff_AE_valid = diff_AE_meters[feature_mask]
+
+            diff_uv = uv_vec_hsi[feature_mask, :] - uv_vec_ref[feature_mask, :]
+
+            # Now we have computed the GCPs with coincident metainformation
+            n_cols_df = pixel_nr_vec.size
+            gcp_dict = {'file_count': np.ones(n_cols_df)*file_count,
+                        'h5_filename': np.repeat(h5_filename, n_cols_df),
+                        'pixel_nr': pixel_nr_vec,
+                        'unix_time': unix_time_vec,
+                        'position_x': position_vec[:,0],
+                        'position_y': position_vec[:,1],
+                        'position_z': position_vec[:,2],
+                        'quaternion_b_n_x': quat_body_to_ned[:,0],
+                        'quaternion_b_n_y': quat_body_to_ned[:,1],
+                        'quaternion_b_n_z': quat_body_to_ned[:,2],
+                        'quaternion_b_n_w': quat_body_to_ned[:,3],
+                        'quaternion_n_e_x': quat_ned_to_ecef[:,0],
+                        'quaternion_n_e_y': quat_ned_to_ecef[:,1],
+                        'quaternion_n_e_z': quat_ned_to_ecef[:,2],
+                        'quaternion_n_e_w': quat_ned_to_ecef[:,3],
+                        'reference_points_x': ref_points_vec[:,0],
+                        'reference_points_y': ref_points_vec[:,1],
+                        'reference_points_z': ref_points_vec[:,2],
+                        'diff_absolute_error': diff_AE_valid,
+                        'diff_u': diff_uv[:, 0],
+                        'diff_v': diff_uv[:, 1]}
+
+            # Convert to a dataframe
+            gcp_df = pd.DataFrame(gcp_dict)
+
+            # Accumulate the GCPs from all transects in one dataframe
+            if file_count==0:
+                gcp_df_all = gcp_df
+            else:
+                gcp_df_all = pd.concat([gcp_df_all, gcp_df])
+
+        gcp_df_all.to_csv(path_or_buf=ref_gcp_path)
+    elif mode == 'calibrate':
+
+        gcp_df_all = pd.read_csv(ref_gcp_path)
+
+        # Registration error in pixels in x-direction (u_err) and y-direction (v_err)
+        u_err = gcp_df_all['diff_u']
+        v_err = gcp_df_all['diff_v']
+
+        feature_mask = filter_gcp_by_registration_error(u_err, v_err, method = 'iqr')
+
+        # These features are used
+        df_gcp_filtered = gcp_df_all[feature_mask]
+
+        print("\n################ Calibrating camera parameters: ################")
+        # Using the accumulated features, we can optimize for the boresight angles, camera parameters and lever arms
+
+        ## Defining the options:
+
+        # Here we define what is to be calibrated
+        calibrate_dict = {'calibrate_boresight': False,
+                          'calibrate_camera': False,
+                          'calibrate_lever_arm': False,
+                          'calibrate_cx': False,
+                          'calibrate_f': True,
+                          'calibrate_k1': False,
+                          'calibrate_k2': True,
+                          'calibrate_k3': True}
+
+        calibrate_per_transect = True
+        estimate_time_varying = True
+        time_node_spacing = 10 # s. With a spacing of 10 s, a transect lasting 53 s is divided into floor(53/10) + 1 = 6 evenly spaced nodes
+        time_interpolation_method = 'linear'
+
+        # Select which time-varying degrees of freedom to estimate errors for
+        calibrate_dict_extr = {'calibrate_pos_x': True,
+                               'calibrate_pos_y': True,
+                               'calibrate_pos_z': True,
+                               'calibrate_roll': False,
+                               'calibrate_pitch': False,
+                               'calibrate_yaw': True}
+
+        # Load the prior calibration and use it as initial parameters, as well as constants for parameter xx if calibrate_xx = False
+        cal_obj_prior = CalibHSI(file_name_cal_xml=config['Absolute Paths']['hsi_calib_path'])
+
+        # Whether the parameter is toggled for calibration
+        is_variab_param_intr = np.zeros(11).astype(np.int64)
+
+        # Whether the DOF is toggled for calibration
+        is_variab_param_extr = np.zeros(6).astype(np.int64)
+
+        param0 = np.array([cal_obj_prior.rx,
+                           cal_obj_prior.ry,
+                           cal_obj_prior.rz,
+                           cal_obj_prior.cx,
+                           cal_obj_prior.f,
+                           cal_obj_prior.k1,
+                           cal_obj_prior.k2,
+                           cal_obj_prior.k3,
+                           cal_obj_prior.tx,
+                           cal_obj_prior.ty,
+                           cal_obj_prior.tz])
+
+        if calibrate_dict['calibrate_boresight']:
+            is_variab_param_intr[0:3] = 1
+
+        if calibrate_dict['calibrate_camera']:
+            is_variab_param_intr[3:8] = 1
+            if not calibrate_dict['calibrate_cx']:
+                is_variab_param_intr[3] = 0
+            if not calibrate_dict['calibrate_f']:
+                is_variab_param_intr[4] = 0
+            if not calibrate_dict['calibrate_k1']:
+                is_variab_param_intr[5] = 0
+            if not calibrate_dict['calibrate_k2']:
+                is_variab_param_intr[6] = 0
+            if not calibrate_dict['calibrate_k3']:
+                is_variab_param_intr[7] = 0
+
+        if calibrate_dict['calibrate_lever_arm']:
+            is_variab_param_intr[8:11] = 1
+
+        if estimate_time_varying:
+            # Define which dofs are to be adjusted in optimization
+            if calibrate_dict_extr['calibrate_pos_x']:
+                is_variab_param_extr[0] = 1
+
+            if calibrate_dict_extr['calibrate_pos_y']:
+                is_variab_param_extr[1] = 1
+
+            if calibrate_dict_extr['calibrate_pos_z']:
+                is_variab_param_extr[2] = 1
+
+            if calibrate_dict_extr['calibrate_roll']:
+                is_variab_param_extr[3] = 1
+
+            if calibrate_dict_extr['calibrate_pitch']:
+                is_variab_param_extr[4] = 1
+
+            if calibrate_dict_extr['calibrate_yaw']:
+                is_variab_param_extr[5] = 1
+
+        # Based on which options were toggled, the number of adjustable dofs can be computed
+        n_adjustable_dofs = is_variab_param_extr.sum()
+
+        # These are the adjustable parameters (not time-varying)
+        param0_variab = param0[is_variab_param_intr==1]
+
+        # Set the keyword arguments for optimization
+        kwargs = {'features_df': None,
+                  'param0': param0,
+                  'is_variab_param_intr': is_variab_param_intr,
+                  'time_nodes': None,
+                  'is_variab_param_extr': is_variab_param_extr,
+                  'time_interpolation_method': time_interpolation_method}
+        # It is often natural to calibrate each transect separately
+        if calibrate_per_transect:
+            # Number of transects is found from the data frame
+            n_transects = 1 + (df_gcp_filtered['file_count'].max() - df_gcp_filtered['file_count'].min())
+
+            # Iterate through transects
+            for i in range(int(n_transects)):
+                df_current = df_gcp_filtered[df_gcp_filtered['file_count'] == i]
+
+                # Update the feature info
+                kwargs['features_df'] = df_current
+
+                n_features = df_current.shape[0]
+
+                if estimate_time_varying:
+                    times_samples = df_current['unix_time']
+                    transect_duration_sec = times_samples.max() - times_samples.min()
+                    number_of_nodes = int(np.floor(transect_duration_sec/time_node_spacing)) + 1
+
+                    # The time-varying parameters are in total the number of dofs times the number of nodes
+                    param0_time_varying = np.zeros(n_adjustable_dofs*number_of_nodes)
+
+                    # The time-varying parameters are stacked after the intrinsic parameters.
+                    # This vector only holds parameters that will be adjusted
+                    param0_variab_tot = np.concatenate((param0_variab,
+                                                        param0_time_varying), axis=0)
+
+                    # Place the time nodes evenly across the transect. This divides the transect into equal intervals,
+                    # meaning that the intervals can differ somewhat from the nominal spacing, at least for a small number of nodes.
+                    time_nodes = np.linspace(start=times_samples.min(),
+                                             stop = times_samples.max(),
+                                             num = number_of_nodes)
+                    # Update optimization kwarg
+                    kwargs['time_nodes'] = time_nodes
+
+                    # Calculate the Jacobian for finding and exploiting sparsity
+                    J = numerical_jacobian(fun = objective_fun_reprojection_error, param = param0_variab_tot, **kwargs)
+
+                    sparsity_perc = 100*((J==0).sum()/J.size)
+
+                    #print(f'Jacobian computed with {sparsity_perc:.0f} % zeros')
+
+                    # Using list-of-lists representation as recommended by Scipy's least squares
+                    sparsity = lil_matrix(J.shape, dtype=int)
+
+                    # Setting the non-sparse elements (in theory they could be set for small values of J too)
+                    sparsity[J != 0] = 1
+                else:
+                    time_nodes = None
+                    sparsity = None
+                    param0_variab_tot = param0_variab
+
+                # Run once with initial parameters
+                res_pre_optim = objective_fun_reprojection_error(param0_variab_tot, **kwargs)
+
+                # Calculate the median absolute error in pixels
+                abs_err = np.sqrt(res_pre_optim[0:n_features]**2 + res_pre_optim[n_features:2*n_features]**2)
+                median_error = np.median(abs_err)*cal_obj_prior.f
+                print(f'Original MAE rp-error is {median_error:.1f} pixels')
+
+                # Optimize the transect and record time duration
+                time_start = time.time()
+                res = least_squares(fun = objective_fun_reprojection_error,
+                                    x0 = param0_variab_tot,
+                                    x_scale='jac',
+                                    jac_sparsity=sparsity,
+                                    kwargs=kwargs)
+
+                optim_dur = time.time() - time_start
+
+                # Absolute reprojection errors
+                abs_err = np.sqrt(res.fun[0:n_features]**2 + res.fun[n_features:2*n_features]**2)
+                median_error_pix = np.median(abs_err)*cal_obj_prior.f
+
+                print(f'Optimized MAE rp-error is {median_error_pix:.1f} pixels')
+                print(f'Optimization time was {optim_dur:.0f} sec')
+                print(f'Number of nodes was {number_of_nodes}')
+                print(f'Number of features was {n_features}')
+                print('')
+
+                # TODO: the parameters should be injected into the h5-file
+
+                h5_filename = df_current['h5_filename'].iloc[0]
+                param_optimized = res.x
+                camera_model_dict_updated, position_updated, quaternion_updated = calculate_cam_and_pose_from_param(h5_filename, param_optimized, **kwargs, h5_paths=h5_paths)
+        else:
+            n_features = df_gcp_filtered.shape[0]
+
+            # Optimize using all features at once (no time-varying errors)
+            kwargs['features_df'] = df_gcp_filtered
+
+            res_pre_optim = objective_fun_reprojection_error(param0_variab, **kwargs)
+            median_error_x = np.median(np.abs(res_pre_optim[0:n_features]))
+            median_error_y = np.median(np.abs(res_pre_optim[n_features:2*n_features]))
+            print(f'Original rp-error in x is {median_error_x} and in y is {median_error_y}')
+
+            res = least_squares(fun = objective_fun_reprojection_error,
+                                x0 = param0_variab,
+                                kwargs = kwargs,
+                                x_scale='jac',
+                                method = 'lm')
+
+            median_error_x = np.median(np.abs(res.fun[0:n_features]))
+            median_error_y = np.median(np.abs(res.fun[n_features:2*n_features]))
+            print(f'Optimized MAE rp-error in x is {median_error_x} and in y is {median_error_y}')
\ No newline at end of file
diff --git a/gref4hsi/scripts/georeference.py b/gref4hsi/scripts/georeference.py
index ae2a19b..8903df9 100644
--- a/gref4hsi/scripts/georeference.py
+++ b/gref4hsi/scripts/georeference.py
@@ -12,15 +12,15 @@
# Lib resources:
from gref4hsi.utils.geometry_utils import CameraGeometry, CalibHSI
from gref4hsi.utils.parsing_utils import Hyperspectral
-from gref4hsi.scripts import visualize
+from gref4hsi.utils import visualize

-def cal_file_to_rays(filename_cal, config):
+def cal_file_to_rays(filename_cal):
    # See paper by Sun, Bo, et al. "Calibration of line-scan cameras for precision measurement." Applied Optics 55.25 (2016): 6836-6843.
    # Loads line camera parameters for the hyperspectral imager from an xml file.

    # Certain imagers deliver geometry "per pixel". This can be resolved by fitting model parameters.
-    calHSI = CalibHSI(file_name_cal_xml=filename_cal, config = config)
+    calHSI = CalibHSI(file_name_cal_xml=filename_cal)
    f = calHSI.f
    u_c = calHSI.cx
@@ -66,17 +66,7 @@ def cal_file_to_rays(filename_cal, config):
    rot_hsi_ref_obj = RotLib.from_euler(seq = 'ZYX', angles = rot_hsi_ref_eul, degrees=False)

-    try:
-        lever_arm_unit = config['General']['lever_arm_unit']
-    except:
-        # Defaults to meter if not set
-        lever_arm_unit = 'm'
-
-    # Handle legacy
-    if lever_arm_unit == 'mm':
-        translation_ref_hsi = np.array([trans_x, trans_y, trans_z]) / 1000 # These are millimetres
-    elif lever_arm_unit == 'm':
-        translation_ref_hsi = np.array([trans_x, trans_y, trans_z])
+    translation_ref_hsi = np.array([trans_x, trans_y, trans_z])

    intrinsic_geometry_dict = {'translation_ref_hsi': translation_ref_hsi,
                               'rot_hsi_ref_obj': rot_hsi_ref_obj,
@@ -86,18 +76,18 @@ def cal_file_to_rays(filename_cal, config):
    return intrinsic_geometry_dict

-def define_hsi_ray_geometry(pos_ref_ecef, quat_ref_ecef, time_pose, pos0, intrinsic_geometry_dict):
+def define_hsi_ray_geometry(pos_ref_ecef, quat_ref_ecef, time_pose, intrinsic_geometry_dict):
    """Instantiate a camera geometry object from the h5 pose data"""
-    pos = pos_ref_ecef # Reference positions in ECEF offset by pos0
+    pos = pos_ref_ecef # Reference positions in ECEF

    rot_obj = RotLib.from_quat(quat_ref_ecef) # Reference orientations wrt ECEF

    ray_directions_local = intrinsic_geometry_dict['ray_directions_local']
    translation_ref_hsi = intrinsic_geometry_dict['translation_ref_hsi']
    rot_hsi_ref_obj = intrinsic_geometry_dict['rot_hsi_ref_obj']

-    hsi_geometry = CameraGeometry(pos0=pos0, pos=pos, rot=rot_obj, time=time_pose, is_interpolated=True, use_absolute_position=True)
+    hsi_geometry = CameraGeometry(pos=pos, rot=rot_obj, time=time_pose, is_interpolated=True, use_absolute_position=True)

    hsi_geometry.intrinsicTransformHSI(translation_ref_hsi=translation_ref_hsi, rot_hsi_ref_obj = rot_hsi_ref_obj)
@@ -167,11 +157,10 @@ def main(iniPath, viz = False):
-    intrinsic_geometry_dict = cal_file_to_rays(filename_cal=hsi_cal_xml, config=config)
+    intrinsic_geometry_dict = cal_file_to_rays(filename_cal=hsi_cal_xml)

-    # Define the rays in ECEF for each frame. Note that if there is no position offset, pos0 is a 1x3 of zeros
+    # Define the rays in ECEF for each frame.
    hsi_geometry = define_hsi_ray_geometry(pos_ref_ecef = hyp.pos_ref,
                                           quat_ref_ecef = hyp.quat_ref,
-                                          time_pose = hyp.pose_time,
-                                          pos0 = hyp.pos0,
+                                          time_pose = hyp.pose_time,
                                           intrinsic_geometry_dict = intrinsic_geometry_dict)
@@ -185,7 +174,8 @@ def main(iniPath, viz = False):
    hsi_geometry.compute_tide_level(path_tide, tide_format = 'NMA')

-    hsi_geometry.compute_elevation_mean_sealevel(source_epsg=config['Coordinate Reference Systems']['geocsc_epsg_export'], geoid_path=config['Absolute Paths']['geoid_path'])
+    hsi_geometry.compute_elevation_mean_sealevel(source_epsg = config['Coordinate Reference Systems']['geocsc_epsg_export'],
+                                                 geoid_path = config['Absolute Paths']['geoid_path'])

    write_intersection_geometry_2_h5_file(hsi_geometry=hsi_geometry, config = config, h5_filename=path_hdf)
diff --git a/gref4hsi/scripts/orthorectification.py b/gref4hsi/scripts/orthorectification.py
index 30ba6bb..f7e7c06 100644
--- a/gref4hsi/scripts/orthorectification.py
+++ b/gref4hsi/scripts/orthorectification.py
@@ -55,21 +55,12 @@ def main(iniPath):
    # Settings having to do with coordinate reference systems, described below
    SettingsCRS = namedtuple('SettingsOrthorectification', ['epsg_geocsc',
-                                                           'epsg_proj',
-                                                           'off_x',
-                                                           'off_y',
-                                                           'off_z'])
+                                                           'epsg_proj'])

    config_crs = SettingsCRS(epsg_geocsc=int(config['Coordinate Reference Systems']['geocsc_epsg_export']),
                             # The epsg code of the geocentric coordinate system (ECEF)
                             epsg_proj=int(config['Coordinate Reference Systems']['proj_epsg']),
                             # The epsg code of the projected coordinate system (e.g. UTM 32 has epsg 32632 for the wgs 84 ellipsoid)
-                            off_x = float(config['General']['offset_x']),
-                            # The geocentric offset along x
-                            off_y = float(config['General']['offset_y']),
-                            # The geocentric offset along y
-                            off_z = float(config['General']['offset_z'])
-                            # The geocentric offset along z
                             )

    # Settings associated with orthorectification of datacube
diff --git a/gref4hsi/tests/test_main_hi.py b/gref4hsi/tests/test_main_hi.py
index bbdc2f0..4ab2575 100644
--- a/gref4hsi/tests/test_main_hi.py
+++ b/gref4hsi/tests/test_main_hi.py
@@ -7,7 +7,7 @@ from scripts import georeference
from scripts import orthorectification
from utils import parsing_utils
-from scripts import visualize
+from gref4hsi.utils import visualize
from gref4hsi.utils.config_utils import prepend_data_dir_to_relative_paths

"""
diff --git a/gref4hsi/tests/test_main_specim.py b/gref4hsi/tests/test_main_specim.py
index 6168610..5351772 100644
--- a/gref4hsi/tests/test_main_specim.py
+++ b/gref4hsi/tests/test_main_specim.py
@@ -5,100 +5,74 @@ import argparse
from collections import namedtuple
import pathlib
+import yaml

-# This is very hard coded, but not necessary if Python does not know where to look
-module_path = '/home/haavasl/VsCodeProjects/gref4hsi/gref4hsi/'
-module_path = 'C:/Users/haavasl/VSCodeProjects/hyperspectral_toolchain/gref4hsi/'
+if os.name == 'nt':
+    # Windows OS
+    base_fp = 'D:'
+    home = 'C:/Users/haavasl'
+elif os.name == 'posix':
+    # These Unix-like systems incl. Mac and Linux
+    base_fp = '/media/haavasl/Expansion'
+    home = '/home/haavasl'
+
+# Use this if working with the github repo to do quick changes to the module
+module_path = os.path.join(home, 'VsCodeProjects/gref4hsi/')
if module_path not in sys.path:
    sys.path.append(module_path)

# Local resources
from gref4hsi.scripts import georeference
from gref4hsi.scripts import orthorectification
+from gref4hsi.scripts import coregistration
from gref4hsi.utils import parsing_utils, specim_parsing_utils
-from gref4hsi.scripts import visualize
+from gref4hsi.utils import visualize
from gref4hsi.utils.config_utils import prepend_data_dir_to_relative_paths, customize_config
import numpy as np
+

"""
This script is meant to be used for testing the processing pipeline of airborne HI data from the Specim AFX10 instrument.
"""

# Make it simple to swap when working a bit on windows and a bit on Linux
-if os.name == 'nt':
-    # Windows OS
-    base_fp = 'D:'
-    home = 'C:/Users/haavasl'
-elif os.name == 'posix':
-    # This Unix-like systems inl. Mac and Linux
-    base_fp = '/media/haavasl/Expansion'
-    home = 'C:/Users/haavasl'
-
-def main():
-    # Set up argparse
-    parser = argparse.ArgumentParser('georeference and rectify images')
-
-    parser.add_argument('-s', '--specim_mission_folder',
-                        type=str,
-                        default= r"/Specim/Missions/2024-02-19-Sletvik/slettvik_hopavaagen_202402191253_ntnu_hyperspectral_74m",
-                        help='folder storing the hyperspectral data for the specific mission')
-
-    parser.add_argument('-e', '--epsg_code',
-                        default=25832,
-                        type=int,
-                        help='Coordinate Reference System EPSG code (e.g. 25832 for UTM 32)')
-
-    parser.add_argument('-r', '--resolution_orthomosaic',
-                        type=float,
-                        default=1,
-                        help='Resolution of the final processed orthomosaic in meters')
-
-    parser.add_argument('-cal_dir', '--calibration_directory',
-                        type=str,
-                        default="/Specim/Lab_Calibrations",
-                        help='Directory holding all spectral/radiometric/geometric calibrations for all binning values')
-
-    parser.add_argument('-c', '--config_file_yaml',
-                        default="",
-                        help='File that contains the configuration \
-                        parameters for the processing. \
-                        If nothing, one is generated from template.\
-                        can simply be one that was used for another mission.')
-
-    parser.add_argument('-t', '--terrain_type',
-                        default="geoid", type=str,
-                        help='If terrain DEM is known, select "dem_file", and if not select "geoid".')
-
-    parser.add_argument('-geoid', '--geoid_path',
-                        default="/Specim/Missions/2024-02-19-Sletvik/slettvik_hopavaagen_202402191253_ntnu_hyperspectral_74m/geoids/no_kv_HREF2018A_NN2000_EUREF89.tif",
-                        type=str,
-                        help='If terrain DEM is not available.')
-
-    parser.add_argument('-d', '--dem_path',
-                        default="/Specim/Missions/2024-02-19-Sletvik/slettvik_hopavaagen_202402191253_ntnu_hyperspectral_74m/dem/dem.tif",
-                        type=str,
-                        help='A digital terrain model, if available. If none, the geoid will be used.')
-    parser.add_argument('-v', '--enable_visualize',
-                        default=False,
-                        type = bool,
-                        help='Visualize vehicle track and terrain model')
-
-    args = parser.parse_args()
+def main(config_yaml, specim_mission_folder, geoid_path, config_template_path, lab_calibration_path):
+    # Read the flight-specific yaml file
+    with open(config_yaml, 'r') as file:
+        config_data = yaml.safe_load(file)
+
+    # Assigning the arguments to variables for simple backwards compatibility
-    SPECIM_MISSION_FOLDER = os.path.join(base_fp, args.specim_mission_folder)
-    EPSG_CODE = args.epsg_code
-    RESOLUTION_ORTHOMOSAIC = args.resolution_orthomosaic
-    CALIBRATION_DIRECTORY = os.path.join(base_fp, args.calibration_directory)
-    CONFIG_FILE = args.config_file_yaml
-    ENABLE_VISUALIZE = args.enable_visualize
-    TERRAIN_TYPE = args.terrain_type
-    DEM_PATH = os.path.join(base_fp, args.dem_path)
-    GEOID_PATH = os.path.join(base_fp, args.geoid_path)
+    SPECIM_MISSION_FOLDER = specim_mission_folder
+    EPSG_CODE = config_data['mission_epsg']
+    RESOLUTION_ORTHOMOSAIC = config_data['resolution_orthomosaic']
+    CALIBRATION_DIRECTORY = lab_calibration_path
+
+    dem_fold = os.path.join(specim_mission_folder, "dem")
+
+    if not os.path.exists(dem_fold):
+        print('DEM folder does not exist, so the geoid is used as terrain instead')
+        TERRAIN_TYPE = "geoid"
+    else:
+        if not os.listdir(dem_fold):
+            #print(f"The folder '{dem_fold}' is empty, so the geoid is used as terrain instead.")
+            TERRAIN_TYPE = "geoid"
+        else:
+            # If there is a folder and it is not empty,
+            # find the only file that is there
+            files = [f for f in os.listdir(dem_fold) if f not in ('.', '..')]
+            DEM_PATH = os.path.join(dem_fold, files[0])
+            #print(f"The file '{DEM_PATH}' is used as terrain.")
+            TERRAIN_TYPE = "dem_file"
+
+    GEOID_PATH = geoid_path
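For reference, main() above only reads the two keys mission_epsg and resolution_orthomosaic from the flight-specific YAML. A minimal sketch for generating such a config.seabee.yaml (values are illustrative):
```
import yaml

config_data = {'mission_epsg': 25832,          # Projected CRS of the outputs (UTM 32N here)
               'resolution_orthomosaic': 0.5}  # Ground resolution in meters

with open('config.seabee.yaml', 'w') as file:
    yaml.safe_dump(config_data, file)
```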
    # Settings associated with preprocessing of data from Specim proprietary data to pipeline-compatible data
    SettingsPreprocess = namedtuple('SettingsPreprocessing', ['dtype_datacube',
@@ -111,7 +85,7 @@
                                                              'config_file_name'])
    config_specim_preprocess = SettingsPreprocess(dtype_datacube = np.float32, # The data type for the datacube
-                                                 lines_per_chunk= 2000, # Raw datacube is chunked into this many lines. GB_per_chunk = lines_per_chunk*n_pixels*n_bands*4 bytes
+                                                 lines_per_chunk= 8000, # Raw datacube is chunked into this many lines. GB_per_chunk = lines_per_chunk*n_pixels*n_bands*4 bytes
                                                  specim_raw_mission_dir = SPECIM_MISSION_FOLDER, # Folder containing several missions
                                                  cal_dir = CALIBRATION_DIRECTORY, # Calibration directory holding all calibrations at all binning levels
                                                  reformatted_missions_dir = os.path.join(SPECIM_MISSION_FOLDER, 'processed'), # The fill value for empty cells (select values not occurring in cube or ancillary data)
@@ -129,15 +103,8 @@
    config_file_mission = os.path.join(DATA_DIR, 'configuration.ini')

-    # Read config from a template (relative path):
-    if CONFIG_FILE != "":
-        config_path_template = CONFIG_FILE
-    else:
-        config_path_template = os.path.join(home, '/VsCodeProjects/gref4hsi/data/config_examples/configuration_specim.ini')
-
-    # Set the data directory for the mission, and create empty folder structure
-    prepend_data_dir_to_relative_paths(config_path=config_path_template, DATA_DIR=DATA_DIR)
+    prepend_data_dir_to_relative_paths(config_path=config_template_path, DATA_DIR=DATA_DIR)

    # Non-default settings
    custom_config = {'General':
@@ -153,6 +120,7 @@
                     'Orthorectification':
                     {'resample_rgb_only': True, # True can be a good choice for speed during DEV
+                     'resample_ancillary': True,
                      'resolutionhyperspectralmosaic': RESOLUTION_ORTHOMOSAIC, # Resolution in m
                      'raster_transform_method': 'north_east'}, # North-east oriented rasters.
@@ -164,13 +132,19 @@
                     'geoid_path' : GEOID_PATH,
                     #'geoid_path' : 'data/world/geoids/egm08_25.gtx',
                     'dem_path' : DEM_PATH,
+                     'orthomosaic_reference_folder' : os.path.join(specim_mission_folder, "orthomosaic"),
+                     'ref_ortho_reshaped' : os.path.join(DATA_DIR, "Intermediate", "RefOrthoResampled"),
+                     'ref_gcp_path' : os.path.join(DATA_DIR, "Intermediate", "gcp.csv"),
                     # (above) The georeferencing allows processing using the Norwegian geoid NN2000 and the worldwide EGM2008. Also, use of seafloor terrain models is supported.
                     # At the moment refractive ray tracing is not implemented, but it could be relatively easy: first ray trace with geoid+tide,
                     # and then ray trace from the water surface
                     #'tide_path' : 'D:/HyperspectralDataAll/HI/2022-08-31-060000-Remoy-Specim/Input/tidevann_nn2000_NMA.txt'
-                     }
-                     # Tide data can be downloaded from https://www.kartverket.no/til-sjos/se-havniva
-                     # Preferably it is downloaded with reference "NN2000" to agree with DEM
+                     },
+                     # If coregistration is done, then the data must be stored after processing somewhere
+                     'HDF.coregistration': {
+                     'position_ecef': 'processed/coreg/position_ecef',
+                     'quaternion_ecef' : 'processed/coreg/quaternion_ecef'
+                     }
                     }
@@ -190,27 +164,41 @@
    # This function parses raw specim data including (spectral, radiometric, geometric) calibrations and nav data
    # into an h5 file. The nav data is written to "raw/nav/" subfolders, whereas hyperspectral data and calibration data
    # are written to "processed/hyperspectral/" and "processed/calibration/" subfolders
-    specim_parsing_utils.main(config=config,
-                              config_specim=config_specim_preprocess)
+    """specim_parsing_utils.main(config=config,
+                              config_specim=config_specim_preprocess)"""

    # Interpolates and reformats the pose (of the vehicle body) to the "processed/nav/" folder.
    config = parsing_utils.export_pose(config_file_mission)

    # Exports model
-    parsing_utils.export_model(config_file_mission)
+    """parsing_utils.export_model(config_file_mission)""" # Commenting out a step like this is fine if it has already been run

    ## Visualize the data 3D photo model from RGB images and the time-resolved positions/orientations
-    if ENABLE_VISUALIZE:
-        visualize.show_mesh_camera(config, show_mesh = True, show_pose = True, ref_frame='ENU')
+    """if ENABLE_VISUALIZE:
+        visualize.show_mesh_camera(config, show_mesh = True, show_pose = True, ref_frame='ENU')"""

    # Georeference the line scans of the hyperspectral imager. Utilizes parsed data
-    georeference.main(config_file_mission)
+    #georeference.main(config_file_mission)
+
+    #orthorectification.main(config_file_mission)
+
+    coregistration.main(config_file_mission, mode='compare')

-    orthorectification.main(config_file_mission)
+    coregistration.main(config_file_mission, mode='calibrate')

if __name__ == "__main__":
-    main()
\ No newline at end of file
+    # Select a recording folder on drive
+    specim_mission_folder = os.path.join(base_fp, r"Specim/Missions/2022-08-31-Remøy/remoy_202208310800_ntnu_hyperspectral_74m")
+
+    # Globally accessible files:
+    geoid_path = os.path.join(home, "VsCodeProjects/gref4hsi/data/world/geoids/egm08_25.gtx")
+    config_template_path = os.path.join(home, "VsCodeProjects/gref4hsi/data/config_examples/configuration_specim.ini")
+    lab_calibration_path = os.path.join(base_fp, "Specim/Lab_Calibrations")
+
+    # The configuration file
+    config_yaml = os.path.join(specim_mission_folder, "config.seabee.yaml")
+    main(str(config_yaml), str(specim_mission_folder), geoid_path, config_template_path, lab_calibration_path)
\ No newline at end of file
diff --git a/gref4hsi/tests/test_main_uhi.py b/gref4hsi/tests/test_main_uhi.py
index 6cdc304..1d74852 100644
--- a/gref4hsi/tests/test_main_uhi.py
+++ b/gref4hsi/tests/test_main_uhi.py
@@ -1,24 +1,38 @@
from collections import namedtuple
import configparser
import os
+import sys
import numpy as np

+# Detect OS and set FPs
+if os.name == 'nt':
+    # Windows OS
+    base_fp = 'D:'
+    home = 'C:/Users/haavasl'
+elif os.name == 'posix':
+    # These Unix-like systems incl. Mac and Linux
Mac and Linux + base_fp = '/media/haavasl/Expansion' + home = 'C:/Users/haavasl' + +# Use this if working with the github repo to do quick changes to the module +module_path = os.path.join(home, 'VsCodeProjects/gref4hsi/') +if module_path not in sys.path: + sys.path.append(module_path) + from gref4hsi.utils import parsing_utils, uhi_parsing_utils from gref4hsi.scripts import georeference, orthorectification -from gref4hsi.scripts import visualize +from gref4hsi.utils import visualize from gref4hsi.utils.config_utils import prepend_data_dir_to_relative_paths, customize_config -DATA_DIR = "/media/haavasl/Expansion/HyperspectralDataAll/UHI/2020-07-01-14-34-57-ArcticSeaIce-Ben-Lange/" +DATA_DIR = os.path.join(base_fp, "HyperspectralDataAll/UHI/2020-07-01-14-40-15-ArcticSeaIce-Ben-Lange/") # The configuration file stores the settings for georeferencing config_file_mission = os.path.join(DATA_DIR, 'configuration.ini') - - -config_path_template = '/home/haavasl/VsCodeProjects/gref4hsi/data/config_examples/configuration_uhi.ini' +config_path_template = os.path.join(home, 'VsCodeProjects/gref4hsi/data/config_examples/configuration_uhi.ini') # Copies the template to config_file_mission and sets up the necessary directories prepend_data_dir_to_relative_paths(config_path=config_path_template, DATA_DIR=DATA_DIR) @@ -27,8 +41,8 @@ custom_config = {'General': {'mission_dir': DATA_DIR, 'model_export_type': 'dem_file', # Infer seafloor structure from altimeter recordings - 'max_ray_length': 5, - 'lab_cal_dir': '/media/haavasl/Expansion/HyperspectralDataAll/UHI/Lab_Calibration_Data/NP'}, # Max distance in meters from UHI to seafloor + 'max_ray_length': 20, + 'lab_cal_dir': os.path.join(base_fp, 'HyperspectralDataAll/UHI/Lab_Calibration_Data/NP')}, # Max distance in meters from UHI to seafloor 'Coordinate Reference Systems': {'proj_epsg' : 3395, # The projected CRS for orthorectified data (an arctic CRS) @@ -40,11 +54,11 @@ {'dem_folder': 'Input/GIS/'}, # Using altimeter, we generate one DEM per transect chunk 'Absolute Paths': - {'geoid_path': '/media/haavasl/Expansion/Specim/Missions/2024-02-19-Sletvik/slettvik_hopavaagen_202402191253_ntnu_hyperspectral_74m/geoids/no_kv_HREF2018A_NN2000_EUREF89.tif'}, # Using altimeter, we generate one DEM per transect chunk + {'geoid_path': os.path.join(home, 'VsCodeProjects\gref4hsi\data\world\geoids\egm08_25.gtx')}, # Using altimeter, we generate one DEM per transect chunk 'Orthorectification': - {'resample_rgb_only': True, # Good choice for speed - 'resolutionhyperspectralmosaic': 0.1, # 1 cm + {'resample_rgb_only': False, # Good choice for speed + 'resolutionhyperspectralmosaic': 0.01, # 1 cm 'raster_transform_method': 'north_east'}, 'HDF.raw_nav': {'altitude': 'raw/nav/altitude', @@ -95,17 +109,20 @@ [0, 0, -1]]), # Boolean being expressing whether to rectify only composite (true) or data cube and composite (false). True is fast. translation_alt_to_body = np.array([0.5, 0, 0]), - time_offset_sec = 23, + time_offset_sec = 0, + # Ben's tick s1 starts at 1593614097.992003 -> 22 s delay + # Ben's tick s2 starts at 1593614414.995001 -> 0 s delay + lon_lat_alt_origin = np.array([1, 1, 0]), # The beast sets up a fake coordinate system at 1 deg lon/lat. 
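Both test scripts assemble their configuration the same way: copy a template *.ini, then overwrite selected entries. A minimal sketch of that pattern follows; prepend_data_dir_to_relative_paths is called with exactly these keywords elsewhere in this patch, while the keyword names for customize_config are an assumption.
```
from gref4hsi.utils.config_utils import prepend_data_dir_to_relative_paths, customize_config

# 1) Copy the template *.ini into DATA_DIR and expand its relative paths
prepend_data_dir_to_relative_paths(config_path=config_path_template, DATA_DIR=DATA_DIR)

# 2) Overwrite selected entries with the mission-specific dictionary
customize_config(config_path=config_file_mission, dict_custom=custom_config)
```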
config_file_name = 'configuration.ini', - resolution_dem = 0.2, - agisoft_process = False) + resolution_dem = 0.2, # The resolution of the altimeter-based DEM + agisoft_process = False) # An option for photogrammetry-based processing in case you have imagery + -# TODO: update config.ini automatically with paths for simple reproducibility """config = configparser.ConfigParser() config.read(config_file)""" @@ -116,7 +133,7 @@ def main(): config.read(config_file_mission) # The minimum for georeferencing is to parse 1) Mesh model and 2) The pose of the reference - """uhi_parsing_utils.uhi_beast(config=config, config_uhi=config_uhi_preprocess)""" + uhi_parsing_utils.uhi_beast(config=config, config_uhi=config_uhi_preprocess) config = parsing_utils.export_pose(config_file_mission) diff --git a/gref4hsi/scripts/colours.py b/gref4hsi/utils/colours.py similarity index 100% rename from gref4hsi/scripts/colours.py rename to gref4hsi/utils/colours.py diff --git a/gref4hsi/utils/geometry_utils.py b/gref4hsi/utils/geometry_utils.py index 5201b18..ccc00e5 100644 --- a/gref4hsi/utils/geometry_utils.py +++ b/gref4hsi/utils/geometry_utils.py @@ -26,7 +26,7 @@ # A file where we define geometry and geometric transforms class CalibHSI: - def __init__(self, file_name_cal_xml, config, mode = 'r', param_dict = None): + def __init__(self, file_name_cal_xml, mode = 'r', param_dict = None): """ :param file_name_cal_xml: str File name of calibration file for line camera model @@ -80,15 +80,9 @@ def __init__(self, file_name_cal_xml, config, mode = 'r', param_dict = None): fd.write(xmltodict.unparse(xml_dict)) class CameraGeometry(): - def __init__(self, pos0, pos, rot, time, is_interpolated = False, use_absolute_position = False): - self.pos0 = pos0 - - if use_absolute_position: - self.position_nav = pos - else: - self.position_nav = pos - np.transpose(pos0) # Camera pos - - + def __init__(self, pos, rot, time, is_interpolated = False, use_absolute_position = False): + + self.position_nav = pos self.rotation_nav = rot self.time = time @@ -402,17 +396,17 @@ def compute_view_directions_local_tangent_plane(self): self.theta_v = np.zeros((n, m)) self.phi_v = np.zeros((n, m)) - x_ecef = self.points_ecef_crs[:, :, 0].reshape((-1,1)) + self.pos0[0] - y_ecef = self.points_ecef_crs[:, :, 1].reshape((-1,1)) + self.pos0[1] - z_ecef = self.points_ecef_crs[:, :, 2].reshape((-1,1)) + self.pos0[2] + x_ecef = self.points_ecef_crs[:, :, 0].reshape((-1,1)) + y_ecef = self.points_ecef_crs[:, :, 1].reshape((-1,1)) + z_ecef = self.points_ecef_crs[:, :, 2].reshape((-1,1)) lats, lons, alts = pm.ecef2geodetic(x = x_ecef, y = y_ecef, z = z_ecef) start = np.einsum('ijk, ik -> ijk', np.ones((n, m, 3), dtype=np.float64), self.position_ecef).reshape((-1,3)) - x_hsi = start[:, 0].reshape((-1,1)) + self.pos0[0] - y_hsi = start[:, 1].reshape((-1,1)) + self.pos0[1] - z_hsi = start[:, 2].reshape((-1,1)) + self.pos0[2] + x_hsi = start[:, 0].reshape((-1,1)) + y_hsi = start[:, 1].reshape((-1,1)) + z_hsi = start[:, 2].reshape((-1,1)) # Compute vectors from seabed intersections to HSI in NED NED = pm.ecef2ned(x= x_hsi, y= y_hsi, z=z_hsi, lat0 = lats, lon0=lons, h0 = alts) @@ -444,9 +438,9 @@ def compute_sun_angles_local_tangent_plane(self): n = self.rayDirectionsGlobal.shape[0] m = self.rayDirectionsGlobal.shape[1] - x_ecef =
self.points_ecef_crs[:, :, 0].reshape((-1,1)) + y_ecef = self.points_ecef_crs[:, :, 1].reshape((-1,1)) + z_ecef = self.points_ecef_crs[:, :, 2].reshape((-1,1)) lats, lons, alts = pm.ecef2geodetic(x = x_ecef, y = y_ecef, z = z_ecef) @@ -466,9 +460,9 @@ def compute_elevation_mean_sealevel(self, source_epsg, geoid_path = 'data/world/ n = self.rayDirectionsGlobal.shape[0] m = self.rayDirectionsGlobal.shape[1] - x_ecef = self.position_ecef[:, 0].reshape((-1,1)) + self.pos0[0] - y_ecef = self.position_ecef[:, 1].reshape((-1,1)) + self.pos0[1] - z_ecef = self.position_ecef[:, 2].reshape((-1,1)) + self.pos0[2] + x_ecef = self.position_ecef[:, 0].reshape((-1,1)) + y_ecef = self.position_ecef[:, 1].reshape((-1,1)) + z_ecef = self.position_ecef[:, 2].reshape((-1,1)) #lats, lons, alts = pm.ecef2geodetic(x = x_ecef, y = y_ecef, z = z_ecef) @@ -599,7 +593,7 @@ def __init__(self, type, config): self.isfirst = True def load_cam_calibration(self, filename_cal, config): - calHSI = CalibHSI(file_name_cal_xml=filename_cal, config = config) # Generates a calibration object + calHSI = CalibHSI(file_name_cal_xml=filename_cal) # Generates a calibration object self.f = calHSI.f self.v_c = calHSI.cx self.k1 = calHSI.k1 @@ -722,39 +716,7 @@ def bilinearInterpolationRotation(self, x1_x, y1_y, rot_Q_00, rot_Q_10, rot_Q_01 return rot_tot - def computeDirections(self, param): - self.rot_x = param[0] # Pitch relative to cam (Equivalent to cam defining NED and uhi BODY) - self.rot_y = param[1]*0 # Roll relative to cam - self.rot_z = param[2] # Yaw relative to cam - self.v_c = param[3] - self.f = param[4] - self.k1 = param[5]*0 - self.k2 = param[6] - self.k3 = param[7] - - - #self.v_c = 455.414296 - #self.f = 9.55160147e+02 - #self.k1 = 0 - #self.k2 = 3.22199561e+02 - #self.k3 = -7.36445822e+01 - #self.rot_x = 1.41822029e-03 - #self.rot_z = -3.77072811e-03 - # Define camera model array for 960 pixels and 480. 1 is subtracted when. - # Pixel ranges from -0.5 - # How to compensate for binning? 
- -    self.v = self.pixel*self.binning + 0.5*(self.binning + 1) - - # Express uhi ray directions in uhi frame using line-camera model - x_norm_lin = (self.v - self.v_c) / self.f - - x_norm_nonlin = -(self.k1 * ((self.v - self.v_c) / 1000) ** 5 + \ - self.k2 * ((self.v - self.v_c) / 1000) ** 3 + \ - self.k3 * ((self.v - self.v_c) / 1000) ** 2) / self.f - - self.x_norm = x_norm_lin + x_norm_nonlin - def reprojectFeaturesHSI(self): rot_hsi_rgb = np.array([self.rot_z, self.rot_y, self.rot_x]) * 180 / np.pi @@ -777,7 +739,90 @@ def reprojectFeaturesHSI(self): -def interpolate_poses(timestamp_from, pos_from, pos0, rot_from, timestamps_to, extrapolate = True, use_absolute_position = True): +def compute_camera_rays_from_parameters(pixel_nr, cx, f, k1, k2, k3): + """Computes the normalized image-plane x coordinate (pixel ray) for a line camera, using a pinhole model plus polynomial distortion terms + + :param pixel_nr: Zero-based pixel numbers in the spatial dimension of the imager + :type pixel_nr: ndarray + :param cx: Principal point in pixels + :type cx: float + :param f: Focal length in pixels + :type f: float + :param k1: Fifth-order distortion coefficient + :type k1: float + :param k2: Third-order distortion coefficient + :type k2: float + :param k3: Second-order distortion coefficient + :type k3: float + :return: The normalized x coordinates of the pixel rays in the camera frame + :rtype: ndarray + """ + + u = pixel_nr + 1 + + # Express uhi ray directions in uhi frame using line-camera model + x_norm_lin = (u - cx) / f + + x_norm_nonlin = -(k1 * ((u - cx) / 1000) ** 5 + \ + k2 * ((u - cx) / 1000) ** 3 + \ + k3 * ((u - cx) / 1000) ** 2) / f + + x_norm = x_norm_lin + x_norm_nonlin + + return x_norm + + +def reproject_world_points_to_hsi_plane(trans_hsi_body, rot_hsi_body, pos_body, rot_body, points_world): + """Reprojects world points to local image plane coordinates (x_h, y_h, 1) + + :param trans_hsi_body: lever arm from body center to hsi focal point + :type trans_hsi_body: ndarray (3,) + :param rot_hsi_body: boresight rotation + :type rot_hsi_body: Scipy rotation object + :param pos_body: The Earth-Centered Earth-Fixed (ECEF) position of the vehicle body centre + :type pos_body: ndarray (n, 3) + :param rot_body: The rotation of the body in ECEF + :type rot_body: Scipy rotation object + :param points_world: World points in ECEF + :type points_world: ndarray (n, 3) + :return: The reprojection coordinates + :rtype: ndarray(n, 3) + """ + + + + # We compose the hypothesized boresight (rot_hsi_body) and lever arm (trans_hsi_body) to get the hypothesized position/orientation + # of the hyperspectral imager + rotation_hsi = rot_body * rot_hsi_body + position_hsi = pos_body + rot_body.apply(trans_hsi_body) + + # Given the positions, the hsi-point vector can be expressed in ECEF + hsi_to_feature_global = points_world - position_hsi + + # Number of features to iterate + n = hsi_to_feature_global.shape[0] + + # We can now express the hsi-point vector in the HSI frame and normalize by the z component to find the image plane projection + hsi_to_feature_local = np.zeros(hsi_to_feature_global.shape) + for i in range(n): + # The vector expressed in HSI frame + hsi_to_feature_local[i, :] = (rotation_hsi[i].inv()).apply(hsi_to_feature_global[i, :]) + + # The vector normalized to lie on the virtual plane + hsi_to_feature_local[i, :] /= hsi_to_feature_local[i, 2] + + return hsi_to_feature_local + + + + +def interpolate_poses(timestamp_from, pos_from, rot_from, timestamps_to, extrapolate = True, use_absolute_position = True): """ :param timestamp_from: @@ -813,8 +858,7 @@ # 
Setting use_absolute_position to True means that position calculations are done with absolute - referenceGeometry = CameraGeometry(pos0=pos0, - pos=pos_from, + referenceGeometry = CameraGeometry(pos=pos_from, rot=rot_from, time=timestamp_from, use_absolute_position=use_absolute_position) @@ -860,16 +904,6 @@ def cartesian_to_polar(xyz): -class MeshGeometry(): - def __init__(self, config): - mesh_path = config['General']['model_path'] - texture_path = config['General']['tex_path'] - offset_x = float(config['General']['offset_x']) - offset_y = float(config['General']['offset_y']) - offset_z = float(config['General']['offset_z']) - self.pos0 = np.array([offset_x, offset_y, offset_z]).reshape([-1, 1]) - - def rotation_matrix_ecef2ned(lon, lat): """ @@ -937,7 +971,13 @@ def __init__(self, timestamps, rot_obj, rot_ref, pos, pos_epsg): self.lon = None self.hei = None - def compute_geodetic_position(self, epsg_geod): + self.compute_geodetic_position() + self.compute_geocentric_orientation() + self.compute_ned_orientation() + self.compute_ned_2_ecef() + + + def compute_geodetic_position(self, epsg_geod = 4326): """ Function for transforming positions to latitude longitude height :param epsg_geod: int @@ -986,7 +1026,7 @@ def compute_geocentric_orientation(self): self.compute_geodetic_position() R_body_2_ned = self.rot_obj_ned - R_ned_2_ecef = self.compute_rot_obj_ned_2_ecef() + R_ned_2_ecef = self.compute_ned_2_ecef() R_body_2_ecef = R_ned_2_ecef*R_body_2_ned self.rot_obj_ecef = R_body_2_ecef @@ -1000,7 +1040,7 @@ def compute_ned_orientation(self): R_body_2_ecef = self.rot_obj_ecef # Define rotation from ecef 2 ned - R_ecef_2_ned = self.compute_rot_obj_ned_2_ecef().inv() + R_ecef_2_ned = self.compute_ned_2_ecef().inv() # Compose R_body_2_ned = R_ecef_2_ned*R_body_2_ecef @@ -1011,7 +1051,7 @@ def compute_ned_orientation(self): else: pass - def compute_rot_obj_ned_2_ecef(self): + def compute_ned_2_ecef(self): N = self.lat.shape[0] rot_mats_ned_2_ecef = np.zeros((N, 3, 3), dtype=np.float64) @@ -1020,11 +1060,11 @@ def compute_rot_obj_ned_2_ecef(self): rot_mats_ned_2_ecef[i,:,:] = rot_mat_ned_2_ecef(lat=self.lat[i], lon = self.lon[i]) - self.rots_obj_ned_2_ecef = RotLib.from_matrix(rot_mats_ned_2_ecef) - - return self.rots_obj_ned_2_ecef + self.rot_obj_ned_2_ecef = RotLib.from_matrix(rot_mats_ned_2_ecef) + return self.rot_obj_ned_2_ecef + def rot_mat_ned_2_ecef(lat, lon): @@ -1053,7 +1093,36 @@ def rot_mat_ned_2_ecef(lat, lon): return R_ned_ecef +def read_raster(filename, out_crs="EPSG:3857", use_z=False): + """Read a raster to a ``pyvista.StructuredGrid``. + This will handle coordinate transformations. 
+ """ + from rasterio import transform + import rioxarray + # Read in the data + data = rioxarray.open_rasterio(filename) + values = np.asarray(data) + data.rio.nodata + nans = values == data.rio.nodata + if np.any(nans): + # values = np.ma.masked_where(nans, values) + values[nans] = np.nan + # Make a mesh + xx, yy = np.meshgrid(data["x"], data["y"]) + if use_z and values.shape[0] == 1: + # will make z-comp the values in the file + zz = values.reshape(xx.shape) + else: + # or this will make it flat + zz = np.zeros_like(xx) + mesh = pv.StructuredGrid(xx, yy, zz) + pts = mesh.points + lon, lat = transform(data.rio.crs, out_crs, pts[:, 0], pts[:, 1]) + mesh.points[:, 0] = lon + mesh.points[:, 1] = lat + mesh["data"] = values.reshape(mesh.n_points, -1, order="F") + return mesh def dem_2_mesh(path_dem, model_path, config): """ @@ -1119,15 +1188,17 @@ def dem_2_mesh(path_dem, model_path, config): mask = band_data != no_data_value # Create and open the output XYZ file for writing if it does not exist: - #if not os.path.exists(output_xyz): - with open(output_xyz, 'w') as xyz_file: - # Write data to the XYZ file using the mask and calculated coordinates - for y in range(ds.RasterYSize): - for x in range(ds.RasterXSize): - if mask[y, x]: - x_coord = x_origin + x * x_resolution - y_coord = y_origin + y * y_resolution - xyz_file.write(f"{x_coord} {y_coord} {band_data[y, x]}\n") + if not os.path.exists(output_xyz): + with open(output_xyz, 'w') as xyz_file: + # Write data to the XYZ file using the mask and calculated coordinates + for y in range(ds.RasterYSize): + for x in range(ds.RasterXSize): + if mask[y, x]: + x_coord = x_origin + x * x_resolution + y_coord = y_origin + y * y_resolution + xyz_file.write(f"{x_coord} {y_coord} {band_data[y, x]}\n") + else: + print('*.xyz already exists') # Clean up ds = None band = None @@ -1137,9 +1208,10 @@ def dem_2_mesh(path_dem, model_path, config): # Create a pyvista point cloud object cloud = pv.PolyData(points) - + + # TODO: Apply patch to avoid crash for triangulation when using big DEM files # Generate a mesh from points - mesh = cloud.delaunay_2d() + mesh = cloud.delaunay_2d(progress_bar=True) # Transform the mesh points from projected to geocentric ECEF. 
geocsc = CRS.from_epsg(epsg_geocsc) @@ -1162,20 +1234,11 @@ def dem_2_mesh(path_dem, model_path, config): h_proj = points_proj[:, 2].reshape((-1, 1)) - (xECEF, yECEF, zECEF) = transformer.transform(xx=x_proj, yy=y_proj, zz=h_proj) - - mesh.points[:, 0] = xECEF.reshape(-1) - mesh.points[:, 1] = yECEF.reshape(-1) - mesh.points[:, 2] = zECEF.reshape(-1) - - offX = float(config['General']['offset_x']) - offY = float(config['General']['offset_y']) - offZ = float(config['General']['offset_z']) - - pos0 = np.array([offX, offY, offZ]).reshape((1, -1)) + (x_ecef, y_ecef, z_ecef) = transformer.transform(xx=x_proj, yy=y_proj, zz=h_proj) - # Add same offset as position data - mesh.points -= pos0 + mesh.points[:, 0] = x_ecef.reshape(-1) + mesh.points[:, 1] = y_ecef.reshape(-1) + mesh.points[:, 2] = z_ecef.reshape(-1) # Save mesh mesh.save(model_path) @@ -1303,11 +1366,11 @@ def position_transform_ecef_2_llh(position_ecef, epsg_from, epsg_to, config): geod = CRS.from_epsg(epsg_to) transformer = Transformer.from_crs(geocsc, geod) - xECEF = position_ecef[:, 0].reshape((-1, 1)) - yECEF = position_ecef[:, 1].reshape((-1, 1)) - zECEF = position_ecef[:, 2].reshape((-1, 1)) + x_ecef = position_ecef[:, 0].reshape((-1, 1)) + y_ecef = position_ecef[:, 1].reshape((-1, 1)) + z_ecef = position_ecef[:, 2].reshape((-1, 1)) - (lat, lon, hei) = transformer.transform(xx=xECEF, yy=yECEF, zz=zECEF) + (lat, lon, hei) = transformer.transform(xx=x_ecef, yy=y_ecef, zz=z_ecef) lat_lon_hei = np.zeros(position_ecef.shape) lat_lon_hei[:, 0] = lat.reshape((position_ecef.shape[0], 1)) diff --git a/gref4hsi/utils/gis_tools.py b/gref4hsi/utils/gis_tools.py index e9f8b3f..90cdf5a 100644 --- a/gref4hsi/utils/gis_tools.py +++ b/gref4hsi/utils/gis_tools.py @@ -17,9 +17,10 @@ from spectral import envi import spectral as sp import h5py +from scipy.spatial.transform import Rotation as RotLib # Lib modules -from gref4hsi.scripts.colours import Image as Imcol +from gref4hsi.utils.colours import Image as Imcol # ENVI datatype conversion dictionary dtype_dict = {1:np.uint8, @@ -37,14 +38,6 @@ def __init__(self, point_cloud, transect_string, config_crs): self.name = transect_string self.points_geocsc = point_cloud - self.offX = config_crs.off_x - self.offY = config_crs.off_y - self.offZ = config_crs.off_z - - if self.offX == -1: - print('Offsets have not been set, or they have been overwritten by default value -1') - RuntimeError - self.epsg_geocsc = config_crs.epsg_geocsc self.n_lines = point_cloud.shape[0] @@ -67,13 +60,13 @@ def transform_geocentric_to_projected(self, config_crs): proj = CRS.from_epsg(self.epsg_proj) transformer = Transformer.from_crs(geocsc, proj) - xECEF = self.points_geocsc[:,:,0].reshape((-1, 1)) - yECEF = self.points_geocsc[:, :, 1].reshape((-1, 1)) - zECEF = self.points_geocsc[:, :, 2].reshape((-1, 1)) + x_ecef = self.points_geocsc[:,:,0].reshape((-1, 1)) + y_ecef = self.points_geocsc[:, :, 1].reshape((-1, 1)) + z_ecef = self.points_geocsc[:, :, 2].reshape((-1, 1)) - (east, north, hei) = transformer.transform(xx=xECEF + self.offX, yy=yECEF + self.offY, zz=zECEF + self.offZ) + (east, north, hei) = transformer.transform(xx=x_ecef, yy=y_ecef, zz=z_ecef) self.points_proj[:,:,0] = east.reshape((self.points_proj.shape[0], self.points_proj.shape[1])) self.points_proj[:, :, 1] = north.reshape((self.points_proj.shape[0], self.points_proj.shape[1])) @@ -130,6 +123,7 @@ def resample_datacube(self, radiance_cube, wavelengths, fwhm, envi_cube_dir, rgb """ + n_bands = len(wavelengths) # @@ -370,12 +364,12 @@ def 
resample_ancillary(self, h5_filename, anc_dir, anc_dict, interleave = 'bsq') if data.shape[1] != self.n_pixels: # For data of shape n_lines x j, for example camera positions, reshape to n_lines x n_pixels x j j = data.shape[1] - data = np.einsum('ijk, ik -> ijk', np.ones((self.n_lines, self.n_pixels, j), dtype=np.float32), data) + data = np.einsum('ijk, ik -> ijk', np.ones((self.n_lines, self.n_pixels, j), dtype=np.float64), data) else: # For data with dimension n_lines x n_pixels, add third dimension data = data.reshape((data.shape[0], data.shape[1], 1)) - data = data.astype(dtype = np.float32) + data = data.astype(dtype = np.float64) # For first layer if band_counter == 0: @@ -438,39 +432,48 @@ def resample_ancillary(self, h5_filename, anc_dir, anc_dict, interleave = 'bsq') @staticmethod def cube_to_raster_grid(coords, raster_transform_method, resolution): + """Function that takes projected coordinates (e.g. UTM 32 east and north) of ray intersections and computes an image grid + + :param coords: _description_ + :type coords: _type_ + :param raster_transform_method: How the raster grid is calculated. "north_east" is standard and defines a rectangle along north/east. "minimal_rectangle" is memory optimal as it finds the smallest enclosing rectangle that wraps the points. + :type raster_transform_method: _type_ + :param resolution: _description_ + :type resolution: _type_ + :return: _description_ + :rtype: _type_ + """ + + if raster_transform_method == 'north_east': - # Creates the minimal area (and thus memory) rectangle around chunk + # Creates the minimal area (and thus memory) rectangle around points polygon = MultiPoint(coords).envelope # extract coordinates x, y = polygon.exterior.xy - idx_ul = 3 - idx_ur = 2 - idx_ll = 4 - suffix = '_north_east' elif raster_transform_method == 'minimal_rectangle': - # Creates the minimal area (and thus memory) rectangle around chunk + # Creates the minimal area (and thus memory) rectangle around points polygon = MultiPoint(coords).minimum_rotated_rectangle # extract coordinates x, y = polygon.exterior.xy - # Increasing indices are against the clock - # Determine basis vectors from data - idx_ul = 3 - idx_ur = 2 - idx_ll = 4 suffix = '_rotated' - # ul means upper left, ll means lower left and so on + # Indices increase against the clock + # ul means upper left corner of polygon, ll means lower left and so on + idx_ul = 3 + idx_ur = 2 + idx_ll = 4 + x_ul = x[idx_ul] y_ul = y[idx_ul] @@ -480,7 +483,7 @@ def cube_to_raster_grid(coords, raster_transform_method, resolution): x_ll = x[idx_ll] y_ll = y[idx_ll] - # The vector from upper-left corner aka origin to the upper-right equals lambda*e_basis_x + # The vector from the upper-left corner aka origin to the upper-right equals lambda*e_basis_x e_basis_x = np.array([x_ur-x_ul, y_ur-y_ul]).reshape((2,1)) w_transect = np.linalg.norm(e_basis_x) e_basis_x /= w_transect @@ -489,11 +492,11 @@ def cube_to_raster_grid(coords, raster_transform_method, resolution): e_basis_y = np.array([x_ll-x_ul, y_ll-y_ul]).reshape((2,1)) h_transect = np.linalg.norm(e_basis_y) e_basis_y /= h_transect - e_basis_y *= -1 # Ensuring Right handedness (image storage y direction is o) + e_basis_y *= -1 # Ensuring right handedness (image storage y direction is opposite) R = np.hstack((e_basis_x, e_basis_y)) # 2D rotation matrix by definition - # Define origin/translation vector + # Define origin/translation vector as the upper left corner o = np.array([x[idx_ul], y[idx_ul]]).reshape((2)) # Transformation matrix rigid body 2D @@ -502,12 
+505,13 @@ def cube_to_raster_grid(coords, raster_transform_method, resolution): Trb[0:2,2] = o Trb[2,2] = 1 + # We operate with same resolution in X and Y. Note that for "minimal_rectangle" X, Y are NOT East and North. s_x = resolution s_y = resolution S = np.diag(np.array([s_x, s_y, 1])) - # Reflection to account for opposite row/up direction + # Reflection matrix to account for opposite row/up direction Ref = np.diag(np.array([1, -1, 1])) # The affine transform is then expressed: @@ -517,32 +521,39 @@ def cube_to_raster_grid(coords, raster_transform_method, resolution): width = int(np.ceil(w_transect/s_x)) height = int(np.ceil(h_transect/s_y)) - # Rasterio operates with a conventional affine + # Rasterio operates with a conventional affine geotransform a, b, c, d, e, f = Taff[0,0], Taff[0,1], Taff[0,2], Taff[1,0], Taff[1,1], Taff[1,2] - transform = rasterio.Affine(a, b, c, d, e, f) - # Pixel centers reside at half coordinates. + # Define local orthographic pixel grid. Pixel centers reside at half coordinates. xi, yi = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5) + # To get homogeneous vector (not an actual z coordinate) zi = np.ones(xi.shape) - # Form homogeneous vectors + # Form homogeneous vectors (allows translation by multiplication) x_r = np.vstack((xi.flatten(), yi.flatten(), zi.flatten())).T - x_p = np.matmul(Taff, x_r.T).T # Map pixels to geographic + # Map orthographic pixels to projected system + x_p = np.matmul(Taff, x_r.T).T + # Make a point vector of grid points to be used in nearest neighbor xy = np.vstack((x_p[:,0].flatten(), x_p[:,1].flatten())).T - - # Locate nearest neighbors in a radius defined by the resolution + # Define NN search tree from intersection points tree = NearestNeighbors(radius=resolution).fit(coords) - # Calculate the nearest neighbors. Here we only use one neighbor, but other approaches could be employed - dist, indexes = tree.kneighbors(xy, 1) - - # We can mask the data by allowing interpolation with a radius of the resolution - mask_nn = dist > resolution + # Calculate the nearest intersection point (in "coords") for each grid cell ("xy"). + + # Here we only use one neighbor, and indexes is a vector of len(xy) where an element indexes(i) + # says that the closest point to xy[i] is coords[indexes[i]]. Since coords is just a flattened/reshaped version of intersection points + #, the data cube can be resampled by datacube_flat=datacube.reshape((dim1*dim2, dim3)) and + # geographic_datacube_flat = datacube.reshape((dim1_geo*dim2_geo, dim3)) so that geographic_datacube_flat = datacube_flat[indexes,:] + n_neighbors = 1 + dist, indexes = tree.kneighbors(xy, n_neighbors) + + # We can mask the data by allowing points within a radius of 2x the resolution + mask_nn = dist > 2*resolution return transform, height, width, indexes, suffix, mask_nn @@ -574,12 +585,15 @@ def write_datacube_ENVI(memmap_gen_params, nodata, transform, datacube_path, wav crs_rasterio = CRS.from_string(crs) - writer_type = "envi" + writer_type = "envi" # Hardcoded # Make some simple modifications data_file_path = datacube_path + '.' 
+ interleave header_file_path = datacube_path + '.hdr' + if os.path.exists(header_file_path): + return + # Clean the files generated by rasterio def write_band(args): band_data, index, dst = args @@ -719,19 +733,15 @@ def write_band(args): sp.io.envi.write_envi_header(fileName=header_file_path, header_dict=header) - def compare_hsi_composite_with_rgb_mosaic(self): - self.rgb_ortho_path = self.config['Absolute Paths']['rgb_ortho_path'] - self.hsi_composite = self.config['Absolute Paths']['rgb_composite_folder'] + self.name + '.tif' - self.rgb_ortho_reshaped = self.config['Absolute Paths']['rgbOrthoReshaped'] + self.name + '.tif' - self.dem_path = self.config['Absolute Paths']['dem_path'] - self.dem_reshaped = self.config['Absolute Paths']['demReshaped'] + self.name + '_dem.tif' - - self.resample_rgb_ortho_to_hsi_ortho() - - self.resample_dem_to_hsi_ortho() - + @staticmethod + def compare_hsi_composite_with_rgb_mosaic(hsi_composite_path, ref_ortho_reshaped_path): + """Compares an HSI orthomosaic """ + + - raster_rgb = gdal.Open(self.rgb_ortho_reshaped, gdal.GA_Update) + + # The RGB orthomosaic after reshaping (the reference) + raster_rgb = gdal.Open(ref_ortho_reshaped_path, gdal.GA_Update) xoff1, a1, b1, yoff1, d1, e1 = raster_rgb.GetGeoTransform() # This should be equal raster_rgb_array = np.array(raster_rgb.ReadAsArray()) R = raster_rgb_array[0, :, :].reshape((raster_rgb_array.shape[1], raster_rgb_array.shape[2], 1)) @@ -741,16 +751,17 @@ def compare_hsi_composite_with_rgb_mosaic(self): ortho_rgb = np.concatenate((R, G, B), axis=2) rgb_image = Imcol(ortho_rgb) - raster_hsi = gdal.Open(self.hsi_composite) + # The HSI composite raster (the match) + raster_hsi = gdal.Open(hsi_composite_path) raster_hsi_array = np.array(raster_hsi.ReadAsArray()) xoff2, a2, b2, yoff2, d2, e2 = raster_hsi.GetGeoTransform() - self.transform_pixel_projected = raster_hsi.GetGeoTransform() + transform_pixel_projected = raster_hsi.GetGeoTransform() R = raster_hsi_array[0, :, :].reshape((raster_hsi_array.shape[1], raster_hsi_array.shape[2], 1)) G = raster_hsi_array[1, :, :].reshape((raster_hsi_array.shape[1], raster_hsi_array.shape[2], 1)) B = raster_hsi_array[2, :, :].reshape((raster_hsi_array.shape[1], raster_hsi_array.shape[2], 1)) - ortho_hsi = np.concatenate((R, G, B), axis=2) + # Some form of image processing max_val = np.percentile(ortho_hsi.reshape(-1), 99) ortho_hsi /= max_val ortho_hsi[ortho_hsi > 1] = 1 @@ -758,11 +769,6 @@ def compare_hsi_composite_with_rgb_mosaic(self): ortho_hsi[ortho_hsi == 0] = 255 hsi_image = Imcol(ortho_hsi) - - # Dem - self.raster_dem = rasterio.open(self.dem_reshaped) - - # Adjust Clahe hsi_image.clahe_adjustment() rgb_image.clahe_adjustment() @@ -770,11 +776,16 @@ def compare_hsi_composite_with_rgb_mosaic(self): hsi_image.to_luma(gamma=False, image_array = hsi_image.clahe_adjusted) rgb_image.to_luma(gamma=False, image_array= rgb_image.clahe_adjusted) - self.compute_sift_difference(hsi_image.luma_array, rgb_image.luma_array) + uv_vec_hsi, uv_vec_rgb, diff_AE_pixels = GeoSpatialAbstractionHSI.compute_sift_difference(hsi_image.luma_array, rgb_image.luma_array) + + image_resolution = a2 + diff_AE_meters = diff_AE_pixels*image_resolution + return uv_vec_hsi, uv_vec_rgb, diff_AE_meters, transform_pixel_projected - def resample_rgb_ortho_to_hsi_ortho(self): + @staticmethod + def resample_rgb_ortho_to_hsi_ortho(ref_ortho_path, hsi_composite_path, ref_ortho_reshaped_path): """Reproject RGB orthophoto to match the shape and projection of HSI raster. 
Parameters @@ -784,9 +795,9 @@ def resample_rgb_ortho_to_hsi_ortho(self): outfile : (string) path to output file tif """ - infile = self.rgb_ortho_path - match = self.hsi_composite - outfile = self.rgb_ortho_reshaped + infile = ref_ortho_path + match = hsi_composite_path + outfile = ref_ortho_reshaped_path # open input with rasterio.open(infile) as src: src_transform = src.transform @@ -825,7 +836,7 @@ def resample_rgb_ortho_to_hsi_ortho(self): dst_crs=dst_crs, resampling=Resampling.cubic) - def resample_dem_to_hsi_ortho(self): + def resample_dem_to_hsi_ortho(dem_path, hsi_composite_path, dem_reshaped): """Reproject a file to match the shape and projection of existing raster. Parameters @@ -835,9 +846,9 @@ def resample_dem_to_hsi_ortho(self): outfile : (string) path to output file tif """ - infile = self.dem_path - match = self.hsi_composite - outfile = self.dem_reshaped + infile = dem_path + match = hsi_composite_path + outfile = dem_reshaped # open input with rasterio.open(infile) as src: src_transform = src.transform @@ -876,8 +887,10 @@ def resample_dem_to_hsi_ortho(self): dst_crs=dst_crs, resampling=Resampling.cubic) + @staticmethod + def compute_sift_difference(gray1, gray2): + """Simply matches two grayscale images using SIFT""" - def compute_sift_difference(self, gray1, gray2): gray1 = (gray1 - np.min(gray1)) / (np.max(gray1) - np.min(gray1)) gray2 = (gray2 - np.min(gray2)) / (np.max(gray2) - np.min(gray2)) @@ -921,54 +934,13 @@ def compute_sift_difference(self, gray1, gray2): diff_v[i] = uv2[1] - uv1[1] img3 = cv.drawMatches(gray1, kp1, gray2, kp2, good, None, **draw_params) - plt.imshow(img3, 'gray') - plt.show() - - med_u = np.median(diff_u[np.abs(diff_u) < 10]) - med_v = np.median(diff_v[np.abs(diff_u) < 10]) -# - #print(np.mean(np.abs(diff_u[np.abs(diff_u) < 100]))) - #print(np.mean(np.abs(diff_v[np.abs(diff_u) < 100]))) -## - #print(np.median(np.abs(diff_u[np.abs(diff_u) < 100] - med_u))) - #print(np.median(np.abs(diff_v[np.abs(diff_u) < 100] - med_v))) -## - MAD_u = np.median(np.abs(diff_u[np.abs(diff_u) < 100] - med_u)) - MAD_v = np.median(np.abs(diff_v[np.abs(diff_u) < 100] - med_v)) + #plt.imshow(img3, 'gray') + #plt.show() ## - #MAD_tot = np.median(np.sqrt((diff_v[np.abs(diff_u) < 100] - med_v)**2 + (diff_u[np.abs(diff_u) < 100] - med_u)**2)) - # IF the disagreement is more than 100 pixels, omit it - diff = np.sqrt(diff_u ** 2 + diff_v ** 2) - - MAE_tot = np.median(diff[diff < 5]) + # The absolute errors + diff_AE = np.sqrt(diff_u ** 2 + diff_v ** 2) - self.feature_uv_hsi = uv_vec_hsi[diff < 5, :] - self.feature_uv_rgb = uv_vec_rgb[diff < 5, :] - print(len(self.feature_uv_rgb)) - - - #plt.imshow(gray1) - - - - - - #plt.scatter(uv_vec[:,0][np.abs(diff_u) < 100], uv_vec[:,1][np.abs(diff_u) < 100], c = diff_u[np.abs(diff_u) < 100] - np.median(diff_u[np.abs(diff_u) < 100])) - #plt.colorbar() - #plt.show() - - - #plt.hist(diff_u[np.abs(diff) < 100], 50) - #plt.title('MAD u: ' + str(np.round(MAD_u,2))) - #plt.xlim([-100, 100]) - #plt.show() - - plt.hist(diff[diff < 10]*0.002, 50) - #plt.title('MAD v: ' + str(np.round(MAD_v, 2))) - #plt.xlim([-100, 100]) - plt.show() - # - self.diff = diff + return uv_vec_hsi, uv_vec_rgb, diff_AE @@ -1009,11 +981,11 @@ def map_pixels_back_to_datacube(self, w_datacube): - (xECEF, yECEF, zECEF) = transformer.transform(xx=xp, yy=yp, zz=zp) + (x_ecef, y_ecef, z_ecef) = transformer.transform(xx=xp, yy=yp, zz=zp) - self.features_points[:, 0] = xECEF - self.offX - self.features_points[:, 1] = yECEF - self.offY - self.features_points[:, 2] = zECEF - 
self.offZ + self.features_points[:, 0] = x_ecef + self.features_points[:, 1] = y_ecef + self.features_points[:, 2] = z_ecef @@ -1051,10 +1023,124 @@ def map_pixels_back_to_datacube(self, w_datacube): #self.y1_y_rgb = u_rgb - np.floor(u_rgb) + def compute_reference_points_ecef(uv_vec_ref, transform_pixel_projected, dem_resampled_path, epsg_proj, epsg_geocsc=4978): + + + """Computes ECEF reference points from features detected on the orthomosaic and the corresponding DEM heights""" + x = uv_vec_ref[:, 0] + y = uv_vec_ref[:, 1] + + # Sample the terrain raster to get a projected position of data + raster_dem = rasterio.open(dem_resampled_path) + xoff, a, b, yoff, d, e = transform_pixel_projected + + # Convert the pixel coordinates into true coordinates (e.g. UTM N/E) + xp = a * x + b * y + xoff + 0.5*a + yp = d * x + e * y + yoff + 0.5*e + zp = np.zeros(yp.shape) + for i in range(xp.shape[0]): + temp = [x for x in raster_dem.sample([(xp[i], yp[i])])] + zp[i] = float(temp[0]) + + # Transform points to true 3D via pyproj + geocsc = CRS.from_epsg(epsg_geocsc) + proj = CRS.from_epsg(epsg_proj) + transformer = Transformer.from_crs(proj, geocsc) + ref_points_ecef = np.zeros((xp.shape[0], 3)) + (x_ecef, y_ecef, z_ecef) = transformer.transform(xx=xp, yy=yp, zz=zp) + + ref_points_ecef[:, 0] = x_ecef + ref_points_ecef[:, 1] = y_ecef + ref_points_ecef[:, 2] = z_ecef + + return ref_points_ecef + + def compute_position_orientation_features(uv_vec_hsi, pixel_nr_image, unix_time_image, position_ecef, quaternion_ecef, time_pose, nodata): + """Returns the positions, orientations and pixel numbers corresponding to the features. + Also computes a feature mask identifying features that are invalid""" + rows = uv_vec_hsi[:, 1] + cols = uv_vec_hsi[:, 0] + + # Determine mask + x0 = np.floor(cols).astype(int) + x1 = x0 + 1 + y0 = np.floor(rows).astype(int) + y1 = y0 + 1 + + # All 4 neighbors (as used by bilinear interpolation) should be valid data points + feature_mask = np.all([pixel_nr_image[y0, x0].reshape(-1) != nodata, + pixel_nr_image[y1, x0].reshape(-1) != nodata, + pixel_nr_image[y0, x1].reshape(-1) != nodata, + pixel_nr_image[y1, x1].reshape(-1) != nodata], axis=0) + + + pixel_nr_vec = GeoSpatialAbstractionHSI.bilinear_interpolate(pixel_nr_image, x = cols, y = rows) + time_vec = GeoSpatialAbstractionHSI.bilinear_interpolate(unix_time_image, x = cols, y = rows) + + pixel_vec_valid = pixel_nr_vec[feature_mask] + time_vec_valid = time_vec[feature_mask] + + + from scipy.interpolate import interp1d + from gref4hsi.utils import geometry_utils as geom + + rotation_ecef = RotLib.from_quat(quaternion_ecef) + # Interpolates positions linearly and quaternions by Slerp + position_vec, quaternion_vec = geom.interpolate_poses(time_pose, + position_ecef, + rotation_ecef, + timestamps_to=time_vec_valid) + + return pixel_vec_valid, time_vec_valid, position_vec, quaternion_vec, feature_mask + + + @staticmethod + def feature_matches_to_GCP(features_hsi, features_ref, pixel_nr_raster, time_raster): + """Function to establish 2D feature positions u, j in raw cube, as well as true reference positions X, Y, Z + + :param features_hsi: Pixel coordinates of matched features in the HSI composite + :type features_hsi: ndarray + :param features_ref: Corresponding pixel coordinates in the reference orthomosaic + :type features_ref: ndarray + :param pixel_nr_raster: Raster of imager pixel numbers (spatial dimension) + :type pixel_nr_raster: ndarray + :param time_raster: Raster of frame timestamps + :type time_raster: ndarray + """ + + return None + # COPIED from https://stackoverflow.com/questions/12729228/simple-efficient-bilinear-interpolation-of-images-in-numpy-and-python + @staticmethod + def
bilinear_interpolate(im, x, y): + + x = np.asarray(x) + y = np.asarray(y) + + x0 = np.floor(x).astype(int) + x1 = x0 + 1 + y0 = np.floor(y).astype(int) + y1 = y0 + 1 + + x0 = np.clip(x0, 0, im.shape[1]-1); + x1 = np.clip(x1, 0, im.shape[1]-1); + y0 = np.clip(y0, 0, im.shape[0]-1); + y1 = np.clip(y1, 0, im.shape[0]-1); + + Ia = im[ y0, x0 ] + Ib = im[ y1, x0 ] + Ic = im[ y0, x1 ] + Id = im[ y1, x1 ] + + wa = (x1-x) * (y1-y) + wb = (x1-x) * (y-y0) + wc = (x-x0) * (y1-y) + wd = (x-x0) * (y-y0) + + return wa*Ia + wb*Ib + wc*Ic + wd*Id diff --git a/gref4hsi/utils/parsing_utils.py b/gref4hsi/utils/parsing_utils.py index ce9b2a4..9fe24f5 100644 --- a/gref4hsi/utils/parsing_utils.py +++ b/gref4hsi/utils/parsing_utils.py @@ -118,7 +118,6 @@ def __init__(self, filename, config, load_datacube = True): try: processed_nav_config = config['HDF.processed_nav'] self.pos_ref = self.f[processed_nav_config['position_ecef']][()] - self.pos0 = self.f[processed_nav_config['pos0']][()] self.quat_ref = self.f[processed_nav_config['quaternion_ecef']][()] self.pose_time = self.f[processed_nav_config['timestamp']][()] except: @@ -288,9 +287,9 @@ def ardupilot_extract_pose(config, iniPath): lon = points_proj[:, 1].reshape((-1, 1)) hei = points_proj[:, 2].reshape((-1, 1)) - (xECEF, yECEF, zECEF) = transformer.transform(xx=lat, yy=lon, zz=hei) + (x_ecef, y_ecef, z_ecef) = transformer.transform(xx=lat, yy=lon, zz=hei) - pos_geocsc = np.concatenate((xECEF.reshape((-1,1)), yECEF.reshape((-1,1)), zECEF.reshape((-1,1))), axis = 1) + pos_geocsc = np.concatenate((x_ecef.reshape((-1,1)), y_ecef.reshape((-1,1)), z_ecef.reshape((-1,1))), axis = 1) #dlogr = DataLogger(pose_path, 'CameraLabel, X, Y, Z, Roll, Pitch, Yaw, RotX, RotY, RotZ') @@ -422,7 +421,6 @@ def reformat_h5_embedded_data_h5(config, config_file): # Compute interpolated absolute positions positions and orientation: position_interpolated, quaternion_interpolated = interpolate_poses(timestamp_from=timestamps_imu, pos_from=position_ref, - pos0=None, rot_from=rot_obj, timestamps_to=timestamp_hsi, use_absolute_position = True) @@ -462,30 +460,25 @@ def reformat_h5_embedded_data_h5(config, config_file): position_ref_ecef = geo_pose_ref.pos_geocsc quaternion_ref_ecef = geo_pose_ref.rot_obj_ecef.as_quat() - if is_first: - # Calculate some appropriate offsets in ECEF - pos0 = np.mean(position_ref_ecef, axis=0) - - # For stacking in CSV - rot_vec_ecef = geo_pose_ref.rot_obj_ecef.as_euler('ZYX', degrees=True) + # For stacking in CSV + rot_vec_ecef = geo_pose_ref.rot_obj_ecef.as_euler('ZYX', degrees=True) - # - total_pose = np.concatenate((timestamp_hsi.reshape((-1,1)), position_ref_ecef, rot_vec_ecef), axis=1) + partial_pose = np.concatenate((timestamp_hsi.reshape((-1,1)), position_ref_ecef, rot_vec_ecef), axis=1) + if is_first: + # Initialize + total_pose = partial_pose # Ensure that flag is not raised again. Offsets should only be set once. is_first = False else: - # Ensure that flag is not raised again. Offsets should only be set once. 
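As a side note on the stacking logic in this hunk: the is_first flag can be avoided entirely by collecting the per-transect arrays in a list. An illustrative sketch with fake stand-in arrays:
```
import numpy as np

# Two fake transects of 3 and 2 frames: (timestamps, ECEF positions, ZYX Euler angles)
transects = [(np.arange(3.0), np.zeros((3, 3)), np.zeros((3, 3))),
             (np.arange(2.0), np.ones((2, 3)), np.ones((2, 3)))]

partial_poses = [np.concatenate((t.reshape((-1, 1)), pos, eul), axis=1)
                 for t, pos, eul in transects]
total_pose = np.concatenate(partial_poses, axis=0)  # Shape (5, 7), one row per frame
```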
- is_first = False - rot_vec_ecef = geo_pose_ref.rot_obj_ecef.as_euler('ZYX', degrees=True) - partial_pose = np.concatenate((timestamp_hsi.reshape((-1,1)), position_ref_ecef, rot_vec_ecef), axis=1) + # Concatenate/stack poses total_pose = np.concatenate((total_pose, partial_pose), axis=0) + is_first = False + - position_offset_name = config['HDF.processed_nav']['pos0'] - Hyperspectral.add_dataset(data=pos0, name=position_offset_name, h5_filename=path_hdf) # Add camera position position_ref_name = config['HDF.processed_nav']['position_ecef'] - Hyperspectral.add_dataset(data=position_ref_ecef - pos0, name=position_ref_name, h5_filename=path_hdf) + Hyperspectral.add_dataset(data=position_ref_ecef, name=position_ref_name, h5_filename=path_hdf) # Add camera quaternion quaternion_ref_name = config['HDF.processed_nav']['quaternion_ecef'] @@ -520,15 +513,6 @@ def reformat_h5_embedded_data_h5(config, config_file): df.to_csv(pose_path, index=False) - - config.set('General', 'offset_x', str(pos0[0])) - config.set('General', 'offset_y', str(pos0[1])) - config.set('General', 'offset_z', str(pos0[2])) - # Exporting models with offsets might be convenient - with open(config_file, 'w') as configfile: - config.write(configfile) - return config - # The main script for aquiring pose. def export_pose(config_file): """ diff --git a/gref4hsi/utils/specim_parsing_utils.py b/gref4hsi/utils/specim_parsing_utils.py index b8c5058..4dac6bf 100644 --- a/gref4hsi/utils/specim_parsing_utils.py +++ b/gref4hsi/utils/specim_parsing_utils.py @@ -114,8 +114,7 @@ def read_fov_file(self, fov_file_path): file_name_calib = self.config['Absolute Paths']['hsi_calib_path'] - CalibHSI(file_name_cal_xml=file_name_calib, - config = self.config, + CalibHSI(file_name_cal_xml=file_name_calib, mode = 'w', param_dict = param_dict) @@ -300,7 +299,7 @@ def main(config, config_specim): out_dir = config_specim.reformatted_missions_dir config_file_path = os.path.join(out_dir, config_specim.config_file_name) - prepend_data_dir_to_relative_paths(config_path=config_file_path, DATA_DIR=out_dir) + #prepend_data_dir_to_relative_paths(config_path=config_file_path, DATA_DIR=out_dir) specim_object = Specim(mission_path=mission_dir, config=config) @@ -416,7 +415,6 @@ class Metadata: xml_cal_write_path = camera_calib_xml_dir + file_name_xml CalibHSI(file_name_cal_xml= xml_cal_write_path, - config = config, mode = 'w', param_dict = param_dict) @@ -424,7 +422,7 @@ class Metadata: # Set value in config file: config.set('Relative Paths', 'hsi_calib_path', value = xml_cal_write_path) - config_file_path = out_dir + config_specim.config_file_name + # Write the config object with open(config_file_path, 'w') as configfile: @@ -451,7 +449,7 @@ class Metadata: ENVI_CAL_IMAGE_FILE_PATH = ENVI_CAL_HDR_FILE_PATH.split('.')[0] + '.cal' # SPECTRAL does not expect this suffix by default - # For some reason, the byte order + # For some reason, the byte order is needed radiometric_image_obj = envi.open(ENVI_CAL_HDR_FILE_PATH, image = ENVI_CAL_IMAGE_FILE_PATH) cal_n_lines = int(radiometric_image_obj.metadata['lines']) @@ -483,11 +481,11 @@ class Metadata: # + # Extract the starting/stopping lines - START_STOP_DIR = mission_dir + '/start_stop_lines' + START_STOP_DIR = os.path.join(mission_dir, 'start_stop_lines') # Allow software to function if start_stop_lines not specified - if not os.path.exists(START_STOP_DIR + '/'): - os.mkdir(START_STOP_DIR + '/') + if not os.path.exists(START_STOP_DIR): + os.mkdir(START_STOP_DIR) # Chunk according to chunk size start_lines = 
np.arange(start=0, stop=metadata_obj.autodarkstartline, @@ -505,7 +503,7 @@ class Metadata: df_start_stop = pd.DataFrame(start_stop_data) - df_start_stop.to_csv(path_or_buf = START_STOP_DIR + '/' + 'start_stop_lines.txt', sep=' ') + df_start_stop.to_csv(path_or_buf = os.path.join(START_STOP_DIR, 'start_stop_lines.txt'), sep=' ') @@ -579,7 +577,7 @@ class Metadata: pitch = np.array(df_imu['Pitch']).reshape((-1,1)) yaw = np.array(df_imu['Yaw']).reshape((-1,1)) - # Roll pitch yaw are stacked with in an unintuitive attribute. The euler angles with rotation order ZYX are Yaw Pitch Roll + # Roll pitch yaw are stacked in attribute eul_zyx. The Euler angles with rotation order ZYX are Yaw Pitch Roll specim_object.eul_zyx = np.concatenate((roll, pitch, yaw), axis = 1) # Position is stored as ECEF cartesian coordinates (mutually orthogonal axes) instead of spheroid-like lon, lat, alt @@ -614,7 +612,7 @@ class Metadata: # Define h5 file name h5_folder = config['Absolute Paths']['h5_folder'] - # It is nicer to deal with 4 byte numbers in general + # Each line in start_stop defines a transect n_transects = df_start_stop.shape[0] for transect_number in range(n_transects): start_line = df_start_stop['line_start'][transect_number] @@ -631,8 +629,7 @@ class Metadata: else: chunk_stop_idx = chunk_start_idx + transect_chunk_size - - + #print(f'Chunk from line {chunk_start_idx} to line {chunk_stop_idx} is being written') data_cube = spectral_image_obj[chunk_start_idx:chunk_stop_idx, :, :] # Calibration equation specim_object.radiance_cube = ( (data_cube - dark_frame)*radiometric_frame/(metadata_obj.t_exp_ms/1000) ).astype(dtype = dtype) # 4 Byte diff --git a/gref4hsi/utils/uhi_parsing_utils.py b/gref4hsi/utils/uhi_parsing_utils.py index a7891ba..a00d2ed 100644 --- a/gref4hsi/utils/uhi_parsing_utils.py +++ b/gref4hsi/utils/uhi_parsing_utils.py @@ -62,7 +62,7 @@ def __init__(self, time = None, value = None, time_format = 'date_num'): self.value = value def interpolate(self, time_interp): self.time_interp = time_interp - self.value_interp = interp1d(x = self.time, y = self.value, kind='linear', fill_value='extrapolate')(x=self.time_interp) + self.value_interp = interp1d(x = self.time, y = self.value, kind='nearest', fill_value='extrapolate')(x=self.time_interp) class NAV: @@ -260,8 +260,7 @@ def set_camera_model(config, config_file_path, config_uhi, model_type, binning_s CAMERA_CALIB_XML_DIR = config['Absolute Paths']['calib_folder'] xml_cal_write_path = CAMERA_CALIB_XML_DIR + file_name_xml - CalibHSI(file_name_cal_xml= xml_cal_write_path, - config = config, + CalibHSI(file_name_cal_xml= xml_cal_write_path, mode = 'w', param_dict = param_dict) @@ -287,6 +286,11 @@ def read_nav_from_mat(mat_filename): mat_contents = {} mat_contents = loadmat(filename=mat_filename) + m_att = mat_contents['ATTENTION'] + #m_att['SpotOnTime'][m_att['Note'] == 'UHI s1 start'] + + + # + """Cell defining all nav data of relevance""" @@ -326,7 +330,7 @@ def read_nav_from_mat(mat_filename): def write_nav_data_to_h5(nav, time_offset, config, H5_FILE_PATH): # The time stamp used for writing - nav_timestamp_rov = nav.roll.time + nav_timestamp_rov = nav.yaw.time # Interpolate nav data to those times nav.interpolate(time_interp=nav_timestamp_rov) diff --git a/gref4hsi/scripts/visualize.py b/gref4hsi/utils/visualize.py similarity index 90% rename from gref4hsi/scripts/visualize.py rename to gref4hsi/utils/visualize.py index d304fd0..42851cf 100644 --- a/gref4hsi/scripts/visualize.py +++ b/gref4hsi/utils/visualize.py @@ -27,17 +27,12 @@ def
show_mesh_camera(config, show_mesh = True, show_pose = True, ref_frame = 'EC pose_path = config['Absolute Paths']['pose_path'] - # Offsets used for plotting - offset_x = float(config['General']['offset_x']) - offset_y = float(config['General']['offset_y']) - offset_z = float(config['General']['offset_z']) - pose = pd.read_csv( pose_path, sep=',', header=0) - points_cam_ecef = np.concatenate( (pose[" X"].values.reshape((-1,1)) - offset_x, pose[" Y"].values.reshape((-1,1)) - offset_y, pose[" Z"].values.reshape((-1,1)) - offset_z), axis = 1) + points_cam_ecef = np.concatenate( (pose[" X"].values.reshape((-1,1)), pose[" Y"].values.reshape((-1,1)), pose[" Z"].values.reshape((-1,1))), axis = 1) use_local = False if use_local == True: eul_cam = np.concatenate((pose[" Yaw"].values.reshape((-1, 1)), @@ -53,9 +48,9 @@ def show_mesh_camera(config, show_mesh = True, show_pose = True, ref_frame = 'EC # Read the mesh mesh = pv.read(mesh_path) if ref_frame == 'NED': - x = offset_x - y = offset_y - z = offset_z + x = np.mean(points_mesh_ecef[:,0]) + y = np.mean(points_mesh_ecef[:,1]) + z = np.mean(points_mesh_ecef[:,2]) lat0, lon0, hei0 = pm.ecef2geodetic(x, y, z, deg=True) R_ecef_to_ned = Rotation.from_matrix(rotation_matrix_ecef2ned(lon=lon0, lat=lat0)) @@ -66,11 +61,11 @@ def show_mesh_camera(config, show_mesh = True, show_pose = True, ref_frame = 'EC points_mesh_ecef = mesh.points # Next part is to make a NED - x_cam, y_cam, z_cam = pm.ecef2ned(x = points_cam_ecef[:,0] + x, y = points_cam_ecef[:,1] + y, z = points_cam_ecef[:,2] + z, lon0=lon0, lat0=lat0, h0=hei0) + x_cam, y_cam, z_cam = pm.ecef2ned(x = points_cam_ecef[:,0], y = points_cam_ecef[:,1], z = points_cam_ecef[:,2], lon0=lon0, lat0=lat0, h0=hei0) points_cam = np.concatenate((x_cam.reshape((-1,1)), y_cam.reshape((-1,1)), z_cam.reshape((-1,1))), axis = 1) - x_mesh, y_mesh, z_mesh = pm.ecef2ned(x = points_mesh_ecef[:,0] + x, y = points_mesh_ecef[:,1] + y, z = points_mesh_ecef[:,2] + z, lon0=lon0, lat0=lat0, h0=hei0) + x_mesh, y_mesh, z_mesh = pm.ecef2ned(x = points_mesh_ecef[:,0], y = points_mesh_ecef[:,1], z = points_mesh_ecef[:,2], lon0=lon0, lat0=lat0, h0=hei0) points_mesh = np.concatenate((x_mesh.reshape((-1,1)), y_mesh.reshape((-1,1)), z_mesh.reshape((-1,1))), axis = 1) @@ -78,9 +73,6 @@ def show_mesh_camera(config, show_mesh = True, show_pose = True, ref_frame = 'EC elif ref_frame == 'ENU': - x = offset_x - y = offset_y - z = offset_z lat0, lon0, hei0 = pm.ecef2geodetic(x, y, z, deg=True) R_ecef_to_enu = Rotation.from_matrix(rotation_matrix_ecef2enu(lon=lon0, lat=lat0)) @@ -90,11 +82,11 @@ def show_mesh_camera(config, show_mesh = True, show_pose = True, ref_frame = 'EC points_mesh_ecef = mesh.points # Next part is to make a NED - x_cam, y_cam, z_cam = pm.ecef2enu(x = points_cam_ecef[:,0] + x, y = points_cam_ecef[:,1] + y, z = points_cam_ecef[:,2] + z, lon0=lon0, lat0=lat0, h0=hei0) + x_cam, y_cam, z_cam = pm.ecef2enu(x = points_cam_ecef[:,0], y = points_cam_ecef[:,1], z = points_cam_ecef[:,2], lon0=lon0, lat0=lat0, h0=hei0) points_cam = np.concatenate((x_cam.reshape((-1,1)), y_cam.reshape((-1,1)), z_cam.reshape((-1,1))), axis = 1) - x_mesh, y_mesh, z_mesh = pm.ecef2enu(x = points_mesh_ecef[:,0] + x, y = points_mesh_ecef[:,1] + y, z = points_mesh_ecef[:,2] + z, lon0=lon0, lat0=lat0, h0=hei0) + x_mesh, y_mesh, z_mesh = pm.ecef2enu(x = points_mesh_ecef[:,0], y = points_mesh_ecef[:,1], z = points_mesh_ecef[:,2], lon0=lon0, lat0=lat0, h0=hei0) points_mesh = np.concatenate((x_mesh.reshape((-1,1)), y_mesh.reshape((-1,1)), z_mesh.reshape((-1,1))), 
                                 axis = 1)
     mesh.points = points_mesh
@@ -106,10 +98,6 @@ def show_mesh_camera(config, show_mesh = True, show_pose = True, ref_frame = 'EC
-    #cam_rot = Rotation.from_euler("ZYX", np.array([0, 0, 0]), degrees=True).as_matrix()
-
-    # Compose the two
-    #rotMats = rotMats*cam_rot
 
     p = BackgroundPlotter(window_size=(600, 400))
 
     if show_mesh:
diff --git a/notebooks/flightrunner_specim.ipynb b/notebooks/flightrunner_specim.ipynb
index a9cdcc4..4421b21 100644
--- a/notebooks/flightrunner_specim.ipynb
+++ b/notebooks/flightrunner_specim.ipynb
@@ -503,8 +503,8 @@
[stored cell output: an AttributeError traceback, with ANSI colour codes, omitted here for readability. The two replaced JSON strings update the stored frames for gref4hsi/scripts/georeference.py:171 and gref4hsi/utils/geometry_utils.py:324 to the renamed method CameraGeometry.intersect_with_mesh (formerly CameraGeometry.intersectWithMesh). The traceback itself originates in a notebook cell calling specim_process.main(...) and terminates at geometry_utils.py line 324, where self.pixel_nr_grid = np.matlib.repmat(np.arange(m), n, 1) raises "AttributeError: module 'numpy' has no attribute 'matlib'".]
diff --git a/notebooks/specim_process.py b/notebooks/specim_process.py
index 5dae0b5..796c652 100644
--- a/notebooks/specim_process.py
+++ b/notebooks/specim_process.py
@@ -10,7 +10,7 @@
 from gref4hsi.scripts import georeference
 from gref4hsi.scripts import orthorectification
 from gref4hsi.utils import parsing_utils, specim_parsing_utils
-from gref4hsi.scripts import visualize
+from gref4hsi.utils import visualize
 from gref4hsi.utils.config_utils import prepend_data_dir_to_relative_paths, customize_config
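As an aside to the traceback stored in the notebook output above: a plain `import numpy as np` does not load the `numpy.matlib` submodule, which is exactly why the stored output ends in the `AttributeError`. The sketch below is not from the package; it just illustrates the failure mode and two workarounds, with `n` and `m` as arbitrary stand-ins for the frame and pixel counts used in `geometry_utils.py`:
```
import numpy as np

n, m = 5, 512  # hypothetical frame count and pixel count

# Workaround 1: import the submodule explicitly (deprecated, but still
# available at the time of writing), after which np.matlib is resolvable:
import numpy.matlib

pixel_nr_grid = np.matlib.repmat(np.arange(m), n, 1)

# Workaround 2 (preferred): np.tile builds the same grids without numpy.matlib:
pixel_nr_grid = np.tile(np.arange(m), (n, 1))                 # each row is 0..m-1
frame_nr_grid = np.tile(np.arange(n).reshape(-1, 1), (1, m))  # each column is 0..n-1

assert pixel_nr_grid.shape == frame_nr_grid.shape == (n, m)
```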