
M11/M12 variables derived from S1 Correction workflow are incorrect #99

Closed
jhkennedy opened this issue Jul 9, 2024 · 1 comment


Some of the S1 velocity granules produced for the ITS_LIVE project have incorrect M11/M12 variables (nasa-jpl/its_live_production#18), which either need to be reprocessed or corrected.

Ideally, instead of reprocessing the granules, we'd be able to use the S1 correction workflow to derive M11/M12 with the script below, since M11/M12 depend only on the output Geogrid GeoTIFFs.

If I re-process this pair using HyP3 AutoRIFT:

python -m hyp3_autorift \
    S1A_IW_SLC__1SDV_20170203T162106_20170203T162132_015121_018B9A_1380 \
    S1A_IW_SLC__1SDV_20170215T162105_20170215T162133_015296_019127_6E1E

I get M11/M12 variables with these attributes (only the attributes of interest are shown):

        short M11(y, x) ;
                M11:_FillValue = -32767s ;
                M11:dr_to_vr_factor = 70.8575391973598 ;
                M11:scale_factor = 0.0002382158f ;
                M11:add_offset = 0.009221043f ;
        short M12(y, x) ;
                M12:_FillValue = -32767s ;
                M12:dr_to_vr_factor = 70.8575391973598 ;
                M12:scale_factor = 0.0002571182f ;
                M12:add_offset = 0.002896906f ;
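These attributes follow the CF packing convention: the stored int16 values are converted to physical values as stored * scale_factor + add_offset. A minimal sketch using the attribute values from the dump above (the packed value is illustrative, not taken from a real granule):

```python
# CF-convention unpacking of a stored int16 M11 value, using the
# scale_factor/add_offset attributes from the dump above.
scale_factor = 0.0002382158
add_offset = 0.009221043

packed = 12000  # illustrative stored value, not from a real granule
physical = packed * scale_factor + add_offset
print(physical)  # ≈ 2.8678 pixel/(meter/year)
```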

Notably, if I run the M11/M12 script below on the Geogrid GeoTIFFs produced when reprocessing the velocity granule, I get the same M11/M12 variables out.

However, if I run the correction workflow using HyP3 AutoRIFT:

python -m hyp3_autorift ++process s1_correction \
    S1A_IW_SLC__1SDV_20170203T162106_20170203T162132_015121_018B9A_1380

I get M11/M12 variables that generally look the same:
(conference internet isn't letting me upload the image; will try again at the hotel room tonight)

but are ~1 order of magnitude different:

        short M11(y, x) ;
                M11:_FillValue = -32767s ;
                M11:dr_to_vr_factor = 850.290171871093 ;
                M11:scale_factor = 1.985132e-05f ;
                M11:add_offset = 0.0007684205f ;
        short M12(y, x) ;
                M12:_FillValue = -32767s ;
                M12:dr_to_vr_factor = 850.290171871093 ;
                M12:scale_factor = 2.142653e-05f ;
                M12:add_offset = 0.0002414089f ;
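For what it's worth, the difference between the two runs looks like a single consistent factor rather than a loose order of magnitude. Comparing the attribute values copied from the two dumps above:

```python
# Ratios of the attribute values between the correction-workflow dump
# and the velocity-workflow dump (numbers copied from above).
velocity_dr2vr = 70.8575391973598
correction_dr2vr = 850.290171871093

velocity_scale = 0.0002382158
correction_scale = 1.985132e-05

print(correction_dr2vr / velocity_dr2vr)  # ≈ 12.0
print(velocity_scale / correction_scale)  # ≈ 12.0
```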

Comparing the GeoTIFFs produced, we see that only the window_rdr_off2vel_*_vec.tif tiffs differ (Golden = Velocity; New = Correction):

velocity/window_rdr_off2vel_x_vec.tif
    Files differ at the binary level.
    Band 1 checksum difference:
      Golden: 50572
      New:    51651
      Pixels Differing: 3233592
      Maximum Pixel Difference: 31260.497209914174
    Metadata value difference for key "STATISTICS_MAXIMUM"
      Golden: "2841.8644710603"
      New:    "34102.361680974"
    Metadata value difference for key "STATISTICS_MEAN"
      Golden: "113.75025851907"
      New:    "1365.00262304"
    Metadata value difference for key "STATISTICS_MINIMUM"
      Golden: "-2574.8556382044"
      New:    "-30898.256811516"
    Metadata value difference for key "STATISTICS_STDDEV"
      Golden: "25.426873336873"
      New:    "305.12237292567"
    Band 2 checksum difference:
      Golden: 36306
      New:    14559
      Pixels Differing: 3233592
      Maximum Pixel Difference: 56192.52815895011
    Metadata value difference for key "STATISTICS_MAXIMUM"
      Golden: "5108.4136071684"
      New:    "61300.941766118"
    Metadata value difference for key "STATISTICS_MEAN"
      Golden: "-90.117419601817"
      New:    "-1081.4086555897"
    Metadata value difference for key "STATISTICS_MINIMUM"
      Golden: "-3776.844615261"
      New:    "-45322.119472649"
    Metadata value difference for key "STATISTICS_STDDEV"
      Golden: "56.616892264538"
      New:    "679.40246866858"
    Band 3 checksum difference:
      Golden: 12420
      New:    23464
      Pixels Differing: 3233668
      Maximum Pixel Difference: 779.4326326737331
    Metadata value difference for key "STATISTICS_MAXIMUM"
      Golden: "70.85753919736"
      New:    "850.29017187109"
    Metadata value difference for key "STATISTICS_MEAN"
      Golden: "70.85753919736"
      New:    "850.29017187109"
    Metadata value difference for key "STATISTICS_MINIMUM"
      Golden: "70.85753919736"
      New:    "850.29017187109"
    Differences Found: 15
velocity/window_rdr_off2vel_y_vec.tif
    Files differ at the binary level.
    Band 1 checksum difference:
      Golden: 3316
      New:    43421
      Pixels Differing: 3233592
      Maximum Pixel Difference: 6632.928769183602
    Metadata value difference for key "STATISTICS_MAXIMUM"
      Golden: "602.99375539805"
      New:    "7235.9225245817"
    Metadata value difference for key "STATISTICS_MEAN"
      Golden: "24.077211167926"
      New:    "288.92643258652"
    Metadata value difference for key "STATISTICS_MINIMUM"
      Golden: "-546.56122833504"
      New:    "-6558.7324375553"
    Metadata value difference for key "STATISTICS_STDDEV"
      Golden: "5.374412392715"
      New:    "64.492926071561"
    Band 2 checksum difference:
      Golden: 58196
      New:    52534
      Pixels Differing: 3233592
      Maximum Pixel Difference: 16722.641869582112
    Metadata value difference for key "STATISTICS_MAXIMUM"
      Golden: "1520.2407521643"
      New:    "18242.882621746"
    Metadata value difference for key "STATISTICS_MEAN"
      Golden: "417.20712971368"
      New:    "5006.4837990206"
    Metadata value difference for key "STATISTICS_MINIMUM"
      Golden: "-365.22612668026"
      New:    "-4382.7119815973"
    Metadata value difference for key "STATISTICS_STDDEV"
      Golden: "12.002570821559"
      New:    "144.03079929633"
    Band 3 checksum difference:
      Golden: 5835
      New:    14625
      Pixels Differing: 3233668
      Maximum Pixel Difference: 4701.900461393899
    Metadata value difference for key "STATISTICS_MAXIMUM"
      Golden: "427.44566018801"
      New:    "5129.3461215819"
    Metadata value difference for key "STATISTICS_MEAN"
      Golden: "426.82521626944"
      New:    "5121.9007971729"
    Metadata value difference for key "STATISTICS_MINIMUM"
      Golden: "426.05713445327"
      New:    "5112.6838186144"
    Metadata value difference for key "STATISTICS_STDDEV"
      Golden: "0.37396807675987"
      New:    "4.4876153458703"
    Differences Found: 16

but the GeoTIFFs also look the same visually (when the color scale is stretched to ±2 standard deviations for each band).


Script to produce M11/M12 NetCDF file from the S1 Correction workflow output GeoTIFFs:

I've uploaded the script to produce a netCDF File containing the M11/M12 variables derived from output GeoGrid GeoTIFFs to:

aws s3 ls --no-sign-request s3://jhk-shenanigans/ITS_LIVE/s1_correction/m11m12/

The files under that prefix are:

s3://jhk-shenanigans/ITS_LIVE/s1_correction/m11m12/
├── correction
│   ├── S1A_IW_SLC__1SDV_20170203T162106_20170203T162132_015121_018B9A_1380_conversion_matrices.nc
│   ├── window_chip_size_max.tif
│   ├── window_chip_size_min.tif
│   ├── window_location.tif
│   ├── window_offset.tif
│   ├── window_rdr_off2vel_x_vec.tif
│   ├── window_rdr_off2vel_y_vec.tif
│   ├── window_scale_factor.tif
│   ├── window_search_range.tif
│   └── window_stable_surface_mask.tif
├── m11m12.py
└── velocity
    ├── S1A_IW_SLC__1SDV_20170203T162106_20170203T162132_015121_018B9A_1380_conversion_matrices.nc
    ├── window_chip_size_max.tif
    ├── window_chip_size_min.tif
    ├── window_location.tif
    ├── window_offset.tif
    ├── window_rdr_off2vel_x_vec.tif
    ├── window_rdr_off2vel_y_vec.tif
    ├── window_scale_factor.tif
    ├── window_search_range.tif
    └── window_stable_surface_mask.tif

where the velocity directory contains GeoGrid GeoTIFFs from the full S1 velocity pair workflow, and the correction directory contains GeoGrid GeoTIFFs from the S1 correction workflow.

The full script is:

from datetime import datetime

import numpy as np
from autoRIFT import __version__ as version
from netCDF4 import Dataset

from hyp3_autorift import utils
from hyp3_autorift.process import DEFAULT_PARAMETER_FILE

scene = 'S1A_IW_SLC__1SDV_20170203T162106_20170203T162132_015121_018B9A_1380'
epsg: int = 32635

grid_location_path: str = 'window_location.tif'
search_range_path: str = 'window_search_range.tif'
offset2vx_path: str = 'window_rdr_off2vel_x_vec.tif'
offset2vy_path: str = 'window_rdr_off2vel_y_vec.tif'
scale_factor_path: str = 'window_scale_factor.tif'
parameter_file: str = DEFAULT_PARAMETER_FILE

xGrid, tran, _, srs, nodata = utils.load_geospatial(grid_location_path, band=1)

offset2vy_1, _, _, _, _ = utils.load_geospatial(offset2vy_path, band=1)
offset2vy_1[offset2vy_1 == nodata] = np.nan

offset2vy_2, _, _, _, _ = utils.load_geospatial(offset2vy_path, band=2)
offset2vy_2[offset2vy_2 == nodata] = np.nan

offset2vx_1, _, _, _, _ = utils.load_geospatial(offset2vx_path, band=1)
offset2vx_1[offset2vx_1 == nodata] = np.nan

offset2vx_2, _, _, _, _ = utils.load_geospatial(offset2vx_path, band=2)
offset2vx_2[offset2vx_2 == nodata] = np.nan

offset2vr, _, _, _, _ = utils.load_geospatial(offset2vx_path, band=3)
offset2vr[offset2vr == nodata] = np.nan

scale_factor_1, _, _, _, _ = utils.load_geospatial(scale_factor_path, band=1)
scale_factor_1[scale_factor_1 == nodata] = np.nan

dimidY, dimidX = xGrid.shape
noDataMask = xGrid == nodata

y = np.arange(tran[3], tran[3] + tran[5] * dimidY, tran[5])
x = np.arange(tran[0], tran[0] + tran[1] * dimidX, tran[1])

dr_2_vr_factor = np.median(offset2vr[np.logical_not(np.isnan(offset2vr))])

chunk_lines = np.min([np.ceil(8192 / dimidY) * 128, dimidY])
ChunkSize = [chunk_lines, dimidX]

nc_outfile = Dataset(f'{scene}_conversion_matrices.nc', 'w', clobber=True, format='NETCDF4')

nc_outfile.setncattr('GDAL_AREA_OR_POINT', 'Area')
nc_outfile.setncattr('Conventions', 'CF-1.8')
nc_outfile.setncattr('date_created', datetime.now().strftime("%d-%b-%Y %H:%M:%S"))
nc_outfile.setncattr('title', 'autoRIFT S1 Corrections')
nc_outfile.setncattr('autoRIFT_software_version', version)
nc_outfile.setncattr('autoRIFT_parameter_file', parameter_file)

nc_outfile.createDimension('x', len(x))
nc_outfile.createDimension('y', len(y))

var = nc_outfile.createVariable('x', np.dtype('float64'), 'x', fill_value=None)
var.setncattr('standard_name', 'projection_x_coordinate')
var.setncattr('description', 'x coordinate of projection')
var.setncattr('units', 'm')
var[:] = x

var = nc_outfile.createVariable('y', np.dtype('float64'), 'y', fill_value=None)
var.setncattr('standard_name', 'projection_y_coordinate')
var.setncattr('description', 'y coordinate of projection')
var.setncattr('units', 'm')
var[:] = y

mapping_var_name = 'mapping'
var = nc_outfile.createVariable(mapping_var_name, 'U1', (), fill_value=None)
if srs.GetAttrValue('PROJECTION') == 'Polar_Stereographic':
    var.setncattr('grid_mapping_name', 'polar_stereographic')
    var.setncattr('straight_vertical_longitude_from_pole', srs.GetProjParm('central_meridian'))
    var.setncattr('false_easting', srs.GetProjParm('false_easting'))
    var.setncattr('false_northing', srs.GetProjParm('false_northing'))
    var.setncattr('latitude_of_projection_origin', np.sign(srs.GetProjParm('latitude_of_origin')) * 90.0)
    var.setncattr('latitude_of_origin', srs.GetProjParm('latitude_of_origin'))
    var.setncattr('semi_major_axis', float(srs.GetAttrValue('GEOGCS|SPHEROID', 1)))
    var.setncattr('scale_factor_at_projection_origin', 1)
    var.setncattr('inverse_flattening', float(srs.GetAttrValue('GEOGCS|SPHEROID', 2)))
    var.setncattr('spatial_ref', srs.ExportToWkt())
    var.setncattr('crs_wkt', srs.ExportToWkt())
    var.setncattr('proj4text', srs.ExportToProj4())
    var.setncattr('spatial_epsg', epsg)
    var.setncattr('GeoTransform', ' '.join(str(x) for x in tran))

elif srs.GetAttrValue('PROJECTION') == 'Transverse_Mercator':
    var.setncattr('grid_mapping_name', 'universal_transverse_mercator')
    zone = epsg - np.floor(epsg / 100) * 100
    var.setncattr('utm_zone_number', zone)
    var.setncattr('longitude_of_central_meridian', srs.GetProjParm('central_meridian'))
    var.setncattr('false_easting', srs.GetProjParm('false_easting'))
    var.setncattr('false_northing', srs.GetProjParm('false_northing'))
    var.setncattr('latitude_of_projection_origin', srs.GetProjParm('latitude_of_origin'))
    var.setncattr('semi_major_axis', float(srs.GetAttrValue('GEOGCS|SPHEROID', 1)))
    var.setncattr('scale_factor_at_central_meridian', srs.GetProjParm('scale_factor'))
    var.setncattr('inverse_flattening', float(srs.GetAttrValue('GEOGCS|SPHEROID', 2)))
    var.setncattr('spatial_ref', srs.ExportToWkt())
    var.setncattr('crs_wkt', srs.ExportToWkt())
    var.setncattr('proj4text', srs.ExportToProj4())
    var.setncattr('spatial_epsg', epsg)
    var.setncattr('GeoTransform', ' '.join(str(x) for x in tran))
else:
    raise Exception(f'Projection {srs.GetAttrValue("PROJECTION")} not recognized for this program')

NoDataValue = -32767

var = nc_outfile.createVariable('M11', np.dtype('int16'), ('y', 'x'), fill_value=NoDataValue,
                                zlib=True, complevel=2, shuffle=True, chunksizes=ChunkSize)
var.setncattr('standard_name', 'conversion_matrix_element_11')
var.setncattr(
    'description',
    'conversion matrix element (1st row, 1st column) that can be multiplied with vx to give range pixel '
    'displacement dr (see Eq. A18 in https://www.mdpi.com/2072-4292/13/4/749)'
)
var.setncattr('units', 'pixel/(meter/year)')
var.setncattr('grid_mapping', mapping_var_name)
var.setncattr('dr_to_vr_factor', dr_2_vr_factor)
var.setncattr('dr_to_vr_factor_description', 'multiplicative factor that converts slant range '
                                             'pixel displacement dr to slant range velocity vr')

M11 = offset2vy_2 / (offset2vx_1 * offset2vy_2 - offset2vx_2 * offset2vy_1) / scale_factor_1

x1 = np.nanmin(M11[:])
x2 = np.nanmax(M11[:])
y1 = -50
y2 = 50

C = [(y2 - y1) / (x2 - x1), y1 - x1 * (y2 - y1) / (x2 - x1)]
var.setncattr('scale_factor', np.float32(1 / C[0]))
var.setncattr('add_offset', np.float32(-C[1] / C[0]))

M11[noDataMask] = NoDataValue * np.float32(1 / C[0]) + np.float32(-C[1] / C[0])
var[:] = M11

var = nc_outfile.createVariable('M12', np.dtype('int16'), ('y', 'x'), fill_value=NoDataValue,
                                zlib=True, complevel=2, shuffle=True, chunksizes=ChunkSize)
var.setncattr('standard_name', 'conversion_matrix_element_12')
var.setncattr(
    'description',
    'conversion matrix element (1st row, 2nd column) that can be multiplied with vy to give range pixel '
    'displacement dr (see Eq. A18 in https://www.mdpi.com/2072-4292/13/4/749)'
)
var.setncattr('units', 'pixel/(meter/year)')
var.setncattr('grid_mapping', mapping_var_name)
var.setncattr('dr_to_vr_factor', dr_2_vr_factor)
var.setncattr('dr_to_vr_factor_description',
              'multiplicative factor that converts slant range pixel displacement dr to slant range velocity vr')

M12 = -offset2vx_2 / (offset2vx_1 * offset2vy_2 - offset2vx_2 * offset2vy_1) / scale_factor_1

x1 = np.nanmin(M12[:])
x2 = np.nanmax(M12[:])
y1 = -50
y2 = 50

C = [(y2 - y1) / (x2 - x1), y1 - x1 * (y2 - y1) / (x2 - x1)]
var.setncattr('scale_factor', np.float32(1 / C[0]))
var.setncattr('add_offset', np.float32(-C[1] / C[0]))

M12[noDataMask] = NoDataValue * np.float32(1 / C[0]) + np.float32(-C[1] / C[0])
var[:] = M12
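For reference, the scale_factor/add_offset computation in the script maps the data range [x1, x2] linearly onto [y1, y2] = [-50, 50] before the values are stored as int16. A worked round trip of that mapping with illustrative numbers (not real M11 values):

```python
# Linear packing as in the script above: C maps data -> stored values,
# and scale_factor/add_offset invert that map for readers of the file.
x1, x2 = 0.001, 0.02  # illustrative data min/max
y1, y2 = -50, 50      # target stored range

C = [(y2 - y1) / (x2 - x1), y1 - x1 * (y2 - y1) / (x2 - x1)]
scale_factor = 1 / C[0]
add_offset = -C[1] / C[0]

packed = (x1 - add_offset) / scale_factor  # forward map: data -> stored
unpacked = y1 * scale_factor + add_offset  # inverse map: stored -> data
print(packed, unpacked)  # -50.0, 0.001
```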

jhkennedy commented Jul 10, 2024

Okay, I think I've got this figured out. The S1 correction workflow depends on the secondary scene only for the dt between the reference and secondary acquisitions. In the HyP3 autoRIFT correction workflow, we only use the reference scene and spoof the dt for the fake secondary scene:
https://github.com/ASFHyP3/hyp3-autorift/blob/develop/src/hyp3_autorift/s1_isce2.py#L312-L315

Looking at this pair of S1 scenes, which were acquired almost exactly 12 days apart:

S1A_IW_SLC__1SDV_20170203T162106_20170203T162132_015121_018B9A_1380
S1A_IW_SLC__1SDV_20170215T162105_20170215T162133_015296_019127_6E1E

If I change the m11m12.py script to divide by 12.0 when loading in the window_rdr_off2vel_*_vec.tif GeoTIFFs:

offset2vy_1, _, _, _, _ = utils.load_geospatial(offset2vy_path, band=1)
offset2vy_1[offset2vy_1 == nodata] = np.nan
offset2vy_1 /= 12.

offset2vy_2, _, _, _, _ = utils.load_geospatial(offset2vy_path, band=2)
offset2vy_2[offset2vy_2 == nodata] = np.nan
offset2vy_2 /= 12.

offset2vx_1, _, _, _, _ = utils.load_geospatial(offset2vx_path, band=1)
offset2vx_1[offset2vx_1 == nodata] = np.nan
offset2vx_1 /= 12.

offset2vx_2, _, _, _, _ = utils.load_geospatial(offset2vx_path, band=2)
offset2vx_2[offset2vx_2 == nodata] = np.nan
offset2vx_2 /= 12.

offset2vr, _, _, _, _ = utils.load_geospatial(offset2vx_path, band=3)
offset2vr[offset2vr == nodata] = np.nan
offset2vr /= 12.
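The repeated load-mask-divide pattern above could be factored into a small helper; a sketch (the function name and the toy array are illustrative, not part of hyp3_autorift):

```python
import numpy as np

def rescale_off2vel(band: np.ndarray, nodata: float, dt_days: float) -> np.ndarray:
    """Mask the nodata value, then divide by the true pair separation in days."""
    out = band.astype(float).copy()
    out[out == nodata] = np.nan
    return out / dt_days

band = np.array([[850.29, -32767.0],
                 [850.29, 850.29]])
fixed = rescale_off2vel(band, -32767.0, 12.0)
print(fixed)  # nodata -> nan; 850.29 / 12 = 70.8575
```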

I can reproduce the correct M11/M12 data values and dr_to_vr_factor attribute very closely:

        short M11(y, x) ;
                M11:_FillValue = -32767s ;
                M11:dr_to_vr_factor = 70.8575143225911 ;
                M11:scale_factor = 0.0002382159f ;
                M11:add_offset = 0.009221046f ;
        short M12(y, x) ;
                M12:_FillValue = -32767s ;
                M12:dr_to_vr_factor = 70.8575143225911 ;
                M12:scale_factor = 0.0002571183f ;
                M12:add_offset = 0.002896907f ;

To correct the M11/M12 variables in an S1 velocity granule, the values computed for M11/M12 and the dr_to_vr_factor via the correction workflow must be divided by the pair separation (in days).
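As a quick check, dividing the correction-workflow dr_to_vr_factor reported earlier in this issue by the 12-day pair separation reproduces the corrected value shown above:

```python
# dr_to_vr_factor from the uncorrected correction-workflow output,
# divided by the pair separation in days.
correction_factor = 850.290171871093
dt_days = 12.0

print(correction_factor / dt_days)  # ≈ 70.8575143..., matching the dump above
```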
