diff --git a/_toc.yml b/_toc.yml
index 9f8b13f0..252db5fd 100644
--- a/_toc.yml
+++ b/_toc.yml
@@ -38,10 +38,14 @@ parts:
- file: notebooks/advanced/merge_gcm_runs_and_visualize
- file: notebooks/advanced/dynamical_spinup
+ - caption: RGI-TOPO
+ chapters:
+ - file: notebooks/others/rgitopo_rgi6
+ - file: notebooks/others/rgitopo_rgi7
+
- caption: Related to OGGM
chapters:
- file: notebooks/others/holoviz_intro
- - file: notebooks/others/dem_comparison
- caption: In (re-)construction
chapters:
diff --git a/notebooks/10minutes/dynamical_spinup.ipynb b/notebooks/10minutes/dynamical_spinup.ipynb
index 6844b7c8..117dfb1c 100644
--- a/notebooks/10minutes/dynamical_spinup.ipynb
+++ b/notebooks/10minutes/dynamical_spinup.ipynb
@@ -11,7 +11,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "In this example, we highlight a recent addition to OGGM: a dynamical spinup during the historical period."
+ "In this example, we highlight a recent addition to OGGM: a dynamical spinup during the historical period. We explain why this was added and how it works."
]
},
{
@@ -28,7 +28,7 @@
"\n",
"# Locals\n",
"import oggm.cfg as cfg\n",
- "from oggm import utils, workflow, tasks\n",
+ "from oggm import utils, workflow, tasks, DEFAULT_BASE_URL\n",
"from oggm.shop import gcm_climate"
]
},
@@ -63,14 +63,14 @@
"cfg.PATHS['working_dir'] = utils.gettempdir('OGGM_gcm_run', reset=True)\n",
"\n",
"# RGI glacier \n",
- "rgi_ids = utils.get_rgi_glacier_entities(['RGI60-11.00897'])"
+ "rgi_ids = 'RGI60-11.00897'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "To fetch the preprocessed directories including spinup, we have to tell OGGM where to find them:"
+ "To fetch the preprocessed directories including spinup, we have to tell OGGM where to find them. The default URL contains the runs with spinup:"
]
},
{
@@ -81,10 +81,7 @@
},
"outputs": [],
"source": [
- "# Currently only available with border 160\n",
- "cfg.PARAMS['border'] = 160 \n",
- "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.1/elev_bands/W5E5_spinup'\n",
- "gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=5, prepro_base_url=base_url)"
+ "gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=5, prepro_base_url=DEFAULT_BASE_URL)"
]
},
{
@@ -100,7 +97,7 @@
"tags": []
},
"source": [
- "These directories are very similar to the default ones (same input data, same baseline climate...). In addition, they include a new historical simulation run with a dynamic spinup. Let's open it and compare it to the old historical run without a spinup."
+ "These directories are very similar to the \"old\" ones (same input data, same baseline climate...). But in addition, they include a new historical simulation run with a dynamic spinup. Let's open it and compare it to the old historical run without a spinup:"
]
},
{
@@ -133,6 +130,13 @@
"plt.legend();"
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Let's have a look at what happens here."
+ ]
+ },
{
"cell_type": "markdown",
"metadata": {},
@@ -164,16 +168,14 @@
"source": [
"We achieve this by searching for a glacier state in 1979 which evolves to match the area at the RGI date. Therefore, you can see that the areas around the RGI date (2003) are very close.\n",
"\n",
- "However, the volumes show some difference around the RGI date, as it was not tried to match the values. The reason is that the current workflow can match area OR volume. By default, we decided to match area as it is a direct observation (from the RGI outlines), in contrast to a model guess for the volume (e.g. [Farinotti et al. 2019](https://www.nature.com/articles/s41561-019-0300-3)).\n",
- "\n",
- "Another advantage of the dynamic spinup is that we do not expect any problems due to an 'initial shock' at the start of our model run."
+ "However, the volumes show some difference around the RGI date, as we did not attempt to match the volume. The current workflow can match area OR volume and, by default, we decided to match area, as it is a direct observation (from the RGI outlines), in contrast to a model guess for the volume (e.g. [Farinotti et al. 2019](https://www.nature.com/articles/s41561-019-0300-3))."
]
},
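The search described above amounts to a root-finding problem: pick a 1979 starting state so that the model lands on the observed area at the RGI date. A minimal sketch, assuming a toy retreat model and made-up numbers (this is not OGGM's actual spinup code):

```python
def evolve(start_area, n_years=24, shrink_per_year=0.004):
    # Toy stand-in for the dynamical model: steady fractional retreat
    # from 1979 to the RGI date (2003), i.e. 24 years
    return start_area * (1 - shrink_per_year) ** n_years

def find_start_area(target_area, lo=0.5, hi=50.0, tol=1e-9):
    # Bisection on the monotonic start-area -> end-area relation
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if evolve(mid) < target_area:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rgi_area = 9.5  # km2 at the RGI date (illustrative value)
start_area = find_start_area(rgi_area)
print(round(evolve(start_area), 6))  # -> 9.5
```

OGGM's real search evolves a full glacier geometry with the dynamical model, but the idea is the same: the start state is the tuning knob, the RGI-date area is the target.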
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "### Dynamically recalibrated *melt_f*"
+ "### Dynamical spinup also uses a dynamically recalibrated melt factor *melt_f*"
]
},
{
@@ -182,15 +184,17 @@
"source": [
"The second big difference is not directly visible, but during the dynamic spinup, we check that the dynamically modelled geodetic mass balance fits the given observations from [Hugonnet et al. 2021](https://www.nature.com/articles/s41586-021-03436-z). To achieve this, we use the *melt_f* of the mass balance as a tuning variable.\n",
"\n",
- "We need this step because the initial mass balance model calibration (see this [tutorial](../advanced/massbalance_calibration.ipynb)) assumes constant glacier surface geometry, defined by the RGI outline. However, the observed geodetic mass balance also contains surface geometry changes, which we only can consider during a dynamic model run.\n",
+ "We need this step because the initial mass balance model calibration (see this [tutorial](../advanced/massbalance_calibration.ipynb)) assumes constant glacier surface geometry, as defined by the RGI outline. However, the observed geodetic mass balance also contains surface geometry changes, which we can only consider during a dynamic model run.\n",
"\n",
- "Let's check that the dynamic geodetic mass balance fits inside the given observations:"
+ "Let's check that the dynamically calibrated geodetic mass balance fits the given observations:"
]
},
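Using *melt_f* as a tuning variable works because the modelled mass balance decreases monotonically with the melt factor, so a simple search can hit a target. A toy sketch with made-up numbers and a hypothetical linear mass balance (not OGGM internals):

```python
def toy_mass_balance(melt_f, accumulation=2000.0, degree_days=365.0):
    # Toy linear model: a larger melt factor means more melt (kg m-2 yr-1)
    return accumulation - melt_f * degree_days

def calibrate_melt_f(target_mb, lo=0.1, hi=20.0, tol=1e-9):
    # Bisection: mass balance decreases monotonically with melt_f
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if toy_mass_balance(mid) > target_mb:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

melt_f = calibrate_melt_f(-500.0)  # target geodetic MB (illustrative)
print(round(toy_mass_balance(melt_f)))  # -> -500
```

In OGGM the "model evaluation" inside the loop is a full dynamical run, which is what makes the dynamically calibrated *melt_f* consistent with the evolving geometry.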
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"gdir = gdirs[0]\n",
@@ -221,6 +225,38 @@
"print(f\"Dynamically calibrated melt_f: {gdir.read_json('mb_calib')['melt_f']:.1f} kg m-2 day-1 °C-1\")"
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This fits quite well! The default in OGGM is to try to match the observations within 20% of the reported error by Hugonnet et al. This is a model option and can be changed at will."
+ ]
+ },
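The 20% acceptance criterion itself is just a tolerance check on the mismatch between model and observation. A minimal sketch with illustrative numbers (m w.e. yr-1; not real OGGM output):

```python
def fits_observation(modelled, observed, observed_err, err_fraction=0.2):
    # Accept the dynamically calibrated run if the modelled geodetic mass
    # balance lies within err_fraction of the reported observational error
    # (the OGGM default described above is 20%)
    return abs(modelled - observed) <= err_fraction * observed_err

# Made-up values for illustration only
print(fits_observation(-0.84, -0.82, 0.15))  # -> True
print(fits_observation(-0.90, -0.82, 0.15))  # -> False
```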
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Dynamical spinup addresses \"initial shock\" problems"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This is not really visible in the plots above, but the \"old\" method of initialisation in OGGM had another issue. It assumed dynamical steady state at the beginning of the simulation (the RGI date), which was required by the bed inversion process. This could lead to artifacts (mainly in the glacier length and area, as well as velocities) during the first few years of the simulation. The dynamical spinup addresses this issue by starting the simulation in 1979."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "# TODO: showcase the velocities in the fl diagnostics"
+ ]
+ },
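One way to picture an "initial shock" is a first simulated step that changes far more than the interannual variability afterwards. A toy detector on made-up length series (illustrative only, not an OGGM diagnostic):

```python
def has_initial_shock(series, factor=5.0):
    # Flag a series whose first step change exceeds the typical
    # (median) year-to-year change of the remaining steps
    diffs = [abs(b - a) for a, b in zip(series, series[1:])]
    later = sorted(diffs[1:])
    median_later = later[len(later) // 2]
    return diffs[0] > factor * median_later

length_shock = [1000, 880, 875, 872, 870, 867]   # abrupt first-year drop
length_smooth = [1000, 996, 992, 989, 985, 982]  # gradual retreat
print(has_initial_shock(length_shock), has_initial_shock(length_smooth))  # -> True False
```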
{
"cell_type": "markdown",
"metadata": {},
@@ -232,7 +268,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "We recommend that you use the provided preprocessed directories for your analysis. However, if you want to learn more about how the dynamic spinup works in detail or if you plan to use it in your workflow, maybe with different data, you should check out the more comprehensive tutorial ['Dynamic spinup and dynamic melt_f calibration for past simulations'](../advanced/dynamical_spinup.ipynb). And do not hesitate to [reach out to us](https://oggm.org/community/)!"
+ "We recommend that you use the provided preprocessed directories for your analysis. However, if you want to learn more about how the dynamic spinup works in detail or if you plan to use it in your workflow, maybe with different data, you should check out the more comprehensive tutorial: [Dynamic spinup and dynamic melt_f calibration for past simulations](../advanced/dynamical_spinup.ipynb). And do not hesitate to [reach out](https://oggm.org/community) if you have any questions!"
]
},
{
@@ -241,7 +277,7 @@
"source": [
"## What's next?\n",
"\n",
- "- Look at the more comprehensive tutorial ['Dynamic spinup and dynamic melt_f calibration for past simulations'](../advanced/dynamical_spinup.ipynb)\n",
+ "- Look at the more comprehensive tutorial [Dynamic spinup and dynamic melt_f calibration for past simulations](../advanced/dynamical_spinup.ipynb)\n",
"- return to the [OGGM documentation](https://docs.oggm.org)\n",
"- back to the [table of contents](../welcome.ipynb)"
]
@@ -264,7 +300,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.6"
+ "version": "3.10.9"
},
"latex_envs": {
"LaTeX_envs_menu_present": true,
diff --git a/notebooks/10minutes/elevation_bands_vs_centerlines.ipynb b/notebooks/10minutes/elevation_bands_vs_centerlines.ipynb
index 53083331..b839ca45 100644
--- a/notebooks/10minutes/elevation_bands_vs_centerlines.ipynb
+++ b/notebooks/10minutes/elevation_bands_vs_centerlines.ipynb
@@ -79,7 +79,7 @@
"cfg.PATHS['working_dir'] = utils.gettempdir(dirname='OGGM-centerlines', reset=True)\n",
"\n",
"# We start from prepro level 3 with all data ready - note the url here\n",
- "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.1/centerlines/W5E5/'\n",
+ "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/centerlines/W5E5/'\n",
"gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=3, prepro_border=80, prepro_base_url=base_url)\n",
"gdir_cl = gdirs[0]\n",
"gdir_cl"
@@ -88,7 +88,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"# Elevation band flowline\n",
@@ -96,7 +98,7 @@
"cfg.PATHS['working_dir'] = utils.gettempdir(dirname='OGGM-elevbands', reset=True)\n",
"\n",
"# Note the new url\n",
- "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.1/elev_bands/W5E5/'\n",
+ "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/elev_bands/W5E5/'\n",
"gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=3, prepro_border=80, prepro_base_url=base_url)\n",
"gdir_eb = gdirs[0]\n",
"gdir_eb"
@@ -113,7 +115,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "we wrote a bit of information about the differences between these to. First, go to the [glacier flowlines](https://docs.oggm.org/en/stable/flowlines.html#glacier-flowlines) documentation where you can find detailed information about the two flowline types and also a [guideline when to use which flowline method](https://docs.oggm.org/en/stable/flowlines.html#pros-and-cons-of-both-methods).\n",
+ "We wrote a bit of information about the differences between these two. First, go to the [glacier flowlines](https://docs.oggm.org/en/stable/flowlines.html#glacier-flowlines) documentation where you can find detailed information about the two flowline types and also a [guideline when to use which flowline method](https://docs.oggm.org/en/stable/flowlines.html#pros-and-cons-of-both-methods).\n",
"\n",
"The examples below illustrate these differences, without much text for now because of lack of time:"
]
@@ -128,7 +130,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"fls_cl = gdir_cl.read_pickle('model_flowlines')\n",
@@ -138,7 +142,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"f, (ax1, ax2) = plt.subplots(2, 1, figsize=(10, 14), sharex=True, sharey=True)\n",
@@ -210,7 +216,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "For the ice dynamics simulations, the commands are exactly the same as well. The only difference is that centerlines require the more flexible \"FluxBased\" numerical model, while the elevation bands can also use the more robust \"SemiImplicit\" one. The runs are considerabily faster with the elevation bands flowlines."
+ "For the ice dynamics simulations, the commands are exactly the same as well. The only difference is that centerlines require the more flexible \"FluxBased\" numerical model, while the elevation bands can also use the more robust \"SemiImplicit\" one. **The runs are considerably faster with the elevation bands flowlines.**"
]
},
{
@@ -392,7 +398,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "We are however working on a [better representation of retreating glaciers](https://github.com/OGGM/oggm/pull/1490) for outreach. Stay tuned!"
+ "We are however working on a better representation of retreating glaciers for outreach. Have a look at [this tutorial](../beginner/distribute_flowline.ipynb)!"
]
},
{
@@ -439,7 +445,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.6"
+ "version": "3.10.9"
},
"latex_envs": {
"LaTeX_envs_menu_present": true,
diff --git a/notebooks/10minutes/machine_learning.ipynb b/notebooks/10minutes/machine_learning.ipynb
index 59519cfd..a017329d 100644
--- a/notebooks/10minutes/machine_learning.ipynb
+++ b/notebooks/10minutes/machine_learning.ipynb
@@ -64,7 +64,7 @@
"# Local working directory (where OGGM will write its output)\n",
"cfg.PATHS['working_dir'] = utils.gettempdir('OGGM_Toy_Thickness_Model')\n",
"# We use the directories with the shop data in it: \"W5E5_w_data\"\n",
- "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.1/elev_bands/W5E5_w_data/'\n",
+ "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.3/elev_bands/W5E5_w_data/'\n",
"gdirs = workflow.init_glacier_directories(['RGI60-01.16195'], from_prepro_level=3, prepro_base_url=base_url, prepro_border=10)"
]
},
@@ -993,6 +993,13 @@
"df_agg.plot.scatter(x='thick', y='consensus_thick', ax=ax4);\n",
"ax4.set_xlim([-25, 220]); ax4.set_ylim([-25, 220]); ax4.set_title('Farinotti 2019');"
]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
}
],
"metadata": {
@@ -1012,7 +1019,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.6"
+ "version": "3.10.9"
},
"toc": {
"base_numbering": 1,
diff --git a/notebooks/10minutes/preprocessed_directories.ipynb b/notebooks/10minutes/preprocessed_directories.ipynb
index a8d32b20..af6562c6 100644
--- a/notebooks/10minutes/preprocessed_directories.ipynb
+++ b/notebooks/10minutes/preprocessed_directories.ipynb
@@ -11,7 +11,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "The OGGM workflow is best explained with an example. In the following, we will show you the OGGM fundamentals ([Doc page: model structure and fundamentals](https://docs.oggm.org/en/stable/structure.html)). This example is also meant to guide you through a first-time setup if you are using OGGM on your own computer. If you prefer not to install OGGM on your computer, you can always run this notebook on [OGGM-Hub](https://docs.oggm.org/cloud.html) instead!"
+ "The OGGM workflow is best explained with an example. In the following, we will show you the OGGM fundamentals ([Doc page: model structure and fundamentals](https://docs.oggm.org/en/stable/structure.html)). This example is also meant to guide you through a first-time setup if you are using OGGM on your own computer. If you prefer not to install OGGM on your computer, you can always run this notebook [online](https://docs.oggm.org/stable/cloud.html) instead!"
]
},
{
@@ -54,7 +54,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"from oggm import cfg, utils\n",
@@ -71,7 +73,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"cfg.PARAMS['melt_f'], cfg.PARAMS['ice_density'], cfg.PARAMS['continue_on_error']"
@@ -87,7 +91,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"# You can try with or without multiprocessing: with two glaciers, OGGM could run on two processors\n",
@@ -114,7 +120,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"from oggm import workflow"
@@ -137,7 +145,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"cfg.PATHS['working_dir'] = utils.gettempdir(dirname='OGGM-GettingStarted-10m', reset=True)\n",
@@ -148,11 +158,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "We use a temporary directory for this example, but in practice you will set this working directory yourself (for example: `/home/erika/OGGM_output`. The size of this directory will depend on how many glaciers you'll simulate!\n",
+ "We use a temporary directory for this example, but in practice you will set this working directory yourself (for example: `/home/shruti/OGGM_output`). The size of this directory will depend on how many glaciers you'll simulate!\n",
"\n",
"<div class=\"alert alert-info\">\n",
" \n",
- " In the OGGM design, this working directory is meant to be persistent, at least as long as you need the data for. For example, you can stop your processing workflow after any task, and restart from an existing working directory at a later stage, simply by using the same working directory.\n",
+ " In the OGGM design, this working directory is meant to be persistent, at least as long as you need the data. For example, you can stop your processing workflow after any task, and restart from an existing working directory at a later stage, simply by using the same working directory.\n",
" \n",
"</div>\n",
"\n",
@@ -169,7 +179,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"rgi_ids = ['RGI60-11.01328', 'RGI60-11.00897'] "
@@ -204,27 +216,51 @@
"source": [
"The OGGM workflow is organized as a list of **tasks** that have to be applied to a list of glaciers. The vast majority of tasks are called **entity tasks**: they are standalone operations to be realized on one single glacier entity. These tasks are executed sequentially (one after another): they often need input generated by the previous task(s): for example, the climate calibration needs the glacier flowlines, which can be only computed after the topography data has been processed, and so on.\n",
"\n",
- "To handle this situation, OGGM uses a workflow based on data persistence on disk: instead of passing data as python variables from one task to another, each task will read the data from disk and then write the computation results back to the disk, making these new data available for the next task in the queue.\n",
+ "To handle this situation, OGGM uses a workflow based on data persistence on disk: instead of passing data as Python variables from one task to another, each task will read the data from disk and then write the computation results back to the disk, making these new data available for the next task in the queue. These glacier-specific data are located in [glacier directories](https://docs.oggm.org/en/stable/glacierdir.html#glacier-directories). \n",
"\n",
- "These glacier specific data are located in [glacier directories](https://docs.oggm.org/en/stable/glacierdir.html#glacier-directories). In the model, these directories are initialized with the following command (this can take a little while on the first call, as OGGM needs to download some data):"
+ "One main advantage of this workflow is that OGGM can prepare data and make it available to everyone! Here is an example of a URL where such data can be found:"
]
},
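The read-from-disk, write-to-disk pattern described above can be sketched in a few lines. This is a minimal stand-in, not OGGM code: the directory layout, file names, and tasks are all hypothetical.

```python
import json
import tempfile
from pathlib import Path

# Minimal sketch of disk-based task chaining: each "entity task" reads its
# input from the glacier directory and writes its result back, so the next
# task (or a later session restarting from the same directory) finds it.
workdir = Path(tempfile.mkdtemp())

def task_a(gdir):
    (gdir / 'a.json').write_text(json.dumps({'value': 2}))

def task_b(gdir):
    # Reads task_a's output from disk, not from a Python variable
    data = json.loads((gdir / 'a.json').read_text())
    (gdir / 'b.json').write_text(json.dumps({'value': data['value'] ** 2}))

gdir = workdir / 'RGI60-11.00897'  # one directory per glacier
gdir.mkdir()
for task in (task_a, task_b):  # tasks run sequentially, like entity tasks
    task(gdir)
print(json.loads((gdir / 'b.json').read_text())['value'])  # -> 4
```

Because all state lives on disk, a workflow can stop after any task and restart later from the same working directory, which is exactly the persistence property the tutorial relies on.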
{
"cell_type": "code",
"execution_count": null,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "from oggm import DEFAULT_BASE_URL\n",
+ "DEFAULT_BASE_URL"
+ ]
+ },
+ {
+ "cell_type": "markdown",
"metadata": {},
+ "source": [
+ "Let's use OGGM to download the glacier directories for our two selected glaciers:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
- "# Where to fetch the pre-processed directories\n",
- "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.1/elev_bands/W5E5'\n",
- "gdirs = workflow.init_glacier_directories(rgi_ids, prepro_base_url=base_url, from_prepro_level=3, prepro_border=80)"
+ "gdirs = workflow.init_glacier_directories(\n",
+ " rgi_ids, # which glaciers?\n",
+ " prepro_base_url=DEFAULT_BASE_URL, # where to fetch the data?\n",
+ " from_prepro_level=4, # what kind of data? \n",
+ " prepro_border=80 # how big of a map?\n",
+ ")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "- the keyword `from_prepro_level` indicates that we will start from [pre-processed directories](https://docs.oggm.org/en/stable/shop.html#pre-processed-directories), i.e. data that are already prepared by the OGGM team. In many cases you will want to start from pre-processed directories, and from level 3 or 5. Here we start from level 3 and add some data to the processing in order to demonstrate the OGGM workflow.\n",
+ "- the keyword `from_prepro_level` indicates that we will start from [pre-processed directories](https://docs.oggm.org/en/stable/shop.html#pre-processed-directories), i.e. data that are already prepared by the OGGM team. In many cases you will want to start from pre-processed directories, and from level 3 or 5. Here we start from level 4 and add some data to the processing in order to demonstrate the OGGM workflow.\n",
"- the `prepro_border` keyword indicates the number of grid points which we'd like to add to each side of the glacier for the local map: the larger the glacier will grow, the larger the border parameter should be. The available pre-processed border values are: **10, 80, 160, 240** (depending on the model set-ups there might be more or less options). These are the fixed map sizes we prepared for you - any other map size will require a full processing (see the [further DEM sources example](../advanced/dem_sources.ipynb) for a tutorial)."
]
},
@@ -238,7 +274,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"type(gdirs), type(gdirs[0])"
@@ -254,7 +292,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"gdir = gdirs[0] # take Unteraar glacier\n",
@@ -271,7 +311,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"gdir"
@@ -280,7 +322,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"gdir.rgi_date # date at which the outlines are valid"
@@ -296,7 +340,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"from oggm import graphics\n",
@@ -313,7 +359,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"# Fetch the LOCAL pre-processed directories - note that no arguments are used!\n",
@@ -388,7 +436,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"import os\n",
@@ -427,7 +477,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"from oggm import tasks\n",
@@ -448,7 +500,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"inversion_output = gdir.read_pickle('inversion_output')\n",
@@ -508,7 +562,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.6"
+ "version": "3.10.9"
},
"latex_envs": {
"LaTeX_envs_menu_present": true,
diff --git a/notebooks/10minutes/run_with_gcm.ipynb b/notebooks/10minutes/run_with_gcm.ipynb
index ad15806d..a79d0a51 100644
--- a/notebooks/10minutes/run_with_gcm.ipynb
+++ b/notebooks/10minutes/run_with_gcm.ipynb
@@ -33,7 +33,7 @@
"\n",
"# Locals\n",
"import oggm.cfg as cfg\n",
- "from oggm import utils, workflow, tasks\n",
+ "from oggm import utils, workflow, tasks, DEFAULT_BASE_URL\n",
"from oggm.shop import gcm_climate"
]
},
@@ -72,31 +72,34 @@
"\n",
"# Go - get the pre-processed glacier directories\n",
"# You have to explicitly indicate the url from where you want to start from\n",
- "base_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/L3-L5_files/2023.1/elev_bands/W5E5/'\n",
- "gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=5, prepro_base_url=base_url)"
+ "gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=5, prepro_base_url=DEFAULT_BASE_URL)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "## The `_historical` runs"
+ "## The `_spinup_historical` runs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "The level 5 files now come with a pre-computed model run from the RGI outline date to the last possible date given by the historical climate data. In case of the new default climate dataset [GSWP3_W5E5](https://www.isimip.org/gettingstarted/input-data-bias-adjustment/details/80/), this is until the end of 2019, so the volume is computed until January 1st, 2020. These files are stored in the directory with a `_historical` suffix. Let's compile them into a single file for our two glaciers: "
+ "The level 5 files now come with a pre-computed model run from the RGI outline date to the last possible date given by the historical climate data. In case of the new default climate dataset [GSWP3_W5E5](https://www.isimip.org/gettingstarted/input-data-bias-adjustment/details/80/), this is until the end of 2019, so the volume is computed until January 1st, 2020. These files are stored in the directory with a `_spinup_historical` suffix (see the [\"10 minutes to... a dynamical spinup\"](dynamical_spinup.ipynb) tutorial for context).\n",
+ "\n",
+ "Let's compile them into a single file for our two glaciers: "
]
},
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
- "ds = utils.compile_run_output(gdirs, input_filesuffix='_historical')\n",
+ "ds = utils.compile_run_output(gdirs, input_filesuffix='_spinup_historical')\n",
"ds.volume.plot(hue='rgi_id');"
]
},
@@ -104,13 +107,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Each RGI glacier has an \"inventory date\", the time at which the ouline is valid. It can change between glaciers and this is why the two timeseries start at a different date:"
+ "Each RGI glacier has an \"inventory date\", the time at which the outline is valid:"
]
},
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"gdirs[0].rgi_date, gdirs[1].rgi_date"
@@ -120,7 +125,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "One thing to remember here is that although we try hard to avoid spin-up issues, the glacier after the inversion is not in a perfect dynamical state. Some variable (in particular glacier length) might need some years to adjust. In the [\"10 minutes to... a dynamical spinup\"](dynamical_spinup.ipynb) tutorial, we talk about ways to deal with this problem. For now, these files are perfect for our purpose, since we plan to start our simulation in 2020."
+ "The glacier volume and area changes before that date are highly uncertain and serve the purpose of spinup only! In the [\"10 minutes to... a dynamical spinup\"](dynamical_spinup.ipynb) tutorial, we talk about why. For now, these files are perfect for our purpose, since we plan to start our simulation in 2020."
]
},
{
@@ -140,7 +145,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"# you can choose one of these 5 different GCMs:\n",
@@ -171,7 +178,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"gdirs[0].get_climate_info()"
@@ -188,13 +197,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "We now run OGGM under various scenarios **and start from the end year of the historical run**:"
+ "We now run OGGM under various scenarios **and start from the end year of the historical spinup run**:"
]
},
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"for ssp in ['ssp126', 'ssp370', 'ssp585']:\n",
@@ -202,7 +213,7 @@
" workflow.execute_entity_task(tasks.run_from_climate_data, gdirs,\n",
" climate_filename='gcm_data', # use gcm_data, not climate_historical\n",
" climate_input_filesuffix=rid, # use the chosen scenario\n",
- " init_model_filesuffix='_historical', # this is important! Start from 2020 glacier\n",
+ " init_model_filesuffix='_spinup_historical', # this is important! Start from 2020 glacier\n",
" output_filesuffix=rid, # recognize the run for later\n",
" );"
]
@@ -217,7 +228,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"f, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 4))\n",
@@ -253,56 +266,71 @@
"source": [
"ISIMIP data is very useful because it is bias corrected. Furthermore, it offers daily data (which we will soon make available in OGGM).\n",
"\n",
- "But you may want a higher diversity of models or scenarios: for this, you may also use the CMIP5 or CMIP5 GCMs directly. These need to be bias-corrected first to the applied baseline historical data (see [process_gcm_data](https://docs.oggm.org/en/stable/generated/oggm.tasks.process_gcm_data.html#oggm.shop.gcm_climate.process_gcm_data). This relatively simple bias-correction is automatically done by `process_cmip_data` and is very important, as the model is very sensitive to temperature variability.\n",
+ "But you may want a higher diversity of models or scenarios: for this, you may also use the CMIP5 or CMIP6 GCMs directly. These need to be bias-corrected first to the applied baseline historical data (see [process_gcm_data](https://docs.oggm.org/en/stable/generated/oggm.tasks.process_gcm_data.html#oggm.shop.gcm_climate.process_gcm_data)). This relatively simple bias-correction is automatically done by `process_cmip_data` and is very important, as the model is very sensitive to temperature variability.\n",
"- CMIP5 has 4 different RCP scenarios and a variety of GCMs, online you can find them [here](https://cluster.klima.uni-bremen.de/~oggm/cmip5-ng). The above mentioned storage contains information about the data, [how to cite them](https://cluster.klima.uni-bremen.de/~oggm/cmip5-ng/ABOUT) and [tabular summaries](https://cluster.klima.uni-bremen.de/~oggm/cmip5-ng/gcm_table.html) of the available GCMs. \n",
"- CMIP6 has 4 different SSP scenarios, see [this table](https://cluster.klima.uni-bremen.de/~oggm/cmip6/gcm_table.html) for a summary of available GCMs. There are even some CMIP6 runs that go until [2300](https://cluster.klima.uni-bremen.de/~oggm/cmip6/gcm_table_2300.html).\n",
"\n",
"> Note, that the CMIP5 and CMIP6 files are much larger than the ISIMIP3b files. This is because we use a simple processing trick for the ISIMIP3b GCM files as we only save the glacier gridpoints, instead of each longitude and latitude. \n",
"\n",
- "**Therefore: run the following code only if it is ok to download a few gigabytes of data:** "
+ "**Therefore: run the following code only if it is OK to download a few gigabytes of data.** Set the variable below to `True` to run it. "
]
},
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
- "bp = 'https://cluster.klima.uni-bremen.de/~oggm/cmip5-ng/pr/pr_mon_CCSM4_{}_r1i1p1_g025.nc'\n",
- "bt = 'https://cluster.klima.uni-bremen.de/~oggm/cmip5-ng/tas/tas_mon_CCSM4_{}_r1i1p1_g025.nc'\n",
+ "download_cmip5_data = False"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "if download_cmip5_data:\n",
"\n",
- "color_dict_rcp={'rcp26':'blue', 'rcp45':'violet', 'rcp85':'red'}\n",
+ " bp = 'https://cluster.klima.uni-bremen.de/~oggm/cmip5-ng/pr/pr_mon_CCSM4_{}_r1i1p1_g025.nc'\n",
+ " bt = 'https://cluster.klima.uni-bremen.de/~oggm/cmip5-ng/tas/tas_mon_CCSM4_{}_r1i1p1_g025.nc'\n",
"\n",
- "# Download and bias correct the data\n",
- "for rcp in ['rcp26', 'rcp45', 'rcp85']: # 'rcp60' would also be available\n",
- " # Download the files\n",
- " ft = utils.file_downloader(bt.format(rcp))\n",
- " fp = utils.file_downloader(bp.format(rcp))\n",
- " # bias correct them\n",
- " workflow.execute_entity_task(gcm_climate.process_cmip_data, gdirs, \n",
- " filesuffix='_CMIP5_CCSM4_{}'.format(rcp), # recognize the climate file for later\n",
- " fpath_temp=ft, # temperature projections\n",
- " fpath_precip=fp, # precip projections\n",
- " );\n",
+ " color_dict_rcp={'rcp26':'blue', 'rcp45':'violet', 'rcp85':'red'}\n",
"\n",
- "# Run OGGM\n",
- "for rcp in ['rcp26', 'rcp45', 'rcp85']: #'rcp60',\n",
- " rid = '_CMIP5_CCSM4_{}'.format(rcp)\n",
- " workflow.execute_entity_task(tasks.run_from_climate_data, gdirs, ys=2020, \n",
- " climate_filename='gcm_data', # use gcm_data, not climate_historical\n",
- " climate_input_filesuffix=rid, # use the chosen scenario\n",
- " init_model_filesuffix='_historical', # this is important! Start from 2020 glacier\n",
- " output_filesuffix=rid, # recognize the run for later\n",
- " );\n",
+ " # Download and bias correct the data\n",
+ " for rcp in ['rcp26', 'rcp45', 'rcp85']: # 'rcp60' would also be available\n",
+ " # Download the files\n",
+ " ft = utils.file_downloader(bt.format(rcp))\n",
+ " fp = utils.file_downloader(bp.format(rcp))\n",
+ " # bias correct them\n",
+ " workflow.execute_entity_task(gcm_climate.process_cmip_data, gdirs, \n",
+ " filesuffix='_CMIP5_CCSM4_{}'.format(rcp), # recognize the climate file for later\n",
+ " fpath_temp=ft, # temperature projections\n",
+ " fpath_precip=fp, # precip projections\n",
+ " );\n",
"\n",
- "# Plot\n",
- "f, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 4))\n",
- "for rcp in ['rcp26', 'rcp45', 'rcp85']: #'rcp60',\n",
- " rid = '_CMIP5_CCSM4_{}'.format(rcp)\n",
- " ds = utils.compile_run_output(gdirs, input_filesuffix=rid)\n",
- " ds.isel(rgi_id=0).volume.plot(ax=ax1, label=rcp, c=color_dict_rcp[rcp]);\n",
- " ds.isel(rgi_id=1).volume.plot(ax=ax2, label=rcp, c=color_dict_rcp[rcp]);\n",
- "plt.legend();"
+ " # Run OGGM\n",
+ " for rcp in ['rcp26', 'rcp45', 'rcp85']: #'rcp60',\n",
+ " rid = '_CMIP5_CCSM4_{}'.format(rcp)\n",
+ " workflow.execute_entity_task(tasks.run_from_climate_data, gdirs, ys=2020, \n",
+ " climate_filename='gcm_data', # use gcm_data, not climate_historical\n",
+ " climate_input_filesuffix=rid, # use the chosen scenario\n",
+ " init_model_filesuffix='_historical', # this is important! Start from 2020 glacier\n",
+ " output_filesuffix=rid, # recognize the run for later\n",
+ " );\n",
+ "\n",
+ " # Plot\n",
+ " f, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 4))\n",
+ " for rcp in ['rcp26', 'rcp45', 'rcp85']: #'rcp60',\n",
+ " rid = '_CMIP5_CCSM4_{}'.format(rcp)\n",
+ " ds = utils.compile_run_output(gdirs, input_filesuffix=rid)\n",
+ " ds.isel(rgi_id=0).volume.plot(ax=ax1, label=rcp, c=color_dict_rcp[rcp]);\n",
+ " ds.isel(rgi_id=1).volume.plot(ax=ax2, label=rcp, c=color_dict_rcp[rcp]);\n",
+ " plt.legend();"
]
},
{
@@ -328,7 +356,9 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"if download_cmip6_data:\n",
@@ -375,6 +405,7 @@
"source": [
"## What's next?\n",
"\n",
+ "- checkout the 10 mins tutorial on the [dynamical spinup](dynamical_spinup.ipynb)\n",
"- see also the tutorial on [Merge, analyse and visualize OGGM GCM runs](../advanced/merge_gcm_runs_and_visualize.ipynb)\n",
"- return to the [OGGM documentation](https://docs.oggm.org)\n",
"- back to the [table of contents](../welcome.ipynb)"
@@ -398,7 +429,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.6"
+ "version": "3.10.9"
},
"latex_envs": {
"LaTeX_envs_menu_present": true,
diff --git a/notebooks/beginner/distribute_flowline.ipynb b/notebooks/beginner/distribute_flowline.ipynb
index cbbd4e64..278d9d50 100644
--- a/notebooks/beginner/distribute_flowline.ipynb
+++ b/notebooks/beginner/distribute_flowline.ipynb
@@ -382,7 +382,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.11"
+ "version": "3.10.9"
}
},
"nbformat": 4,
diff --git a/notebooks/others/dem_comparison.ipynb b/notebooks/others/rgitopo_rgi6.ipynb
similarity index 96%
rename from notebooks/others/dem_comparison.ipynb
rename to notebooks/others/rgitopo_rgi6.ipynb
index 36a4a010..0ce07a62 100644
--- a/notebooks/others/dem_comparison.ipynb
+++ b/notebooks/others/rgitopo_rgi6.ipynb
@@ -4,14 +4,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Compare different DEMs for individual glaciers"
+ "# Compare different DEMs for individual glaciers: RGI-TOPO for RGI v6.0https://cluster.klima.uni-bremen.de/data/gdirs/dems_v2/default https://cluster.klima.uni-bremen.de/data/gdirs/dems_v2/default https://cluster.klima.uni-bremen.de/data/gdirs/dems_v2/default "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "For most glaciers in the world there are several digital elevation models (DEM) which cover the respective glacier. In OGGM we have currently implemented 10 different open access DEMs to choose from. Some are regional and only available in certain areas (e.g. Greenland or Antarctica) and some cover almost the entire globe. For more information, visit the [rgitools documentation about DEMs](https://rgitools.readthedocs.io/en/latest/dems.html).\n",
+ "For most glaciers in the world there are several digital elevation models (DEM) which cover the respective glacier. In OGGM we have currently implemented more than 10 different open access DEMs to choose from. Some are regional and only available in certain areas (e.g. Greenland or Antarctica) and some cover almost the entire globe. \n",
"\n",
"This notebook allows to see which of the DEMs are available for a selected glacier and how they compare to each other. That way it is easy to spot systematic differences and also invalid points in the DEMs."
]
@@ -178,7 +178,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Note that you could reach the same goal by downloading the data manually from https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.4/rgitopo/ "
+ "Note that you could reach the same goal by downloading the data manually from https://cluster.klima.uni-bremen.de/data/gdirs/dems_v2/default \n",
+ "(high resolution version: https://cluster.klima.uni-bremen.de/data/gdirs/dems_v1/highres)"
]
},
{
@@ -188,10 +189,9 @@
"outputs": [],
"source": [
"# URL of the preprocessed GDirs\n",
- "gdir_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.4/rgitopo/'\n",
+ "gdir_url = 'https://cluster.klima.uni-bremen.de/data/gdirs/dems_v2/default'\n",
"# We use OGGM to download the data\n",
- "gdir = init_glacier_directories([rgi_id], from_prepro_level=1, prepro_border=10, \n",
- " prepro_rgi_version='62', prepro_base_url=gdir_url)[0]"
+ "gdir = init_glacier_directories([rgi_id], from_prepro_level=1, prepro_border=10, prepro_base_url=gdir_url)[0]"
]
},
{
@@ -293,13 +293,6 @@
"y_size = x_size / n_cols * n_rows"
]
},
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- },
{
"cell_type": "markdown",
"metadata": {},
@@ -713,7 +706,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.8.15"
+ "version": "3.10.9"
},
"latex_envs": {
"LaTeX_envs_menu_present": true,
diff --git a/notebooks/others/rgitopo_rgi7.ipynb b/notebooks/others/rgitopo_rgi7.ipynb
new file mode 100644
index 00000000..eb680d0d
--- /dev/null
+++ b/notebooks/others/rgitopo_rgi7.ipynb
@@ -0,0 +1,780 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# RGI-TOPO for RGI 7.0"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "OGGM was used to generate the topography data used to compute the topographical attributes and the centerlines products for RGI v7.0.\n",
+ "\n",
+ "Here we show how to access this data from OGGM."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Input parameters "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This notebook can be run as a script with parameters using [papermill](https://github.com/nteract/papermill), but it is not necessary. The following cell contains the parameters you can choose from:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "papermill": {
+ "duration": 0.017019,
+ "end_time": "2019-05-02T12:29:41.613572",
+ "exception": false,
+ "start_time": "2019-05-02T12:29:41.596553",
+ "status": "completed"
+ },
+ "tags": [
+ "parameters"
+ ]
+ },
+ "outputs": [],
+ "source": [
+ "# The RGI Id of the glaciers you want to look for\n",
+ "# Use the original shapefiles or the GLIMS viewer to check for the ID: https://www.glims.org/maps/glims\n",
+ "rgi_id = 'RGI2000-v7.0-G-01-06486' # Denali\n",
+ "\n",
+ "# The default is to test for all sources available for this glacier\n",
+ "# Set to a list of source names to override this\n",
+ "sources = None\n",
+ "# Where to write the plots. Default is in the current working directory\n",
+ "plot_dir = f'outputs/{rgi_id}'\n",
+ "# The RGI version to use\n",
+ "# V62 is an unofficial modification of V6 with only minor, backwards compatible modifications\n",
+ "prepro_rgi_version = 62\n",
+ "# Size of the map around the glacier. Currently only 10 and 40 are available\n",
+ "prepro_border = 10\n",
+ "# Degree of processing level. Currently only 1 is available.\n",
+ "from_prepro_level = 1"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Check input and set up"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "papermill": {
+ "duration": 0.015677,
+ "end_time": "2019-05-02T12:29:41.666761",
+ "exception": false,
+ "start_time": "2019-05-02T12:29:41.651084",
+ "status": "completed"
+ },
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "# The sources can be given as parameters\n",
+ "if sources is not None and isinstance(sources, str):\n",
+ " sources = sources.split(',')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "papermill": {
+ "duration": 0.015098,
+ "end_time": "2019-05-02T12:29:41.691832",
+ "exception": false,
+ "start_time": "2019-05-02T12:29:41.676734",
+ "status": "completed"
+ },
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "# Plotting directory as well\n",
+ "if not plot_dir:\n",
+ " plot_dir = './' + rgi_id\n",
+ "import os\n",
+ "plot_dir = os.path.abspath(plot_dir)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "papermill": {
+ "duration": 1.830809,
+ "end_time": "2019-05-02T12:29:43.532252",
+ "exception": false,
+ "start_time": "2019-05-02T12:29:41.701443",
+ "status": "completed"
+ },
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "from oggm import cfg, utils, workflow, tasks, graphics, GlacierDirectory\n",
+ "import pandas as pd\n",
+ "import numpy as np\n",
+ "import xarray as xr\n",
+ "import rioxarray as rioxr\n",
+ "import geopandas as gpd\n",
+ "import salem\n",
+ "import matplotlib.pyplot as plt\n",
+ "from mpl_toolkits.axes_grid1 import AxesGrid\n",
+ "import itertools\n",
+ "\n",
+ "from oggm.utils import DEM_SOURCES\n",
+ "from oggm.workflow import init_glacier_directories"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "papermill": {
+ "duration": 0.093459,
+ "end_time": "2019-05-02T12:29:43.661588",
+ "exception": false,
+ "start_time": "2019-05-02T12:29:43.568129",
+ "status": "completed"
+ },
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "# Make sure the plot directory exists\n",
+ "utils.mkdir(plot_dir);\n",
+ "# Use OGGM to download the data\n",
+ "cfg.initialize()\n",
+ "cfg.PATHS['working_dir'] = utils.gettempdir(dirname='OGGM-RGITOPO-RGI7', reset=True)\n",
+ "cfg.PARAMS['use_intersects'] = False"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Download the data using OGGM utility functions "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Note that you could reach the same goal by downloading the data manually from "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# URL of the preprocessed GDirs\n",
+ "gdir_url = 'https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.6/rgitopo/2023.1/'\n",
+ "# We use OGGM to download the data\n",
+ "gdir = init_glacier_directories([rgi_id], from_prepro_level=1, prepro_border=10, prepro_rgi_version='70', prepro_base_url=gdir_url)[0]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "gdir"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Read the DEMs and store them all in a dataset "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "papermill": {
+ "duration": 0.028343,
+ "end_time": "2019-05-02T12:29:44.137034",
+ "exception": false,
+ "start_time": "2019-05-02T12:29:44.108691",
+ "status": "completed"
+ },
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "if sources is None:\n",
+ " sources = [src for src in os.listdir(gdir.dir) if src in utils.DEM_SOURCES]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "papermill": {
+ "duration": 0.019044,
+ "end_time": "2019-05-02T12:29:44.166408",
+ "exception": false,
+ "start_time": "2019-05-02T12:29:44.147364",
+ "status": "completed"
+ },
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "print('RGI ID:', rgi_id)\n",
+ "print('Available DEM sources:', sources)\n",
+ "print('Plotting directory:', plot_dir)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "papermill": {
+ "duration": 0.081067,
+ "end_time": "2019-05-02T12:30:18.702292",
+ "exception": false,
+ "start_time": "2019-05-02T12:30:18.621225",
+ "status": "completed"
+ },
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "# We use xarray to store the data\n",
+ "ods = xr.Dataset()\n",
+ "for src in sources:\n",
+ " demfile = os.path.join(gdir.dir, src) + '/dem.tif'\n",
+ " with rioxr.open_rasterio(demfile) as ds:\n",
+ " data = ds.sel(band=1).load() * 1.\n",
+ " ods[src] = data.where(data > -100, np.NaN)\n",
+ " \n",
+ " sy, sx = np.gradient(ods[src], gdir.grid.dx, gdir.grid.dx)\n",
+ " ods[src + '_slope'] = ('y', 'x'), np.arctan(np.sqrt(sy**2 + sx**2))\n",
+ "\n",
+ "with rioxr.open_rasterio(gdir.get_filepath('glacier_mask')) as ds:\n",
+ " ods['mask'] = ds.sel(band=1).load()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "papermill": {
+ "duration": 0.0212,
+ "end_time": "2019-05-02T12:30:18.877473",
+ "exception": false,
+ "start_time": "2019-05-02T12:30:18.856273",
+ "status": "completed"
+ },
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "# Decide on the number of plots and figure size\n",
+ "ns = len(sources)\n",
+ "x_size = 12\n",
+ "n_cols = 3\n",
+ "n_rows = -(-ns // n_cols)\n",
+ "y_size = x_size / n_cols * n_rows"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Raw topography data "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "papermill": {
+ "duration": 3.510211,
+ "end_time": "2019-05-02T12:30:22.402979",
+ "exception": false,
+ "start_time": "2019-05-02T12:30:18.892768",
+ "status": "completed"
+ },
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "smap = salem.graphics.Map(gdir.grid, countries=False)\n",
+ "smap.set_shapefile(gdir.read_shapefile('outlines'))\n",
+ "smap.set_plot_params(cmap='topo')\n",
+ "smap.set_lonlat_contours(add_tick_labels=False)\n",
+ "smap.set_plot_params(vmin=np.nanquantile([ods[s].min() for s in sources], 0.25),\n",
+ " vmax=np.nanquantile([ods[s].max() for s in sources], 0.75))\n",
+ "\n",
+ "fig = plt.figure(figsize=(x_size, y_size))\n",
+ "grid = AxesGrid(fig, 111,\n",
+ " nrows_ncols=(n_rows, n_cols),\n",
+ " axes_pad=0.7,\n",
+ " cbar_mode='each',\n",
+ " cbar_location='right',\n",
+ " cbar_pad=0.1\n",
+ " )\n",
+ "\n",
+ "for i, s in enumerate(sources):\n",
+ " data = ods[s]\n",
+ " smap.set_data(data)\n",
+ " ax = grid[i]\n",
+ " smap.visualize(ax=ax, addcbar=False, title=s)\n",
+ " if np.isnan(data).all():\n",
+ " grid[i].cax.remove()\n",
+ " continue\n",
+ " cax = grid.cbar_axes[i]\n",
+ " smap.colorbarbase(cax)\n",
+ " \n",
+ "# take care of uneven grids\n",
+ "if ax != grid[-1] and not grid[-1].title.get_text():\n",
+ " grid[-1].remove()\n",
+ " grid[-1].cax.remove()\n",
+ "if ax != grid[-2] and not grid[-2].title.get_text():\n",
+ " grid[-2].remove()\n",
+ " grid[-2].cax.remove()\n",
+ "\n",
+ "plt.savefig(os.path.join(plot_dir, 'dem_topo_color.png'), dpi=150, bbox_inches='tight')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Shaded relief "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "papermill": {
+ "duration": 3.282248,
+ "end_time": "2019-05-02T12:30:25.712385",
+ "exception": false,
+ "start_time": "2019-05-02T12:30:22.430137",
+ "status": "completed"
+ },
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "fig = plt.figure(figsize=(x_size, y_size))\n",
+ "grid = AxesGrid(fig, 111,\n",
+ " nrows_ncols=(n_rows, n_cols),\n",
+ " axes_pad=0.7,\n",
+ " cbar_location='right',\n",
+ " cbar_pad=0.1\n",
+ " )\n",
+ "smap.set_plot_params(cmap='Blues')\n",
+ "smap.set_shapefile()\n",
+ "for i, s in enumerate(sources):\n",
+ " data = ods[s].copy().where(np.isfinite(ods[s]), 0)\n",
+ " smap.set_data(data * 0)\n",
+ " ax = grid[i]\n",
+ " smap.set_topography(data)\n",
+ " smap.visualize(ax=ax, addcbar=False, title=s)\n",
+ " \n",
+ "# take care of uneven grids\n",
+ "if ax != grid[-1] and not grid[-1].title.get_text():\n",
+ " grid[-1].remove()\n",
+ " grid[-1].cax.remove()\n",
+ "if ax != grid[-2] and not grid[-2].title.get_text():\n",
+ " grid[-2].remove()\n",
+ " grid[-2].cax.remove()\n",
+ "\n",
+ "plt.savefig(os.path.join(plot_dir, 'dem_topo_shade.png'), dpi=150, bbox_inches='tight')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Slope "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "fig = plt.figure(figsize=(x_size, y_size))\n",
+ "grid = AxesGrid(fig, 111,\n",
+ " nrows_ncols=(n_rows, n_cols),\n",
+ " axes_pad=0.7,\n",
+ " cbar_mode='each',\n",
+ " cbar_location='right',\n",
+ " cbar_pad=0.1\n",
+ " )\n",
+ "\n",
+ "smap.set_topography();\n",
+ "smap.set_plot_params(vmin=0, vmax=0.7, cmap='Blues')\n",
+ "\n",
+ "for i, s in enumerate(sources):\n",
+ " data = ods[s + '_slope']\n",
+ " smap.set_data(data)\n",
+ " ax = grid[i]\n",
+ " smap.visualize(ax=ax, addcbar=False, title=s + ' (slope)')\n",
+ " cax = grid.cbar_axes[i]\n",
+ " smap.colorbarbase(cax)\n",
+ " \n",
+ "# take care of uneven grids\n",
+ "if ax != grid[-1] and not grid[-1].title.get_text():\n",
+ " grid[-1].remove()\n",
+ " grid[-1].cax.remove()\n",
+ "if ax != grid[-2] and not grid[-2].title.get_text():\n",
+ " grid[-2].remove()\n",
+ " grid[-2].cax.remove()\n",
+ "\n",
+ "plt.savefig(os.path.join(plot_dir, 'dem_slope.png'), dpi=150, bbox_inches='tight')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Some simple statistics about the DEMs "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "df = pd.DataFrame()\n",
+ "for s in sources:\n",
+ " df[s] = ods[s].data.flatten()[ods.mask.data.flatten() == 1]\n",
+ "\n",
+ "dfs = pd.DataFrame()\n",
+ "for s in sources:\n",
+ " dfs[s] = ods[s + '_slope'].data.flatten()[ods.mask.data.flatten() == 1]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "dfs = df.describe()\n",
+ "dfs.loc['range'] = dfs.loc['max'] - dfs.loc['min']\n",
+ "dfs"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Comparison matrix plot "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Table of differences between DEMS\n",
+ "df_diff = pd.DataFrame()\n",
+ "done = []\n",
+ "for s1, s2 in itertools.product(sources, sources):\n",
+ " if s1 == s2:\n",
+ " continue\n",
+ " if (s2, s1) in done:\n",
+ " continue\n",
+ " df_diff[s1 + '-' + s2] = df[s1] - df[s2]\n",
+ " done.append((s1, s2))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Decide on plot levels\n",
+ "max_diff = df_diff.quantile(0.99).max()\n",
+ "base_levels = np.array([-8, -5, -3, -1.5, -1, -0.5, -0.2, -0.1, 0, 0.1, 0.2, 0.5, 1, 1.5, 3, 5, 8])\n",
+ "if max_diff < 10:\n",
+ " levels = base_levels\n",
+ "elif max_diff < 100:\n",
+ " levels = base_levels * 10\n",
+ "elif max_diff < 1000:\n",
+ " levels = base_levels * 100\n",
+ "else:\n",
+ " levels = base_levels * 1000\n",
+ "levels = [l for l in levels if abs(l) < max_diff]\n",
+ "if max_diff > 10:\n",
+ " levels = [int(l) for l in levels]\n",
+ "levels"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "papermill": {
+ "duration": 3.367876,
+ "end_time": "2019-05-02T12:30:29.111637",
+ "exception": false,
+ "start_time": "2019-05-02T12:30:25.743761",
+ "status": "completed"
+ },
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "smap.set_plot_params(levels=levels, cmap='PuOr', extend='both')\n",
+ "smap.set_shapefile(gdir.read_shapefile('outlines'))\n",
+ "\n",
+ "fig = plt.figure(figsize=(14, 14))\n",
+ "grid = AxesGrid(fig, 111,\n",
+ " nrows_ncols=(ns - 1, ns - 1),\n",
+ " axes_pad=0.3,\n",
+ " cbar_mode='single',\n",
+ " cbar_location='right',\n",
+ " cbar_pad=0.1\n",
+ " )\n",
+ "done = []\n",
+ "for ax in grid:\n",
+ " ax.set_axis_off()\n",
+ "for s1, s2 in itertools.product(sources, sources):\n",
+ " if s1 == s2:\n",
+ " continue\n",
+ " if (s2, s1) in done:\n",
+ " continue\n",
+ " data = ods[s1] - ods[s2]\n",
+ " ax = grid[sources.index(s1) * (ns - 1) + sources[1:].index(s2)]\n",
+ " ax.set_axis_on()\n",
+ " smap.set_data(data)\n",
+ " smap.visualize(ax=ax, addcbar=False)\n",
+ " done.append((s1, s2))\n",
+ " ax.set_title(s1 + '-' + s2, fontsize=8)\n",
+ " \n",
+ "cax = grid.cbar_axes[0]\n",
+ "smap.colorbarbase(cax);\n",
+ "\n",
+ "plt.savefig(os.path.join(plot_dir, 'dem_diffs.png'), dpi=150, bbox_inches='tight')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Comparison scatter plot "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "papermill": {
+ "duration": 28.675102,
+ "end_time": "2019-05-02T12:30:57.924205",
+ "exception": false,
+ "start_time": "2019-05-02T12:30:29.249103",
+ "status": "completed"
+ },
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "import seaborn as sns\n",
+ "sns.set(style=\"ticks\")\n",
+ "\n",
+ "l1, l2 = (utils.nicenumber(df.min().min(), binsize=50, lower=True), \n",
+ " utils.nicenumber(df.max().max(), binsize=50, lower=False))\n",
+ "\n",
+ "def plot_unity(xdata, ydata, **kwargs):\n",
+ " points = np.linspace(l1, l2, 100)\n",
+ " plt.gca().plot(points, points, color='k', marker=None,\n",
+ " linestyle=':', linewidth=3.0)\n",
+ "\n",
+ "g = sns.pairplot(df.dropna(how='all', axis=1).dropna(), plot_kws=dict(s=50, edgecolor=\"C0\", linewidth=1));\n",
+ "g.map_offdiag(plot_unity)\n",
+ "for asx in g.axes:\n",
+ " for ax in asx:\n",
+ " ax.set_xlim((l1, l2))\n",
+ " ax.set_ylim((l1, l2))\n",
+ "\n",
+ "plt.savefig(os.path.join(plot_dir, 'dem_scatter.png'), dpi=150, bbox_inches='tight')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Table statistics "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "papermill": {
+ "duration": 0.074215,
+ "end_time": "2019-05-02T12:30:58.035917",
+ "exception": false,
+ "start_time": "2019-05-02T12:30:57.961702",
+ "status": "completed"
+ },
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "df.describe()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "papermill": {
+ "duration": 0.065549,
+ "end_time": "2019-05-02T12:30:58.159184",
+ "exception": false,
+ "start_time": "2019-05-02T12:30:58.093635",
+ "status": "completed"
+ },
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "df.corr()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "df_diff.describe()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "df_diff.abs().describe()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## What's next?\n",
+ "\n",
+ "- return to the [OGGM documentation](https://docs.oggm.org)\n",
+ "- back to the [table of contents](../welcome.ipynb)"
+ ]
+ }
+ ],
+ "metadata": {
+ "celltoolbar": "Tags",
+ "hide_input": false,
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.9"
+ },
+ "latex_envs": {
+ "LaTeX_envs_menu_present": true,
+ "autoclose": false,
+ "autocomplete": true,
+ "bibliofile": "biblio.bib",
+ "cite_by": "apalike",
+ "current_citInitial": 1,
+ "eqLabelWithNumbers": true,
+ "eqNumInitial": 1,
+ "hotkeys": {
+ "equation": "Ctrl-E",
+ "itemize": "Ctrl-I"
+ },
+ "labels_anchors": false,
+ "latex_user_defs": false,
+ "report_style_numbering": false,
+ "user_envs_cfg": false
+ },
+ "nbTranslate": {
+ "displayLangs": [
+ "*"
+ ],
+ "hotkey": "alt-t",
+ "langInMainMenu": true,
+ "sourceLang": "en",
+ "targetLang": "fr",
+ "useGoogleTranslate": true
+ },
+ "papermill": {
+ "duration": 78.878142,
+ "end_time": "2019-05-02T12:30:59.784271",
+ "environment_variables": {},
+ "exception": null,
+ "input_path": "dem_comparison.ipynb",
+ "output_path": "out-param.ipynb",
+ "parameters": {
+ "rgi_id": "RGI60-03.02489"
+ },
+ "start_time": "2019-05-02T12:29:40.906129",
+ "version": "1.0.0"
+ },
+ "toc": {
+ "base_numbering": 1,
+ "nav_menu": {},
+ "number_sections": false,
+ "sideBar": true,
+ "skip_h1_title": true,
+ "title_cell": "Table of Contents",
+ "title_sidebar": "Contents",
+ "toc_cell": false,
+ "toc_position": {},
+ "toc_section_display": true,
+ "toc_window_display": false
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/notebooks/welcome.ipynb b/notebooks/welcome.ipynb
index 09a410ee..05e805a2 100644
--- a/notebooks/welcome.ipynb
+++ b/notebooks/welcome.ipynb
@@ -54,9 +54,12 @@
"- [merge_gcm_runs_and_visualize](advanced/merge_gcm_runs_and_visualize.ipynb): how to merge different GCM runs into one dataset, analyse them on a regional scale and visualize with HoloViz\n",
"- [dynamical_spinup](advanced/dynamical_spinup.ipynb): a deeper dive into the dynamical spinup for past simulations\n",
"\n",
+ "**RGI-TOPO:**\n",
+ "- [rgitopo_rgi6](others/rgitopo_rgi6.ipynb): RGI-TOPO for RGI v6.0\n",
+ "- [rgitopo_rgi7](others/rgitopo_rgi7.ipynb): RGI-TOPO for RGI v7.0 (**new!**)\n",
+ "\n",
"**Related to OGGM:**\n",
"- [holoviz_intro](others/holoviz_intro.ipynb): an introduction to the HoloViz vizualisation ecosystem (previously called PyViz)\n",
- "- [dem_comparison](others/dem_comparison.ipynb): compare the various DEMs available in OGGM\n",
"\n",
"**Tutorials in (re-)construction:**\n",
"- [inversion_with_frontal_ablation](construction/inversion_with_frontal_ablation.ipynb): a case study about ice thickness inversion with frontal ablation\n",
@@ -108,7 +111,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.11"
+ "version": "3.10.9"
},
"latex_envs": {
"LaTeX_envs_menu_present": true,