Support for Surrogate Modeling Toolbox Machine Learning (GENN) Models #1169

Merged: 10 commits, merged Sep 29, 2023
2 changes: 1 addition & 1 deletion .github/workflows/checks.yml
@@ -73,7 +73,7 @@ jobs:
- python-version: "3.8" # avoid uploading coverage for full matrix
use_coverage: true
- python-version: "3.8" # this is to avoid installing optional dependencies in all environments
- optional-dependencies: tensorflow sympy torch scikit-learn
+ optional-dependencies: tensorflow sympy torch scikit-learn smt
env:
# uncomment this to debug Qt initialization errors
# QT_DEBUG_PLUGINS: '1'
24 changes: 14 additions & 10 deletions docs/source/chapt_surrogates/mlaiplugin.rst
@@ -24,21 +24,19 @@ launched.
Keras SavedModel format (folder containing .pb data files), or serialized
to an architecture dictionary (.json) with separately saved model weights
(.h5). Additionally, this tool supports PyTorch models saved in the standard
-format (.pt) and Scikit-learn models serialized in the standard Python pickle
-format (.pkl). The examples folder contains demonstrative training and class
-scripts for models containing no custom layer (see below for more information
-on adding custom layers), a custom layer with a preset normalization option
+format (.pt), and Scikit-learn and Surrogate Modeling Toolbox models serialized
+in the standard Python pickle format (.pkl). The examples folder contains
+demonstrative training and class scripts for models containing no custom layer
+(see below for more information on adding custom layers), a custom layer with a
+preset normalization option
and a custom layer with a custom normalization function, as well as models
saved in all supported file formats. To use this tool, users must train and
-export a machine leanring model and place the file in the appropriate folder
+export a machine learning model and place the file in the appropriate folder
*user_ml_ai_plugins* in the working directory, as shown below. Optionally,
users may save Keras models with custom attributes to display on the node,
such as variable labels and bounds. While training a Keras model with custom
attributes is not required to use the plugin tool, users must provide the
necessary class script if the Keras model does contain a custom object (see
-below for further information on creating custom objects). PyTorch and
-Scikit-learn models do not have this requirement and the class script does not
-need to exist in the plugins folder. This model type is used in the same manner
+below for further information on creating custom objects). PyTorch, Scikit-learn, and
+Surrogate Modeling Toolbox models do not have this requirement and the class script
+does not need to exist in the plugins folder. This model type is used in the same manner
as Pymodel Plugins, per the workflow in Section :ref:`tutorial.surrogate.fs`.
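
For example, a minimal sketch (assuming the FOQUS working directory is the
current directory, and using the pickled example model name from the FOQUS
examples folder) of placing a Surrogate Modeling Toolbox model file where the
plugin will find it:

    import shutil
    from pathlib import Path

    # assumed layout: the plugin folder sits inside the FOQUS working directory
    plugins_dir = Path.cwd() / "user_ml_ai_plugins"
    plugins_dir.mkdir(exist_ok=True)

    # pickled Scikit-learn and Surrogate Modeling Toolbox models need only the
    # .pkl file; no class script is required
    shutil.copy2("mea_column_model_smt.pkl", plugins_dir)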

Custom Model Attributes
@@ -86,6 +84,13 @@ https://scikit-learn.org/stable/index.html and further information on deep learning
capabilities as well:
https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html#sklearn.neural_network.MLPRegressor.

Surrogate Modeling Toolbox is an open-source Python package supporting a number
of surrogate modeling methods, including gradient-enhanced neural network (GENN)
models. GENN models train their parameters by minimizing a modified least-squares
estimator that accounts for partial-derivative predictions, yielding better
accuracy from fewer training points than non-gradient-enhanced models.
Gradient-enhanced methods are suited to use cases where the system behavior and
its gradients are generally known or can be generated, such as continuous
physics-based problems like aerodynamics. If gradient data is not available,
users may run the gradient generation tool provided within FOQUS; see the tool
documentation here: :ref:`gengrad`. Users may find further information on GENN
models in the Surrogate Modeling Toolbox documentation:
https://smt.readthedocs.io/en/stable/_src_docs/surrogate_models/genn.html.
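
As a brief sketch of this workflow (toy random data and an illustrative file
name, 'genn_model_smt.pkl'; the training call mirrors the example script added
under *FOQUS.examples.other_files.ML_AI_Plugin*), a GENN-type model can be
trained with the low-level Model class and pickled together with the optional
FOQUS display attributes:

    import pickle
    from types import SimpleNamespace

    import numpy as np
    from smt.utils.neural_net.model import Model

    # toy data: 100 samples, 2 inputs, 1 output
    # Model() expects X = (n_x, n_m), Y = (n_y, n_m), J = (n_y, n_x, n_m)
    X = np.random.rand(2, 100)
    Y = np.random.rand(1, 100)
    J = np.random.rand(1, 2, 100)

    model = Model.initialize(X.shape[0], Y.shape[0], deep=2, wide=6)
    model.train(
        X=X,
        Y=Y,
        J=J,
        num_iterations=25,
        mini_batch_size=20,
        num_epochs=20,
        alpha=0.15,
        beta1=0.99,
        beta2=0.99,
        lambd=0.1,
        gamma=0.0001,
        seed=None,
        silent=True,
    )

    # optional attributes the plugin can display on the flowsheet node
    model.custom = SimpleNamespace(
        input_labels=["x1", "x2"],
        output_labels=["z1"],
        input_bounds={"x1": (0, 1), "x2": (0, 1)},
        output_bounds={"z1": (0, 1)},
        normalized=False,  # GENN models normalize internally during training
    )

    with open("genn_model_smt.pkl", "wb") as file:
        pickle.dump(model, file)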

The example files located in *FOQUS.examples.other_files.ML_AI_Plugin* show how users
may train new models or re-save loaded models with a custom layer.

@@ -258,8 +263,7 @@ to obtain the correct output values for the entered inputs.
To run the models, copy the appropriate model files or folders ('h5_model.h5',
'saved_model/', 'json_model.json', 'json_model_weights.h5') and any custom layer
scripts ('model_name.py') into the working directory folder 'user_ml_ai_models'.
-As mentioned earlier, PyTorch and Scikit-learn models only require the model file
-('pt_model.pt' or 'skl_model.pkl').
+As mentioned earlier, PyTorch, Scikit-learn, and Surrogate Modeling Toolbox models
+only require the model file ('pt_model.pt', 'skl_model.pkl', or 'smt_model.pkl').
For example, the model name below is 'mea_column_model' and is saved in H5 format,
and the files *FOQUS.examples.other_files.ML_AI_Plugin.TensorFlow_2-10_Models.mea_column_model.h5*
and *FOQUS.examples.other_files.ML_AI_Plugin.mea_column_model.py* should be copied to
6 changes: 5 additions & 1 deletion docs/source/references.rst
@@ -59,4 +59,8 @@ S. Marcel, Y. Rodriguez, "Torchvision the machine-vision package of torch." In

.. _Buitinck_2013:

L. Buitinck, G. Louppe, M. Blondel, et al., "API design for machine learning software: experiences from the scikit-learn project." European Conference on Machine Learning and Principles and Practices of Knowledge Discovery in Databases, September 2013.

.. _Bouhlel_2019:

M. A. Bouhlel, J. T. Hwang, N. Bartoli, et al., "A Python surrogate modeling framework with derivatives." Advances in Engineering Software, Vol 135 (pp. 102662), September 2019.
Binary file not shown.
122 changes: 122 additions & 0 deletions examples/other_files/ML_AI_Plugin/mea_column_model_training_smtgenn.py
@@ -0,0 +1,122 @@
#################################################################################
# FOQUS Copyright (c) 2012 - 2023, by the software owners: Oak Ridge Institute
# for Science and Education (ORISE), TRIAD National Security, LLC., Lawrence
# Livermore National Security, LLC., The Regents of the University of
# California, through Lawrence Berkeley National Laboratory, Battelle Memorial
# Institute, Pacific Northwest Division through Pacific Northwest National
# Laboratory, Carnegie Mellon University, West Virginia University, Boston
# University, the Trustees of Princeton University, The University of Texas at
# Austin, URS Energy & Construction, Inc., et al. All rights reserved.
#
# Please see the file LICENSE.md for full copyright and license information,
# respectively. This file is also available online at the URL
# "https://github.com/CCSI-Toolset/FOQUS".
#################################################################################
import numpy as np
import pandas as pd
from smt.utils.neural_net.model import Model
import pickle
from types import SimpleNamespace


# Example follows the sequence below:
# 1) Code at end of file to import data and create model
# 2) Call create_model() to define inputs and outputs, transpose the data
#    into the shapes expected by Model(), and train the network
# 3) Back to code at end of file to save, load and test model


# method to create model
def create_model(x_train, z_train, grad_train):

    # already have X, Y and J, don't need to create and populate GENN() to
    # load SMT data into Model(); GENN() doesn't support multiple outputs

    # Model() does support multiple outputs, so we just need to transpose the
    # arrays into the shapes that Model() expects
    # we have x_train = (n_m, n_x), z_train = (n_m, n_y) and grad_train = (n_y, n_m, n_x)
    n_m, n_x = np.shape(x_train)
    _, n_y = np.shape(z_train)

    # check dimensions using grad_train
    assert np.shape(grad_train) == (n_y, n_m, n_x)

    # transpose arrays so each column holds one training sample
    X = x_train.T  # (n_x, n_m)
    Y = z_train.T  # (n_y, n_m)
    J = np.transpose(grad_train, (0, 2, 1))  # (n_y, n_x, n_m)

    # set up and train model

    # Train neural net
    model = Model.initialize(
        X.shape[0], Y.shape[0], deep=2, wide=6
    )  # 2 hidden layers with 6 neurons each
    model.train(
        X=X,  # input data
        Y=Y,  # output data
        J=J,  # gradient data
        num_iterations=25,  # number of optimizer iterations per mini-batch
        mini_batch_size=int(
            np.floor(n_m / 5)
        ),  # used to divide data into training batches (use for large data sets)
        num_epochs=20,  # number of passes through data
        alpha=0.15,  # learning rate that controls optimizer step size
        beta1=0.99,  # tuning parameter to control ADAM optimization
        beta2=0.99,  # tuning parameter to control ADAM optimization
        lambd=0.1,  # lambd = 0. = no regularization, lambd > 0 = regularization
        gamma=0.0001,  # gamma = 0. = no grad-enhancement, gamma > 0 = grad-enhancement
        seed=None,  # set to value for reproducibility
        silent=True,  # set to True to suppress training output
    )

    model.custom = SimpleNamespace(
        input_labels=xlabels,
        output_labels=zlabels,
        input_bounds=xdata_bounds,
        output_bounds=zdata_bounds,
        normalized=False,  # SMT GENN models are normalized during training, this should always be False
    )

    return model


# Main code

# import data
data = pd.read_csv(r"MEA_carbon_capture_dataset_mimo.csv")
grad0_data = pd.read_csv(r"gradients_output0.csv", index_col=0) # ignore 1st col
grad1_data = pd.read_csv(r"gradients_output1.csv", index_col=0) # ignore 1st col

xdata = data.iloc[:, :6] # there are 6 input variables/columns
zdata = data.iloc[:, 6:] # the rest are output variables/columns
xlabels = xdata.columns.tolist() # set labels as a list (default) from pandas
zlabels = zdata.columns.tolist() # is a set of IndexedDataSeries objects
xdata_bounds = {i: (xdata[i].min(), xdata[i].max()) for i in xdata} # x bounds
zdata_bounds = {j: (zdata[j].min(), zdata[j].max()) for j in zdata} # z bounds

xmax, xmin = xdata.max(axis=0), xdata.min(axis=0)
zmax, zmin = zdata.max(axis=0), zdata.min(axis=0)
xdata, zdata = np.array(xdata), np.array(zdata) # (n_m, n_x) and (n_m, n_y)
gdata = np.stack([np.array(grad0_data), np.array(grad1_data)]) # (2, n_m, n_x)

model_data = np.concatenate(
    (xdata, zdata), axis=1
)  # Surrogate Modeling Toolbox requires a Numpy array as input

# define x and z data, not used but will add to variable dictionary
xdata = model_data[:, :-2]
zdata = model_data[:, -2:]

# create model
model = create_model(x_train=xdata, z_train=zdata, grad_train=gdata)

with open("mea_column_model_smt.pkl", "wb") as file:
pickle.dump(model, file)

# load the model back from the pickle file
with open("mea_column_model_smt.pkl", "rb") as file:
    loaded_model = pickle.load(file)
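
# As a quick follow-up (a sketch, not part of the example file above), confirm
# that the optional display attributes survive the pickle round trip:
assert loaded_model.custom.input_labels == model.custom.input_labels
assert loaded_model.custom.output_bounds == model.custom.output_bounds
print("loaded model inputs:", loaded_model.custom.input_labels)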
1 change: 1 addition & 0 deletions foqus_lib/conftest.py
@@ -165,6 +165,7 @@ def install_ml_ai_model_files(
        ts_models_base_path / "mea_column_model_customnormform_json_weights.h5",
        other_models_base_path / "mea_column_model_customnormform_pytorch.pt",
        other_models_base_path / "mea_column_model_customnormform_scikitlearn.pkl",
        other_models_base_path / "mea_column_model_smt.pkl",
    ]:
        shutil.copy2(path, models_dir)
    # unzip the zip file (could be generalized later to more files if needed)