From 944b8d6bd5bb56bde75453b98a3bb3fea582f2e2 Mon Sep 17 00:00:00 2001
From: mikelam-us-aixplain
Date: Tue, 23 Jul 2024 08:32:10 -0700
Subject: [PATCH] Removed confusing duplicate documentation

Signed-off-by: mikelam-us-aixplain
---
 docs/user/README.md      | 120 ---------------------------------------
 docs/user/model_setup.md |  24 ++++----
 2 files changed, 12 insertions(+), 132 deletions(-)
 delete mode 100644 docs/user/README.md

diff --git a/docs/user/README.md b/docs/user/README.md
deleted file mode 100644
index a8ac2a1..0000000
--- a/docs/user/README.md
+++ /dev/null
@@ -1,120 +0,0 @@
-# How to develop an aiXplain hosted model?
-
-## Model development
-
-The model-interfaces package organizes your model's code in a standardized format in order to deploy these models on aixplain model hosting instances. The following description covers how to organize your aiXplain hosted model.
-
-### Model directory structure
-
-The directory structure needs to be following:
-```
-src
-│ model.py
-│ bash.sh [Optional]
-│ requirements.txt [Optional]
-| Addtional files required by your code [Optional]
-```
-
-### The model artefact directory
-
-The hosted model might depend on files for loading parameters, configurations or other model assets. Create a model directory having the same name as the value provided in `ASSET_URI` and place all your dependant model assets in this directory.
-
-Note:
-1. The contents of this directory would be accessed or loaded by the model class' load function.
-2. The environment variable `ASSET_URI` defaults to the value `asset`.
-
-### Implementing the model.py file
-
-Each hosted model needs to be an instance of a function based aiXplain model. If the model that your are building is a translation model, the model class implementation should inherit the TranslationModel class interface as shown below.
-
-```
-class ExampleTranslationModel(TranslationModel):
-```
-
-To implement the model interface, define the following functions:
-
-#### Load function
-
-Implement the load function to load all model artefacts from the model directory specified in `ASSET_URI`. The model artefacts loaded here can be used by the model during prediction time, i.e. executing run_model().
-Set the value self.ready as 'True' to indicate that loading has successfully executed.
-
-```
-    def load(self):
-        model_path = AssetResolver.resolve_path()
-        if not os.path.exists(model_path):
-            raise ValueError('Model not found')
-        self.model = pickle.load(os.path.join(model_path, 'model.pkl'))
-        self.ready = True
-```
-
-#### Run model function
-
-The run model function should contain the business logic to obtain a prediction from the loaded model.
-
-Input:
-The input to the run model function is a dictionary. This dictionary has a key "instances" having values in a list containing AI function based subclass of APIInput values, for example TranslationInput.
-
-Output:
-The output to the run model function is a dictionary. This dictionary has a key "predictions" having values in a list containing AI function based subclass of APIOutput values, for example TranslationOutput.
-The output is expected to return the predictions from the model in the same order as the input instances were received.
-
-```
-    def run_model(self, api_input: Dict[str, List[TranslationInput]]) -> Dict[str, List[TranslationOutput]]:
-        src_text = self.parse_inputs(api_input["instances"])
-
-        translated = self.model.generate(
-            **self.tokenizer(
-                src_text, return_tensors="pt", padding=True
-            )
-        )
-
-        predictions = []
-        for t in translated:
-            data = self.tokenizer.decode(t, skip_special_tokens=True)
-            details = TextSegmentDetails(text=data)
-            output_dict = {
-                "data": data,
-                "details": details
-            }
-            translation_output = TranslationOutput(**output_dict)
-            predictions.append(translation_output)
-        predict_output = {"predictions": predictions}
-        return predict_output
-```
-
-
-### The system and Python requirements files
-
-The bash.sh file:
-This file implementation should include installing any system dependencies using bash commands.
-
-The requirements.txt file:
-Include all python packages that you need to run the model by extracting the requirements using the command below
-
-```
-pip freeze >> requirements.txt
-```
-
-Remove model-interfaces, CUDA, Torch and Tensorflow requirements from this file as their latest versions come pre-baked into the hosting server.
-
-
-### Testing the model locally
-
-Run your model with the following command:
-```
-ASSET_DIR= ASSET_URI= python -m model
-```
-
-Make an inference call:
-
-```
-ASSET_URI=
-curl -v -H http://localhost:8080/v1/models/$ASSET_URI:predict -d '{"instances": [{"supplier": , "function": , "data": }]}'
-```
-
-The input parameter in request above needs to be modified according to the target model's function input. Refer to the [function input definition documentation.](/src/model_interfaces/schemas/function_input.py)
-
-### The environment variables
-
-- - `ASSET_DIR`: The relative or absolute path of the model artefacts directory (ASSET_URI) on your system. This defaults to current directory.
-- - `ASSET_URI`: The name of the model artefacts directory. The default name is `asset`.

diff --git a/docs/user/model_setup.md b/docs/user/model_setup.md
index bb2ea87..c07e34a 100644
--- a/docs/user/model_setup.md
+++ b/docs/user/model_setup.md
@@ -2,7 +2,7 @@

 ## Model development

-The aixplain-models package organizes your model's code in a standardized format in order to deploy these models on aixplain model hosting instances. The following description covers how to organize your aiXplain hosted model.
+The model-interfaces package organizes your model's code in a standardized format in order to deploy these models on aiXplain model hosting instances. The following description covers how to organize your aiXplain hosted model.

 ### Model directory structure

@@ -15,17 +15,17 @@ src
 | Addtional files required by your code [Optional]
 ```

-### The model artefact directory
+### The model artifact directory

-The hosted model might depend on files for loading parameters, configurations or other model assets. Create a model directory having the same name as the value provided in `ASSET_URI` and place all your dependant model assets in this directory.
+The hosted model might depend on files for loading parameters, configurations or other model assets. Create a model directory having the same name as the value provided in `ASSET_URI` and place all your dependent model assets in this directory.

 Note:
-1. The contents of this directory would be accessed or loaded by the model class' load function.
+1. The contents of this directory are accessed or loaded by the model class's load function.
 2. The environment variable `ASSET_URI` defaults to the value `asset`.
 ### Implementing the model.py file

-Each hosted model needs to be an instance of a function based aiXplain model. If the model that your are building is a translation model, for example, the model class implementation should inherit the TranslationModel class interface as shown below.
+Each hosted model needs to be an instance of a function-based aiXplain model. If the model you are building is a translation model, for example, the model class implementation should inherit the TranslationModel class interface as shown below.

 ```
 from aixplain.model_interfaces.interfaces.function_models import TranslationModel
@@ -39,7 +39,7 @@ To implement the model interface, define the following functions:

 #### Load function

-Implement the load function to load all model artefacts from the model directory specified in `ASSET_URI`. The model artefacts loaded here can be used by the model during prediction time, i.e. executing run_model().
+Implement the load function to load all model artifacts from the model directory specified in `ASSET_URI`. The model artifacts loaded here can be used by the model during prediction time, i.e. when executing run_model().
 Set the value self.ready as 'True' to indicate that loading has successfully executed.

 ```
@@ -56,10 +56,10 @@ Set the value self.ready as 'True' to indicate that loading has successfully exe
 The run model function should contain the business logic to obtain a prediction from the loaded model.

 Input:
-The input to the run model function is a dictionary. This dictionary has a key "instances" having values in a list containing AI function based subclass of APIInput values, for example TranslationInput.
+The input to the run model function is a dictionary. Its "instances" key holds a list of instances of the function-specific APIInput subclass, for example TranslationInput.

 Output:
-The output to the run model function is a dictionary. This dictionary has a key "predictions" having values in a list containing AI function based subclass of APIOutput values, for example TranslationOutput.
+The output of the run model function is a dictionary. Its "predictions" key holds a list of instances of the function-specific APIOutput subclass, for example TranslationOutput.
 The output is expected to return the predictions from the model in the same order as the input instances were received.

 ```
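Taken together, the load and run model sections above assemble into a model.py shaped roughly as follows. This is a minimal sketch rather than a reference implementation: only the TranslationModel import path appears in these docs, the other import paths are assumptions to verify against the installed package, and the unpickled model's translate() method is hypothetical.

```
import os
import pickle

from aixplain.model_interfaces.interfaces.function_models import TranslationModel
# Assumed import paths -- verify them against the installed package:
from aixplain.model_interfaces.interfaces.asset_resolver import AssetResolver
from aixplain.model_interfaces.schemas.function.function_output import (
    TextSegmentDetails,
    TranslationOutput,
)


class ExampleTranslationModel(TranslationModel):
    def load(self):
        # Presumably resolves to ASSET_DIR/ASSET_URI ("./asset" by default).
        model_path = AssetResolver.resolve_path()
        if not os.path.exists(model_path):
            raise ValueError('Model not found')
        # pickle.load() takes a file object, not a path, so open the file first.
        with open(os.path.join(model_path, 'model.pkl'), 'rb') as f:
            self.model = pickle.load(f)
        self.ready = True  # signals that loading succeeded

    def run_model(self, api_input):
        src_text = self.parse_inputs(api_input["instances"])
        predictions = []
        # translate() stands in for whatever inference API the unpickled
        # model actually exposes.
        for text in self.model.translate(src_text):
            details = TextSegmentDetails(text=text)
            predictions.append(TranslationOutput(data=text, details=details))
        # Return predictions in the same order as the inputs were received.
        return {"predictions": predictions}
```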
@@ -105,7 +105,7 @@ pip freeze >> requirements.txt

 Run your model with the following command:
 ```
-ASSET_DIR= ASSET_URI= python -m model
+ASSET_DIR= ASSET_URI= python -m model
 ```

 Make an inference call:
@@ -115,7 +115,7 @@ ASSET_URI=
 curl -v -H http://localhost:8080/v1/models/$ASSET_URI:predict -d '{"instances": [{"supplier": , "function": , "data": }]}'
 ```

-The input parameter in request above needs to be modified according to the target model's function input. Refer to the [function input definition documentation.](/aixplain/model_interfaces/schemas/function_input.py)
+The input parameter in the request above needs to be modified according to the target model's function input. Refer to the [function input definition documentation](/aixplain/model_interfaces/schemas/function/function_input.py).

 ### Dockerfile
 Create an image using the following sample Dockerfile. Add features as needed:
@@ -136,5 +136,5 @@ CMD python -m model

 ### The environment variables

- - `ASSET_DIR`: The relative or absolute path of the model artefacts directory (ASSET_URI) on your system. This defaults to current directory.
- - `ASSET_URI`: The name of the model artefacts directory. The default name is `asset`.
+ - `ASSET_DIR`: The relative or absolute path of the model artifacts directory (`ASSET_URI`) on your system. This defaults to the current directory.
+ - `ASSET_URI`: The name of the model artifacts directory. The default name is `asset`.
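For reference, the inference call described in model_setup.md can also be issued from Python, which makes the JSON body easier to read than the curl form. The supplier, function, and data values below are hypothetical placeholders; the authoritative field definitions live in function_input.py.

```
import requests

ASSET_URI = "asset"
# Placeholder values -- consult function_input.py for the real schema of the
# function you are targeting (TranslationInput defines its own fields).
payload = {
    "instances": [{
        "supplier": "example-supplier",
        "function": "translation",
        "data": "Hello, world!",
    }]
}
response = requests.post(
    f"http://localhost:8080/v1/models/{ASSET_URI}:predict",
    json=payload,
    timeout=30,
)
print(response.json()["predictions"])
```

The model class can also be exercised directly, without the HTTP server, as a quick smoke test before building the image. This sketch assumes the constructor takes no required arguments and that TranslationInput accepts these keyword fields; verify both against the package.

```
# Assumed import path, mirroring the function_input.py link above:
from aixplain.model_interfaces.schemas.function.function_input import TranslationInput

from model import ExampleTranslationModel  # your src/model.py

model = ExampleTranslationModel()
model.load()  # reads artifacts from ASSET_DIR/ASSET_URI
assert model.ready

# Field names are illustrative; use the real TranslationInput schema.
output = model.run_model({"instances": [TranslationInput(data="Hello, world!")]})
print(output["predictions"][0])
```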