
feat: Update OpenAPI Specification for Replicate HTTP API and Enhance Documentation #39

Merged
merged 1 commit · Oct 8, 2024
@@ -6,101 +6,48 @@ public partial interface IReplicateApi
{
/// <summary>
/// Create a prediction using a deployment<br/>
/// Start a new prediction for a deployment of a model using inputs you provide.<br/>
/// Example request body:<br/>
/// ```json<br/>
/// {<br/>
/// "input": {<br/>
/// "text": "Alice"<br/>
/// }<br/>
/// }<br/>
/// ```<br/>
/// Create a prediction for the deployment and inputs you provide.<br/>
/// Example cURL request:<br/>
/// ```console<br/>
/// curl -s -X POST \<br/>
/// -d '{"input": {"text": "Alice"}}' \<br/>
/// curl -s -X POST -H 'Prefer: wait' \<br/>
/// -d '{"input": {"prompt": "A photo of a bear riding a bicycle over the moon"}}' \<br/>
/// -H "Authorization: Bearer $REPLICATE_API_TOKEN" \<br/>
/// -H 'Content-Type: application/json' \<br/>
/// "https://api.replicate.com/v1/deployments/replicate/hello-world/predictions"<br/>
/// ```<br/>
/// The response will be the prediction object:<br/>
/// ```json<br/>
/// {<br/>
/// "id": "86b6trbv99rgp0cf1h886f69ew",<br/>
/// "model": "replicate/hello-world",<br/>
/// "version": "dp-8e43d61c333b5ddc7a921130bc3ab3ea",<br/>
/// "input": {<br/>
/// "text": "Alice"<br/>
/// },<br/>
/// "logs": "",<br/>
/// "error": null,<br/>
/// "status": "starting",<br/>
/// "created_at": "2024-04-23T18:55:52.138Z",<br/>
/// "urls": {<br/>
/// "cancel": "https://api.replicate.com/v1/predictions/86b6trbv99rgp0cf1h886f69ew/cancel",<br/>
/// "get": "https://api.replicate.com/v1/predictions/86b6trbv99rgp0cf1h886f69ew"<br/>
/// }<br/>
/// }<br/>
/// https://api.replicate.com/v1/deployments/acme/my-app-image-generator/predictions<br/>
/// ```<br/>
/// As models can take several seconds or more to run, the output will not be available immediately. To get the final result of the prediction you should either provide a `webhook` HTTPS URL for us to call when the results are ready, or poll the [get a prediction](#predictions.get) endpoint until it has finished.<br/>
/// Input and output (including any files) will be automatically deleted after an hour, so you must save a copy of any files in the output if you'd like to continue using them.<br/>
/// Output files are served by `replicate.delivery` and its subdomains. If you use an allow list of external domains for your assets, add `replicate.delivery` and `*.replicate.delivery` to it.
/// The request will wait up to 60 seconds for the model to run. If this time is exceeded, the prediction will be returned in a `"starting"` state and will need to be retrieved using the `predictions.get` endpoint.<br/>
/// For a complete overview of the `deployments.predictions.create` API, check out our documentation on [creating a prediction](https://replicate.com/docs/topics/predictions/create-a-prediction), which covers a variety of use cases.
/// </summary>
/// <param name="deploymentOwner"></param>
/// <param name="deploymentName"></param>
/// <param name="prefer"></param>
/// <param name="request"></param>
/// <param name="cancellationToken">The token to cancel the operation with</param>
/// <exception cref="global::System.InvalidOperationException"></exception>
global::System.Threading.Tasks.Task DeploymentsPredictionsCreateAsync(
string deploymentOwner,
string deploymentName,
global::Replicate.PredictionRequest request,
string? prefer = default,
global::System.Threading.CancellationToken cancellationToken = default);
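The doc comment above shows the raw HTTP shape that the generated C# method wraps. As a minimal sketch in Python (helper name and the fallback token are assumptions for illustration), the same deployments request can be assembled and inspected before sending:

```python
import json
import os

API_ROOT = "https://api.replicate.com/v1"

def build_deployment_prediction_request(owner, name, payload, token, wait=True):
    """Assemble URL, headers, and body for a deployments.predictions.create call."""
    url = f"{API_ROOT}/deployments/{owner}/{name}/predictions"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    if wait:
        # Per the docs above: blocks up to 60 seconds; without it (or if the
        # wait is exceeded) the prediction comes back in a "starting" state.
        headers["Prefer"] = "wait"
    return url, headers, json.dumps(payload)

url, headers, body = build_deployment_prediction_request(
    "acme",
    "my-app-image-generator",
    {"input": {"prompt": "A photo of a bear riding a bicycle over the moon"}},
    token=os.environ.get("REPLICATE_API_TOKEN", "test-token"),
)
# To actually send it:
# import requests; resp = requests.post(url, headers=headers, data=body)
```

Separating request construction from sending makes the endpoint path and headers easy to verify against the cURL example without hitting the network.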

/// <summary>
/// Create a prediction using a deployment<br/>
/// Start a new prediction for a deployment of a model using inputs you provide.<br/>
/// Example request body:<br/>
/// ```json<br/>
/// {<br/>
/// "input": {<br/>
/// "text": "Alice"<br/>
/// }<br/>
/// }<br/>
/// ```<br/>
/// Create a prediction for the deployment and inputs you provide.<br/>
/// Example cURL request:<br/>
/// ```console<br/>
/// curl -s -X POST \<br/>
/// -d '{"input": {"text": "Alice"}}' \<br/>
/// curl -s -X POST -H 'Prefer: wait' \<br/>
/// -d '{"input": {"prompt": "A photo of a bear riding a bicycle over the moon"}}' \<br/>
/// -H "Authorization: Bearer $REPLICATE_API_TOKEN" \<br/>
/// -H 'Content-Type: application/json' \<br/>
/// "https://api.replicate.com/v1/deployments/replicate/hello-world/predictions"<br/>
/// ```<br/>
/// The response will be the prediction object:<br/>
/// ```json<br/>
/// {<br/>
/// "id": "86b6trbv99rgp0cf1h886f69ew",<br/>
/// "model": "replicate/hello-world",<br/>
/// "version": "dp-8e43d61c333b5ddc7a921130bc3ab3ea",<br/>
/// "input": {<br/>
/// "text": "Alice"<br/>
/// },<br/>
/// "logs": "",<br/>
/// "error": null,<br/>
/// "status": "starting",<br/>
/// "created_at": "2024-04-23T18:55:52.138Z",<br/>
/// "urls": {<br/>
/// "cancel": "https://api.replicate.com/v1/predictions/86b6trbv99rgp0cf1h886f69ew/cancel",<br/>
/// "get": "https://api.replicate.com/v1/predictions/86b6trbv99rgp0cf1h886f69ew"<br/>
/// }<br/>
/// }<br/>
/// https://api.replicate.com/v1/deployments/acme/my-app-image-generator/predictions<br/>
/// ```<br/>
/// As models can take several seconds or more to run, the output will not be available immediately. To get the final result of the prediction you should either provide a `webhook` HTTPS URL for us to call when the results are ready, or poll the [get a prediction](#predictions.get) endpoint until it has finished.<br/>
/// Input and output (including any files) will be automatically deleted after an hour, so you must save a copy of any files in the output if you'd like to continue using them.<br/>
/// Output files are served by `replicate.delivery` and its subdomains. If you use an allow list of external domains for your assets, add `replicate.delivery` and `*.replicate.delivery` to it.
/// The request will wait up to 60 seconds for the model to run. If this time is exceeded, the prediction will be returned in a `"starting"` state and will need to be retrieved using the `predictions.get` endpoint.<br/>
/// For a complete overview of the `deployments.predictions.create` API, check out our documentation on [creating a prediction](https://replicate.com/docs/topics/predictions/create-a-prediction), which covers a variety of use cases.
/// </summary>
/// <param name="deploymentOwner"></param>
/// <param name="deploymentName"></param>
/// <param name="prefer"></param>
/// <param name="input">
/// The model's input as a JSON object. The input schema depends on what model you are running. To see the available inputs, click the "API" tab on the model you are running or [get the model version](#models.versions.get) and look at its `openapi_schema` property. For example, [stability-ai/sdxl](https://replicate.com/stability-ai/sdxl) takes `prompt` as an input.<br/>
/// Files should be passed as HTTP URLs or data URLs.<br/>
@@ -145,6 +92,7 @@ public partial interface IReplicateApi
string deploymentOwner,
string deploymentName,
global::Replicate.PredictionRequestInput input,
string? prefer = default,
bool? stream = default,
string? webhook = default,
global::System.Collections.Generic.IList<global::Replicate.PredictionRequestWebhookEventsFilterItem>? webhookEventsFilter = default,
@@ -6,101 +6,48 @@ public partial interface IReplicateApi
{
/// <summary>
/// Create a prediction using an official model<br/>
/// Start a new prediction for an official model using the inputs you provide.<br/>
/// Example request body:<br/>
/// ```json<br/>
/// {<br/>
/// "input": {<br/>
/// "prompt": "Write a short poem about the weather."<br/>
/// }<br/>
/// }<br/>
/// ```<br/>
/// Create a prediction for the model and inputs you provide.<br/>
/// Example cURL request:<br/>
/// ```console<br/>
/// curl -s -X POST \<br/>
/// curl -s -X POST -H 'Prefer: wait' \<br/>
/// -d '{"input": {"prompt": "Write a short poem about the weather."}}' \<br/>
/// -H "Authorization: Bearer $REPLICATE_API_TOKEN" \<br/>
/// -H 'Content-Type: application/json' \<br/>
/// https://api.replicate.com/v1/models/meta/meta-llama-3-70b-instruct/predictions<br/>
/// ```<br/>
/// The response will be the prediction object:<br/>
/// ```json<br/>
/// {<br/>
/// "id": "25s2s4n7rdrgg0cf1httb3myk0",<br/>
/// "model": "replicate-internal/llama3-70b-chat-vllm-unquantized",<br/>
/// "version": "dp-cf04fe09351e25db628e8b6181276547",<br/>
/// "input": {<br/>
/// "prompt": "Write a short poem about the weather."<br/>
/// },<br/>
/// "logs": "",<br/>
/// "error": null,<br/>
/// "status": "starting",<br/>
/// "created_at": "2024-04-23T19:36:28.355Z",<br/>
/// "urls": {<br/>
/// "cancel": "https://api.replicate.com/v1/predictions/25s2s4n7rdrgg0cf1httb3myk0/cancel",<br/>
/// "get": "https://api.replicate.com/v1/predictions/25s2s4n7rdrgg0cf1httb3myk0"<br/>
/// }<br/>
/// }<br/>
/// ```<br/>
/// As models can take several seconds or more to run, the output will not be available immediately. To get the final result of the prediction you should either provide a `webhook` HTTPS URL for us to call when the results are ready, or poll the [get a prediction](#predictions.get) endpoint until it has finished.<br/>
/// All input parameters, output values, and logs are automatically removed after an hour, by default, for predictions created through the API.<br/>
/// Output files are served by `replicate.delivery` and its subdomains. If you use an allow list of external domains for your assets, add `replicate.delivery` and `*.replicate.delivery` to it.
/// The request will wait up to 60 seconds for the model to run. If this time is exceeded, the prediction will be returned in a `"starting"` state and will need to be retrieved using the `predictions.get` endpoint.<br/>
/// For a complete overview of the `models.predictions.create` API, check out our documentation on [creating a prediction](https://replicate.com/docs/topics/predictions/create-a-prediction), which covers a variety of use cases.
/// </summary>
/// <param name="modelOwner"></param>
/// <param name="modelName"></param>
/// <param name="prefer"></param>
/// <param name="request"></param>
/// <param name="cancellationToken">The token to cancel the operation with</param>
/// <exception cref="global::System.InvalidOperationException"></exception>
global::System.Threading.Tasks.Task<global::Replicate.PredictionResponse> ModelsPredictionsCreateAsync(
string modelOwner,
string modelName,
global::Replicate.PredictionRequest request,
string? prefer = default,
global::System.Threading.CancellationToken cancellationToken = default);
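The doc comment above notes that when the 60-second `Prefer: wait` window is exceeded, the prediction is returned in a `"starting"` state and must be fetched again via `predictions.get` (the `urls.get` link in the response). A minimal polling sketch in Python (the helper name, interval, and the terminal status set are assumptions; the fake responses stand in for real GET calls):

```python
import time

# Statuses after which polling can stop (assumed terminal set).
TERMINAL_STATUSES = {"succeeded", "failed", "canceled"}

def poll_until_done(get_prediction, poll_interval=1.0, max_polls=60):
    """Call get_prediction (e.g. a GET on the prediction's urls.get URL)
    until the prediction reaches a terminal status or max_polls is hit."""
    prediction = get_prediction()
    for _ in range(max_polls):
        if prediction["status"] in TERMINAL_STATUSES:
            return prediction
        time.sleep(poll_interval)
        prediction = get_prediction()
    return prediction

# Fake response sequence standing in for
# GET https://api.replicate.com/v1/predictions/{id}
responses = iter([
    {"status": "starting"},
    {"status": "processing"},
    {"status": "succeeded", "output": "A short poem about the weather..."},
])
result = poll_until_done(lambda: next(responses), poll_interval=0.0)
```

In practice a `webhook` URL (see the parameters below) avoids polling entirely; this loop is the fallback for clients that cannot receive callbacks.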

/// <summary>
/// Create a prediction using an official model<br/>
/// Start a new prediction for an official model using the inputs you provide.<br/>
/// Example request body:<br/>
/// ```json<br/>
/// {<br/>
/// "input": {<br/>
/// "prompt": "Write a short poem about the weather."<br/>
/// }<br/>
/// }<br/>
/// ```<br/>
/// Create a prediction for the model and inputs you provide.<br/>
/// Example cURL request:<br/>
/// ```console<br/>
/// curl -s -X POST \<br/>
/// curl -s -X POST -H 'Prefer: wait' \<br/>
/// -d '{"input": {"prompt": "Write a short poem about the weather."}}' \<br/>
/// -H "Authorization: Bearer $REPLICATE_API_TOKEN" \<br/>
/// -H 'Content-Type: application/json' \<br/>
/// https://api.replicate.com/v1/models/meta/meta-llama-3-70b-instruct/predictions<br/>
/// ```<br/>
/// The response will be the prediction object:<br/>
/// ```json<br/>
/// {<br/>
/// "id": "25s2s4n7rdrgg0cf1httb3myk0",<br/>
/// "model": "replicate-internal/llama3-70b-chat-vllm-unquantized",<br/>
/// "version": "dp-cf04fe09351e25db628e8b6181276547",<br/>
/// "input": {<br/>
/// "prompt": "Write a short poem about the weather."<br/>
/// },<br/>
/// "logs": "",<br/>
/// "error": null,<br/>
/// "status": "starting",<br/>
/// "created_at": "2024-04-23T19:36:28.355Z",<br/>
/// "urls": {<br/>
/// "cancel": "https://api.replicate.com/v1/predictions/25s2s4n7rdrgg0cf1httb3myk0/cancel",<br/>
/// "get": "https://api.replicate.com/v1/predictions/25s2s4n7rdrgg0cf1httb3myk0"<br/>
/// }<br/>
/// }<br/>
/// ```<br/>
/// As models can take several seconds or more to run, the output will not be available immediately. To get the final result of the prediction you should either provide a `webhook` HTTPS URL for us to call when the results are ready, or poll the [get a prediction](#predictions.get) endpoint until it has finished.<br/>
/// All input parameters, output values, and logs are automatically removed after an hour, by default, for predictions created through the API.<br/>
/// Output files are served by `replicate.delivery` and its subdomains. If you use an allow list of external domains for your assets, add `replicate.delivery` and `*.replicate.delivery` to it.
/// The request will wait up to 60 seconds for the model to run. If this time is exceeded, the prediction will be returned in a `"starting"` state and will need to be retrieved using the `predictions.get` endpoint.<br/>
/// For a complete overview of the `models.predictions.create` API, check out our documentation on [creating a prediction](https://replicate.com/docs/topics/predictions/create-a-prediction), which covers a variety of use cases.
/// </summary>
/// <param name="modelOwner"></param>
/// <param name="modelName"></param>
/// <param name="prefer"></param>
/// <param name="input">
/// The model's input as a JSON object. The input schema depends on what model you are running. To see the available inputs, click the "API" tab on the model you are running or [get the model version](#models.versions.get) and look at its `openapi_schema` property. For example, [stability-ai/sdxl](https://replicate.com/stability-ai/sdxl) takes `prompt` as an input.<br/>
/// Files should be passed as HTTP URLs or data URLs.<br/>
@@ -145,6 +92,7 @@ public partial interface IReplicateApi
string modelOwner,
string modelName,
global::Replicate.PredictionRequestInput input,
string? prefer = default,
bool? stream = default,
string? webhook = default,
global::System.Collections.Generic.IList<global::Replicate.PredictionRequestWebhookEventsFilterItem>? webhookEventsFilter = default,
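The second overload above takes the optional `prefer`, `stream`, `webhook`, and `webhookEventsFilter` parameters individually rather than as a single request object. A sketch of how those optional values fold into the JSON request body (function name and the example webhook URL are assumptions; field names mirror the overload's parameters):

```python
def build_prediction_body(input_obj, webhook=None, webhook_events_filter=None, stream=None):
    """Build a prediction request body, including optional fields only when set."""
    body = {"input": input_obj}
    if webhook is not None:
        # HTTPS URL called when results are ready, instead of polling.
        body["webhook"] = webhook
    if webhook_events_filter is not None:
        body["webhook_events_filter"] = webhook_events_filter
    if stream is not None:
        body["stream"] = stream
    return body

body = build_prediction_body(
    {"prompt": "Write a short poem about the weather."},
    webhook="https://example.com/replicate-hook",
    webhook_events_filter=["completed"],
)
```

Omitting unset optionals keeps the body identical to the bare `{"input": ...}` shape shown in the cURL examples when no webhook or streaming is requested.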