This repository contains a collection of sample API proxies that you can deploy and run on Apigee X or hybrid.
The samples provide a jump-start for developers who wish to design and create Apigee API proxies.
You are an Apigee API proxy developer, or you would like to learn about developing APIs that run on Apigee X or hybrid. At a minimum, we assume you're familiar with Apigee and how to create simple API proxies. To learn more, we recommend this getting started tutorial.
- See the full list of Prerequisites for installing Apigee.
- You'll need access to a Google Cloud Platform account and project. Sign up for a free GCP trial account.
- If you don't have one, you'll need to provision an Apigee instance. Create a free Apigee eval instance. (A quick way to verify provisioning is sketched just after this list.)
- Clone this project from GitHub to your system.
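If you've just provisioned an instance, a quick way to confirm that your Google Cloud project has an Apigee organization is to query the Apigee management API. The following is a minimal sketch, assuming Application Default Credentials are configured (for example, via `gcloud auth application-default login`), the `google-auth` and `requests` packages are installed, and that the organization name matches your GCP project ID; the fields printed at the end are illustrative.

```python
# Minimal sketch: confirm an Apigee organization exists for your project.
# Assumes Application Default Credentials and the google-auth + requests packages.
import google.auth
from google.auth.transport.requests import AuthorizedSession

APIGEE_ORG = "my-gcp-project"  # placeholder: usually the same as your GCP project ID

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

# organizations.get returns the org's metadata if it has been provisioned.
resp = session.get(f"https://apigee.googleapis.com/v1/organizations/{APIGEE_ORG}")
resp.raise_for_status()
org = resp.json()
print(org.get("name"), org.get("runtimeType"), org.get("state"))
```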
Most developers begin by identifying an interesting sample based on a specific use case or need. You'll find all the samples in the root folder.
No. | Sample | Description | Cloud Shell Tutorial |
---|---|---|---|
1 | deploy-apigee-proxy | Deploy Apigee proxy using Apigee Maven plugin and Cloud Build | |
2 | deploy-apigee-sharedflow | Deploy Apigee sharedflow using Apigee Maven plugin and Cloud Build | |
3 | deploy-apigee-config | Deploy Apigee configurations using Apigee Maven plugin and Cloud Build | |
4 | authorize-idp-access-tokens | Authorize JWT access tokens issued by an Identity Provider | |
5 | oauth-client-credentials | A sample proxy which uses the OAuth 2.0 client credentials grant type flow | |
6 | oauth-client-credentials-with-scope | A sample proxy which uses the OAuth 2.0 client credentials grant type flow and limits access using OAuth2 scopes | |
7 | cloud-logging | A sample proxy that logs custom messages to Google Cloud Logging | |
8 | basic-caching | An example showing how to cache responses and other data using Apigee's built-in policies | |
9 | basic-quota | A sample which shows how to implement a basic API consumption quota | |
10 | apiproduct-operations | Shows the behavior of API Product Operations | |
11 | cloud-run | A sample proxy to invoke Cloud Run Service from Apigee | |
12 | integrated-developer-portal | This sample demonstrates how to create an Apigee Integrated portal and shows how to expose your API products to its catalog | |
13 | drupal-developer-portal | This sample demonstrates how to create a Drupal developer portal using the GCP Marketplace and shows how to expose your Apigee API products to its catalog | |
14 | exposing-to-internet | This sample demonstrates how to expose an Apigee instance to the internet using a Google Cloud external HTTP(S) Load Balancer | |
15 | json-web-tokens | This sample demonstrates how to generate and verify JSON Web Tokens using the out-of-the-box Apigee JWT policies | |
16 | cors | This sample lets you create an API that uses the cross-origin resource sharing (CORS) mechanism to allow requests from external webpages and applications | |
17 | extract-variables | This sample demonstrates how to extract variables and set headers using the out-of-the-box Apigee AssignMessage and ExtractVariables policies | |
18 | websockets | This sample shows how to deploy a sample websockets echo server in Cloud Run and how to use Apigee to expose that service to developers securely | |
19 | grpc | This sample shows how to deploy a sample gRPC Hello World application in Cloud Run and how to use Apigee to expose that service to developers securely | |
20 | mtls-northbound | This sample shows how to configure mTLS using a GCP Private CA (Certificate Authority) on an existing GLB (global load balancer). | |
21 | property-set | This sample lets you create an API that uses a property set and shows how to get data from it using a mediation policy (AssignMessage). | |
22 | data-deidentification | Invokes the Data Loss Prevention (DLP) API to perform data masking (de-identification) on JSON and XML payloads. | |
23 | publish-to-apigee-portal | Publish OpenAPI Spec to Apigee Integrated Portal using Maven plugin and Cloud Build | |
24 | threat-protection | Threat Protection sample in Apigee | |
25 | composite-api | Composite API sample in Apigee | |
26 | cloud-functions | A sample proxy that connects to an app running in Cloud Functions | |
27 | grpc-web | A sample proxy that connects to a gRPC-Web service running in Cloud Run | |
28 | monolith-to-microservices-based-on-paths | Sample facade to facilitate the migration from a monolith to a microservice architecture. | |
You can find video walkthroughs of many of these samples in this YouTube playlist.
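To give a feel for how a deployed sample is consumed, here is a rough sketch of the flow behind the oauth-client-credentials sample: the client app exchanges its key and secret for an access token, then calls the protected API with that token. The hostname, paths, and credentials below are placeholders, not the sample's actual contract; check the sample's own README and Cloud Shell tutorial for the real endpoints and for how to register a developer app.

```python
# Hedged sketch of the OAuth 2.0 client credentials flow against a deployed
# sample proxy. Hostname, paths, and credential names are placeholders --
# consult the sample's README for the real values.
import requests

APIGEE_HOST = "https://api.example.com"   # your Apigee environment group hostname (assumption)
CLIENT_ID = "your-developer-app-key"      # from a registered developer app
CLIENT_SECRET = "your-developer-app-secret"

# 1) Exchange the app credentials for an access token.
token_resp = requests.post(
    f"{APIGEE_HOST}/oauth-client-credentials/token",  # hypothetical token path
    auth=(CLIENT_ID, CLIENT_SECRET),
    data={"grant_type": "client_credentials"},
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# 2) Call the protected resource with the bearer token.
api_resp = requests.get(
    f"{APIGEE_HOST}/oauth-client-credentials/resource",  # hypothetical resource path
    headers={"Authorization": f"Bearer {access_token}"},
)
print(api_resp.status_code, api_resp.text)
```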
The rise of Large Language Models (LLMs) presents an unparalleled opportunity for AI productization, but also necessitates a robust platform to manage, scale, secure, and govern access to them. While specialized tools and platforms are emerging, organizations can leverage their existing investment in a best-in-class API Management platform like Apigee to effectively handle all their LLM serving needs.
Apigee X plays a crucial role in LLM serving by acting as an intermediary between clients and the LLM endpoints. It provides a secure, reliable, and scalable way to expose LLMs as APIs while offering essential features like:
- Security: Authentication, authorization, rate limiting, and protection against attacks.
- Reliability: Load balancing, circuit breaking, and failover mechanisms.
- Performance: Caching, request/response transformation, and optimized routing.
- Observability: Logging, monitoring, and tracing for troubleshooting and analysis.
- Governance: API lifecycle management, versioning, and productization.
This repository explores common LLM serving patterns using Apigee X as a robust and feature-rich API management platform. While the primary focus is on serving Gemini models, the principles and patterns discussed here can be adapted for other LLMs.
No. | Sample | Description | Open Notebook |
---|---|---|---|
1 | llm-token-limits | Apigee's API Products provide real-time monitoring and enforcement of token usage limits for LLMs, enabling effective management of token consumption across different providers and consumers. | |
2 | llm-semantic-cache | This sample caches LLM responses using Apigee's cache layer, with Vector Search as an embeddings database for semantic lookups. | |
3 | llm-circuit-breaking | Apigee enhances the resilience of, and prevents outages in, Retrieval Augmented Generation applications that use multiple Large Language Models by intelligently managing traffic and implementing circuit breaking to avoid exceeding endpoint quotas. | |
4 | llm-logging | Logging prompts and responses of large language models facilitates performance analysis, security monitoring, and bias detection, ultimately enabling model improvement and risk mitigation. | |
5 | llm-routing | A sample that routes requests to different LLM providers using Apigee's routing capabilities. | |
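In all of these patterns, clients call the model through the Apigee endpoint rather than the provider directly, which is what lets Apigee enforce token limits, caching, routing, and logging along the way. The sketch below is illustrative only: the hostname, proxy path, and API key header are assumptions (each sample's notebook shows its actual request shape), while the request body and response fields follow the Gemini generateContent REST format.

```python
# Hedged sketch: calling a Gemini model through an Apigee-managed endpoint.
# Hostname, base path, and API key header are placeholders; the request body
# follows the Gemini generateContent REST format.
import requests

APIGEE_HOST = "https://api.example.com"  # your Apigee environment group hostname (assumption)
APIKEY = "your-apigee-app-key"           # key of a developer app subscribed to the LLM API product

payload = {
    "contents": [
        {"role": "user", "parts": [{"text": "Summarize what an API gateway does."}]}
    ]
}

resp = requests.post(
    f"{APIGEE_HOST}/v1/samples/llm/models/gemini-2.0-flash:generateContent",  # hypothetical proxy path
    headers={"x-apikey": APIKEY},
    json=payload,
)
resp.raise_for_status()
body = resp.json()

print(body["candidates"][0]["content"]["parts"][0]["text"])
# Gemini reports token usage here -- the kind of signal a proxy can use to
# monitor or enforce token limits:
print(body.get("usageMetadata"))  # promptTokenCount, candidatesTokenCount, totalTokenCount
```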
Feel free to modify and build upon the sample proxies. You can make changes in the Apigee management UI or with our Cloud Code extension for local development in Visual Studio Code, whichever approach is more comfortable for you. Simply redeploy the proxies for your changes to take effect.
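If you'd rather script that redeploy step, the sketch below uses the Apigee management API to import an updated proxy bundle and deploy the new revision. It is a minimal example, assuming Application Default Credentials, the `google-auth` and `requests` packages, and a zipped bundle of the proxy's `apiproxy/` directory; the org, environment, proxy name, and bundle path are placeholders. The deploy-apigee-proxy sample shows the same flow using the Apigee Maven plugin and Cloud Build.

```python
# Hedged sketch: import an updated proxy bundle and deploy it with the Apigee
# management API. Org/env/proxy names and the bundle path are placeholders.
import google.auth
from google.auth.transport.requests import AuthorizedSession

ORG = "my-gcp-project"   # Apigee organization (usually the GCP project ID)
ENV = "eval"             # target environment
PROXY = "sample-proxy"   # proxy name matching the bundle
BUNDLE = "bundle.zip"    # zip of the proxy's apiproxy/ directory

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)
base = f"https://apigee.googleapis.com/v1/organizations/{ORG}"

# Import the bundle as a new revision of the proxy.
with open(BUNDLE, "rb") as f:
    resp = session.post(
        f"{base}/apis",
        params={"name": PROXY, "action": "import"},
        files={"file": (BUNDLE, f, "application/octet-stream")},
    )
resp.raise_for_status()
revision = resp.json()["revision"]

# Deploy the new revision to the environment (override replaces the old one).
deploy = session.post(
    f"{base}/environments/{ENV}/apis/{PROXY}/revisions/{revision}/deployments",
    params={"override": "true"},
)
deploy.raise_for_status()
print(f"Deployed {PROXY} revision {revision} to {ENV}")
```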
The Apigee Forum on the Google Cloud Community site is a great place to ask questions and find answers about developing API proxies.
The Apigee docs are located here.
New samples should be added as a root level directory in this repository.
For more details on how to contribute, please see the guidelines.
All solutions within this repository are provided under the Apache 2.0 license. Please see the LICENSE file for more detailed terms and conditions.
This is not an officially supported Google product, nor is it part of an official Google product.
If you need support or assistance, you can try asking on the Google Cloud Community forum dedicated to Apigee.