aws-samples/aws-serverless-openai-chatbot-demo

Introduction

ChatGPT is a popular artificial intelligence (AI) technology that enables natural language conversations between humans and machines. It is built on the GPT-3 family of language models developed by OpenAI, which have been used in a variety of applications, from customer service chatbots to virtual assistants, and to generate human-like text in a wide range of formats, including conversation, storytelling, news articles, and more. ChatGPT has received positive feedback from the public and the research community for its ability to understand natural language and generate high-quality, coherent, and meaningful responses. As the technology continues to evolve, it is expected to become an increasingly important tool for businesses and individuals alike.
In this sample, we demonstrate how to use the GPT-3 text completion endpoint from OpenAI to build a web application that acts as your personal AI assistant, running on a serverless architecture on AWS. All of the services used in this project are eligible for the AWS Free Tier:

  • Amazon API Gateway
  • AWS Lambda
  • Amazon S3
  • Amazon DynamoDB

Architecture

This application is built on a fully serverless architecture:

  • An Amazon S3 bucket hosts the HTML, JS, and CSS files of the front-end client.
  • An Amazon API Gateway routes requests from client devices to the backend services.
  • The backend services are built on AWS Lambda and include a function to authorize requests, a function to process user sign-in, and a function to handle chat requests from the client by invoking the OpenAI SDK to get the response text from the OpenAI server.
  • An Amazon DynamoDB table stores the usernames and credentials used for the application's basic authorization.

A newer architecture adds a WebSocket API and an SNS topic subscription to decouple the remote call to OpenAI from API Gateway, which works around the 30-second timeout limitation and reduces 503 errors. Please refer to v2-websocket.


Prerequisites

  • You need an OpenAI account, and an API key created in the OpenAI portal.
  • An AWS account. If your account is still eligible for the Free Tier, this application may cost nothing while it stays within the Free Tier quota.

Setup local environment

  • Install Node.js (skip this step if it is already installed). Node.js is used in your local environment to build the static website files and the dependency packages for the AWS Lambda functions.
  • Get the source code and build the packages. The server folder contains the code for the Lambda functions, and the client folder contains the code for the front-end website.
  • Go to each Lambda function folder under the server folder, install the dependencies, and make a .zip archive for uploading to AWS Lambda. For example, to make lambda_login.zip:
    >cd server/lambda_login
    >npm install
    >zip -r lambda_login.zip .
    We will create 3 AWS Lambda functions, which means we will make 3 .zip files in total (lambda_auth.zip, lambda_login.zip, lambda_chat.zip). A small helper script that automates this is sketched below.
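
If you prefer to script the packaging, a small Node.js helper along these lines (not part of the repository; it assumes the server/lambda_* folder layout above and that the zip CLI is installed) builds all three archives:

    // build-lambdas.js — hypothetical helper script (not part of the repository).
    // Assumes the server/lambda_* folders described above and that the "zip" CLI is available.
    const { execSync } = require("child_process");

    const functions = ["lambda_auth", "lambda_login", "lambda_chat"];

    for (const name of functions) {
      const cwd = `server/${name}`;
      // Install the function's dependencies, then archive the folder contents.
      execSync("npm install", { cwd, stdio: "inherit" });
      execSync(`zip -r ${name}.zip .`, { cwd, stdio: "inherit" });
    }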

Create Lambda functions

Create a Lambda function to handle chat.

  1. In the AWS console, create a Lambda function from scratch named openai-chat, and choose Node.js 18.x for the runtime.

If you are using a Mac with an M1/M2 chip to build the Node.js dependencies locally, remember to choose "arm64" for the architecture option.

  2. Upload the lambda_chat.zip created in the last step to the Lambda function (a rough sketch of what this handler does follows these steps).
  3. Configure your own OpenAI API key as an environment variable named "OPENAI_API_KEY".
  4. OpenAI needs longer than the default 3-second timeout to process a request, so change the function timeout to a greater value, e.g., 1 minute.
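
The actual handler code lives in server/lambda_chat and is what lambda_chat.zip contains. Purely as an illustration, a minimal Node.js 18.x handler using the openai v3 SDK could look like the sketch below; the event and response shapes, model, and parameters here are assumptions, not the repo's exact contract.

    // Illustrative sketch only — see server/lambda_chat for the real implementation.
    const { Configuration, OpenAIApi } = require("openai");

    const openai = new OpenAIApi(
      new Configuration({ apiKey: process.env.OPENAI_API_KEY })
    );

    exports.handler = async (event) => {
      // Assumed request shape: { "prompt": "..." } in the POST body.
      const { prompt } = JSON.parse(event.body);

      // Call the GPT-3 text completion endpoint.
      const completion = await openai.createCompletion({
        model: "text-davinci-003",
        prompt,
        max_tokens: 500,
      });

      return {
        statusCode: 200,
        body: JSON.stringify({ text: completion.data.choices[0].text }),
      };
    };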

Create a Lambda authorizer for API request authorization.

  1. In the AWS console, create a Lambda function from scratch named chat-authorizer, and choose Node.js 18.x for the runtime.
  2. Configure an arbitrary value as your own token key. We use jsonwebtoken to sign and validate request tokens, and this key is used to sign them. Store this token key in an environment variable named "TOKEN_KEY".
  3. Upload the lambda_auth.zip file to the console, similar to the openai-chat function. A rough sketch of the authorizer logic follows these steps.
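
As a rough sketch of what such an authorizer does (assuming the HTTP API "simple" authorizer response format and that the token arrives in the Authorization header; the real logic lives in server/lambda_auth):

    // Illustrative Lambda authorizer sketch using jsonwebtoken with the HTTP API
    // "simple" response format. The actual repo code may differ.
    const jwt = require("jsonwebtoken");

    exports.handler = async (event) => {
      // HTTP APIs lower-case header names; the token is assumed to be in the Authorization header.
      const token = event.headers && event.headers.authorization;
      try {
        jwt.verify(token, process.env.TOKEN_KEY);
        return { isAuthorized: true };
      } catch (err) {
        return { isAuthorized: false };
      }
    };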

Create a Lambda function to handle user login.

  1. In the AWS console, create a Lambda function from scratch named openai-login, and choose Node.js 18.x for the runtime.
  2. Add the TOKEN_KEY environment variable, with the same value used for the Lambda authorizer function.
  3. This function invokes the DynamoDB service API to verify the user credentials, so attach the AmazonDynamoDBReadOnlyAccess policy (or an equivalent inline policy) to the function's execution role.
  4. Upload the lambda_login.zip file to the console, similar to the openai-chat function. A rough sketch of the login logic follows these steps.
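
For reference, a minimal login handler along these lines would read the credentials from the request, look the user up in the chat_user_info table (created in a later step), and return a token signed with TOKEN_KEY. The attribute names and response shape here are assumptions; the actual implementation is in server/lambda_login.

    // Illustrative login handler sketch — demo-only, with unencrypted passwords as described below.
    const { DynamoDBClient, GetItemCommand } = require("@aws-sdk/client-dynamodb");
    const jwt = require("jsonwebtoken");

    const client = new DynamoDBClient({});

    exports.handler = async (event) => {
      // Assumed request shape: { "username": "...", "password": "..." } in the POST body.
      const { username, password } = JSON.parse(event.body);

      const { Item } = await client.send(
        new GetItemCommand({
          TableName: "chat_user_info",
          Key: { username: { S: username } },
        })
      );

      // "password" attribute name is an assumption; match whatever the table stores.
      if (!Item || !Item.password || Item.password.S !== password) {
        return { statusCode: 401, body: JSON.stringify({ error: "invalid credentials" }) };
      }

      const token = jwt.sign({ username }, process.env.TOKEN_KEY, { expiresIn: "24h" });
      return { statusCode: 200, body: JSON.stringify({ token }) };
    };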

Create API Gateway

  1. Create an HTTP API in API Gateway.
  2. Create two routes using the POST method: /chat and /login.
  3. For the /chat route, attach the Lambda authorizer and integrate the route with the openai-chat Lambda function.
    -- Create a Lambda authorizer and attach it to /chat. In the Lambda function field, select the "chat-authorizer" that you created above.
    -- Create and attach an integration. In the Lambda function field, select the "openai-chat" that you created above.
  4. For the /login route, create an integration with the Lambda function created for login. In the Lambda function field, select the "openai-login" that you created above.
  5. Set the CORS configuration for the API so that the browser-based client can call it. A rough sketch of how the client calls these two routes follows this list.
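
The exact request and response payloads are defined by the code in the client folder; purely as an illustration of how the routes, authorizer, and Lambda functions fit together, the client-side calls might look roughly like this (the Authorization header format is an assumption):

    // Illustrative sketch only — the real payload shapes live in the client folder.
    const API_endpoint = "https://xxx.amazonaws.com/"; // from apigw.js (see the client build step)

    async function login(username, password) {
      // POST /login is handled by the openai-login Lambda function.
      const res = await fetch(API_endpoint + "login", {
        method: "POST",
        body: JSON.stringify({ username, password }),
      });
      const { token } = await res.json();
      return token;
    }

    async function chat(token, prompt) {
      // The Lambda authorizer validates the Authorization header before the
      // request reaches the openai-chat function.
      const res = await fetch(API_endpoint + "chat", {
        method: "POST",
        headers: { Authorization: token },
        body: JSON.stringify({ prompt }),
      });
      return res.json();
    }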

Create DynamoDB table

We use Amazon DynamoDB to store the username and password credentials. To keep the demo simple, we do not implement a sign-up function; you can add users and their unencrypted passwords directly to the table, or use the AWS CLI command included in the code package.

  1. Sign in to the AWS Management Console, open the Amazon DynamoDB console, and create a table named chat_user_info with the partition key username.
  2. Add your username and password pairs to the table, either through the console or with a short script like the one sketched below.
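
If you prefer to script this instead of using the console, a one-off Node.js snippet like the following adds a user item. The "password" attribute name is an assumption; it must match whatever attribute the login function reads, and the region/credentials come from your local AWS configuration.

    // One-off script to add a demo user to the chat_user_info table.
    const { DynamoDBClient, PutItemCommand } = require("@aws-sdk/client-dynamodb");

    const client = new DynamoDBClient({}); // region and credentials from your AWS config

    async function addUser(username, password) {
      await client.send(
        new PutItemCommand({
          TableName: "chat_user_info",
          Item: { username: { S: username }, password: { S: password } },
        })
      );
    }

    addUser("demo-user", "demo-password")
      .then(() => console.log("user added"))
      .catch(console.error);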

Host the website in S3

When you configure a bucket as a static website, you must enable static website hosting, configure an index document, and set the permissions. You can read the details in the AWS documentation (Reference).

  1. Create an S3 bucket (referred to as bucket-name below; use your own globally unique name) on the Amazon S3 console.
  2. Enable static website hosting for this bucket. In Index document, enter the file name of the index document, index.html.
  3. By default, the S3 bucket blocks all public access. Change this setting by unchecking the block-public-access option in the "Permissions" tab of the bucket detail page.
  4. Add the policy below to the bucket policy to allow public read access.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": [
                   "s3:GetObject"
                ],
                "Resource": [
                   "arn:aws:s3:::bucket-name/*"
                ]
            }
        ]
    }
    
  5. Your Amazon S3 website endpoint then follows one of these two formats, depending on the Region:
    http://bucket-name.s3-website-Region.amazonaws.com
    http://bucket-name.s3-website.Region.amazonaws.com

Build the static files for client

  1. In your local environment, go to the client folder and change the first line of apigw.js to the actual API Gateway endpoint that you created in the previous step.

const API_endpoint = 'https://xxx.amazonaws.com/';

  2. Then run these commands to install and build the packages for the front-end website:
    >cd client
    >npm install
    >npm run build

  2. After the "npm run build" completes, it will create a folder named "build" in the client folder. That folder has all the files to be deployed to the Amazon S3 website. You can upload this folder to the bucket through AWS S3 console or use AWS CLI as below:
    `>aws s3 sync ./build/ s3://bucket-name/

  4. After all steps are done, you can visit the S3 website endpoint in your PC or mobile browser and log in with the username and password you stored in the DynamoDB table.

  5. You can change your model settings in the menu (this is a new feature added on Feb 11, 2023).

Security

See CONTRIBUTING for more information.

License

This library is licensed under the MIT-0 License. See the LICENSE file.
