An AWS Lambda function that retrieves log files via SFTP, checks for new entries, and sends them to Papertrail. Uses S3 for maintaining state.
Several manual steps are needed for initial setup. If you want to be able to automatically re-deploy updates later (such as updates you make, or updates you pull down from me), there's also an option to set that up with Travis CI.
Full instructions are coming soon. The steps will look something like this:
- Create an AWS S3 bucket for remembering the state of old log files
- Create an AWS Lambda function, setting the runtime to Node.js 6.10
- Create an AWS User and Role (example policy coming soon)
- Set your environment variables in your new Lambda function (a CLI sketch of these setup steps follows the list):
  - `STP_SFTP_HOST` - eg. ftp.example.com
  - `STP_SFTP_PORT` - (if not 22)
  - `STP_SFTP_PATH` - eg. logs/access_log
  - `STP_SFTP_USERNAME` - eg. [email protected]
  - `STP_SFTP_PASSWORD` - eg. ThisIsNotASecurePassword (you can also encrypt this within Lambda)
  - `STP_PAPERTRAIL_HOST` - eg. logs1.papertrailapp.com
  - `STP_PAPERTRAIL_PORT` - eg. 12345
  - `STP_S3_BUCKET` - eg. example-com-log-storage
  - `STP_S3_REGION` - (if not us-east-1)
- Set the timeout for your function to something larger than the default 3 seconds - you might want to try 1 minute, or even higher if you know you'll have a lot of logs
- Cron your new Lambda function to run as often as you like (more often means new events get to Papertrail quicker, but also more S3 gets and puts)
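If you prefer the command line, here is a rough sketch of the same setup using the AWS CLI. The bucket name, function name (`sftp-to-papertrail`), rule name, account ID, and schedule below are placeholders, so substitute your own values:

```sh
# Create the S3 bucket used to remember the state of old log files
aws s3 mb s3://example-com-log-storage --region us-east-1

# Set the environment variables and a longer timeout on the function
# (add the remaining STP_* variables in the same way)
aws lambda update-function-configuration \
  --function-name sftp-to-papertrail \
  --timeout 60 \
  --environment "Variables={STP_SFTP_HOST=ftp.example.com,STP_SFTP_PATH=logs/access_log,STP_PAPERTRAIL_HOST=logs1.papertrailapp.com,STP_PAPERTRAIL_PORT=12345,STP_S3_BUCKET=example-com-log-storage}"

# Schedule the function with a CloudWatch Events rule, eg. every 15 minutes
aws events put-rule \
  --name sftp-to-papertrail-schedule \
  --schedule-expression "rate(15 minutes)"
aws lambda add-permission \
  --function-name sftp-to-papertrail \
  --statement-id sftp-to-papertrail-schedule \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:123456789012:rule/sftp-to-papertrail-schedule
aws events put-targets \
  --rule sftp-to-papertrail-schedule \
  --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:sftp-to-papertrail"
```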
Then, for manual deployments:
- Clone/download this repo to your machine
- Run `yarn` (or `npm install`) within your copy of the project
- Zip up everything, and upload it to your Lambda function (under the Function Code heading); a CLI sketch follows this list
- Do a test run to make sure it's all working!
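For example, assuming the AWS CLI is configured and your function is named `sftp-to-papertrail` (substitute your own name), the zip-and-upload step might look like this:

```sh
# From the root of your copy of the project
yarn    # or: npm install
zip -r function.zip .
aws lambda update-function-code \
  --function-name sftp-to-papertrail \
  --zip-file fileb://function.zip
```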
OR, for automatic deployments:
- Fork this repo
- Log in to Travis CI with your GitHub account and enable your newly forked repo
- Edit `.travis.yml` and configure your region, function name, role, and AWS access keys (please encrypt the access key; see the sketch after this list)
- Push your changes to GitHub if you haven't already, and it should build and deploy to AWS
- Do a test run of the function from your Lambda console to make sure it's all working!
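If you have the Travis CLI installed, one way to encrypt the secret access key into `.travis.yml` is something like this (the key value is a placeholder; run it from the root of your fork):

```sh
# The Travis CLI is a Ruby gem
gem install travis
travis encrypt "YOUR_AWS_SECRET_ACCESS_KEY" --add deploy.secret_access_key
```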
If you used the automatic method above, you can make changes to the function (or merge in changes I've made) and every time you push to GitHub, your function will be re-built and deployed automatically.
Otherwise, to update a manual installation, you'll need to pull (or re-download) the repo if you want to get any changes I've made; reinstall dependencies if they've changed (`yarn` or `npm install`); then re-zip and re-upload the function to Lambda.
It's always a good idea to re-test through the Lambda console after updating, just in case something has gone wrong.
Issues and pull requests welcomed. This is my first Lambda function, created to solve a problem I encountered at work; I'd love any improvements.
The easiest way to contribute is to fork and clone the repo locally, install dependencies (`yarn`), and then run `yarn docker-tests` to execute the function locally. You'll be prompted to export some environment variables so the function can do its thing (see the sketch below). You will need access to an SFTP server, an S3 bucket, and a Papertrail account to run through everything.
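For example, exporting the variables before running the tests might look like this (the values are placeholders, and the exact set of variables you're prompted for may differ):

```sh
export STP_SFTP_HOST=ftp.example.com
export STP_SFTP_PORT=22
export STP_SFTP_PATH=logs/access_log
export STP_SFTP_USERNAME=your-sftp-user   # placeholder; use your own SFTP user
export STP_SFTP_PASSWORD=ThisIsNotASecurePassword
export STP_PAPERTRAIL_HOST=logs1.papertrailapp.com
export STP_PAPERTRAIL_PORT=12345
export STP_S3_BUCKET=example-com-log-storage
export STP_S3_REGION=us-east-1

yarn docker-tests
```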
- Potentially add support for globbing or listing & downloading entire directories
- Potentially add support for multiple SFTP accounts at once
- Work out how to tell the various S3 errors apart, including whether an error indicates a connection failure
- Add additional tests with mocked SFTP, Winston and AWS modules
- Link up Docker tests to run through Travis
MIT.