s3: Amazon S3

Available in AxoSyslog version 4.4 and later.

The s3() destination sends log messages to the Amazon Simple Storage Service (Amazon S3) object storage service. Communication with the S3 API uses HTTP, or HTTPS if the url() option specifies an https:// endpoint.

Prerequisites

  • An existing S3 bucket configured for programmatic access, and the related ACCESS_KEY and SECRET_KEY of a user that can access it.
  • If you are not using the Python virtualenv created by AxoSyslog (via /usr/bin/syslog-ng-update-virtualenv), you must install the boto3 Python dependency (botocore is installed with it as a dependency).

To use the s3() driver, the scl.conf file must be included in your AxoSyslog configuration:

   @include "scl.conf"

The s3() driver is actually a reusable configuration snippet. For details on using or writing such configuration snippets, see Reusing configuration blocks. You can find the source of this configuration snippet on GitHub.

Declaration

s3(
    url("http://localhost:9000")
    bucket("syslog-ng")
    access-key("my-access-key")
    secret-key("my-secret-key")
    object-key("${HOST}/my-logs")
    template("${MESSAGE}\n")
);

Creating objects

AxoSyslog can create a new object based on the following strategies:

  • Based on object size: The max-object-size() option configures AxoSyslog to finish an object if it reaches a certain size. AxoSyslog appends an index ("-1", "-2", …) to the end of the object key, then starts a new object.
  • Based on timestamp: The object-key-timestamp() option can be used to set a datetime-related template, which is appended to the end of the object key, for example: "${R_MONTH_ABBREV}${R_DAY}". When a log message arrives with a newer timestamp according to the template's resolution, the previous timestamped object is finished and a new one is started with the new timestamp. If an older message arrives, AxoSyslog doesn't reopen the old object; instead, it starts a new object with the old object's key and an appended index.
  • Based on timeout: The flush-grace-period() option sets the number of minutes to wait for new messages to arrive after the last one. If the timeout expires, AxoSyslog closes the object, and opens a new object (with an appended index) when a new message arrives.

All of these strategies can be used individually, or together, as shown in the example below.
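
For example, a destination combining all three strategies could look like the following sketch (the endpoint, bucket, credentials, and rotation values are illustrative placeholders, not recommendations):

s3(
    url("https://s3.us-east-1.amazonaws.com")
    bucket("my-bucket")
    access-key("my-access-key")
    secret-key("my-secret-key")
    object-key("${HOST}/my-logs")
    # Rotate by timestamp: append the month and day to the object key
    object-key-timestamp("${R_MONTH_ABBREV}${R_DAY}")
    # Rotate by size: finish the object when it reaches 512 MiB
    max-object-size(512MiB)
    # Rotate by timeout: close the object after 30 idle minutes
    flush-grace-period(30)
    template("${MESSAGE}\n")
);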

Upload options

AxoSyslog uploads objects using the multipart upload API. AxoSyslog composes chunks locally. When a chunk reaches the size set in chunk-size() (by default 5 MiB), the chunk is uploaded. When an object is finished, the multipart upload is completed and S3 merges the chunks.

You can influence the upload via the chunk-size(), upload-threads(), and max-pending-uploads() options.
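
For example, the following sketch tunes the upload for higher throughput (the endpoint, bucket, and credentials are placeholders, and the values are illustrative, not tested recommendations):

s3(
    url("https://s3.us-east-1.amazonaws.com")
    bucket("my-bucket")
    access-key("my-access-key")
    secret-key("my-secret-key")
    object-key("${HOST}/my-logs")
    # Upload in 10 MiB parts instead of the default 5 MiB
    chunk-size(10MiB)
    # Allow up to 16 parallel uploads
    upload-threads(16)
    # Queue at most 64 chunks for the upload threads
    max-pending-uploads(64)
);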

Options

The following options are specific to the s3() destination.

access-key()

Type: string
Default: N/A

Description: The ACCESS_KEY of the service account used to access the S3 bucket. (Together with secret-key().)

Starting with version 4.7, you can use the AWS_... environment variables or credentials files from the ~/.aws/ directory instead of this option. For details, see the official documentation.

bucket()

Type: string
Default:

Description: The name of the S3 bucket, for example, my-bucket.

canned-acl()

Type: string
Default: empty

Description: The ACL assigned to the object, if specified, for example, bucket-owner-read. The following values are valid:

authenticated-read, aws-exec-read, bucket-owner-full-control, bucket-owner-read, log-delivery-write, private, public-read, public-read-write

If you configure an invalid value, the default is used.

chunk-size()

Type: string
Default: 5MiB

Description: The size of a chunk that AxoSyslog composes locally before uploading it as one part of the multipart upload. If compression is enabled, chunk-size() refers to the compressed size.

compression()

Type: boolean
Default: no

Description: Setting compression(yes) enables gzip compression, and implicitly adds a .gz suffix to the created object’s key. You can set the level of the compression using the compresslevel() option (0-9).

compresslevel()

Type: integer (0-9)
Default: 9

Description: Only has an effect if compression() is set to yes. Specifies the gzip compression level, from 0 (no compression) to 9 (maximum compression).
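
For example, the following sketch enables compression with a mid-range level (the endpoint, bucket, and credentials are placeholders):

s3(
    url("https://s3.us-east-1.amazonaws.com")
    bucket("my-bucket")
    access-key("my-access-key")
    secret-key("my-secret-key")
    # The created object's key implicitly receives a .gz suffix
    object-key("${HOST}/my-logs")
    compression(yes)
    # Level 6 uses less CPU than the default level 9
    compresslevel(6)
);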

flush-grace-period()

Type: integer [minutes]
Default: 60

Description: If no new message is routed to the destination before the grace period expires, AxoSyslog flushes the contents of the buffer to the S3 object, even if the volume of the messages in the buffer is lower than chunk-size().

log-fifo-size()

Type: number
Default: Use global setting.

Description: The number of messages that the output queue can store.

max-object-size()

Type: string
Default: 5120GiB

Description: The maximum size of the S3 object. If an object reaches this size, AxoSyslog appends an index ("-1", "-2", …) to the end of the object key and starts a new object.

max-pending-uploads()

Type: integer
Default: 32

Description: The max-pending-uploads() and upload-threads() options configure the upload of the chunks. Uploading happens in multiple threads to minimize network overhead.

  • upload-threads() limits the maximum number of parallel uploads.
  • max-pending-uploads() limits the number of chunks that are waiting in the work queue of the upload threads to get uploaded.

object-key()

Type: template
Default: N/A

Description: The object key (or key name), which uniquely identifies the object in an Amazon S3 bucket.

object-key-timestamp()

Type: template
Default:

Description: The object-key-timestamp() option can be used to set a datetime-related template, which is appended to the end of the object key, for example: "${R_MONTH_ABBREV}${R_DAY}". When a log message arrives with a newer timestamp according to the template's resolution, the previous timestamped object is finished and a new one is started with the new timestamp. If an older message arrives, AxoSyslog doesn't reopen the old object; instead, it starts a new object with the old object's key and an appended index.

persist-name()

Type: string
Default: N/A

Description: If you receive the following error message during AxoSyslog startup, set the persist-name() option of the duplicate drivers:

   Error checking the uniqueness of the persist names, please override it with persist-name option. Shutting down.

This error happens if you use identical drivers in multiple sources, for example, if you configure two file sources to read from the same file. In this case, set the persist-name() of the drivers to a custom string, for example, persist-name("example-persist-name1").
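
For example, two otherwise identical s3() destinations can be disambiguated like this (a sketch; the destination names, endpoint, credentials, and persist names are placeholders):

destination d_s3_first {
    s3(
        url("http://localhost:9000")
        bucket("syslog-ng")
        access-key("my-access-key")
        secret-key("my-secret-key")
        object-key("${HOST}/my-logs")
        # Unique persist name for the first instance
        persist-name("s3-first")
    );
};

destination d_s3_second {
    s3(
        url("http://localhost:9000")
        bucket("syslog-ng")
        access-key("my-access-key")
        secret-key("my-secret-key")
        object-key("${HOST}/my-logs")
        # Unique persist name for the second instance
        persist-name("s3-second")
    );
};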

region()

Type: string
Default:

Description: The regional endpoint where the bucket is stored, for example, us-east-1.

secret-key()

Type: string
Default: N/A

Description: The SECRET_KEY of the service account used to access the S3 bucket. (Together with access-key().)

Starting with version 4.7, you can use the AWS_... environment variables or credentials files from the ~/.aws/ directory instead of this option. For details, see the official documentation.

storage-class()

Type: string
Default: STANDARD

Description: The storage class of the object, for example, REDUCED_REDUNDANCY. The following values are valid:

DEEP_ARCHIVE, GLACIER, GLACIER_IR, INTELLIGENT_TIERING, ONEZONE_IA, OUTPOSTS, REDUCED_REDUNDANCY, SNOW, STANDARD, STANDARD_IA

If you configure an invalid value, the default is used.

upload-threads()

Type: integer
Default: 8

Description: The number of AxoSyslog worker threads that are used to upload data to S3 from this destination.

template()

Type: template or template function
Default: ${MESSAGE}\n

Description: The message as written to the Amazon S3 object. You can use templates and template functions to format the message.
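
For example, to store messages as structured JSON instead of the plain message text, you could use the format-json template function (a sketch; the endpoint, bucket, and credentials are placeholders):

s3(
    url("https://s3.us-east-1.amazonaws.com")
    bucket("my-bucket")
    access-key("my-access-key")
    secret-key("my-secret-key")
    object-key("${HOST}/my-logs")
    # One JSON object per line, with the RFC 5424 fields and the ISO date
    template("$(format-json --scope rfc5424 --key ISODATE)\n")
);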

url()

Type: string
Default: N/A

Description: The URL of the S3 bucket, for example, https://my-bucket.s3.us-west-2.amazonaws.com