Golang Kinesis Connectors

Note: This codebase is under active development.

Kinesis connector applications written in Go

Inspired by the Amazon Kinesis Connector Library. These components can be used to extract streaming event data into S3, Redshift, DynamoDB, and more. See the API Docs for package documentation.

Overview

Each Amazon Kinesis connector application is a pipeline that determines how records from an Amazon Kinesis stream will be handled. Records are retrieved from the stream, transformed according to a user-defined data model, buffered for batch processing, and then emitted to the appropriate AWS service.

[Diagram: golang_kinesis_connector pipeline architecture]

A connector pipeline uses the following interfaces (a rough sketch in Go follows the list):

  • Pipeline: The pipeline implementation itself.
  • Transformer: Defines the transformation of records from the Amazon Kinesis stream to suit the user-defined data model. Includes methods for custom serializers/deserializers.
  • Filter: Defines a method for excluding irrelevant records from the processing.
  • Buffer: Defines a system for batching the set of records to be processed. The application can specify three thresholds: number of records, total byte count, and time. When one of these thresholds is crossed, the buffer is flushed and the data is emitted to the destination.
  • Emitter: Defines a method that makes client calls to other AWS services and persists the records stored in the buffer. The records can also be sent to another Amazon Kinesis stream.
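
The sketch below shows roughly what these interfaces look like, inferred from the descriptions above. Method names here are illustrative assumptions, not the library's exact API; the authoritative definitions live in the package source (transformer.go, filter.go, buffer.go, emitter.go).

// Sketch only: hypothetical shapes for the pipeline interfaces.
package connector

type Transformer interface {
	// ToRecord deserializes raw Kinesis data into the user-defined model.
	ToRecord(data []byte) interface{}
	// FromRecord serializes a record for the emitter's destination.
	FromRecord(record interface{}) []byte
}

type Filter interface {
	// KeepRecord reports whether a record is relevant to the pipeline.
	KeepRecord(record interface{}) bool
}

type Buffer interface {
	// ProcessRecord adds a record to the current batch.
	ProcessRecord(record interface{}, sequenceNumber string)
	// ShouldFlush reports whether a threshold (record count,
	// total byte size, or elapsed time) has been crossed.
	ShouldFlush() bool
}

type Emitter interface {
	// Emit persists the buffered batch to S3, Redshift, another
	// Kinesis stream, etc.
	Emit(buffer Buffer)
}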

Usage

Install the library:

$ go get github.com/harlow/kinesis-connectors

Logging

Default logging is handled by the standard log package. An application can override the default package logging by changing its logger variable:

connector.SetLogger(NewCustomLogger())

The custom logger must implement the Logger interface.
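
As a sketch, a custom logger that writes connector output to a file might look like the following. This assumes Logger is satisfied by a single Printf-style method; check logger.go for the exact interface before relying on it.

package main

import (
	"log"
	"os"

	"github.com/harlow/kinesis-connectors"
)

// fileLogger sends connector output to a file instead of stderr.
// Assumes the connector.Logger interface is Printf-style; see
// logger.go for the authoritative definition.
type fileLogger struct {
	logger *log.Logger
}

func (f *fileLogger) Printf(format string, v ...interface{}) {
	f.logger.Printf(format, v...)
}

func main() {
	out, err := os.Create("pipeline.log")
	if err != nil {
		log.Fatal(err)
	}
	connector.SetLogger(&fileLogger{
		logger: log.New(out, "connector: ", log.LstdFlags),
	})
}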

Example Pipeline

The S3 Connector Pipeline performs the following steps:

  1. Pull records from Kinesis and buffer them until the desired threshold is met.
  2. Upload the batch of records to an S3 bucket.
  3. Set the current Shard checkpoint in Redis.

The config vars are loaded with gcfg.
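
For reference, a pipeline.cfg matching the Config struct below might look like this. Section and variable names are matched case-insensitively against the struct fields by gcfg; all values here are illustrative.

; pipeline.cfg -- example values only
[pipeline]
name = s3-example-pipeline

[kinesis]
buffersize = 100
shardcount = 2
streamname = exampleStream

[s3]
bucketname = example-bucket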

package main

import (
	"fmt"
	"os"

	"code.google.com/p/gcfg"
	"github.com/harlow/kinesis-connectors"
	"github.com/sendgridlabs/go-kinesis"
)

type Config struct {
	Pipeline struct {
		Name string
	}
	Kinesis struct {
		BufferSize int
		ShardCount int
		StreamName string
	}
	S3 struct {
		BucketName string
	}
}

func newS3Pipeline(cfg Config) *connector.Pipeline {
	f := &connector.AllPassFilter{}
	b := &connector.RecordBuffer{
		NumRecordsToBuffer: cfg.Kinesis.BufferSize,
	}
	t := &connector.StringToStringTransformer{}
	c := &connector.RedisCheckpoint{
		AppName:    cfg.Pipeline.Name,
		StreamName: cfg.Kinesis.StreamName,
	}
	e := &connector.S3Emitter{
		S3Bucket: cfg.S3.BucketName,
	}
	return &connector.Pipeline{
		Buffer:      b,
		Checkpoint:  c,
		Emitter:     e,
		Filter:      f,
		StreamName:  cfg.Kinesis.StreamName,
		Transformer: t,
	}
}

func main() {
	// Load config vars
	var cfg Config
	err := gcfg.ReadFileInto(&cfg, "pipeline.cfg")
	if err != nil {
		fmt.Printf("Unable to read config file: %v\n", err)
		return
	}

	// Set up kinesis client and stream
	accessKey := os.Getenv("AWS_ACCESS_KEY")
	secretKey := os.Getenv("AWS_SECRET_KEY")
	ksis := kinesis.New(accessKey, secretKey, kinesis.Region{})
	connector.CreateStream(ksis, cfg.Kinesis.StreamName, cfg.Kinesis.ShardCount)

	// Fetch stream info
	args := kinesis.NewArgs()
	args.Add("StreamName", cfg.Kinesis.StreamName)
	streamInfo, err := ksis.DescribeStream(args)
	if err != nil {
		fmt.Printf("Unable to connect to %s stream. Aborting.", cfg.Kinesis.StreamName)
		return
	}

	// Process kinesis shards
	for _, shard := range streamInfo.StreamDescription.Shards {
		fmt.Printf("Processing %s on %s\n", shard.ShardId, cfg.Kinesis.StreamName)
		p := newS3Pipeline(cfg)
		go p.ProcessShard(ksis, shard.ShardId)
	}

	// Keep alive
	<-make(chan int)
}

Contributing

Please see CONTRIBUTING.md. Thank you, contributors!

License

Copyright (c) 2015 Harlow Ward. It is free software, and may be redistributed under the terms specified in the LICENSE file.