As we move towards consumer groups we'll need to support the current
"consume all shards" strategy, and setup the codebase for the
anticipated "consume balanced shards."
Having consumers start at the latest record seems like a reasonable default.
This can also be overridden with optional config; make sure that is
documented for users of the library.
Fixes: https://github.com/harlow/kinesis-consumer/issues/86
* Use shard broker to start processing new shards
The addition of a shard broker will allow the consumer to be notified
when new shards are added to the stream so it can consume them.
Fixes: https://github.com/harlow/kinesis-consumer/issues/36
Major changes:
```go
type ScanFunc func(r *Record) error
```
* Simplify the callback func signature by removing `ScanStatus`
* Leverage context for cancellation
* Add custom error `SkipCheckpoint` for special cases when we don't want to checkpoint
Minor changes:
* Use kinesis package constants for shard iterator types
* Move optional config to new file
See conversation on #75 for more details
Having an additional Client has caused some confusion (https://github.com/harlow/kinesis-consumer/issues/45) about how to provide a
custom kinesis client. Allowing `WithClient` to accept a Kinesis client
directly cleans up the interface.
Major changes:
* Remove the Client wrapper; prefer using kinesis client directly
* Change `ScanError` to `ScanStatus` as the return value isn't necessarily an error
Note: these are breaking changes; if you need the last stable release, see: https://github.com/harlow/kinesis-consumer/releases/tag/v0.2.0
- scanError.Error is removed since it is not used.
- session.New() is deprecated; NewKinesisClient's function signature changed so it can
return the error from NewSession().
* remove ValidateCheckpoint
* update checkpoint: the retryer can not be customized
* implement the scan error as in PR 44
* at least log when the record processor returns an error
* mistakenly removed this line
* propagate error up; ignore invalid state
Testing the components of the consumer was proving difficult because
the consumer code was so tightly coupled to the Kinesis client
library.
* Extract the concept of Client interface
* Create default client w/ kinesis connection
* Test with fake client to avoid round trip to kinesis
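The decoupling above could look roughly like this: the consumer depends on a narrow `Client` interface, so tests can substitute a fake instead of making a round trip to Kinesis. The interface shape here is an assumption for illustration.

```go
package main

import "fmt"

type Record struct{ Data []byte }

// Client is the small surface the consumer needs from Kinesis; the
// default implementation would wrap a real kinesis connection.
type Client interface {
	GetRecords(shardID string) ([]*Record, error)
}

// fakeClient returns canned records with no network involved.
type fakeClient struct {
	records []*Record
}

func (f *fakeClient) GetRecords(shardID string) ([]*Record, error) {
	return f.records, nil
}

// consume processes every record the client returns for a shard.
func consume(c Client, shardID string, fn func(*Record)) error {
	recs, err := c.GetRecords(shardID)
	if err != nil {
		return err
	}
	for _, r := range recs {
		fn(r)
	}
	return nil
}

func main() {
	fake := &fakeClient{records: []*Record{{Data: []byte("hello")}}}
	n := 0
	if err := consume(fake, "shardId-000", func(r *Record) { n++ }); err != nil {
		panic(err)
	}
	fmt.Println(n) // count of records handled via the fake
}
```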
The Checkpoint functionality is an important part of the library and
previously it wasn't obvious that the Consumer was defaulting to Redis
for this functionality.
* Add Checkpoint as required param for new consumer
Major changes:
* Remove intermediate batching of kinesis records
* Call the callback func with each record
* Use functional options for config
https://dave.cheney.net/2014/10/17/functional-options-for-friendly-apis
Minor changes:
* update README messaging about Kinesis -> Firehose functionality
* remove unused buffer and emitter code
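The functional-options pattern from the linked post, sketched for a consumer-like type; the `WithBatchSize` option and `Consumer` fields here are illustrative assumptions, not the library's exact config surface.

```go
package main

import "fmt"

type Consumer struct {
	stream string
	batch  int
}

// Option is a functional option for configuring the Consumer.
type Option func(*Consumer)

func WithBatchSize(n int) Option {
	return func(c *Consumer) { c.batch = n }
}

// New takes required params positionally and everything optional as
// variadic options, so adding new config never breaks existing callers.
func New(stream string, opts ...Option) *Consumer {
	c := &Consumer{stream: stream, batch: 1} // sensible defaults first
	for _, opt := range opts {
		opt(c) // each option mutates the config
	}
	return c
}

func main() {
	c := New("my-stream", WithBatchSize(10))
	fmt.Println(c.stream, c.batch)
}
```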
To allow other checkpoint backends, we extract a checkpoint interface
that future checkpoints can implement.
* Add checkpoint interface for custom checkpoints
* Create RedisCheckpoint to implement checkpoint interface
* Swap out hoisie redis library for go-redis
Minor changes:
* Allow configuration of Redis endpoint with env var `REDIS_URL`
* Replace gvt with govendor
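The extracted interface could be keyed by stream and shard, roughly as below. The method names are assumptions based on this changelog; an in-memory backend stands in for the Redis one, which would satisfy the same interface.

```go
package main

import "fmt"

// Checkpoint persists the last processed sequence number per
// stream/shard so a consumer can resume where it left off.
type Checkpoint interface {
	Get(streamName, shardID string) (string, error)
	Set(streamName, shardID, sequenceNumber string) error
}

// memoryCheckpoint is a trivial in-memory backend; a Redis-backed
// implementation would implement the same two methods.
type memoryCheckpoint struct {
	m map[string]string
}

func newMemoryCheckpoint() *memoryCheckpoint {
	return &memoryCheckpoint{m: map[string]string{}}
}

func key(stream, shard string) string { return stream + ":" + shard }

func (c *memoryCheckpoint) Get(stream, shard string) (string, error) {
	return c.m[key(stream, shard)], nil
}

func (c *memoryCheckpoint) Set(stream, shard, seq string) error {
	c.m[key(stream, shard)] = seq
	return nil
}

func main() {
	var cp Checkpoint = newMemoryCheckpoint()
	if err := cp.Set("my-stream", "shardId-000", "49579"); err != nil {
		panic(err)
	}
	seq, _ := cp.Get("my-stream", "shardId-000")
	fmt.Println(seq)
}
```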
The previous pipeline model required a lot of setup and abstracted away
the processing of records. Passing a HandlerFunc to the consumer keeps
the business logic of processing records closer to the use of the
consumer.
* Add refactoring note and SHA to README