As we move towards consumer groups we'll need to support the current
"consume all shards" strategy, and set up the codebase for the
anticipated "consume balanced shards."
* Use shard broker to start processing new shards
The addition of a shard broker will allow the consumer to be notified
when new shards are added to the stream so it can consume them.
Fixes: https://github.com/harlow/kinesis-consumer/issues/36
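A shard broker along these lines can be sketched as a diff between the shards already being consumed and the shards the stream currently reports. All names below are illustrative, not the library's actual types; `listShardsFunc` stands in for a call to the Kinesis ListShards API:

```go
package main

import "fmt"

// listShardsFunc is a hypothetical stand-in for a Kinesis ListShards call.
type listShardsFunc func() []string

// shardBroker tracks which shards have been seen and reports only the
// new ones, so the consumer can start processing each as it appears.
type shardBroker struct {
	seen map[string]bool
	list listShardsFunc
}

func newShardBroker(list listShardsFunc) *shardBroker {
	return &shardBroker{seen: map[string]bool{}, list: list}
}

// poll returns shards that appeared since the previous call.
func (b *shardBroker) poll() []string {
	var fresh []string
	for _, id := range b.list() {
		if !b.seen[id] {
			b.seen[id] = true
			fresh = append(fresh, id)
		}
	}
	return fresh
}

func main() {
	shards := []string{"shardId-000"}
	b := newShardBroker(func() []string { return shards })

	fmt.Println(b.poll()) // first poll reports the initial shard

	shards = append(shards, "shardId-001") // stream is resharded
	fmt.Println(b.poll())                  // only the new shard is reported
}
```

In the real consumer the broker would poll on a timer and notify the consumer over a channel; the diff-against-seen-set logic is the core idea.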
Major changes:
```go
type ScanFunc func(r *Record) error
```
* Simplify the callback func signature by removing `ScanStatus`
* Leverage context for cancellation
* Add custom error `SkipCheckpoint` for special cases when we don't want to checkpoint
Minor changes:
* Use kinesis package constants for shard iterator types
* Move optional config to new file
See conversation on #75 for more details
Having an additional Client type caused some confusion (https://github.com/harlow/kinesis-consumer/issues/45) about how to provide a
custom Kinesis client. Allowing `WithClient` to accept a Kinesis client
directly cleans up the interface.
Major changes:
* Remove the Client wrapper; prefer using kinesis client directly
* Change `ScanError` to `ScanStatus` as the return value isn't necessarily an error
Note: these are breaking changes; if you need the last stable release, see here: https://github.com/harlow/kinesis-consumer/releases/tag/v0.2.0
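A sketch of how a `WithClient` functional option can inject a custom client. The interface and struct fields here are assumptions for illustration; the real library accepts the AWS SDK's Kinesis client:

```go
package main

import "fmt"

// kinesisAPI is a hypothetical stand-in for the subset of the Kinesis
// client the consumer needs.
type kinesisAPI interface {
	StreamName() string
}

type Consumer struct {
	client kinesisAPI
}

type Option func(*Consumer)

// WithClient lets callers inject a custom (e.g. mocked or
// region-specific) Kinesis client, removing the intermediate wrapper.
func WithClient(c kinesisAPI) Option {
	return func(con *Consumer) { con.client = c }
}

func New(opts ...Option) *Consumer {
	c := &Consumer{client: defaultClient{}}
	for _, opt := range opts {
		opt(c)
	}
	return c
}

type defaultClient struct{}

func (defaultClient) StreamName() string { return "default" }

type customClient struct{}

func (customClient) StreamName() string { return "custom" }

func main() {
	fmt.Println(New().client.StreamName())                       // default client
	fmt.Println(New(WithClient(customClient{})).client.StreamName()) // injected client
}
```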
* remove ValidateCheckpoint
* update: checkpoint can not customize the retryer
* implement the scan error as in PR #44
* at least log an error when the record processor fails
* mistakenly removed this line
* propagate error up; ignore invalid state
Add interval flush for DDB checkpoint
* Allow checkpointing on a specified interval
* Add shutdown method to checkpoint to force flush
Minor changes:
* Swap order of input params for checkpoint (app, table)
Addresses: https://github.com/harlow/kinesis-consumer/issues/39
The Checkpoint functionality is an important part of the library and
previously it wasn't obvious that the Consumer was defaulting to Redis
for this functionality.
* Add Checkpoint as required param for new consumer
Major changes:
* Remove intermediate batching of kinesis records
* Call the callback func with each record
* Use functional options for config
https://dave.cheney.net/2014/10/17/functional-options-for-friendly-apis
Minor changes:
* update README messaging about Kinesis -> Firehose functionality
* remove unused buffer and emitter code
The previous pipeline model required a lot of setup and abstracted away
the processing of records. Passing a HandlerFunc to the consumer keeps
the business logic of processing records closer to where the consumer is
used.
* Add refactoring note and SHA to README