* Use ApplicationName as default for EnhancedFanOutConsumerName
Signed-off-by: Ilia Cimpoes <ilia.cimpoes@ellation.com>
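A minimal sketch of how such a fallback could look; `Config` and `applyDefaults` are hypothetical stand-ins for the library's actual configuration types, not its real API.

```go
package main

import "fmt"

// Config is a pared-down stand-in for the worker configuration;
// the real struct in the library has many more fields.
type Config struct {
	ApplicationName            string
	EnhancedFanOutConsumerName string
}

// applyDefaults falls back to ApplicationName when no explicit
// enhanced fan-out consumer name is configured.
func applyDefaults(c *Config) {
	if c.EnhancedFanOutConsumerName == "" {
		c.EnhancedFanOutConsumerName = c.ApplicationName
	}
}

func main() {
	c := &Config{ApplicationName: "my-app"}
	applyDefaults(c)
	fmt.Println(c.EnhancedFanOutConsumerName) // my-app
}
```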
* Add tests
Signed-off-by: Ilia Cimpoes <ilia.cimpoes@ellation.com>
* Implement enhanced fan-out consumer
Signed-off-by: Ilia Cimpoes <ilia.cimpoes@ellation.com>
* Add test cases
Signed-off-by: Ilia Cimpoes <ilia.cimpoes@ellation.com>
* Small adjustments in fan-out consumer
Signed-off-by: Ilia Cimpoes <ilia.cimpoes@ellation.com>
Add support for the Kinesis aggregation format to consume records
published by the KPL.
Note: the current implementation needs to checkpoint the whole batch
of de-aggregated records instead of just a portion of them.
Add cache entry and exit time.
Signed-off-by: Tao Jiang <taoj@vmware.com>
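The whole-batch checkpointing constraint can be sketched as follows; `Record`, `processBatch`, and the callback signatures are illustrative stand-ins, not the library's real types.

```go
package main

import "fmt"

// Record is a minimal stand-in for a de-aggregated Kinesis record.
type Record struct {
	SequenceNumber string
	Data           []byte
}

// processBatch illustrates the constraint noted above: sub-records of
// a KPL aggregate share one parent sequence number, so checkpointing
// mid-batch could skip or re-deliver sub-records. The checkpoint is
// therefore taken only after the whole batch has been processed.
func processBatch(records []Record, process func(Record), checkpoint func(seq string)) {
	for _, r := range records {
		process(r)
	}
	if n := len(records); n > 0 {
		checkpoint(records[n-1].SequenceNumber)
	}
}

func main() {
	batch := []Record{{SequenceNumber: "1"}, {SequenceNumber: "1"}, {SequenceNumber: "2"}}
	processBatch(batch,
		func(r Record) { fmt.Println("processed sub-record of", r.SequenceNumber) },
		func(seq string) { fmt.Println("checkpoint at", seq) })
}
```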
Add min/max retry and throttle delays to the retryer.
Also, increase the max retries to 10, which is in line with
the DynamoDB default retry count.
Signed-off-by: Tao Jiang <taoj@vmware.com>
Update the AWS Go SDK to the latest version. Also, update
the integration tests by publishing data using both
PutRecord and PutRecords.
Signed-off-by: Tao Jiang <taoj@vmware.com>
The FieldLogger interface is satisfied by either *Logger or *Entry.
Accepting this interface in place of the concrete *Logger type allows
users to inject a logger with some fields already set. For example, the
application developer might want all logging from the library to have a
`subsystem=kcl` field.
Signed-off-by: Mike Pye <mail@mdpye.co.uk>
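The injection pattern described above can be sketched like this; `FieldLogger` here is a trimmed-down imitation of the logrus interface, and `mapLogger` and `NewWorker` are toy names invented for the example.

```go
package main

import "fmt"

// FieldLogger imitates a slice of the logrus.FieldLogger interface,
// which both *logrus.Logger and *logrus.Entry satisfy.
type FieldLogger interface {
	WithField(key string, value interface{}) FieldLogger
	Printf(format string, args ...interface{})
}

// mapLogger is a toy implementation used here for illustration only.
type mapLogger struct{ fields map[string]interface{} }

func (l mapLogger) WithField(k string, v interface{}) FieldLogger {
	f := map[string]interface{}{}
	for k2, v2 := range l.fields {
		f[k2] = v2 // carry previously attached fields forward
	}
	f[k] = v
	return mapLogger{fields: f}
}

func (l mapLogger) Printf(format string, args ...interface{}) {
	fmt.Printf("%v ", l.fields)
	fmt.Printf(format+"\n", args...)
}

// NewWorker accepts the interface rather than a concrete logger type,
// so callers can inject a logger with fields already attached.
func NewWorker(log FieldLogger) {
	log.Printf("worker started")
}

func main() {
	base := mapLogger{fields: map[string]interface{}{}}
	// All library logging now carries subsystem=kcl.
	NewWorker(base.WithField("subsystem", "kcl"))
}
```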
* Refactor
* Use `nextToken` parameter as a string.
Use the `nextToken` parameter as a string instead of a pointer to match the original code base.
* Log the last shard token when failing.
* Use aws.StringValue to get the value of the string pointer.
Co-authored-by: Wesam Gerges <wesam.gerges.discovery@gmail.com>
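For context, a local `stringValue` helper sketching the behavior of `aws.StringValue`: a nil-safe dereference, so the last shard token can be logged without risking a nil-pointer panic. The helper name is invented for this example.

```go
package main

import "fmt"

// stringValue mirrors the behavior of aws.StringValue: it
// dereferences a *string, returning "" when the pointer is nil.
func stringValue(s *string) string {
	if s == nil {
		return ""
	}
	return *s
}

func main() {
	var nextToken *string
	fmt.Printf("last shard token: %q\n", stringValue(nextToken)) // ""
	t := "abc123"
	nextToken = &t
	fmt.Printf("last shard token: %q\n", stringValue(nextToken)) // "abc123"
}
```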
Pull-request #62 wrongly introduced an increased delay on
shutdown.
Before #62 the `stop` channel could be triggered while waiting for
`syncShard` milliseconds, so the function could return as soon as
`stop` was received.
However, #62 changed this behavior by sleeping in the default case:
`stop` could no longer be handled right away. Instead it was
handled after a whole new loop, potentially delaying shutdown by
minutes (up to syncShard * 1.5 ms).
This commit fixes that.
Signed-off-by: Aurélien Rainone <aurelien.rainone@gmail.com>
* Add a random number generator to Worker
Signed-off-by: Aurélien Rainone <aurelien.rainone@gmail.com>
* Add random jitter to the worker shard sync sleep
Signed-off-by: Aurélien Rainone <aurelien.rainone@gmail.com>
* Add random jitter in case syncShard fails
Fixes #61
Signed-off-by: Aurélien Rainone <aurelien.rainone@gmail.com>
Previously, a WaitGroup was used to track executing ShardConsumers
and prevent Worker.Shutdown() from returning until all ShardConsumers
had completed. Unfortunately, it was possible for Shutdown() to race
with the eventLoop(), leading to a situation where Worker.Shutdown()
returns while a ShardConsumer is still executing.
Now, we increment the WaitGroup to keep track of the eventLoop() as
well as the ShardConsumers. This prevents Shutdown() from returning
until all background goroutines have completed.
Signed-off-by: Daniel Ferstay <dferstay@splunk.com>
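The bookkeeping described above can be sketched like this; the `Worker` shown is a heavily simplified stand-in (the real worker discovers shards dynamically), with `Consumed` added purely to make the behavior observable.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// Worker sketches the shutdown bookkeeping: the WaitGroup counts the
// eventLoop goroutine as well as each shard consumer, so Shutdown
// cannot return while either is still running.
type Worker struct {
	wg       sync.WaitGroup
	stop     chan struct{}
	consumed int64
}

func (w *Worker) Start() {
	w.stop = make(chan struct{})
	w.wg.Add(1) // count the event loop itself, not only the consumers
	go w.eventLoop()
}

func (w *Worker) eventLoop() {
	defer w.wg.Done()
	for i := 0; i < 2; i++ { // stand-in for discovering shards
		w.wg.Add(1)
		go func(shard int) {
			defer w.wg.Done()
			atomic.AddInt64(&w.consumed, 1) // stand-in for record processing
		}(i)
	}
	<-w.stop
}

// Shutdown blocks until the event loop and all consumers finish.
func (w *Worker) Shutdown() {
	close(w.stop)
	w.wg.Wait()
}

// Consumed reports how many shard consumers completed.
func (w *Worker) Consumed() int64 { return atomic.LoadInt64(&w.consumed) }

func main() {
	w := &Worker{}
	w.Start()
	w.Shutdown()
	fmt.Println("all goroutines finished; shards consumed:", w.Consumed())
}
```

Because the event loop holds its own count in the WaitGroup, the counter never reaches zero while it is still spawning consumers, which closes the race between Shutdown() and eventLoop().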