Merge branch 'awslabs:v1.x' into v1.x

philltomlinson 2024-02-19 11:58:47 +00:00 committed by GitHub
commit 4077af47a7
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
58 changed files with 1845 additions and 528 deletions

.github/workflows/maven.yml

@@ -0,0 +1,32 @@
# This workflow will build a Java project with Maven, and cache/restore any dependencies to improve the workflow execution time
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-java-with-maven
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.
name: Java CI with Maven
on:
push:
branches:
- 'v1.x'
pull_request:
branches:
- 'v1.x'
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up JDK 8
uses: actions/setup-java@v3
with:
java-version: '8'
distribution: 'corretto'
- name: Build with Maven
run: mvn -B package --file pom.xml -DskipITs

@@ -1,6 +1,67 @@
# Changelog
### Latest Release (1.15.1 - Feb 5, 2024)
* [#1214](https://github.com/awslabs/amazon-kinesis-client/pull/1214) Added backoff logic for ShardSyncTaskIntegrationTest
* [#1214](https://github.com/awslabs/amazon-kinesis-client/pull/1214) Upgrade Guava version from 31.0.1 to 32.1.1
* [#1252](https://github.com/awslabs/amazon-kinesis-client/pull/1252) Upgrade aws-java-sdk from 1.12.406 to 1.12.647
## Latest Release (1.14.0 - August 17, 2020)
### Release (1.15.0 - Jun 8, 2023)
* **[#1108](https://github.com/awslabs/amazon-kinesis-client/pull/1108) Add support for Stream ARNs**
* [#1111](https://github.com/awslabs/amazon-kinesis-client/pull/1111) More consistent testing behavior with HashRangesAreAlwaysComplete
* [#1054](https://github.com/awslabs/amazon-kinesis-client/pull/1054) Upgrade log4j-core from 2.17.1 to 2.20.0
* [#1103](https://github.com/awslabs/amazon-kinesis-client/pull/1103) Upgrade jackson-core from 2.13.0 to 2.15.0
* [#943](https://github.com/awslabs/amazon-kinesis-client/pull/943) Upgrade nexus-staging-maven-plugin from 1.6.8 to 1.6.13
* [#1044](https://github.com/awslabs/amazon-kinesis-client/pull/1044) Upgrade aws-java-sdk.version from 1.12.406 to 1.12.408
* [#1055](https://github.com/awslabs/amazon-kinesis-client/pull/1055) Upgrade maven-compiler-plugin from 3.10.0 to 3.11.0
### Release (1.14.10 - Feb 15, 2023)
* Updated aws-java-sdk from 1.12.130 to 1.12.406
* Updated com.google.protobuf from 3.19.4 to 3.19.6
* [Issue #1026](https://github.com/awslabs/amazon-kinesis-client/issues/1026)
* [PR #1042](https://github.com/awslabs/amazon-kinesis-client/pull/1042)
### Release (1.14.9 - Dec 14, 2022)
* [#995](https://github.com/awslabs/amazon-kinesis-client/commit/372f98b21a91487e36612d528c56765a44b0aa86) Every other change for DynamoDBStreamsKinesis Adapter Compatibility
* [#970](https://github.com/awslabs/amazon-kinesis-client/commit/251b331a2e0fd912b50f8b5a12d088bf0b3263b9) PeriodicShardSyncManager Changes Needed for DynamoDBStreamsKinesisAdapter
### Release (1.14.8 - Feb 24, 2022)
* [Bump log4j-core from 2.17.0 to 2.17.1](https://github.com/awslabs/amazon-kinesis-client/commit/94b138a9d9a502ee0f4f000bb0efd2766ebadc37)
* [Bump protobuf-java from 3.19.1 to 3.19.4](https://github.com/awslabs/amazon-kinesis-client/commit/a809b12c43c57a3d6ad3827feb60e4322614259c)
* [Bump maven-compiler-plugin from 3.8.1 to 3.10.0](https://github.com/awslabs/amazon-kinesis-client/commit/37b5d7b9a1ccad483469ef542a6a7237462b14f2)
### Release (1.14.7 - Dec 22, 2021)
* [#881](https://github.com/awslabs/amazon-kinesis-client/pull/881) Update log4j test dependency from 2.16.0 to 2.17.0 and some other dependencies
### Release (1.14.6 - Dec 15, 2021)
* [#876](https://github.com/awslabs/amazon-kinesis-client/pull/876) Update log4j test dependency from 2.15.0 to 2.16.0
### Release (1.14.5 - Dec 10, 2021)
* [#872](https://github.com/awslabs/amazon-kinesis-client/pull/872) Update log4j test dependency from 1.2.17 to 2.15.0
* [#873](https://github.com/awslabs/amazon-kinesis-client/pull/873) Upgrading version of AWS Java SDK to 1.12.128
### Release (1.14.4 - June 14, 2021)
* [Milestone#61](https://github.com/awslabs/amazon-kinesis-client/milestone/61)
* [#816](https://github.com/awslabs/amazon-kinesis-client/pull/816) Updated the Worker shutdown logic to make sure that the `LeaseCleanupManager` also terminates all the threads that it has started.
* [#821](https://github.com/awslabs/amazon-kinesis-client/pull/821) Upgrading version of AWS Java SDK to 1.12.3
### Release (1.14.3 - May 3, 2021)
* [Milestone#60](https://github.com/awslabs/amazon-kinesis-client/milestone/60)
* [#811](https://github.com/awslabs/amazon-kinesis-client/pull/811) Fixing a bug in `KinesisProxy` that can lead to undetermined behavior during partial failures.
* [#811](https://github.com/awslabs/amazon-kinesis-client/pull/811) Adding guardrails to handle duplicate shards from the service.
## Release (1.14.2 - February 24, 2021)
* [Milestone#57](https://github.com/awslabs/amazon-kinesis-client/milestone/57)
* [#790](https://github.com/awslabs/amazon-kinesis-client/pull/790) Fixing a bug that caused paginated `ListShards` calls with the `ShardFilter` parameter to fail when the lease table was being initialized.
## Release (1.14.1 - January 27, 2021)
* [Milestone#56](https://github.com/awslabs/amazon-kinesis-client/milestone/56)
* Fix for cross DDB table interference when multiple KCL applications are run in same JVM.
* Fix and guards to avoid potential checkpoint rewind during shard end, which may block children shard processing.
* Fix for thread cycle wastage on InitializeTask for deleted shard.
* Improved logging in LeaseCleanupManager that would indicate why certain shards are not cleaned up from the lease table.
## Release (1.14.0 - August 17, 2020)
* [Milestone#50](https://github.com/awslabs/amazon-kinesis-client/milestone/50)

@@ -1,3 +1,6 @@
# Bugs in 1.14.0 version
We recommend that customers migrate to 1.14.1 to avoid [known bugs](https://github.com/awslabs/amazon-kinesis-client/issues/778) in the 1.14.0 version
# Amazon Kinesis Client Library for Java
[![Build Status](https://travis-ci.org/awslabs/amazon-kinesis-client.svg?branch=master)](https://travis-ci.org/awslabs/amazon-kinesis-client) ![BuildStatus](https://codebuild.us-west-2.amazonaws.com/badges?uuid=eyJlbmNyeXB0ZWREYXRhIjoiaWo4bDYyUkpWaG9ZTy9zeFVoaVlWbEwxazdicDJLcmZwUUpFWVVBM0ZueEJSeFIzNkhURzdVbUd6WUZHcGNxa3BEUzNrL0I5Nzc4NE9rbXhvdEpNdlFRPSIsIml2UGFyYW1ldGVyU3BlYyI6IlZDaVZJSTM1QW95bFRTQnYiLCJtYXRlcmlhbFNldFNlcmlhbCI6MX0%3D&branch=v1.x)
@@ -31,7 +34,68 @@ To make it easier for developers to write record processors in other languages,
## Release Notes
### Latest Release (1.14.0 - August 17, 2020)
### Latest Release (1.15.1 - Feb 5, 2024)
* [#1214](https://github.com/awslabs/amazon-kinesis-client/pull/1214) Added backoff logic for ShardSyncTaskIntegrationTest
* [#1214](https://github.com/awslabs/amazon-kinesis-client/pull/1214) Upgrade Guava version from 31.0.1 to 32.1.1
* [#1252](https://github.com/awslabs/amazon-kinesis-client/pull/1252) Upgrade aws-java-sdk from 1.12.406 to 1.12.647
### Release (1.15.0 - Jun 8, 2023)
* **[#1108](https://github.com/awslabs/amazon-kinesis-client/pull/1108) Add support for Stream ARNs**
* [#1111](https://github.com/awslabs/amazon-kinesis-client/pull/1111) More consistent testing behavior with HashRangesAreAlwaysComplete
* [#1054](https://github.com/awslabs/amazon-kinesis-client/pull/1054) Upgrade log4j-core from 2.17.1 to 2.20.0
* [#1103](https://github.com/awslabs/amazon-kinesis-client/pull/1103) Upgrade jackson-core from 2.13.0 to 2.15.0
* [#943](https://github.com/awslabs/amazon-kinesis-client/pull/943) Upgrade nexus-staging-maven-plugin from 1.6.8 to 1.6.13
* [#1044](https://github.com/awslabs/amazon-kinesis-client/pull/1044) Upgrade aws-java-sdk.version from 1.12.406 to 1.12.408
* [#1055](https://github.com/awslabs/amazon-kinesis-client/pull/1055) Upgrade maven-compiler-plugin from 3.10.0 to 3.11.0
### Release (1.14.10 - Feb 15, 2023)
* Updated aws-java-sdk from 1.12.130 to 1.12.406
* Updated com.google.protobuf from 3.19.4 to 3.19.6
* [Issue #1026](https://github.com/awslabs/amazon-kinesis-client/issues/1026)
* [PR #1042](https://github.com/awslabs/amazon-kinesis-client/pull/1042)
### Release (1.14.9 - Dec 14, 2022)
* [#995](https://github.com/awslabs/amazon-kinesis-client/commit/372f98b21a91487e36612d528c56765a44b0aa86) Every other change for DynamoDBStreamsKinesis Adapter Compatibility
* [#970](https://github.com/awslabs/amazon-kinesis-client/commit/251b331a2e0fd912b50f8b5a12d088bf0b3263b9) PeriodicShardSyncManager Changes Needed for DynamoDBStreamsKinesisAdapter
### Release (1.14.8 - Feb 24, 2022)
* [Bump log4j-core from 2.17.0 to 2.17.1](https://github.com/awslabs/amazon-kinesis-client/commit/94b138a9d9a502ee0f4f000bb0efd2766ebadc37)
* [Bump protobuf-java from 3.19.1 to 3.19.4](https://github.com/awslabs/amazon-kinesis-client/commit/a809b12c43c57a3d6ad3827feb60e4322614259c)
* [Bump maven-compiler-plugin from 3.8.1 to 3.10.0](https://github.com/awslabs/amazon-kinesis-client/commit/37b5d7b9a1ccad483469ef542a6a7237462b14f2)
### Release (1.14.7 - Dec 22, 2021)
* [#881](https://github.com/awslabs/amazon-kinesis-client/pull/881) Update log4j test dependency from 2.16.0 to 2.17.0 and some other dependencies
### Release (1.14.6 - Dec 15, 2021)
* [#876](https://github.com/awslabs/amazon-kinesis-client/pull/876) Update log4j test dependency from 2.15.0 to 2.16.0
### Release (1.14.5 - Dec 10, 2021)
* [#872](https://github.com/awslabs/amazon-kinesis-client/pull/872) Update log4j test dependency from 1.2.17 to 2.15.0
* [#873](https://github.com/awslabs/amazon-kinesis-client/pull/873) Upgrading version of AWS Java SDK to 1.12.128
### Release (1.14.4 - June 14, 2021)
* [Milestone#61](https://github.com/awslabs/amazon-kinesis-client/milestone/61)
* [#816](https://github.com/awslabs/amazon-kinesis-client/pull/816) Updated the Worker shutdown logic to make sure that the `LeaseCleanupManager` also terminates all the threads that it has started.
* [#821](https://github.com/awslabs/amazon-kinesis-client/pull/821) Upgrading version of AWS Java SDK to 1.12.3
### Release (1.14.3 - May 3, 2021)
* [Milestone#60](https://github.com/awslabs/amazon-kinesis-client/milestone/60)
* [#811](https://github.com/awslabs/amazon-kinesis-client/pull/811) Fixing a bug in `KinesisProxy` that can lead to undetermined behavior during partial failures.
* [#811](https://github.com/awslabs/amazon-kinesis-client/pull/811) Adding guardrails to handle duplicate shards from the service.
## Release (1.14.2 - February 24, 2021)
* [Milestone#57](https://github.com/awslabs/amazon-kinesis-client/milestone/57)
* [#790](https://github.com/awslabs/amazon-kinesis-client/pull/790) Fixing a bug that caused paginated `ListShards` calls with the `ShardFilter` parameter to fail when the lease table was being initialized.
### Release (1.14.1 - January 27, 2021)
* [Milestone#56](https://github.com/awslabs/amazon-kinesis-client/milestone/56)
* Fix for cross DDB table interference when multiple KCL applications are run in same JVM.
* Fix and guards to avoid potential checkpoint rewind during shard end, which may block children shard processing.
* Fix for thread cycle wastage on InitializeTask for deleted shard.
* Improved logging in LeaseCleanupManager that would indicate why certain shards are not cleaned up from the lease table.
### Release (1.14.0 - August 17, 2020)
* [Milestone#50](https://github.com/awslabs/amazon-kinesis-client/milestone/50)

pom.xml

@@ -6,7 +6,7 @@
<artifactId>amazon-kinesis-client</artifactId>
<packaging>jar</packaging>
<name>Amazon Kinesis Client Library for Java</name>
<version>1.14.0</version>
<version>1.15.1-SNAPSHOT</version>
<description>The Amazon Kinesis Client Library for Java enables Java developers to easily consume and process data
from Amazon Kinesis.
</description>
@@ -25,13 +25,18 @@
</licenses>
<properties>
<aws-java-sdk.version>1.11.844</aws-java-sdk.version>
<aws-java-sdk.version>1.12.647</aws-java-sdk.version>
<sqlite4java.version>1.0.392</sqlite4java.version>
<sqlite4java.native>libsqlite4java</sqlite4java.native>
<sqlite4java.libpath>${project.build.directory}/test-lib</sqlite4java.libpath>
</properties>
<dependencies>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-core</artifactId>
<version>${aws-java-sdk.version}</version>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-dynamodb</artifactId>
@@ -50,27 +55,27 @@
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>26.0-jre</version>
<version>32.1.1-jre</version>
</dependency>
<dependency>
<groupId>com.google.protobuf</groupId>
<artifactId>protobuf-java</artifactId>
<version>3.11.4</version>
<version>3.19.6</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
<version>3.7</version>
<version>3.12.0</version>
</dependency>
<dependency>
<groupId>commons-logging</groupId>
<artifactId>commons-logging</artifactId>
<version>1.1.3</version>
<version>1.2</version>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<version>1.16.10</version>
<version>1.18.22</version>
<scope>provided</scope>
</dependency>
@@ -78,7 +83,7 @@
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.11</version>
<version>4.13.2</version>
<scope>test</scope>
</dependency>
@@ -99,16 +104,29 @@
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>DynamoDBLocal</artifactId>
<version>1.11.86</version>
<version>1.17.2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
<version>2.20.0</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-api</artifactId>
<version>2.20.0</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-core</artifactId>
<version>2.15.0</version>
</dependency>
</dependencies>
<repositories>
@@ -136,7 +154,7 @@
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.2</version>
<version>3.11.0</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
@@ -150,7 +168,7 @@
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.19.1</version>
<version>2.22.2</version>
<configuration>
<excludes>
<exclude>**/*IntegrationTest.java</exclude>
@@ -166,7 +184,7 @@
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-failsafe-plugin</artifactId>
<version>2.19.1</version>
<version>2.22.2</version>
<configuration>
<includes>
<include>**/*IntegrationTest.java</include>
@@ -252,7 +270,7 @@
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-javadoc-plugin</artifactId>
<version>2.10.3</version>
<version>3.4.1</version>
<configuration>
<excludePackageNames>com.amazonaws.services.kinesis.producer.protobuf</excludePackageNames>
</configuration>
@@ -268,7 +286,7 @@
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-source-plugin</artifactId>
<version>3.0.1</version>
<version>3.2.1</version>
<executions>
<execution>
<id>attach-sources</id>
@@ -300,7 +318,7 @@
<jdk>[1.8,)</jdk>
</activation>
<properties>
<additionalparam>-Xdoclint:none</additionalparam>
<doclint>none</doclint>
</properties>
</profile>
@@ -311,7 +329,7 @@
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-gpg-plugin</artifactId>
<version>1.6</version>
<version>3.0.1</version>
<executions>
<execution>
<id>sign-artifacts</id>
@@ -325,7 +343,7 @@
<plugin>
<groupId>org.sonatype.plugins</groupId>
<artifactId>nexus-staging-maven-plugin</artifactId>
<version>1.6.8</version>
<version>1.6.13</version>
<extensions>true</extensions>
<configuration>
<serverId>sonatype-nexus-staging</serverId>

@@ -45,24 +45,24 @@ public class AsynchronousGetRecordsRetrievalStrategy implements GetRecordsRetrie
private static final int TIME_TO_KEEP_ALIVE = 5;
private static final int CORE_THREAD_POOL_COUNT = 1;
private final KinesisDataFetcher dataFetcher;
private final IDataFetcher dataFetcher;
private final ExecutorService executorService;
private final int retryGetRecordsInSeconds;
private final String shardId;
final Supplier<CompletionService<DataFetcherResult>> completionServiceSupplier;
public AsynchronousGetRecordsRetrievalStrategy(@NonNull final KinesisDataFetcher dataFetcher,
public AsynchronousGetRecordsRetrievalStrategy(@NonNull final IDataFetcher dataFetcher,
final int retryGetRecordsInSeconds, final int maxGetRecordsThreadPool, String shardId) {
this(dataFetcher, buildExector(maxGetRecordsThreadPool, shardId), retryGetRecordsInSeconds, shardId);
}
public AsynchronousGetRecordsRetrievalStrategy(final KinesisDataFetcher dataFetcher,
public AsynchronousGetRecordsRetrievalStrategy(final IDataFetcher dataFetcher,
final ExecutorService executorService, final int retryGetRecordsInSeconds, String shardId) {
this(dataFetcher, executorService, retryGetRecordsInSeconds, () -> new ExecutorCompletionService<>(executorService),
shardId);
}
AsynchronousGetRecordsRetrievalStrategy(KinesisDataFetcher dataFetcher, ExecutorService executorService,
AsynchronousGetRecordsRetrievalStrategy(IDataFetcher dataFetcher, ExecutorService executorService,
int retryGetRecordsInSeconds, Supplier<CompletionService<DataFetcherResult>> completionServiceSupplier,
String shardId) {
this.dataFetcher = dataFetcher;
@@ -148,7 +148,7 @@ public class AsynchronousGetRecordsRetrievalStrategy implements GetRecordsRetrie
}
@Override
public KinesisDataFetcher getDataFetcher() {
public IDataFetcher getDataFetcher() {
return dataFetcher;
}
}
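The strategy above races a possibly slow getRecords attempt against a retry: attempts are submitted to an `ExecutorCompletionService`, and whichever completes first wins. A minimal, self-contained sketch of that pattern, with stub tasks and made-up timings standing in for the real getRecords calls:

```java
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class CompletionServiceSketch {
    /** Submits a slow attempt; if it misses the retry window, submits another and takes the first result. */
    static String firstResult() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            CompletionService<String> cs = new ExecutorCompletionService<>(pool);
            // First attempt is slow, simulating a stalled getRecords call.
            cs.submit(() -> { Thread.sleep(500); return "slow"; });
            // Wait up to the retry window for it to complete.
            Future<String> done = cs.poll(100, TimeUnit.MILLISECONDS);
            if (done == null) {
                cs.submit(() -> "fast"); // retry attempt
                done = cs.take();        // whichever attempt completes first wins
            }
            return done.get();
        } finally {
            pool.shutdownNow(); // interrupt any attempt still in flight
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(firstResult()); // prints "fast": the retry wins the race
    }
}
```

The real class adds shard-aware thread naming and rejection handling; this sketch only shows the core poll-then-retry race.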

@@ -30,7 +30,7 @@ import com.amazonaws.services.kinesis.leases.interfaces.ILeaseManager;
* If we don't find a checkpoint for the parent shard(s), we assume they have been trimmed and directly
* proceed with processing data from the shard.
*/
class BlockOnParentShardTask implements ITask {
public class BlockOnParentShardTask implements ITask {
private static final Log LOG = LogFactory.getLog(BlockOnParentShardTask.class);
private final ShardInfo shardInfo;
@@ -45,7 +45,7 @@ class BlockOnParentShardTask implements ITask {
* @param leaseManager Used to fetch the lease and checkpoint info for parent shards
* @param parentShardPollIntervalMillis Sleep time if the parent shard has not completed processing
*/
BlockOnParentShardTask(ShardInfo shardInfo,
public BlockOnParentShardTask(ShardInfo shardInfo,
ILeaseManager<KinesisClientLease> leaseManager,
long parentShardPollIntervalMillis) {
this.shardInfo = shardInfo;

@@ -46,9 +46,9 @@ public interface GetRecordsRetrievalStrategy {
boolean isShutdown();
/**
* Returns the KinesisDataFetcher used to getRecords from Kinesis.
* Returns the IDataFetcher used to getRecords
*
* @return KinesisDataFetcher
* @return IDataFetcher
*/
KinesisDataFetcher getDataFetcher();
IDataFetcher getDataFetcher();
}

@@ -0,0 +1,23 @@
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
import com.amazonaws.services.kinesis.clientlibrary.types.ExtendedSequenceNumber;
import com.amazonaws.services.kinesis.model.ChildShard;
import java.util.List;
public interface IDataFetcher {
DataFetcherResult getRecords(int maxRecords);
void initialize(String initialCheckpoint, InitialPositionInStreamExtended initialPositionInStream);
void initialize(ExtendedSequenceNumber initialCheckpoint, InitialPositionInStreamExtended initialPositionInStream);
void advanceIteratorTo(String sequenceNumber, InitialPositionInStreamExtended initialPositionInStream);
void restartIterator();
boolean isShardEndReached();
List<ChildShard> getChildShards();
}

@@ -0,0 +1,28 @@
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
import com.google.common.annotations.VisibleForTesting;
import lombok.Value;
import lombok.experimental.Accessors;
public interface IPeriodicShardSyncManager {
TaskResult start();
/**
* Runs ShardSync once, without scheduling further periodic ShardSyncs.
* @return TaskResult from shard sync
*/
TaskResult syncShardsOnce();
void stop();
@Value
@Accessors(fluent = true)
@VisibleForTesting
class ShardSyncResponse {
private final boolean shouldDoShardSync;
private final boolean isHoleDetected;
private final String reasonForDecision;
}
}
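For readers unfamiliar with Lombok, the `@Value` / `@Accessors(fluent = true)` pair on the nested `ShardSyncResponse` generates an immutable class whose accessors drop the usual get/is prefixes. A hand-expanded plain-Java approximation (a sketch, not Lombok's exact output; `equals`, `hashCode`, and `toString` are omitted):

```java
public final class ShardSyncResponse {
    private final boolean shouldDoShardSync;
    private final boolean isHoleDetected;
    private final String reasonForDecision;

    public ShardSyncResponse(boolean shouldDoShardSync, boolean isHoleDetected, String reasonForDecision) {
        this.shouldDoShardSync = shouldDoShardSync;
        this.isHoleDetected = isHoleDetected;
        this.reasonForDecision = reasonForDecision;
    }

    // fluent = true: accessor names equal the field names, with no get/is prefix
    public boolean shouldDoShardSync() { return shouldDoShardSync; }
    public boolean isHoleDetected() { return isHoleDetected; }
    public String reasonForDecision() { return reasonForDecision; }

    public static void main(String[] args) {
        ShardSyncResponse r = new ShardSyncResponse(true, false, "hash ranges complete");
        System.out.println(r.shouldDoShardSync()); // true
    }
}
```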

@@ -0,0 +1,25 @@
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
public interface IShardConsumer {
boolean isSkipShardSyncAtWorkerInitializationIfLeasesExist();
enum TaskOutcome {
SUCCESSFUL, END_OF_SHARD, NOT_COMPLETE, FAILURE, LEASE_NOT_FOUND
}
boolean consumeShard();
boolean isShutdown();
ShutdownReason getShutdownReason();
boolean beginShutdown();
void notifyShutdownRequested(ShutdownNotification shutdownNotification);
KinesisConsumerStates.ShardConsumerState getCurrentState();
boolean isShutdownRequested();
}

@@ -0,0 +1,34 @@
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.ICheckpoint;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor;
import com.amazonaws.services.kinesis.leases.impl.LeaseCleanupManager;
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsFactory;
import java.util.Optional;
import java.util.concurrent.ExecutorService;
public interface IShardConsumerFactory {
/**
* Returns a shard consumer to be used for consuming an (assigned) shard.
*
* @return Returns a shard consumer object.
*/
IShardConsumer createShardConsumer(ShardInfo shardInfo,
StreamConfig streamConfig,
ICheckpoint checkpointTracker,
IRecordProcessor recordProcessor,
RecordProcessorCheckpointer recordProcessorCheckpointer,
KinesisClientLibLeaseCoordinator leaseCoordinator,
long parentShardPollIntervalMillis,
boolean cleanupLeasesUponShardCompletion,
ExecutorService executorService,
IMetricsFactory metricsFactory,
long taskBackoffTimeMillis,
boolean skipShardSyncAtWorkerInitializationIfLeasesExist,
Optional<Integer> retryGetRecordsInSeconds,
Optional<Integer> maxGetRecordsThreadPool,
KinesisClientLibConfiguration config, ShardSyncer shardSyncer, ShardSyncStrategy shardSyncStrategy,
LeaseCleanupManager leaseCleanupManager);
}

@@ -20,7 +20,7 @@ import java.util.concurrent.Callable;
* Interface for shard processing tasks.
* A task may execute an application callback (e.g. initialize, process, shutdown).
*/
interface ITask extends Callable<TaskResult> {
public interface ITask extends Callable<TaskResult> {
/**
* Perform task logic.

@@ -14,6 +14,7 @@
*/
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
import com.amazonaws.services.kinesis.clientlibrary.exceptions.KinesisClientLibNonRetryableException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
@@ -28,7 +29,7 @@ import com.amazonaws.services.kinesis.metrics.interfaces.MetricsLevel;
/**
* Task for initializing shard position and invoking the RecordProcessor initialize() API.
*/
class InitializeTask implements ITask {
public class InitializeTask implements ITask {
private static final Log LOG = LogFactory.getLog(InitializeTask.class);
@@ -36,7 +37,7 @@ class InitializeTask implements ITask {
private final ShardInfo shardInfo;
private final IRecordProcessor recordProcessor;
private final KinesisDataFetcher dataFetcher;
private final IDataFetcher dataFetcher;
private final TaskType taskType = TaskType.INITIALIZE;
private final ICheckpoint checkpoint;
private final RecordProcessorCheckpointer recordProcessorCheckpointer;
@@ -48,11 +49,11 @@ class InitializeTask implements ITask {
/**
* Constructor.
*/
InitializeTask(ShardInfo shardInfo,
public InitializeTask(ShardInfo shardInfo,
IRecordProcessor recordProcessor,
ICheckpoint checkpoint,
RecordProcessorCheckpointer recordProcessorCheckpointer,
KinesisDataFetcher dataFetcher,
IDataFetcher dataFetcher,
long backoffTimeMillis,
StreamConfig streamConfig,
GetRecordsCache getRecordsCache) {
@@ -79,7 +80,15 @@ class InitializeTask implements ITask {
try {
LOG.debug("Initializing ShardId " + shardInfo.getShardId());
Checkpoint initialCheckpointObject = checkpoint.getCheckpointObject(shardInfo.getShardId());
Checkpoint initialCheckpointObject;
try {
initialCheckpointObject = checkpoint.getCheckpointObject(shardInfo.getShardId());
} catch (KinesisClientLibNonRetryableException e) {
LOG.error("Caught exception while fetching checkpoint for " + shardInfo.getShardId(), e);
final TaskResult result = new TaskResult(e);
result.leaseNotFound();
return result;
}
ExtendedSequenceNumber initialCheckpoint = initialCheckpointObject.getCheckpoint();
dataFetcher.initialize(initialCheckpoint.getSequenceNumber(), streamConfig.getInitialPositionInStream());

@@ -17,12 +17,14 @@ package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
import java.time.Duration;
import java.util.Date;
import java.util.Optional;
import java.util.regex.Pattern;
import java.util.Set;
import com.amazonaws.services.dynamodbv2.model.BillingMode;
import org.apache.commons.lang3.Validate;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.arn.Arn;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.services.kinesis.metrics.impl.MetricsHelper;
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsScope;
@@ -61,7 +63,7 @@ public class KinesisClientLibConfiguration {
public static final int DEFAULT_MAX_RECORDS = 10000;
/**
* The default value for how long the {@link ShardConsumer} should sleep if no records are returned from the call to
* The default value for how long the {@link KinesisShardConsumer} should sleep if no records are returned from the call to
* {@link com.amazonaws.services.kinesis.AmazonKinesis#getRecords(com.amazonaws.services.kinesis.model.GetRecordsRequest)}.
*/
public static final long DEFAULT_IDLETIME_BETWEEN_READS_MILLIS = 1000L;
@@ -91,7 +93,7 @@ public class KinesisClientLibConfiguration {
public static final boolean DEFAULT_CLEANUP_LEASES_UPON_SHARDS_COMPLETION = true;
/**
* Interval to run lease cleanup thread in {@link LeaseCleanupManager}.
* Interval to run lease cleanup thread in {@link com.amazonaws.services.kinesis.leases.impl.LeaseCleanupManager}.
*/
private static final long DEFAULT_LEASE_CLEANUP_INTERVAL_MILLIS = Duration.ofMinutes(1).toMillis();
@@ -147,7 +149,7 @@ public class KinesisClientLibConfiguration {
/**
* User agent set when Amazon Kinesis Client Library makes AWS requests.
*/
public static final String KINESIS_CLIENT_LIB_USER_AGENT = "amazon-kinesis-client-library-java-1.14.0";
public static final String KINESIS_CLIENT_LIB_USER_AGENT = "amazon-kinesis-client-library-java-1.15.1";
/**
* KCL will validate client provided sequence numbers with a call to Amazon Kinesis before checkpointing for calls
@@ -233,11 +235,25 @@ public class KinesisClientLibConfiguration {
*/
public static final int DEFAULT_MAX_INITIALIZATION_ATTEMPTS = 20;
/**
* Pattern for a stream ARN. The valid format is
* {@code arn:aws:kinesis:<region>:<accountId>:stream/<streamName>}
* where {@code region} is the id representation of a {@link Region}.
*/
private static final Pattern STREAM_ARN_PATTERN = Pattern.compile(
"arn:aws[^:]*:kinesis:(?<region>[-a-z0-9]+):(?<accountId>[0-9]{12}):stream/(?<streamName>.+)");
@Getter
private BillingMode billingMode;
private String applicationName;
private String tableName;
private String streamName;
/**
* Kinesis stream ARN
*/
@Getter
private Arn streamArn;
private String kinesisEndpoint;
private String dynamoDBEndpoint;
private InitialPositionInStream initialPositionInStream;
@@ -719,14 +735,105 @@ public class KinesisClientLibConfiguration {
this.billingMode = billingMode;
}
/**
* Duplicate constructor to support stream ARNs in place of stream names.
*
* @param applicationName Name of the Kinesis application
* By default the application name is included in the user agent string used to make AWS requests. This
* can assist with troubleshooting (e.g. distinguish requests made by separate applications).
* @param streamArn Kinesis stream ARN
* @param kinesisEndpoint Kinesis endpoint
* @param dynamoDBEndpoint DynamoDB endpoint
* @param initialPositionInStream One of LATEST or TRIM_HORIZON. The KinesisClientLibrary will start fetching
* records from that location in the stream when an application starts up for the first time and there
* are no checkpoints. If there are checkpoints, then we start from the checkpoint position.
* @param kinesisCredentialsProvider Provides credentials used to access Kinesis
* @param dynamoDBCredentialsProvider Provides credentials used to access DynamoDB
* @param cloudWatchCredentialsProvider Provides credentials used to access CloudWatch
* @param failoverTimeMillis Lease duration (leases not renewed within this period will be claimed by others)
* @param workerId Used to distinguish different workers/processes of a Kinesis application
* @param maxRecords Max records to read per Kinesis getRecords() call
* @param idleTimeBetweenReadsInMillis Idle time between calls to fetch data from Kinesis
* @param callProcessRecordsEvenForEmptyRecordList Call the IRecordProcessor::processRecords() API even if
* GetRecords returned an empty record list.
* @param parentShardPollIntervalMillis Wait for this long between polls to check if parent shards are done
* @param shardSyncIntervalMillis Time between tasks to sync leases and Kinesis shards
* @param cleanupTerminatedShardsBeforeExpiry Clean up shards we've finished processing (don't wait for expiration
* in Kinesis)
* @param kinesisClientConfig Client Configuration used by Kinesis client
* @param dynamoDBClientConfig Client Configuration used by DynamoDB client
* @param cloudWatchClientConfig Client Configuration used by CloudWatch client
* @param taskBackoffTimeMillis Backoff period when tasks encounter an exception
* @param metricsBufferTimeMillis Metrics are buffered for at most this long before publishing to CloudWatch
* @param metricsMaxQueueSize Max number of metrics to buffer before publishing to CloudWatch
* @param validateSequenceNumberBeforeCheckpointing whether KCL should validate client provided sequence numbers
* with a call to Amazon Kinesis before checkpointing for calls to
* {@link RecordProcessorCheckpointer#checkpoint(String)}
* @param regionName The region name for the service
* @param shutdownGraceMillis Time before gracefully shutdown forcefully terminates
* @param billingMode The DDB Billing mode to set for lease table creation.
* @param recordsFetcherFactory Factory to create the records fetcher to retrieve data from Kinesis for a given shard.
* @param leaseCleanupIntervalMillis Rate at which to run lease cleanup thread in
* {@link com.amazonaws.services.kinesis.leases.impl.LeaseCleanupManager}
* @param completedLeaseCleanupThresholdMillis Threshold in millis at which to check if there are any completed leases
* (leases for shards which have been closed as a result of a resharding operation) that need to be cleaned up.
* @param garbageLeaseCleanupThresholdMillis Threshold in millis at which to check if there are any garbage leases
* (leases for shards which no longer exist in the stream) that need to be cleaned up.
*/
public KinesisClientLibConfiguration(String applicationName,
Arn streamArn,
String kinesisEndpoint,
String dynamoDBEndpoint,
InitialPositionInStream initialPositionInStream,
AWSCredentialsProvider kinesisCredentialsProvider,
AWSCredentialsProvider dynamoDBCredentialsProvider,
AWSCredentialsProvider cloudWatchCredentialsProvider,
long failoverTimeMillis,
String workerId,
int maxRecords,
long idleTimeBetweenReadsInMillis,
boolean callProcessRecordsEvenForEmptyRecordList,
long parentShardPollIntervalMillis,
long shardSyncIntervalMillis,
boolean cleanupTerminatedShardsBeforeExpiry,
ClientConfiguration kinesisClientConfig,
ClientConfiguration dynamoDBClientConfig,
ClientConfiguration cloudWatchClientConfig,
long taskBackoffTimeMillis,
long metricsBufferTimeMillis,
int metricsMaxQueueSize,
boolean validateSequenceNumberBeforeCheckpointing,
String regionName,
long shutdownGraceMillis,
BillingMode billingMode,
RecordsFetcherFactory recordsFetcherFactory,
long leaseCleanupIntervalMillis,
long completedLeaseCleanupThresholdMillis,
long garbageLeaseCleanupThresholdMillis) {
this(applicationName, streamArn.getResource().getResource(), kinesisEndpoint, dynamoDBEndpoint, initialPositionInStream, kinesisCredentialsProvider,
dynamoDBCredentialsProvider, cloudWatchCredentialsProvider, failoverTimeMillis, workerId, maxRecords, idleTimeBetweenReadsInMillis,
callProcessRecordsEvenForEmptyRecordList, parentShardPollIntervalMillis, shardSyncIntervalMillis, cleanupTerminatedShardsBeforeExpiry,
kinesisClientConfig, dynamoDBClientConfig, cloudWatchClientConfig, taskBackoffTimeMillis, metricsBufferTimeMillis,
metricsMaxQueueSize, validateSequenceNumberBeforeCheckpointing, regionName, shutdownGraceMillis, billingMode,
recordsFetcherFactory, leaseCleanupIntervalMillis, completedLeaseCleanupThresholdMillis, garbageLeaseCleanupThresholdMillis);
checkIsValidStreamArn(streamArn);
this.streamArn = streamArn;
}
// Check if value is positive, otherwise throw an exception
private void checkIsValuePositive(String key, long value) {
private static void checkIsValuePositive(String key, long value) {
if (value <= 0) {
throw new IllegalArgumentException("Value of " + key
+ " should be positive, but current value is " + value);
}
}
private static void checkIsValidStreamArn(Arn streamArn) {
if (!STREAM_ARN_PATTERN.matcher(streamArn.toString()).matches()) {
throw new IllegalArgumentException("Invalid streamArn " + streamArn);
}
}
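For reference, a Kinesis stream ARN has the form `arn:aws:kinesis:<region>:<account-id>:stream/<stream-name>`, and the delegating constructor above recovers the stream name from the ARN's resource portion via `streamArn.getResource().getResource()`. A minimal stdlib-only sketch of that extraction (the regex below is an illustrative stand-in for `STREAM_ARN_PATTERN`, whose exact definition is not shown in this diff):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StreamArnExample {
    // Illustrative shape check; the real STREAM_ARN_PATTERN may differ.
    private static final Pattern STREAM_ARN =
            Pattern.compile("arn:aws[^:]*:kinesis:([^:]+):(\\d{12}):stream/(.+)");

    /** Returns the stream name portion of a Kinesis stream ARN, or throws if malformed. */
    static String streamNameFromArn(String arn) {
        Matcher m = STREAM_ARN.matcher(arn);
        if (!m.matches()) {
            throw new IllegalArgumentException("Invalid streamArn " + arn);
        }
        return m.group(3); // the <stream-name> segment after "stream/"
    }

    public static void main(String[] args) {
        System.out.println(
                streamNameFromArn("arn:aws:kinesis:us-east-1:123456789012:stream/my-stream"));
        // prints my-stream
    }
}
```

In the real class, the SDK's `Arn` type does the parsing and `getResource().getResource()` yields the stream name.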
// Check if user agent in configuration is the default agent.
// If so, replace it with application name plus KINESIS_CLIENT_LIB_USER_AGENT.
// If not, append KINESIS_CLIENT_LIB_USER_AGENT to the end.
@@ -1030,7 +1137,7 @@ public class KinesisClientLibConfiguration {
* Keeping it protected to forbid outside callers from depending on this internal object.
* @return The initialPositionInStreamExtended object.
*/
protected InitialPositionInStreamExtended getInitialPositionInStreamExtended() {
public InitialPositionInStreamExtended getInitialPositionInStreamExtended() {
return initialPositionInStreamExtended;
}
@@ -1056,6 +1163,25 @@
return shutdownGraceMillis;
}
/**
* @param streamName Kinesis stream name
* @return KinesisClientLibConfiguration
*/
public KinesisClientLibConfiguration withStreamName(String streamName) {
this.streamName = streamName;
return this;
}
/**
* @param streamArn Kinesis stream ARN
* @return KinesisClientLibConfiguration
*/
public KinesisClientLibConfiguration withStreamArn(Arn streamArn) {
checkIsValidStreamArn(streamArn);
this.streamArn = streamArn;
return this;
}
/*
// CHECKSTYLE:IGNORE HiddenFieldCheck FOR NEXT 190 LINES
/**


@@ -51,7 +51,7 @@ import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsFactory;
/**
* This class is used to coordinate/manage leases owned by this worker process and to get/set checkpoints.
*/
class KinesisClientLibLeaseCoordinator extends LeaseCoordinator<KinesisClientLease> implements ICheckpoint {
public class KinesisClientLibLeaseCoordinator extends LeaseCoordinator<KinesisClientLease> implements ICheckpoint {
private static final Log LOG = LogFactory.getLog(KinesisClientLibLeaseCoordinator.class);
@@ -283,7 +283,8 @@ class KinesisClientLibLeaseCoordinator extends LeaseCoordinator<KinesisClientLea
try {
KinesisClientLease lease = leaseManager.getLease(shardId);
if (lease == null) {
throw new KinesisClientLibIOException(errorMessage);
// This is a KinesisClientLibNonRetryableException
throw new com.amazonaws.services.kinesis.clientlibrary.exceptions.InvalidStateException(errorMessage);
}
return new Checkpoint(lease.getCheckpoint(), lease.getPendingCheckpoint());
} catch (DependencyException | InvalidStateException | ProvisionedThroughputException e) {
@@ -367,7 +368,7 @@ class KinesisClientLibLeaseCoordinator extends LeaseCoordinator<KinesisClientLea
*
* @return LeaseManager
*/
ILeaseManager<KinesisClientLease> getLeaseManager() {
public ILeaseManager<KinesisClientLease> getLeaseManager() {
return leaseManager;
}


@@ -15,7 +15,7 @@
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
/**
* Top level container for all the possible states a {@link ShardConsumer} can be in. The logic for creation of tasks,
* Top level container for all the possible states a {@link KinesisShardConsumer} can be in. The logic for creation of tasks,
* and state transitions is contained within the {@link ConsumerState} objects.
*
* <h2>State Diagram</h2>
@@ -64,12 +64,12 @@ package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
* +-------------------+
* </pre>
*/
class ConsumerStates {
public class KinesisConsumerStates {
/**
* Enumerates processing states when working on a shard.
*/
enum ShardConsumerState {
public enum ShardConsumerState {
// @formatter:off
WAITING_ON_PARENT_SHARDS(new BlockedOnParentState()),
INITIALIZING(new InitializingState()),
@@ -96,7 +96,7 @@ class ConsumerStates {
* do when a transition occurs.
*
*/
interface ConsumerState {
public interface ConsumerState {
/**
* Creates a new task for this state using the passed in consumer to build the task. If there is no task
* required for this state it may return a null value. {@link ConsumerState}'s are allowed to modify the
@@ -106,11 +106,11 @@ class ConsumerStates {
     * the consumer to use to build the task, or execute the state.
* @return a valid task for this state or null if there is no task required.
*/
ITask createTask(ShardConsumer consumer);
ITask createTask(KinesisShardConsumer consumer);
/**
* Provides the next state of the consumer upon success of the task return by
* {@link ConsumerState#createTask(ShardConsumer)}.
* {@link ConsumerState#createTask(KinesisShardConsumer)}.
*
* @return the next state that the consumer should transition to, this may be the same object as the current
* state.
@@ -129,7 +129,7 @@ class ConsumerStates {
ConsumerState shutdownTransition(ShutdownReason shutdownReason);
/**
* The type of task that {@link ConsumerState#createTask(ShardConsumer)} would return. This is always a valid state
* The type of task that {@link ConsumerState#createTask(KinesisShardConsumer)} would return. This is always a valid state
* even if createTask would return a null value.
*
* @return the type of task that this state represents.
@@ -149,7 +149,7 @@ class ConsumerStates {
}
/**
* The initial state that any {@link ShardConsumer} should start in.
* The initial state that any {@link KinesisShardConsumer} should start in.
*/
static final ConsumerState INITIAL_STATE = ShardConsumerState.WAITING_ON_PARENT_SHARDS.getConsumerState();
@@ -187,7 +187,7 @@ class ConsumerStates {
static class BlockedOnParentState implements ConsumerState {
@Override
public ITask createTask(ShardConsumer consumer) {
public ITask createTask(KinesisShardConsumer consumer) {
return new BlockOnParentShardTask(consumer.getShardInfo(), consumer.getLeaseManager(),
consumer.getParentShardPollIntervalMillis());
}
@@ -247,10 +247,10 @@ class ConsumerStates {
* </dd>
* </dl>
*/
static class InitializingState implements ConsumerState {
public static class InitializingState implements ConsumerState {
@Override
public ITask createTask(ShardConsumer consumer) {
public ITask createTask(KinesisShardConsumer consumer) {
return new InitializeTask(consumer.getShardInfo(),
consumer.getRecordProcessor(),
consumer.getCheckpoint(),
@@ -311,7 +311,7 @@ class ConsumerStates {
static class ProcessingState implements ConsumerState {
@Override
public ITask createTask(ShardConsumer consumer) {
public ITask createTask(KinesisShardConsumer consumer) {
return new ProcessTask(consumer.getShardInfo(),
consumer.getStreamConfig(),
consumer.getRecordProcessor(),
@@ -358,10 +358,10 @@ class ConsumerStates {
* <h2>Valid Transitions</h2>
* <dl>
* <dt>Success</dt>
* <dd>Success shouldn't normally be called since the {@link ShardConsumer} is marked for shutdown.</dd>
* <dd>Success shouldn't normally be called since the {@link KinesisShardConsumer} is marked for shutdown.</dd>
* <dt>Shutdown</dt>
* <dd>At this point records are being retrieved, and processed. An explicit shutdown will allow the record
* processor one last chance to checkpoint, and then the {@link ShardConsumer} will be held in an idle state.
* processor one last chance to checkpoint, and then the {@link KinesisShardConsumer} will be held in an idle state.
* <dl>
* <dt>{@link ShutdownReason#REQUESTED}</dt>
* <dd>Remains in the {@link ShardConsumerState#SHUTDOWN_REQUESTED}, but the state implementation changes to
@@ -377,7 +377,7 @@ class ConsumerStates {
static class ShutdownNotificationState implements ConsumerState {
@Override
public ITask createTask(ShardConsumer consumer) {
public ITask createTask(KinesisShardConsumer consumer) {
return new ShutdownNotificationTask(consumer.getRecordProcessor(),
consumer.getRecordProcessorCheckpointer(),
consumer.getShutdownNotification(),
@@ -414,24 +414,24 @@
}
/**
* Once the {@link ShutdownNotificationState} has been completed the {@link ShardConsumer} must not re-enter any of the
* processing states. This state idles the {@link ShardConsumer} until the worker triggers the final shutdown state.
* Once the {@link ShutdownNotificationState} has been completed the {@link KinesisShardConsumer} must not re-enter any of the
* processing states. This state idles the {@link KinesisShardConsumer} until the worker triggers the final shutdown state.
*
* <h2>Valid Transitions</h2>
* <dl>
* <dt>Success</dt>
* <dd>
* <p>
* Success shouldn't normally be called since the {@link ShardConsumer} is marked for shutdown.
* Success shouldn't normally be called since the {@link KinesisShardConsumer} is marked for shutdown.
* </p>
* <p>
* Remains in the {@link ShutdownNotificationCompletionState}
* </p>
* </dd>
* <dt>Shutdown</dt>
* <dd>At this point the {@link ShardConsumer} has notified the record processor of the impending shutdown, and is
* <dd>At this point the {@link KinesisShardConsumer} has notified the record processor of the impending shutdown, and is
 * waiting for that notification. While waiting for the notification no further processing should occur on the
* {@link ShardConsumer}.
* {@link KinesisShardConsumer}.
* <dl>
* <dt>{@link ShutdownReason#REQUESTED}</dt>
* <dd>Remains in the {@link ShardConsumerState#SHUTDOWN_REQUESTED}, and the state implementation remains
@@ -447,7 +447,7 @@
static class ShutdownNotificationCompletionState implements ConsumerState {
@Override
public ITask createTask(ShardConsumer consumer) {
public ITask createTask(KinesisShardConsumer consumer) {
return null;
}
@@ -481,14 +481,14 @@
}
/**
* This state is entered if the {@link ShardConsumer} loses its lease, or reaches the end of the shard.
* This state is entered if the {@link KinesisShardConsumer} loses its lease, or reaches the end of the shard.
*
* <h2>Valid Transitions</h2>
* <dl>
* <dt>Success</dt>
* <dd>
* <p>
* Success shouldn't normally be called since the {@link ShardConsumer} is marked for shutdown.
* Success shouldn't normally be called since the {@link KinesisShardConsumer} is marked for shutdown.
* </p>
* <p>
* Transitions to the {@link ShutdownCompleteState}
@@ -497,7 +497,7 @@ class ConsumerStates {
* <dt>Shutdown</dt>
 * <dd>At this point the record processor has processed the final shutdown indication and, depending on the shutdown
 * reason, taken the correct course of action. From this point on there should be no more interactions with the
* record processor or {@link ShardConsumer}.
* record processor or {@link KinesisShardConsumer}.
* <dl>
* <dt>{@link ShutdownReason#REQUESTED}</dt>
* <dd>
@@ -519,8 +519,8 @@
static class ShuttingDownState implements ConsumerState {
@Override
public ITask createTask(ShardConsumer consumer) {
return new ShutdownTask(consumer.getShardInfo(),
public ITask createTask(KinesisShardConsumer consumer) {
return new KinesisShutdownTask(consumer.getShardInfo(),
consumer.getRecordProcessor(),
consumer.getRecordProcessorCheckpointer(),
consumer.getShutdownReason(),
@@ -562,21 +562,21 @@
}
/**
* This is the final state for the {@link ShardConsumer}. This occurs once all shutdown activities are completed.
* This is the final state for the {@link KinesisShardConsumer}. This occurs once all shutdown activities are completed.
*
* <h2>Valid Transitions</h2>
* <dl>
* <dt>Success</dt>
* <dd>
* <p>
* Success shouldn't normally be called since the {@link ShardConsumer} is marked for shutdown.
* Success shouldn't normally be called since the {@link KinesisShardConsumer} is marked for shutdown.
* </p>
* <p>
* Remains in the {@link ShutdownCompleteState}
* </p>
* </dd>
* <dt>Shutdown</dt>
 * <dd>At this point all shutdown activities are completed, and the {@link ShardConsumer} should not take any
 * <dd>At this point all shutdown activities are completed, and the {@link KinesisShardConsumer} should not take any
* further actions.
* <dl>
* <dt>{@link ShutdownReason#REQUESTED}</dt>
@@ -599,7 +599,7 @@
static class ShutdownCompleteState implements ConsumerState {
@Override
public ITask createTask(ShardConsumer consumer) {
public ITask createTask(KinesisShardConsumer consumer) {
if (consumer.getShutdownNotification() != null) {
consumer.getShutdownNotification().shutdownComplete();
}


@@ -39,7 +39,7 @@ import lombok.Data;
/**
* Used to get data from Amazon Kinesis. Tracks iterator state internally.
*/
class KinesisDataFetcher {
public class KinesisDataFetcher implements IDataFetcher {
private static final Log LOG = LogFactory.getLog(KinesisDataFetcher.class);
@@ -185,7 +185,7 @@ class KinesisDataFetcher {
* @param sequenceNumber advance the iterator to the record at this sequence number.
* @param initialPositionInStream The initialPositionInStream.
*/
void advanceIteratorTo(String sequenceNumber, InitialPositionInStreamExtended initialPositionInStream) {
public void advanceIteratorTo(String sequenceNumber, InitialPositionInStreamExtended initialPositionInStream) {
if (sequenceNumber == null) {
throw new IllegalArgumentException("SequenceNumber should not be null: shardId " + shardId);
} else if (sequenceNumber.equals(SentinelCheckpoint.LATEST.toString())) {
@@ -276,11 +276,11 @@ class KinesisDataFetcher {
/**
* @return the shardEndReached
*/
protected boolean isShardEndReached() {
public boolean isShardEndReached() {
return isShardEndReached;
}
protected List<ChildShard> getChildShards() {
public List<ChildShard> getChildShards() {
return childShards;
}
@@ -290,5 +290,4 @@ class KinesisDataFetcher {
String getNextIterator() {
return nextIterator;
}
}


@@ -0,0 +1,403 @@
/*
* Copyright 2019 Amazon.com, Inc. or its affiliates.
* Licensed under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
import java.io.Serializable;
import java.math.BigInteger;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
import com.amazonaws.services.cloudwatch.model.StandardUnit;
import com.amazonaws.services.kinesis.clientlibrary.proxies.IKinesisProxy;
import com.amazonaws.services.kinesis.leases.exceptions.DependencyException;
import com.amazonaws.services.kinesis.leases.exceptions.InvalidStateException;
import com.amazonaws.services.kinesis.leases.exceptions.ProvisionedThroughputException;
import com.amazonaws.services.kinesis.leases.impl.HashKeyRangeForLease;
import com.amazonaws.services.kinesis.leases.impl.KinesisClientLease;
import com.amazonaws.services.kinesis.leases.impl.UpdateField;
import com.amazonaws.services.kinesis.leases.interfaces.ILeaseManager;
import com.amazonaws.services.kinesis.metrics.impl.MetricsHelper;
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsFactory;
import com.amazonaws.services.kinesis.metrics.interfaces.MetricsLevel;
import com.amazonaws.services.kinesis.model.Shard;
import com.amazonaws.util.CollectionUtils;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.collect.ComparisonChain;
import lombok.EqualsAndHashCode;
import lombok.Getter;
import lombok.NonNull;
import lombok.Value;
import org.apache.commons.lang3.Validate;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import static com.amazonaws.services.kinesis.leases.impl.HashKeyRangeForLease.fromHashKeyRange;
/**
* The top level orchestrator for coordinating the periodic shard sync related activities. If the configured
* {@link ShardSyncStrategyType} is PERIODIC, this class will be the main shard sync orchestrator. For non-PERIODIC
* strategies, this class will serve as an internal auditor that periodically checks if the full hash range is covered
* by currently held leases, and initiates a recovery shard sync if not.
*/
@Getter
@EqualsAndHashCode
class KinesisPeriodicShardSyncManager implements IPeriodicShardSyncManager {
private static final Log LOG = LogFactory.getLog(KinesisPeriodicShardSyncManager.class);
private static final long INITIAL_DELAY = 0;
/** DEFAULT interval is used for PERIODIC {@link ShardSyncStrategyType}. */
private static final long DEFAULT_PERIODIC_SHARD_SYNC_INTERVAL_MILLIS = 1000L;
/** Parameters for validating hash range completeness when running in auditor mode. */
@VisibleForTesting
static final BigInteger MIN_HASH_KEY = BigInteger.ZERO;
@VisibleForTesting
static final BigInteger MAX_HASH_KEY = new BigInteger("2").pow(128).subtract(BigInteger.ONE);
static final String PERIODIC_SHARD_SYNC_MANAGER = "PeriodicShardSyncManager";
private final HashRangeHoleTracker hashRangeHoleTracker = new HashRangeHoleTracker();
private final String workerId;
private final LeaderDecider leaderDecider;
private final ITask metricsEmittingShardSyncTask;
private final ScheduledExecutorService shardSyncThreadPool;
private final ILeaseManager<KinesisClientLease> leaseManager;
private final IKinesisProxy kinesisProxy;
private final boolean isAuditorMode;
private final long periodicShardSyncIntervalMillis;
private boolean isRunning;
private final IMetricsFactory metricsFactory;
private final int leasesRecoveryAuditorInconsistencyConfidenceThreshold;
KinesisPeriodicShardSyncManager(String workerId,
LeaderDecider leaderDecider,
ShardSyncTask shardSyncTask,
IMetricsFactory metricsFactory,
ILeaseManager<KinesisClientLease> leaseManager,
IKinesisProxy kinesisProxy,
boolean isAuditorMode,
long leasesRecoveryAuditorExecutionFrequencyMillis,
int leasesRecoveryAuditorInconsistencyConfidenceThreshold) {
this(workerId, leaderDecider, shardSyncTask, Executors.newSingleThreadScheduledExecutor(), metricsFactory,
leaseManager, kinesisProxy, isAuditorMode, leasesRecoveryAuditorExecutionFrequencyMillis,
leasesRecoveryAuditorInconsistencyConfidenceThreshold);
}
KinesisPeriodicShardSyncManager(String workerId,
LeaderDecider leaderDecider,
ShardSyncTask shardSyncTask,
ScheduledExecutorService shardSyncThreadPool,
IMetricsFactory metricsFactory,
ILeaseManager<KinesisClientLease> leaseManager,
IKinesisProxy kinesisProxy,
boolean isAuditorMode,
long leasesRecoveryAuditorExecutionFrequencyMillis,
int leasesRecoveryAuditorInconsistencyConfidenceThreshold) {
Validate.notBlank(workerId, "WorkerID is required to initialize PeriodicShardSyncManager.");
Validate.notNull(leaderDecider, "LeaderDecider is required to initialize PeriodicShardSyncManager.");
Validate.notNull(shardSyncTask, "ShardSyncTask is required to initialize PeriodicShardSyncManager.");
this.workerId = workerId;
this.leaderDecider = leaderDecider;
this.metricsEmittingShardSyncTask = new MetricsCollectingTaskDecorator(shardSyncTask, metricsFactory);
this.shardSyncThreadPool = shardSyncThreadPool;
this.leaseManager = leaseManager;
this.kinesisProxy = kinesisProxy;
this.metricsFactory = metricsFactory;
this.isAuditorMode = isAuditorMode;
this.leasesRecoveryAuditorInconsistencyConfidenceThreshold = leasesRecoveryAuditorInconsistencyConfidenceThreshold;
if (isAuditorMode) {
Validate.notNull(this.leaseManager, "LeaseManager is required for non-PERIODIC shard sync strategies.");
Validate.notNull(this.kinesisProxy, "KinesisProxy is required for non-PERIODIC shard sync strategies.");
this.periodicShardSyncIntervalMillis = leasesRecoveryAuditorExecutionFrequencyMillis;
} else {
this.periodicShardSyncIntervalMillis = DEFAULT_PERIODIC_SHARD_SYNC_INTERVAL_MILLIS;
}
}
@Override
public synchronized TaskResult start() {
if (!isRunning) {
final Runnable periodicShardSyncer = () -> {
try {
runShardSync();
} catch (Throwable t) {
LOG.error("Error running shard sync.", t);
}
};
shardSyncThreadPool
.scheduleWithFixedDelay(periodicShardSyncer, INITIAL_DELAY, periodicShardSyncIntervalMillis,
TimeUnit.MILLISECONDS);
isRunning = true;
}
return new TaskResult(null);
}
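The try/catch-Throwable wrapper around the scheduled task above is load-bearing: `ScheduledExecutorService` silently cancels all future executions of a periodic task whose `run()` throws. A small self-contained sketch of the pattern (names are illustrative, not KCL API):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SafeScheduling {
    /** Schedules a deliberately failing task for durationMillis and returns how often it ran. */
    static int runWithGuard(long durationMillis) {
        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger runs = new AtomicInteger();
        // Without this try/catch, the first thrown exception would cancel every later run.
        Runnable guarded = () -> {
            try {
                runs.incrementAndGet();
                throw new RuntimeException("simulated task failure");
            } catch (Throwable t) {
                // log and keep the schedule alive
            }
        };
        pool.scheduleWithFixedDelay(guarded, 0, 10, TimeUnit.MILLISECONDS);
        try {
            Thread.sleep(durationMillis);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
        pool.shutdownNow();
        return runs.get();
    }
}
```

An unguarded version of the same task would report exactly one run, because `scheduleWithFixedDelay` suppresses subsequent executions once the task throws.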
/**
* Runs ShardSync once, without scheduling further periodic ShardSyncs.
* @return TaskResult from shard sync
*/
@Override
public synchronized TaskResult syncShardsOnce() {
LOG.info("Syncing shards once from worker " + workerId);
return metricsEmittingShardSyncTask.call();
}
@Override
public void stop() {
if (isRunning) {
LOG.info(String.format("Shutting down leader decider on worker %s", workerId));
leaderDecider.shutdown();
LOG.info(String.format("Shutting down periodic shard sync task scheduler on worker %s", workerId));
shardSyncThreadPool.shutdown();
isRunning = false;
}
}
private void runShardSync() {
if (leaderDecider.isLeader(workerId)) {
LOG.debug("WorkerId " + workerId + " is a leader, running the shard sync task");
MetricsHelper.startScope(metricsFactory, PERIODIC_SHARD_SYNC_MANAGER);
boolean isRunSuccess = false;
final long runStartMillis = System.currentTimeMillis();
try {
final ShardSyncResponse shardSyncResponse = checkForShardSync();
MetricsHelper.getMetricsScope().addData("NumStreamsToSync", shardSyncResponse.shouldDoShardSync() ? 1 : 0, StandardUnit.Count, MetricsLevel.SUMMARY);
MetricsHelper.getMetricsScope().addData("NumStreamsWithPartialLeases", shardSyncResponse.isHoleDetected() ? 1 : 0, StandardUnit.Count, MetricsLevel.SUMMARY);
if (shardSyncResponse.shouldDoShardSync()) {
LOG.info("Periodic shard syncer initiating shard sync due to the reason - " +
shardSyncResponse.reasonForDecision());
metricsEmittingShardSyncTask.call();
} else {
LOG.info("Skipping shard sync due to the reason - " + shardSyncResponse.reasonForDecision());
}
isRunSuccess = true;
} catch (Exception e) {
LOG.error("Caught exception while running periodic shard syncer.", e);
} finally {
MetricsHelper.addSuccessAndLatency(runStartMillis, isRunSuccess, MetricsLevel.SUMMARY);
MetricsHelper.endScope();
}
} else {
LOG.debug("WorkerId " + workerId + " is not a leader, not running the shard sync task");
}
}
@VisibleForTesting
ShardSyncResponse checkForShardSync() throws DependencyException, InvalidStateException,
ProvisionedThroughputException {
if (!isAuditorMode) {
// If we are running with PERIODIC shard sync strategy, we should sync every time.
return new ShardSyncResponse(true, false, "Syncing every time with PERIODIC shard sync strategy.");
}
// Get current leases from DynamoDB.
final List<KinesisClientLease> currentLeases = leaseManager.listLeases();
if (CollectionUtils.isNullOrEmpty(currentLeases)) {
// If the current leases are null or empty, then we need to initiate a shard sync.
LOG.info("No leases found. Will trigger a shard sync.");
return new ShardSyncResponse(true, false, "No leases found.");
}
// Check if there are any holes in the hash range covered by current leases. Return the first hole if present.
Optional<HashRangeHole> hashRangeHoleOpt = hasHoleInLeases(currentLeases);
if (hashRangeHoleOpt.isPresent()) {
// If hole is present, check if the hole is detected consecutively in previous occurrences. If hole is
// determined with high confidence, return true; return false otherwise. We use the high confidence factor
// to avoid shard sync on any holes during resharding and lease cleanups, or other intermittent issues.
final boolean hasHoleWithHighConfidence =
hashRangeHoleTracker.hashHighConfidenceOfHoleWith(hashRangeHoleOpt.get());
return new ShardSyncResponse(hasHoleWithHighConfidence, true,
"Detected the same hole for " + hashRangeHoleTracker.getNumConsecutiveHoles() + " times. " +
"Will initiate shard sync after reaching threshold: " + leasesRecoveryAuditorInconsistencyConfidenceThreshold);
} else {
// If hole is not present, clear any previous hole tracking and return false.
hashRangeHoleTracker.reset();
return new ShardSyncResponse(false, false, "Hash range is complete.");
}
}
@VisibleForTesting
Optional<HashRangeHole> hasHoleInLeases(List<KinesisClientLease> leases) {
// Filter out any leases with checkpoints other than SHARD_END
final List<KinesisClientLease> activeLeases = leases.stream()
.filter(lease -> lease.getCheckpoint() != null && !lease.getCheckpoint().isShardEnd())
.collect(Collectors.toList());
final List<KinesisClientLease> activeLeasesWithHashRanges = fillWithHashRangesIfRequired(activeLeases);
return checkForHoleInHashKeyRanges(activeLeasesWithHashRanges);
}
private List<KinesisClientLease> fillWithHashRangesIfRequired(List<KinesisClientLease> activeLeases) {
final List<KinesisClientLease> activeLeasesWithNoHashRanges = activeLeases.stream()
.filter(lease -> lease.getHashKeyRange() == null).collect(Collectors.toList());
if (activeLeasesWithNoHashRanges.isEmpty()) {
return activeLeases;
}
// Fetch shards from Kinesis to fill in the in-memory hash ranges
final Map<String, Shard> kinesisShards = kinesisProxy.getShardList().stream()
.collect(Collectors.toMap(Shard::getShardId, shard -> shard));
return activeLeases.stream().map(lease -> {
if (lease.getHashKeyRange() == null) {
final String shardId = lease.getLeaseKey();
final Shard shard = kinesisShards.get(shardId);
if (shard == null) {
return lease;
}
lease.setHashKeyRange(fromHashKeyRange(shard.getHashKeyRange()));
try {
leaseManager.updateLeaseWithMetaInfo(lease, UpdateField.HASH_KEY_RANGE);
} catch (Exception e) {
LOG.warn("Unable to update hash range information for lease " + lease.getLeaseKey() +
". This may result in explicit lease sync.");
}
}
return lease;
}).filter(lease -> lease.getHashKeyRange() != null).collect(Collectors.toList());
}
@VisibleForTesting
static Optional<HashRangeHole> checkForHoleInHashKeyRanges(List<KinesisClientLease> leasesWithHashKeyRanges) {
// Sort the hash ranges by starting hash key
final List<KinesisClientLease> sortedLeasesWithHashKeyRanges = sortLeasesByHashRange(leasesWithHashKeyRanges);
if (sortedLeasesWithHashKeyRanges.isEmpty()) {
LOG.error("No leases with valid hash ranges found.");
return Optional.of(new HashRangeHole());
}
// Validate the hash range bounds
final KinesisClientLease minHashKeyLease = sortedLeasesWithHashKeyRanges.get(0);
final KinesisClientLease maxHashKeyLease =
sortedLeasesWithHashKeyRanges.get(sortedLeasesWithHashKeyRanges.size() - 1);
if (!minHashKeyLease.getHashKeyRange().startingHashKey().equals(MIN_HASH_KEY) ||
!maxHashKeyLease.getHashKeyRange().endingHashKey().equals(MAX_HASH_KEY)) {
LOG.error("Incomplete hash range found between " + minHashKeyLease + " and " + maxHashKeyLease);
return Optional.of(new HashRangeHole(minHashKeyLease.getHashKeyRange(), maxHashKeyLease.getHashKeyRange()));
}
// Check for any holes in the sorted hash range intervals
if (sortedLeasesWithHashKeyRanges.size() > 1) {
KinesisClientLease leftmostLeaseToReportInCaseOfHole = minHashKeyLease;
HashKeyRangeForLease leftLeaseHashRange = leftmostLeaseToReportInCaseOfHole.getHashKeyRange();
for (int i = 1; i < sortedLeasesWithHashKeyRanges.size(); i++) {
final KinesisClientLease rightLease = sortedLeasesWithHashKeyRanges.get(i);
final HashKeyRangeForLease rightLeaseHashRange = rightLease.getHashKeyRange();
final BigInteger rangeDiff =
rightLeaseHashRange.startingHashKey().subtract(leftLeaseHashRange.endingHashKey());
// We have overlapping leases when rangeDiff is 0 or negative.
// signum() will be -1 for negative and 0 if value is 0.
// Merge the ranges for further tracking.
if (rangeDiff.signum() <= 0) {
leftLeaseHashRange = new HashKeyRangeForLease(leftLeaseHashRange.startingHashKey(),
leftLeaseHashRange.endingHashKey().max(rightLeaseHashRange.endingHashKey()));
} else {
// We have non-overlapping leases when rangeDiff is positive. signum() will be 1 in this case.
// If rangeDiff is 1, then it is a continuous hash range. If not, there is a hole.
if (!rangeDiff.equals(BigInteger.ONE)) {
LOG.error("Incomplete hash range found between " + leftmostLeaseToReportInCaseOfHole +
" and " + rightLease);
return Optional.of(new HashRangeHole(leftmostLeaseToReportInCaseOfHole.getHashKeyRange(),
rightLease.getHashKeyRange()));
}
leftmostLeaseToReportInCaseOfHole = rightLease;
leftLeaseHashRange = rightLeaseHashRange;
}
}
}
return Optional.empty();
}
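The hole check above boils down to interval bookkeeping: sort leases by starting hash key, merge overlapping or adjacent ranges, and report a hole when the next range starts more than one key past the covered prefix, or when the merged cover misses either end of the key space. A self-contained sketch of the same logic over small `BigInteger` ranges (illustrative names, not KCL API):

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class HashRangeGapCheck {
    /** Closed interval [start, end] over the hash key space. */
    static final class Range {
        final BigInteger start, end;
        Range(long s, long e) { start = BigInteger.valueOf(s); end = BigInteger.valueOf(e); }
    }

    /** Returns true if the ranges, once sorted and merged, leave any hole in [min, max]. */
    static boolean hasHole(List<Range> ranges, BigInteger min, BigInteger max) {
        List<Range> sorted = new ArrayList<>(ranges);
        sorted.sort(Comparator.comparing((Range r) -> r.start));
        if (sorted.isEmpty() || !sorted.get(0).start.equals(min)) {
            return true; // bottom of the key space is uncovered
        }
        BigInteger coveredUpTo = sorted.get(0).end;
        for (int i = 1; i < sorted.size(); i++) {
            Range r = sorted.get(i);
            // A difference > 1 means the ranges are neither overlapping nor contiguous.
            if (r.start.subtract(coveredUpTo).compareTo(BigInteger.ONE) > 0) {
                return true; // hole between coveredUpTo and r.start
            }
            coveredUpTo = coveredUpTo.max(r.end); // merge overlapping/adjacent ranges
        }
        return coveredUpTo.compareTo(max) < 0; // top of the key space must be covered
    }
}
```

In the real auditor, `min` and `max` are `MIN_HASH_KEY` (0) and `MAX_HASH_KEY` (2^128 - 1), and a detected hole is reported with the bounding leases rather than as a boolean.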
@VisibleForTesting
static List<KinesisClientLease> sortLeasesByHashRange(List<KinesisClientLease> leasesWithHashKeyRanges) {
if (leasesWithHashKeyRanges.size() == 0 || leasesWithHashKeyRanges.size() == 1) {
return leasesWithHashKeyRanges;
}
Collections.sort(leasesWithHashKeyRanges, new HashKeyRangeComparator());
return leasesWithHashKeyRanges;
}
@Value
private static class HashRangeHole {
private final HashKeyRangeForLease hashRangeAtStartOfPossibleHole;
private final HashKeyRangeForLease hashRangeAtEndOfPossibleHole;
HashRangeHole() {
hashRangeAtStartOfPossibleHole = hashRangeAtEndOfPossibleHole = null;
}
HashRangeHole(HashKeyRangeForLease hashRangeAtStartOfPossibleHole,
HashKeyRangeForLease hashRangeAtEndOfPossibleHole) {
this.hashRangeAtStartOfPossibleHole = hashRangeAtStartOfPossibleHole;
this.hashRangeAtEndOfPossibleHole = hashRangeAtEndOfPossibleHole;
}
}
private class HashRangeHoleTracker {
private HashRangeHole hashRangeHole;
@Getter
private Integer numConsecutiveHoles;
public boolean hashHighConfidenceOfHoleWith(@NonNull HashRangeHole hashRangeHole) {
if (hashRangeHole.equals(this.hashRangeHole)) {
++this.numConsecutiveHoles;
} else {
this.hashRangeHole = hashRangeHole;
this.numConsecutiveHoles = 1;
}
return numConsecutiveHoles >= leasesRecoveryAuditorInconsistencyConfidenceThreshold;
}
public void reset() {
this.hashRangeHole = null;
this.numConsecutiveHoles = 0;
}
}
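The tracker above implements a simple debounce: a recovery shard sync fires only once the same hole has been observed for `leasesRecoveryAuditorInconsistencyConfidenceThreshold` consecutive audits, which filters out transient holes during resharding and lease cleanup. A generic sketch of that pattern (illustrative, not KCL API):

```java
import java.util.Objects;

/** Reports true only after the same value is observed `threshold` consecutive times. */
public class ConsecutiveDetector<T> {
    private final int threshold;
    private T last;
    private int count;

    public ConsecutiveDetector(int threshold) { this.threshold = threshold; }

    /** Records an observation; returns true once the confidence threshold is reached. */
    public boolean observe(T value) {
        if (Objects.equals(value, last)) {
            count++; // same value as last time: grow confidence
        } else {
            last = value; // new value: restart the streak
            count = 1;
        }
        return count >= threshold;
    }

    /** Clears all tracking, e.g. once the condition is no longer observed. */
    public void reset() {
        last = null;
        count = 0;
    }
}
```

The `reset()` call corresponds to the auditor clearing its tracker whenever the hash range is found complete again.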
private static class HashKeyRangeComparator implements Comparator<KinesisClientLease>, Serializable {
private static final long serialVersionUID = 1L;
@Override
public int compare(KinesisClientLease lease, KinesisClientLease otherLease) {
Validate.notNull(lease);
Validate.notNull(otherLease);
Validate.notNull(lease.getHashKeyRange());
Validate.notNull(otherLease.getHashKeyRange());
return ComparisonChain.start()
.compare(lease.getHashKeyRange().startingHashKey(), otherLease.getHashKeyRange().startingHashKey())
.compare(lease.getHashKeyRange().endingHashKey(), otherLease.getHashKeyRange().endingHashKey())
.result();
}
}
}


@ -14,7 +14,6 @@
*/
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.ExecutorService;
@ -35,7 +34,6 @@ import com.amazonaws.services.kinesis.leases.impl.KinesisClientLease;
import com.amazonaws.services.kinesis.leases.interfaces.ILeaseManager;
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsFactory;
import com.google.common.annotations.VisibleForTesting;
import lombok.Getter;
/**
@ -43,9 +41,9 @@ import lombok.Getter;
* The instance should be shutdown when we lose the primary responsibility for a shard.
* A new instance should be created if the primary responsibility is reassigned back to this process.
*/
class ShardConsumer {
public class KinesisShardConsumer implements IShardConsumer {
private static final Log LOG = LogFactory.getLog(ShardConsumer.class);
private static final Log LOG = LogFactory.getLog(KinesisShardConsumer.class);
private final StreamConfig streamConfig;
private final IRecordProcessor recordProcessor;
@ -78,7 +76,7 @@ class ShardConsumer {
@Getter
private final GetRecordsCache getRecordsCache;
private static final GetRecordsRetrievalStrategy makeStrategy(KinesisDataFetcher dataFetcher,
private static final GetRecordsRetrievalStrategy makeStrategy(IDataFetcher dataFetcher,
Optional<Integer> retryGetRecordsInSeconds,
Optional<Integer> maxGetRecordsThreadPool,
ShardInfo shardInfo) {
@ -93,7 +91,7 @@ class ShardConsumer {
* Tracks current state. It is only updated via the consumeStream/shutdown APIs. Therefore we don't do
* much coordination/synchronization to handle concurrent reads/updates.
*/
private ConsumerStates.ConsumerState currentState = ConsumerStates.INITIAL_STATE;
private KinesisConsumerStates.ConsumerState currentState = KinesisConsumerStates.INITIAL_STATE;
/*
* Used to track if we lost the primary responsibility. Once set to true, we will start shutting down.
* If we regain primary responsibility before shutdown is complete, Worker should create a new ShardConsumer object.
@ -116,7 +114,7 @@ class ShardConsumer {
*/
// CHECKSTYLE:IGNORE ParameterNumber FOR NEXT 10 LINES
@Deprecated
ShardConsumer(ShardInfo shardInfo,
KinesisShardConsumer(ShardInfo shardInfo,
StreamConfig streamConfig,
ICheckpoint checkpoint,
IRecordProcessor recordProcessor,
@ -162,7 +160,7 @@ class ShardConsumer {
*/
// CHECKSTYLE:IGNORE ParameterNumber FOR NEXT 10 LINES
@Deprecated
ShardConsumer(ShardInfo shardInfo,
KinesisShardConsumer(ShardInfo shardInfo,
StreamConfig streamConfig,
ICheckpoint checkpoint,
IRecordProcessor recordProcessor,
@ -223,7 +221,7 @@ class ShardConsumer {
* @param shardSyncer shardSyncer instance used to check and create new leases
*/
@Deprecated
ShardConsumer(ShardInfo shardInfo,
KinesisShardConsumer(ShardInfo shardInfo,
StreamConfig streamConfig,
ICheckpoint checkpoint,
IRecordProcessor recordProcessor,
@ -243,7 +241,7 @@ class ShardConsumer {
this(shardInfo, streamConfig, checkpoint, recordProcessor, recordProcessorCheckpointer, leaseCoordinator,
parentShardPollIntervalMillis, cleanupLeasesOfCompletedShards, executorService, metricsFactory,
backoffTimeMillis, skipShardSyncAtWorkerInitializationIfLeasesExist, kinesisDataFetcher, retryGetRecordsInSeconds,
maxGetRecordsThreadPool, config, shardSyncer, shardSyncStrategy, LeaseCleanupManager.createOrGetInstance(streamConfig.getStreamProxy(), leaseCoordinator.getLeaseManager(),
maxGetRecordsThreadPool, config, shardSyncer, shardSyncStrategy, LeaseCleanupManager.newInstance(streamConfig.getStreamProxy(), leaseCoordinator.getLeaseManager(),
Executors.newSingleThreadScheduledExecutor(), metricsFactory, config.shouldCleanupLeasesUponShardCompletion(),
config.leaseCleanupIntervalMillis(), config.completedLeaseCleanupThresholdMillis(),
config.garbageLeaseCleanupThresholdMillis(), config.getMaxRecords()));
@ -269,23 +267,23 @@ class ShardConsumer {
* @param shardSyncer shardSyncer instance used to check and create new leases
* @param leaseCleanupManager used to clean up leases in lease table.
*/
ShardConsumer(ShardInfo shardInfo,
StreamConfig streamConfig,
ICheckpoint checkpoint,
IRecordProcessor recordProcessor,
RecordProcessorCheckpointer recordProcessorCheckpointer,
KinesisClientLibLeaseCoordinator leaseCoordinator,
long parentShardPollIntervalMillis,
boolean cleanupLeasesOfCompletedShards,
ExecutorService executorService,
IMetricsFactory metricsFactory,
long backoffTimeMillis,
boolean skipShardSyncAtWorkerInitializationIfLeasesExist,
KinesisDataFetcher kinesisDataFetcher,
Optional<Integer> retryGetRecordsInSeconds,
Optional<Integer> maxGetRecordsThreadPool,
KinesisClientLibConfiguration config, ShardSyncer shardSyncer, ShardSyncStrategy shardSyncStrategy,
LeaseCleanupManager leaseCleanupManager) {
KinesisShardConsumer(ShardInfo shardInfo,
StreamConfig streamConfig,
ICheckpoint checkpoint,
IRecordProcessor recordProcessor,
RecordProcessorCheckpointer recordProcessorCheckpointer,
KinesisClientLibLeaseCoordinator leaseCoordinator,
long parentShardPollIntervalMillis,
boolean cleanupLeasesOfCompletedShards,
ExecutorService executorService,
IMetricsFactory metricsFactory,
long backoffTimeMillis,
boolean skipShardSyncAtWorkerInitializationIfLeasesExist,
KinesisDataFetcher kinesisDataFetcher,
Optional<Integer> retryGetRecordsInSeconds,
Optional<Integer> maxGetRecordsThreadPool,
KinesisClientLibConfiguration config, ShardSyncer shardSyncer, ShardSyncStrategy shardSyncStrategy,
LeaseCleanupManager leaseCleanupManager) {
this.shardInfo = shardInfo;
this.streamConfig = streamConfig;
this.checkpoint = checkpoint;
@ -314,7 +312,7 @@ class ShardConsumer {
*
* @return true if a new process task was submitted, false otherwise
*/
synchronized boolean consumeShard() {
public synchronized boolean consumeShard() {
return checkAndSubmitNextTask();
}
@ -373,10 +371,6 @@ class ShardConsumer {
return skipShardSyncAtWorkerInitializationIfLeasesExist;
}
private enum TaskOutcome {
SUCCESSFUL, END_OF_SHARD, NOT_COMPLETE, FAILURE
}
private TaskOutcome determineTaskOutcome() {
try {
TaskResult result = future.get();
@ -391,6 +385,10 @@ class ShardConsumer {
return TaskOutcome.SUCCESSFUL;
}
logTaskException(result);
            // At this point the task result carries an exception.
if (result.isLeaseNotFound()) {
return TaskOutcome.LEASE_NOT_FOUND;
}
} catch (Exception e) {
throw new RuntimeException(e);
} finally {
@ -419,7 +417,7 @@ class ShardConsumer {
*
* @param shutdownNotification used to signal that the record processor has been given the chance to shutdown.
*/
void notifyShutdownRequested(ShutdownNotification shutdownNotification) {
public void notifyShutdownRequested(ShutdownNotification shutdownNotification) {
this.shutdownNotification = shutdownNotification;
markForShutdown(ShutdownReason.REQUESTED);
}
@ -430,7 +428,7 @@ class ShardConsumer {
*
* @return true if shutdown is complete (false if shutdown is still in progress)
*/
synchronized boolean beginShutdown() {
public synchronized boolean beginShutdown() {
markForShutdown(ShutdownReason.ZOMBIE);
checkAndSubmitNextTask();
@ -450,14 +448,14 @@ class ShardConsumer {
*
* @return true if shutdown is complete
*/
boolean isShutdown() {
public boolean isShutdown() {
return currentState.isTerminal();
}
/**
* @return the shutdownReason
*/
ShutdownReason getShutdownReason() {
public ShutdownReason getShutdownReason() {
return shutdownReason;
}
@ -487,9 +485,13 @@ class ShardConsumer {
markForShutdown(ShutdownReason.TERMINATE);
LOG.info("Shard " + shardInfo.getShardId() + ": Mark for shutdown with reason TERMINATE");
}
if (taskOutcome == TaskOutcome.LEASE_NOT_FOUND) {
markForShutdown(ShutdownReason.ZOMBIE);
LOG.info("Shard " + shardInfo.getShardId() + ": Mark for shutdown with reason ZOMBIE as lease was not found");
}
if (isShutdownRequested() && taskOutcome != TaskOutcome.FAILURE) {
currentState = currentState.shutdownTransition(shutdownReason);
} else if (isShutdownRequested() && ConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS.equals(currentState.getState())) {
} else if (isShutdownRequested() && KinesisConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS.equals(currentState.getState())) {
currentState = currentState.shutdownTransition(shutdownReason);
} else if (taskOutcome == TaskOutcome.SUCCESSFUL) {
if (currentState.getTaskType() == currentTask.getTaskType()) {
@ -508,7 +510,7 @@ class ShardConsumer {
}
@VisibleForTesting
boolean isShutdownRequested() {
public boolean isShutdownRequested() {
return shutdownReason != null;
}
@ -517,7 +519,7 @@ class ShardConsumer {
*
* @return the currentState
*/
ConsumerStates.ShardConsumerState getCurrentState() {
public KinesisConsumerStates.ShardConsumerState getCurrentState() {
return currentState.getState();
}


@ -0,0 +1,48 @@
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.ICheckpoint;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor;
import com.amazonaws.services.kinesis.leases.impl.LeaseCleanupManager;
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsFactory;
import java.util.Optional;
import java.util.concurrent.ExecutorService;
public class KinesisShardConsumerFactory implements IShardConsumerFactory {
@Override
public IShardConsumer createShardConsumer(ShardInfo shardInfo,
StreamConfig streamConfig,
ICheckpoint checkpointTracker,
IRecordProcessor recordProcessor,
RecordProcessorCheckpointer recordProcessorCheckpointer,
KinesisClientLibLeaseCoordinator leaseCoordinator,
long parentShardPollIntervalMillis,
boolean cleanupLeasesUponShardCompletion,
ExecutorService executorService,
IMetricsFactory metricsFactory,
long taskBackoffTimeMillis,
boolean skipShardSyncAtWorkerInitializationIfLeasesExist,
Optional<Integer> retryGetRecordsInSeconds,
Optional<Integer> maxGetRecordsThreadPool,
KinesisClientLibConfiguration config, ShardSyncer shardSyncer, ShardSyncStrategy shardSyncStrategy,
LeaseCleanupManager leaseCleanupManager) {
return new KinesisShardConsumer(shardInfo,
streamConfig,
checkpointTracker,
recordProcessor,
recordProcessorCheckpointer,
leaseCoordinator,
parentShardPollIntervalMillis,
cleanupLeasesUponShardCompletion,
executorService,
metricsFactory,
taskBackoffTimeMillis,
skipShardSyncAtWorkerInitializationIfLeasesExist,
new KinesisDataFetcher(streamConfig.getStreamProxy(), shardInfo),
retryGetRecordsInSeconds,
maxGetRecordsThreadPool,
config, shardSyncer, shardSyncStrategy,
leaseCleanupManager);
}
}


@ -45,6 +45,8 @@ import com.amazonaws.services.kinesis.metrics.impl.MetricsHelper;
import com.amazonaws.services.kinesis.metrics.interfaces.MetricsLevel;
import com.amazonaws.services.kinesis.model.Shard;
import static com.amazonaws.services.kinesis.leases.impl.HashKeyRangeForLease.fromHashKeyRange;
/**
* Helper class to sync leases with shards of the Kinesis stream.
* It will create new leases/activities when it discovers new Kinesis shards (bootstrap/resharding).
@ -617,7 +619,7 @@ class KinesisShardSyncer implements ShardSyncer {
}
newLease.setParentShardIds(parentShardIds);
newLease.setOwnerSwitchesSinceCheckpoint(0L);
newLease.setHashKeyRange(fromHashKeyRange(shard.getHashKeyRange()));
return newLease;
}
@ -641,6 +643,7 @@ class KinesisShardSyncer implements ShardSyncer {
newLease.setParentShardIds(parentShardIds);
newLease.setOwnerSwitchesSinceCheckpoint(0L);
newLease.setCheckpoint(ExtendedSequenceNumber.TRIM_HORIZON);
newLease.setHashKeyRange(fromHashKeyRange(childShard.getHashKeyRange()));
return newLease;
}


@ -14,25 +14,24 @@
*/
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
import com.amazonaws.services.kinesis.leases.LeasePendingDeletion;
import com.amazonaws.services.kinesis.clientlibrary.exceptions.internal.BlockedOnParentShardException;
import com.amazonaws.services.kinesis.leases.exceptions.CustomerApplicationException;
import com.amazonaws.services.kinesis.leases.exceptions.DependencyException;
import com.amazonaws.services.kinesis.leases.exceptions.InvalidStateException;
import com.amazonaws.services.kinesis.leases.exceptions.ProvisionedThroughputException;
import com.amazonaws.services.kinesis.leases.impl.LeaseCleanupManager;
import com.amazonaws.services.kinesis.leases.impl.UpdateField;
import com.amazonaws.services.kinesis.model.ChildShard;
import com.amazonaws.util.CollectionUtils;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor;
import com.amazonaws.services.kinesis.clientlibrary.proxies.IKinesisProxy;
import com.amazonaws.services.kinesis.clientlibrary.types.ExtendedSequenceNumber;
import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownInput;
import com.amazonaws.services.kinesis.leases.LeasePendingDeletion;
import com.amazonaws.services.kinesis.leases.exceptions.CustomerApplicationException;
import com.amazonaws.services.kinesis.leases.exceptions.DependencyException;
import com.amazonaws.services.kinesis.leases.exceptions.InvalidStateException;
import com.amazonaws.services.kinesis.leases.exceptions.ProvisionedThroughputException;
import com.amazonaws.services.kinesis.leases.impl.KinesisClientLease;
import com.amazonaws.services.kinesis.leases.impl.LeaseCleanupManager;
import com.amazonaws.services.kinesis.leases.impl.UpdateField;
import com.amazonaws.services.kinesis.model.ChildShard;
import com.amazonaws.util.CollectionUtils;
import com.google.common.annotations.VisibleForTesting;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import java.util.List;
import java.util.Objects;
@ -44,9 +43,9 @@ import java.util.stream.Collectors;
/**
* Task for invoking the RecordProcessor shutdown() callback.
*/
class ShutdownTask implements ITask {
public class KinesisShutdownTask implements ITask {
private static final Log LOG = LogFactory.getLog(ShutdownTask.class);
private static final Log LOG = LogFactory.getLog(KinesisShutdownTask.class);
@VisibleForTesting
static final int RETRY_RANDOM_MAX_RANGE = 50;
@ -72,7 +71,7 @@ class ShutdownTask implements ITask {
* Constructor.
*/
// CHECKSTYLE:IGNORE ParameterNumber FOR NEXT 10 LINES
ShutdownTask(ShardInfo shardInfo,
KinesisShutdownTask(ShardInfo shardInfo,
IRecordProcessor recordProcessor,
RecordProcessorCheckpointer recordProcessorCheckpointer,
ShutdownReason reason,
@ -222,11 +221,29 @@ class ShutdownTask implements ITask {
.withCheckpointer(recordProcessorCheckpointer);
recordProcessor.shutdown(shardEndShutdownInput);
final ExtendedSequenceNumber lastCheckpointValue = recordProcessorCheckpointer.getLastCheckpointValue();
boolean successfullyCheckpointedShardEnd = false;
final boolean successfullyCheckpointedShardEnd = lastCheckpointValue.equals(ExtendedSequenceNumber.SHARD_END);
KinesisClientLease leaseFromDdb = null;
try {
leaseFromDdb = leaseCoordinator.getLeaseManager().getLease(shardInfo.getShardId());
} catch (Exception e) {
LOG.error("Shard " + shardInfo.getShardId() + " : Unable to get lease entry for shard to verify shard end checkpointing.", e);
}
if ((lastCheckpointValue == null) || (!successfullyCheckpointedShardEnd)) {
if (leaseFromDdb != null && leaseFromDdb.getCheckpoint() != null) {
successfullyCheckpointedShardEnd = leaseFromDdb.getCheckpoint().equals(ExtendedSequenceNumber.SHARD_END);
final ExtendedSequenceNumber lastCheckpointValue = recordProcessorCheckpointer.getLastCheckpointValue();
if (!leaseFromDdb.getCheckpoint().equals(lastCheckpointValue)) {
LOG.error("Shard " + shardInfo.getShardId() +
" : Checkpoint information mismatch between authoritative source and local cache. " +
"This does not affect the application flow, but cut a ticket to Kinesis when you see this. " +
"Authoritative entry : " + leaseFromDdb.getCheckpoint() + " Cache entry : " + lastCheckpointValue);
}
} else {
LOG.error("Shard " + shardInfo.getShardId() + " : No lease checkpoint entry for shard to verify shard end checkpointing. Lease Entry : " + leaseFromDdb);
}
if (!successfullyCheckpointedShardEnd) {
throw new IllegalArgumentException("Application didn't checkpoint at end of shard "
+ shardInfo.getShardId() + ". Application must checkpoint upon shutdown. " +
"See IRecordProcessor.shutdown javadocs for more information.");
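The verification above makes the lease table, not the in-memory checkpointer, the source of truth for whether SHARD_END was recorded; a mismatch with the local cache is logged but not fatal. A reduced sketch of that decision (plain strings stand in for `ExtendedSequenceNumber`, and the logging side effect is illustrative only):

```java
public class ShardEndCheckSketch {
    static final String SHARD_END = "SHARD_END";

    // Lease table entry is authoritative; a missing entry fails the check
    // rather than falling back to the possibly-stale local cache.
    static boolean shardEndCheckpointed(String leaseTableCheckpoint, String cachedCheckpoint) {
        if (leaseTableCheckpoint == null) {
            return false;
        }
        if (!leaseTableCheckpoint.equals(cachedCheckpoint)) {
            // Mismatch between authoritative source and cache: log and continue.
            System.err.println("Checkpoint mismatch: lease table=" + leaseTableCheckpoint
                    + " cache=" + cachedCheckpoint);
        }
        return SHARD_END.equals(leaseTableCheckpoint);
    }
}
```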


@ -21,7 +21,7 @@ import com.amazonaws.services.kinesis.metrics.interfaces.MetricsLevel;
/**
* Decorates an ITask and reports metrics about its timing and success/failure.
*/
class MetricsCollectingTaskDecorator implements ITask {
public class MetricsCollectingTaskDecorator implements ITask {
private final ITask other;
private final IMetricsFactory factory;


@ -407,4 +407,4 @@ class PeriodicShardSyncManager {
.result();
}
}
}
}


@ -6,9 +6,9 @@ package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
*/
class PeriodicShardSyncStrategy implements ShardSyncStrategy {
private PeriodicShardSyncManager periodicShardSyncManager;
private IPeriodicShardSyncManager periodicShardSyncManager;
PeriodicShardSyncStrategy(PeriodicShardSyncManager periodicShardSyncManager) {
PeriodicShardSyncStrategy(IPeriodicShardSyncManager periodicShardSyncManager) {
this.periodicShardSyncManager = periodicShardSyncManager;
}


@ -60,7 +60,7 @@ public class PrefetchGetRecordsCache implements GetRecordsCache {
private PrefetchCounters prefetchCounters;
private boolean started = false;
private final String operation;
private final KinesisDataFetcher dataFetcher;
private final IDataFetcher dataFetcher;
private final String shardId;
/**


@ -41,7 +41,7 @@ import com.amazonaws.services.kinesis.model.Shard;
/**
* Task for fetching data records and invoking processRecords() on the record processor instance.
*/
class ProcessTask implements ITask {
public class ProcessTask implements ITask {
private static final Log LOG = LogFactory.getLog(ProcessTask.class);
@ -55,7 +55,7 @@ class ProcessTask implements ITask {
private final ShardInfo shardInfo;
private final IRecordProcessor recordProcessor;
private final RecordProcessorCheckpointer recordProcessorCheckpointer;
private final KinesisDataFetcher dataFetcher;
private final IDataFetcher dataFetcher;
private final TaskType taskType = TaskType.PROCESS;
private final StreamConfig streamConfig;
private final long backoffTimeMillis;
@ -81,7 +81,7 @@ class ProcessTask implements ITask {
* The retrieval strategy for fetching records from kinesis
*/
public ProcessTask(ShardInfo shardInfo, StreamConfig streamConfig, IRecordProcessor recordProcessor,
RecordProcessorCheckpointer recordProcessorCheckpointer, KinesisDataFetcher dataFetcher,
RecordProcessorCheckpointer recordProcessorCheckpointer, IDataFetcher dataFetcher,
long backoffTimeMillis, boolean skipShardSyncAtWorkerInitializationIfLeasesExist,
GetRecordsCache getRecordsCache) {
this(shardInfo, streamConfig, recordProcessor, recordProcessorCheckpointer, dataFetcher, backoffTimeMillis,
@ -107,7 +107,7 @@ class ProcessTask implements ITask {
* determines how throttling events should be reported in the log.
*/
public ProcessTask(ShardInfo shardInfo, StreamConfig streamConfig, IRecordProcessor recordProcessor,
RecordProcessorCheckpointer recordProcessorCheckpointer, KinesisDataFetcher dataFetcher,
RecordProcessorCheckpointer recordProcessorCheckpointer, IDataFetcher dataFetcher,
long backoffTimeMillis, boolean skipShardSyncAtWorkerInitializationIfLeasesExist,
ThrottlingReporter throttlingReporter, GetRecordsCache getRecordsCache) {
super();


@ -37,7 +37,7 @@ import com.amazonaws.services.kinesis.model.Record;
* The Amazon Kinesis Client Library will instantiate an object and provide a reference to the application
* RecordProcessor instance. Amazon Kinesis Client Library will create one instance per shard assignment.
*/
class RecordProcessorCheckpointer implements IRecordProcessorCheckpointer {
public class RecordProcessorCheckpointer implements IRecordProcessorCheckpointer {
private static final Log LOG = LogFactory.getLog(RecordProcessorCheckpointer.class);
@ -62,7 +62,7 @@ class RecordProcessorCheckpointer implements IRecordProcessorCheckpointer {
* @param checkpoint Used to checkpoint progress of a RecordProcessor
* @param validator Used for validating sequence numbers
*/
RecordProcessorCheckpointer(ShardInfo shardInfo,
public RecordProcessorCheckpointer(ShardInfo shardInfo,
ICheckpoint checkpoint,
SequenceNumberValidator validator,
IMetricsFactory metricsFactory) {
@ -231,7 +231,7 @@ class RecordProcessorCheckpointer implements IRecordProcessorCheckpointer {
/**
* @return the lastCheckpointValue
*/
ExtendedSequenceNumber getLastCheckpointValue() {
public ExtendedSequenceNumber getLastCheckpointValue() {
return lastCheckpointValue;
}
@ -244,14 +244,14 @@ class RecordProcessorCheckpointer implements IRecordProcessorCheckpointer {
*
* @return the largest permitted checkpoint
*/
synchronized ExtendedSequenceNumber getLargestPermittedCheckpointValue() {
public synchronized ExtendedSequenceNumber getLargestPermittedCheckpointValue() {
return largestPermittedCheckpointValue;
}
/**
* @param largestPermittedCheckpointValue the largest permitted checkpoint
*/
synchronized void setLargestPermittedCheckpointValue(ExtendedSequenceNumber largestPermittedCheckpointValue) {
public synchronized void setLargestPermittedCheckpointValue(ExtendedSequenceNumber largestPermittedCheckpointValue) {
this.largestPermittedCheckpointValue = largestPermittedCheckpointValue;
}
@ -262,7 +262,7 @@ class RecordProcessorCheckpointer implements IRecordProcessorCheckpointer {
*
* @param extendedSequenceNumber
*/
synchronized void setSequenceNumberAtShardEnd(ExtendedSequenceNumber extendedSequenceNumber) {
public synchronized void setSequenceNumberAtShardEnd(ExtendedSequenceNumber extendedSequenceNumber) {
this.sequenceNumberAtShardEnd = extendedSequenceNumber;
}


@ -51,7 +51,7 @@ public class SequenceNumberValidator {
* @param validateWithGetIterator Whether to attempt to get an iterator for this shard id and the sequence numbers
* being validated
*/
SequenceNumberValidator(IKinesisProxy proxy, String shardId, boolean validateWithGetIterator) {
public SequenceNumberValidator(IKinesisProxy proxy, String shardId, boolean validateWithGetIterator) {
this.proxy = proxy;
this.shardId = shardId;
this.validateWithGetIterator = validateWithGetIterator;


@ -17,10 +17,10 @@ class ShardEndShardSyncStrategy implements ShardSyncStrategy {
private ShardSyncTaskManager shardSyncTaskManager;
/** Runs periodic shard sync jobs in the background as an auditor process for shard-end syncs. */
private PeriodicShardSyncManager periodicShardSyncManager;
private IPeriodicShardSyncManager periodicShardSyncManager;
ShardEndShardSyncStrategy(ShardSyncTaskManager shardSyncTaskManager,
PeriodicShardSyncManager periodicShardSyncManager) {
IPeriodicShardSyncManager periodicShardSyncManager) {
this.shardSyncTaskManager = shardSyncTaskManager;
this.periodicShardSyncManager = periodicShardSyncManager;
}


@ -30,7 +30,7 @@ import java.util.List;
* It will clean up leases/activities for shards that have been completely processed (if
* cleanupLeasesUponShardCompletion is true).
*/
class ShardSyncTask implements ITask {
public class ShardSyncTask implements ITask {
private static final Log LOG = LogFactory.getLog(ShardSyncTask.class);
@ -56,7 +56,7 @@ class ShardSyncTask implements ITask {
* @param shardSyncer shardSyncer instance used to check and create new leases
* @param latestShards latest snapshot of shards to reuse
*/
ShardSyncTask(IKinesisProxy kinesisProxy,
public ShardSyncTask(IKinesisProxy kinesisProxy,
ILeaseManager<KinesisClientLease> leaseManager,
InitialPositionInStreamExtended initialPositionInStream,
boolean cleanupLeasesUponShardCompletion,


@ -21,14 +21,14 @@ import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IShutdownNotif
/**
* Notifies record processor of incoming shutdown request, and gives them a chance to checkpoint.
*/
class ShutdownNotificationTask implements ITask {
public class ShutdownNotificationTask implements ITask {
private final IRecordProcessor recordProcessor;
private final IRecordProcessorCheckpointer recordProcessorCheckpointer;
private final ShutdownNotification shutdownNotification;
private final ShardInfo shardInfo;
ShutdownNotificationTask(IRecordProcessor recordProcessor, IRecordProcessorCheckpointer recordProcessorCheckpointer, ShutdownNotification shutdownNotification, ShardInfo shardInfo) {
public ShutdownNotificationTask(IRecordProcessor recordProcessor, IRecordProcessorCheckpointer recordProcessorCheckpointer, ShutdownNotification shutdownNotification, ShardInfo shardInfo) {
this.recordProcessor = recordProcessor;
this.recordProcessorCheckpointer = recordProcessorCheckpointer;
this.shutdownNotification = shutdownNotification;


@ -15,8 +15,8 @@
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownInput;
import static com.amazonaws.services.kinesis.clientlibrary.lib.worker.ConsumerStates.ConsumerState;
import static com.amazonaws.services.kinesis.clientlibrary.lib.worker.ConsumerStates.ShardConsumerState;
import static com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisConsumerStates.ConsumerState;
import static com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisConsumerStates.ShardConsumerState;
/**
@ -72,7 +72,7 @@ public enum ShutdownReason {
return reason.rank > this.rank;
}
ConsumerState getShutdownState() {
public ConsumerState getShutdownState() {
return shutdownState;
}
}


@ -19,7 +19,7 @@ import com.amazonaws.services.kinesis.clientlibrary.proxies.IKinesisProxy;
/**
* Used to capture stream configuration and pass it along.
*/
class StreamConfig {
public class StreamConfig {
private final IKinesisProxy streamProxy;
private final int maxRecords;
@ -54,7 +54,7 @@ class StreamConfig {
/**
* @return the streamProxy
*/
IKinesisProxy getStreamProxy() {
public IKinesisProxy getStreamProxy() {
return streamProxy;
}
@ -82,14 +82,14 @@ class StreamConfig {
/**
* @return the initialPositionInStream
*/
InitialPositionInStreamExtended getInitialPositionInStream() {
public InitialPositionInStreamExtended getInitialPositionInStream() {
return initialPositionInStream;
}
/**
* @return validateSequenceNumberBeforeCheckpointing
*/
boolean shouldValidateSequenceNumberBeforeCheckpointing() {
public boolean shouldValidateSequenceNumberBeforeCheckpointing() {
return validateSequenceNumberBeforeCheckpointing;
}
}


@ -24,7 +24,7 @@ import lombok.NonNull;
@Data
public class SynchronousGetRecordsRetrievalStrategy implements GetRecordsRetrievalStrategy {
@NonNull
private final KinesisDataFetcher dataFetcher;
private final IDataFetcher dataFetcher;
@Override
public GetRecordsResult getRecords(final int maxRecords) {
@ -44,7 +44,7 @@ public class SynchronousGetRecordsRetrievalStrategy implements GetRecordsRetriev
}
@Override
public KinesisDataFetcher getDataFetcher() {
public IDataFetcher getDataFetcher() {
return dataFetcher;
}
}


@ -22,7 +22,7 @@ import java.util.List;
* Used to capture information from a task that we want to communicate back to the higher layer.
* E.g. exception thrown when executing the task, if we reach end of a shard.
*/
class TaskResult {
public class TaskResult {
// Did we reach the end of the shard while processing this task.
private boolean shardEndReached;
@ -33,10 +33,12 @@ class TaskResult {
// List of childShards of the current shard. This field is only required for the task result when we reach end of a shard.
private List<ChildShard> childShards;
private boolean leaseNotFound;
/**
* @return the shardEndReached
*/
protected boolean isShardEndReached() {
public boolean isShardEndReached() {
return shardEndReached;
}
@ -57,6 +59,14 @@ class TaskResult {
*/
protected void setChildShards(List<ChildShard> childShards) { this.childShards = childShards; }
public boolean isLeaseNotFound() {
return leaseNotFound;
}
public void leaseNotFound() {
this.leaseNotFound = true;
}
/**
* @return the exception
*/
@ -67,7 +77,7 @@ class TaskResult {
/**
* @param e Any exception encountered when running the process task.
*/
TaskResult(Exception e) {
public TaskResult(Exception e) {
this(e, false);
}
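The `leaseNotFound` flag added to `TaskResult` feeds the new `LEASE_NOT_FOUND` outcome in `KinesisShardConsumer`, which forces a ZOMBIE shutdown (the lease is gone, so processing must stop), while reaching end of shard still maps to TERMINATE. A hypothetical sketch of that mapping (simplified enums and a free-standing function, not the actual KCL state machine):

```java
public class LeaseNotFoundSketch {
    enum TaskOutcome { SUCCESSFUL, END_OF_SHARD, NOT_COMPLETE, FAILURE, LEASE_NOT_FOUND }
    enum ShutdownReason { REQUESTED, ZOMBIE, TERMINATE }

    // Only terminal outcomes select a shutdown reason; everything else keeps running.
    static ShutdownReason reasonFor(TaskOutcome outcome) {
        switch (outcome) {
            case END_OF_SHARD:
                return ShutdownReason.TERMINATE;  // shard fully consumed
            case LEASE_NOT_FOUND:
                return ShutdownReason.ZOMBIE;     // lease lost: stop processing
            default:
                return null;                      // no shutdown triggered
        }
    }
}
```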


@ -97,9 +97,9 @@ public class Worker implements Runnable {
// Default configs for periodic shard sync
private static final int SHARD_SYNC_SLEEP_FOR_PERIODIC_SHARD_SYNC = 0;
private static final int PERIODIC_SHARD_SYNC_MAX_WORKERS_DEFAULT = 1; //Default for KCL.
static final long LEASE_TABLE_CHECK_FREQUENCY_MILLIS = 3 * 1000L;
static final long MIN_WAIT_TIME_FOR_LEASE_TABLE_CHECK_MILLIS = 1 * 1000L;
static final long MAX_WAIT_TIME_FOR_LEASE_TABLE_CHECK_MILLIS = 30 * 1000L;
static long LEASE_TABLE_CHECK_FREQUENCY_MILLIS = 3 * 1000L;
static long MIN_WAIT_TIME_FOR_LEASE_TABLE_CHECK_MILLIS = 1 * 1000L;
static long MAX_WAIT_TIME_FOR_LEASE_TABLE_CHECK_MILLIS = 30 * 1000L;
private static final WorkerStateChangeListener DEFAULT_WORKER_STATE_CHANGE_LISTENER = new NoOpWorkerStateChangeListener();
private static final LeaseCleanupValidator DEFAULT_LEASE_CLEANUP_VALIDATOR = new KinesisLeaseCleanupValidator();
@ -140,7 +140,7 @@ public class Worker implements Runnable {
// Holds consumers for shards the worker is currently tracking. Key is shard
// info, value is ShardConsumer.
private ConcurrentMap<ShardInfo, ShardConsumer> shardInfoShardConsumerMap = new ConcurrentHashMap<ShardInfo, ShardConsumer>();
private ConcurrentMap<ShardInfo, IShardConsumer> shardInfoShardConsumerMap = new ConcurrentHashMap<ShardInfo, IShardConsumer>();
private final boolean cleanupLeasesUponShardCompletion;
private final boolean skipShardSyncAtWorkerInitializationIfLeasesExist;
@ -159,10 +159,13 @@ public class Worker implements Runnable {
// Periodic Shard Sync related fields
private LeaderDecider leaderDecider;
private ShardSyncStrategy shardSyncStrategy;
private PeriodicShardSyncManager leaderElectedPeriodicShardSyncManager;
private IPeriodicShardSyncManager leaderElectedPeriodicShardSyncManager;
private final LeaseCleanupManager leaseCleanupManager;
// Shard Consumer Factory
private IShardConsumerFactory shardConsumerFactory;
/**
* Constructor.
*
@ -536,13 +539,13 @@ public class Worker implements Runnable {
IMetricsFactory metricsFactory, long taskBackoffTimeMillis, long failoverTimeMillis,
boolean skipShardSyncAtWorkerInitializationIfLeasesExist, ShardPrioritization shardPrioritization,
Optional<Integer> retryGetRecordsInSeconds, Optional<Integer> maxGetRecordsThreadPool, WorkerStateChangeListener workerStateChangeListener,
LeaseCleanupValidator leaseCleanupValidator, LeaderDecider leaderDecider, PeriodicShardSyncManager periodicShardSyncManager) {
LeaseCleanupValidator leaseCleanupValidator, LeaderDecider leaderDecider, IPeriodicShardSyncManager periodicShardSyncManager) {
this(applicationName, recordProcessorFactory, config, streamConfig, initialPositionInStream,
parentShardPollIntervalMillis, shardSyncIdleTimeMillis, cleanupLeasesUponShardCompletion, checkpoint,
leaseCoordinator, execService, metricsFactory, taskBackoffTimeMillis, failoverTimeMillis,
skipShardSyncAtWorkerInitializationIfLeasesExist, shardPrioritization, retryGetRecordsInSeconds,
maxGetRecordsThreadPool, workerStateChangeListener, new KinesisShardSyncer(leaseCleanupValidator),
leaderDecider, periodicShardSyncManager);
leaderDecider, periodicShardSyncManager, null /*ShardConsumerFactory*/);
}
Worker(String applicationName, IRecordProcessorFactory recordProcessorFactory, KinesisClientLibConfiguration config,
@ -553,7 +556,7 @@ public class Worker implements Runnable {
boolean skipShardSyncAtWorkerInitializationIfLeasesExist, ShardPrioritization shardPrioritization,
Optional<Integer> retryGetRecordsInSeconds, Optional<Integer> maxGetRecordsThreadPool,
WorkerStateChangeListener workerStateChangeListener, ShardSyncer shardSyncer, LeaderDecider leaderDecider,
PeriodicShardSyncManager periodicShardSyncManager) {
IPeriodicShardSyncManager periodicShardSyncManager, IShardConsumerFactory shardConsumerFactory) {
this.applicationName = applicationName;
this.recordProcessorFactory = recordProcessorFactory;
this.config = config;
@ -579,10 +582,11 @@ public class Worker implements Runnable {
this.workerStateChangeListener = workerStateChangeListener;
workerStateChangeListener.onWorkerStateChange(WorkerStateChangeListener.WorkerState.CREATED);
createShardSyncStrategy(config.getShardSyncStrategyType(), leaderDecider, periodicShardSyncManager);
this.leaseCleanupManager = LeaseCleanupManager.createOrGetInstance(streamConfig.getStreamProxy(), leaseCoordinator.getLeaseManager(),
this.leaseCleanupManager = LeaseCleanupManager.newInstance(streamConfig.getStreamProxy(), leaseCoordinator.getLeaseManager(),
Executors.newSingleThreadScheduledExecutor(), metricsFactory, cleanupLeasesUponShardCompletion,
config.leaseCleanupIntervalMillis(), config.completedLeaseCleanupThresholdMillis(),
config.garbageLeaseCleanupThresholdMillis(), config.getMaxRecords());
this.shardConsumerFactory = shardConsumerFactory;
}
/**
@ -593,7 +597,7 @@ public class Worker implements Runnable {
*/
private void createShardSyncStrategy(ShardSyncStrategyType strategyType,
LeaderDecider leaderDecider,
PeriodicShardSyncManager periodicShardSyncManager) {
IPeriodicShardSyncManager periodicShardSyncManager) {
switch (strategyType) {
case PERIODIC:
this.leaderDecider = getOrCreateLeaderDecider(leaderDecider);
@ -655,7 +659,7 @@ public class Worker implements Runnable {
/**
* @return the leaderElectedPeriodicShardSyncManager
*/
PeriodicShardSyncManager getPeriodicShardSyncManager() {
IPeriodicShardSyncManager getPeriodicShardSyncManager() {
return leaderElectedPeriodicShardSyncManager;
}
@ -690,7 +694,7 @@ public class Worker implements Runnable {
boolean foundCompletedShard = false;
Set<ShardInfo> assignedShards = new HashSet<>();
for (ShardInfo shardInfo : getShardInfoForAssignments()) {
ShardConsumer shardConsumer = createOrGetShardConsumer(shardInfo, recordProcessorFactory);
IShardConsumer shardConsumer = createOrGetShardConsumer(shardInfo, recordProcessorFactory);
if (shardConsumer.isShutdown() && shardConsumer.getShutdownReason().equals(ShutdownReason.TERMINATE)) {
foundCompletedShard = true;
} else {
@ -1000,9 +1004,9 @@ public class Worker implements Runnable {
ShutdownNotification shutdownNotification = new ShardConsumerShutdownNotification(leaseCoordinator,
lease, notificationCompleteLatch, shutdownCompleteLatch);
ShardInfo shardInfo = KinesisClientLibLeaseCoordinator.convertLeaseToAssignment(lease);
ShardConsumer consumer = shardInfoShardConsumerMap.get(shardInfo);
IShardConsumer consumer = shardInfoShardConsumerMap.get(shardInfo);
if (consumer == null || ConsumerStates.ShardConsumerState.SHUTDOWN_COMPLETE.equals(consumer.getCurrentState())) {
if (consumer == null || KinesisConsumerStates.ShardConsumerState.SHUTDOWN_COMPLETE.equals(consumer.getCurrentState())) {
//
// CASE1: There is a race condition between retrieving the current assignments, and creating the
// notification. If a lease is lost in between these two points, we explicitly decrement the
@ -1024,7 +1028,7 @@ public class Worker implements Runnable {
return shutdownComplete;
}
ConcurrentMap<ShardInfo, ShardConsumer> getShardInfoShardConsumerMap() {
ConcurrentMap<ShardInfo, IShardConsumer> getShardInfoShardConsumerMap() {
return shardInfoShardConsumerMap;
}
@ -1062,6 +1066,10 @@ public class Worker implements Runnable {
// Lost leases will force Worker to begin shutdown process for all shard consumers in
// Worker.run().
leaseCoordinator.stop();
// Stop the lease cleanup manager
leaseCleanupManager.shutdown();
// Stop the periodicShardSyncManager for the worker
if (shardSyncStrategy != null) {
shardSyncStrategy.onWorkerShutDown();
@ -1120,8 +1128,8 @@ public class Worker implements Runnable {
* RecordProcessor factory
* @return ShardConsumer for the shard
*/
ShardConsumer createOrGetShardConsumer(ShardInfo shardInfo, IRecordProcessorFactory processorFactory) {
ShardConsumer consumer = shardInfoShardConsumerMap.get(shardInfo);
IShardConsumer createOrGetShardConsumer(ShardInfo shardInfo, IRecordProcessorFactory processorFactory) {
IShardConsumer consumer = shardInfoShardConsumerMap.get(shardInfo);
// Instantiate a new consumer if we don't have one, or if the one we
// had was from an earlier lease instance (and was shut down). Don't
// need to create another
@ -1136,7 +1144,7 @@ public class Worker implements Runnable {
return consumer;
}
protected ShardConsumer buildConsumer(ShardInfo shardInfo, IRecordProcessorFactory processorFactory) {
protected IShardConsumer buildConsumer(ShardInfo shardInfo, IRecordProcessorFactory processorFactory) {
final IRecordProcessor recordProcessor = processorFactory.createProcessor();
final RecordProcessorCheckpointer recordProcessorCheckpointer = new RecordProcessorCheckpointer(
shardInfo,
@ -1147,7 +1155,11 @@ public class Worker implements Runnable {
streamConfig.shouldValidateSequenceNumberBeforeCheckpointing()),
metricsFactory);
return new ShardConsumer(shardInfo,
if (shardConsumerFactory == null) { // Default to KinesisShardConsumerFactory if null
this.shardConsumerFactory = new KinesisShardConsumerFactory();
}
return shardConsumerFactory.createShardConsumer(shardInfo,
streamConfig,
checkpointTracker,
recordProcessor,
@ -1159,7 +1171,6 @@ public class Worker implements Runnable {
metricsFactory,
taskBackoffTimeMillis,
skipShardSyncAtWorkerInitializationIfLeasesExist,
new KinesisDataFetcher(streamConfig.getStreamProxy(), shardInfo),
retryGetRecordsInSeconds,
maxGetRecordsThreadPool,
config, shardSyncer, shardSyncStrategy,
@ -1237,8 +1248,8 @@ public class Worker implements Runnable {
* KinesisClientLibConfiguration
* @return Returns metrics factory based on the config.
*/
private static IMetricsFactory getMetricsFactory(AmazonCloudWatch cloudWatchClient,
KinesisClientLibConfiguration config) {
public static IMetricsFactory getMetricsFactory(AmazonCloudWatch cloudWatchClient,
KinesisClientLibConfiguration config) {
IMetricsFactory metricsFactory;
if (config.getMetricsLevel() == MetricsLevel.NONE) {
metricsFactory = new NullMetricsFactory();
@ -1291,13 +1302,13 @@ public class Worker implements Runnable {
/** A non-null PeriodicShardSyncManager can only be provided from unit tests. Any application code will create the
* PeriodicShardSyncManager for the first time here. */
private PeriodicShardSyncManager getOrCreatePeriodicShardSyncManager(PeriodicShardSyncManager periodicShardSyncManager,
private IPeriodicShardSyncManager getOrCreatePeriodicShardSyncManager(IPeriodicShardSyncManager periodicShardSyncManager,
boolean isAuditorMode) {
if (periodicShardSyncManager != null) {
return periodicShardSyncManager;
}
return new PeriodicShardSyncManager(config.getWorkerIdentifier(),
return new KinesisPeriodicShardSyncManager(config.getWorkerIdentifier(),
leaderDecider,
new ShardSyncTask(streamConfig.getStreamProxy(),
leaseCoordinator.getLeaseManager(),
@ -1366,6 +1377,10 @@ public class Worker implements Runnable {
@Setter @Accessors(fluent = true)
private IKinesisProxy kinesisProxy;
@Setter @Accessors(fluent = true)
private IPeriodicShardSyncManager periodicShardSyncManager;
@Setter @Accessors(fluent = true)
private IShardConsumerFactory shardConsumerFactory;
@Setter @Accessors(fluent = true)
private WorkerStateChangeListener workerStateChangeListener;
@Setter @Accessors(fluent = true)
private LeaseCleanupValidator leaseCleanupValidator;
@ -1434,6 +1449,16 @@ public class Worker implements Runnable {
throw new IllegalArgumentException(
"Kinesis Client Library configuration needs to be provided to build Worker");
}
if (periodicShardSyncManager != null) {
if (leaseManager == null || shardSyncer == null || metricsFactory == null || leaderDecider == null) {
throw new IllegalArgumentException("LeaseManager, ShardSyncer, MetricsFactory, and LeaderDecider must be provided if PeriodicShardSyncManager is provided");
}
}
if (shardConsumerFactory == null) {
shardConsumerFactory = new KinesisShardConsumerFactory();
}
if (recordProcessorFactory == null) {
throw new IllegalArgumentException("A Record Processor Factory needs to be provided to build Worker");
}
@ -1516,7 +1541,7 @@ public class Worker implements Runnable {
}
// We expect users to either inject both LeaseRenewer and the corresponding thread-pool, or neither of them (DEFAULT).
if (leaseRenewer == null) {
if (leaseRenewer == null) {
ExecutorService leaseRenewerThreadPool = LeaseCoordinator.getDefaultLeaseRenewalExecutorService(config.getMaxLeaseRenewalThreads());
leaseRenewer = new LeaseRenewer<>(leaseManager, config.getWorkerIdentifier(), config.getFailoverTimeMillis(), leaseRenewerThreadPool);
}
@ -1525,7 +1550,6 @@ public class Worker implements Runnable {
leaderDecider = new DeterministicShuffleShardSyncLeaderDecider(leaseManager,
Executors.newSingleThreadScheduledExecutor(), PERIODIC_SHARD_SYNC_MAX_WORKERS_DEFAULT);
}
return new Worker(config.getApplicationName(),
recordProcessorFactory,
config,
@ -1559,7 +1583,8 @@ public class Worker implements Runnable {
workerStateChangeListener,
shardSyncer,
leaderDecider,
null /* PeriodicShardSyncManager */);
periodicShardSyncManager /*PeriodicShardSyncManager*/,
shardConsumerFactory);
}
<R, T extends AwsClientBuilder<T, R>> R createClient(final T builder,


@ -22,6 +22,7 @@ import java.util.ArrayList;
import java.util.Date;
import java.util.EnumSet;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
@ -29,12 +30,14 @@ import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;
import java.util.stream.Collectors;
import com.amazonaws.services.kinesis.clientlibrary.utils.RequestUtil;
import com.amazonaws.services.kinesis.model.ShardFilter;
import com.amazonaws.util.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import com.amazonaws.arn.Arn;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClient;
@ -59,7 +62,6 @@ import com.amazonaws.services.kinesis.model.ShardIteratorType;
import com.amazonaws.services.kinesis.model.StreamStatus;
import lombok.AccessLevel;
import lombok.Data;
import lombok.Getter;
import lombok.Setter;
@ -82,8 +84,6 @@ public class KinesisProxy implements IKinesisProxyExtended {
private AmazonKinesis client;
private AWSCredentialsProvider credentialsProvider;
private ShardIterationState shardIterationState = null;
@Setter(AccessLevel.PACKAGE)
private volatile Map<String, Shard> cachedShardMap = null;
@Setter(AccessLevel.PACKAGE)
@ -95,6 +95,12 @@ public class KinesisProxy implements IKinesisProxyExtended {
private final String streamName;
/**
* Stored as a string instead of an ARN to reduce repetitive null checks when passing in the stream ARN to
* the client requests, which accept a String stream ARN parameter.
*/
private String streamArn;
private static final long DEFAULT_DESCRIBE_STREAM_BACKOFF_MILLIS = 1000L;
private static final int DEFAULT_DESCRIBE_STREAM_RETRY_TIMES = 50;
private final long describeStreamBackoffTimeInMillis;
@ -219,6 +225,8 @@ public class KinesisProxy implements IKinesisProxyExtended {
config.getListShardsBackoffTimeInMillis(),
config.getMaxListShardsRetryAttempts());
this.credentialsProvider = config.getKinesisCredentialsProvider();
Arn arn = config.getStreamArn();
this.streamArn = arn != null ? arn.toString() : null;
}
public KinesisProxy(final String streamName,
@ -254,6 +262,7 @@ public class KinesisProxy implements IKinesisProxyExtended {
final GetRecordsRequest getRecordsRequest = new GetRecordsRequest();
getRecordsRequest.setRequestCredentials(credentialsProvider.getCredentials());
getRecordsRequest.setStreamARN(streamArn);
getRecordsRequest.setShardIterator(shardIterator);
getRecordsRequest.setLimit(maxRecords);
final GetRecordsResult response = client.getRecords(getRecordsRequest);
@ -271,6 +280,7 @@ public class KinesisProxy implements IKinesisProxyExtended {
final DescribeStreamRequest describeStreamRequest = new DescribeStreamRequest();
describeStreamRequest.setRequestCredentials(credentialsProvider.getCredentials());
describeStreamRequest.setStreamName(streamName);
describeStreamRequest.setStreamARN(streamArn);
describeStreamRequest.setExclusiveStartShardId(startShardId);
DescribeStreamResult response = null;
@ -315,13 +325,13 @@ public class KinesisProxy implements IKinesisProxyExtended {
request.setRequestCredentials(credentialsProvider.getCredentials());
if (StringUtils.isEmpty(nextToken)) {
request.setStreamName(streamName);
request.setStreamARN(streamArn);
request.setShardFilter(shardFilter);
} else {
request.setNextToken(nextToken);
}
if (shardFilter != null) {
request.setShardFilter(shardFilter);
}
LOG.info("Listing shards with list shards request " + request);
ListShardsResult result = null;
LimitExceededException lastException = null;
@ -443,10 +453,8 @@ public class KinesisProxy implements IKinesisProxyExtended {
*/
@Override
public synchronized List<Shard> getShardListWithFilter(ShardFilter shardFilter) {
if (shardIterationState == null) {
shardIterationState = new ShardIterationState();
}
final List<Shard> shards = new ArrayList<>();
final List<String> requestIds = new ArrayList<>();
if (isKinesisClient) {
ListShardsResult result;
String nextToken = null;
@ -461,16 +469,18 @@ public class KinesisProxy implements IKinesisProxyExtended {
*/
return null;
} else {
shardIterationState.update(result.getShards());
shards.addAll(result.getShards());
requestIds.add(RequestUtil.requestId(result));
nextToken = result.getNextToken();
}
} while (StringUtils.isNotEmpty(result.getNextToken()));
} else {
DescribeStreamResult response;
String lastShardId = null;
do {
response = getStreamInfo(shardIterationState.getLastShardId());
response = getStreamInfo(lastShardId);
if (response == null) {
/*
@ -479,16 +489,26 @@ public class KinesisProxy implements IKinesisProxyExtended {
*/
return null;
} else {
shardIterationState.update(response.getStreamDescription().getShards());
final List<Shard> pageOfShards = response.getStreamDescription().getShards();
shards.addAll(pageOfShards);
requestIds.add(RequestUtil.requestId(response));
final Shard lastShard = pageOfShards.get(pageOfShards.size() - 1);
if (lastShardId == null || lastShardId.compareTo(lastShard.getShardId()) < 0) {
lastShardId = lastShard.getShardId();
}
}
} while (response.getStreamDescription().isHasMoreShards());
}
List<Shard> shards = shardIterationState.getShards();
this.cachedShardMap = shards.stream().collect(Collectors.toMap(Shard::getShardId, Function.identity()));
final List<Shard> dedupedShards = new ArrayList<>(new LinkedHashSet<>(shards));
if (dedupedShards.size() < shards.size()) {
LOG.warn("Found duplicate shards in response when syncing from Kinesis. " +
"Request ids - " + requestIds + ". Response - " + shards);
}
this.cachedShardMap = dedupedShards.stream().collect(Collectors.toMap(Shard::getShardId, Function.identity()));
this.lastCacheUpdateTime = Instant.now();
shardIterationState = new ShardIterationState();
return shards;
return dedupedShards;
}
/**
@ -559,6 +579,7 @@ public class KinesisProxy implements IKinesisProxyExtended {
final GetShardIteratorRequest getShardIteratorRequest = new GetShardIteratorRequest();
getShardIteratorRequest.setRequestCredentials(credentialsProvider.getCredentials());
getShardIteratorRequest.setStreamName(streamName);
getShardIteratorRequest.setStreamARN(streamArn);
getShardIteratorRequest.setShardId(shardId);
getShardIteratorRequest.setShardIteratorType(iteratorType);
getShardIteratorRequest.setStartingSequenceNumber(sequenceNumber);
@ -575,6 +596,7 @@ public class KinesisProxy implements IKinesisProxyExtended {
final GetShardIteratorRequest getShardIteratorRequest = new GetShardIteratorRequest();
getShardIteratorRequest.setRequestCredentials(credentialsProvider.getCredentials());
getShardIteratorRequest.setStreamName(streamName);
getShardIteratorRequest.setStreamARN(streamArn);
getShardIteratorRequest.setShardId(shardId);
getShardIteratorRequest.setShardIteratorType(iteratorType);
getShardIteratorRequest.setStartingSequenceNumber(null);
@ -591,6 +613,7 @@ public class KinesisProxy implements IKinesisProxyExtended {
final GetShardIteratorRequest getShardIteratorRequest = new GetShardIteratorRequest();
getShardIteratorRequest.setRequestCredentials(credentialsProvider.getCredentials());
getShardIteratorRequest.setStreamName(streamName);
getShardIteratorRequest.setStreamARN(streamArn);
getShardIteratorRequest.setShardId(shardId);
getShardIteratorRequest.setShardIteratorType(ShardIteratorType.AT_TIMESTAMP);
getShardIteratorRequest.setStartingSequenceNumber(null);
@ -610,6 +633,7 @@ public class KinesisProxy implements IKinesisProxyExtended {
final PutRecordRequest putRecordRequest = new PutRecordRequest();
putRecordRequest.setRequestCredentials(credentialsProvider.getCredentials());
putRecordRequest.setStreamName(streamName);
putRecordRequest.setStreamARN(streamArn);
putRecordRequest.setSequenceNumberForOrdering(exclusiveMinimumSequenceNumber);
putRecordRequest.setExplicitHashKey(explicitHashKey);
putRecordRequest.setPartitionKey(partitionKey);
@ -618,27 +642,4 @@ public class KinesisProxy implements IKinesisProxyExtended {
final PutRecordResult response = client.putRecord(putRecordRequest);
return response;
}
@Data
static class ShardIterationState {
private List<Shard> shards;
private String lastShardId;
public ShardIterationState() {
shards = new ArrayList<>();
}
public void update(List<Shard> shards) {
if (shards == null || shards.isEmpty()) {
return;
}
this.shards.addAll(shards);
Shard lastShard = shards.get(shards.size() - 1);
if (lastShardId == null || lastShardId.compareTo(lastShard.getShardId()) < 0) {
lastShardId = lastShard.getShardId();
}
}
}
}
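The `getShardListWithFilter` change above aggregates shards across paginated responses and then dedupes them with `new ArrayList<>(new LinkedHashSet<>(shards))`, which removes duplicates while preserving first-seen order. A minimal standalone sketch of that idiom (class and variable names here are illustrative, not from the library):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;

public class DedupDemo {
    // Same idiom as in getShardListWithFilter: LinkedHashSet drops duplicates
    // while keeping the order in which elements were first encountered.
    static List<String> dedupPreservingOrder(List<String> items) {
        return new ArrayList<>(new LinkedHashSet<>(items));
    }

    public static void main(String[] args) {
        List<String> shards = Arrays.asList("shard-0", "shard-1", "shard-0", "shard-2");
        System.out.println(dedupPreservingOrder(shards)); // [shard-0, shard-1, shard-2]
    }
}
```

Comparing the deduped size against the original size, as the diff does, is what triggers the duplicate-shard warning log.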


@ -0,0 +1,24 @@
package com.amazonaws.services.kinesis.clientlibrary.utils;
import com.amazonaws.AmazonWebServiceResult;
/**
* Helper class to extract request metadata from AWS service responses.
*/
public class RequestUtil {
private static final String DEFAULT_REQUEST_ID = "NONE";
/**
* Get the requestId associated with a request.
*
* @param result the service call result from which to extract the requestId; may be null
* @return the requestId for a request, or "NONE" if one is not available.
*/
public static String requestId(AmazonWebServiceResult result) {
if (result == null || result.getSdkResponseMetadata() == null || result.getSdkResponseMetadata().getRequestId() == null) {
return DEFAULT_REQUEST_ID;
}
return result.getSdkResponseMetadata().getRequestId();
}
}
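The new `RequestUtil.requestId` helper collapses a three-step null check (result, its metadata, its request ID) into a single fallback to `"NONE"`. A standalone sketch of that fallback, with the AWS SDK type replaced by a plain `String` so it runs without the SDK on the classpath (this simplified version is hypothetical, not the library's API):

```java
// Standalone sketch of the null-safe fallback in RequestUtil.requestId.
// The real method walks result.getSdkResponseMetadata().getRequestId();
// here that chain is collapsed to a single nullable argument.
public class RequestIdDemo {
    private static final String DEFAULT_REQUEST_ID = "NONE";

    // Returns the given request ID, or "NONE" when the metadata chain yielded null.
    static String requestId(String metadataRequestId) {
        return metadataRequestId == null ? DEFAULT_REQUEST_ID : metadataRequestId;
    }

    public static void main(String[] args) {
        System.out.println(requestId(null));      // NONE
        System.out.println(requestId("abc-123")); // abc-123
    }
}
```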


@ -22,6 +22,7 @@ import com.amazonaws.services.dynamodbv2.model.AttributeAction;
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.AttributeValueUpdate;
import com.amazonaws.services.dynamodbv2.model.ComparisonOperator;
import com.amazonaws.services.dynamodbv2.model.ExpectedAttributeValue;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
import com.amazonaws.services.kinesis.clientlibrary.types.ExtendedSequenceNumber;
@ -68,6 +69,11 @@ public class KinesisClientLeaseSerializer implements ILeaseSerializer<KinesisCli
result.put(PENDING_CHECKPOINT_SUBSEQUENCE_KEY, DynamoUtils.createAttributeValue(lease.getPendingCheckpoint().getSubSequenceNumber()));
}
if(lease.getHashKeyRange() != null) {
result.put(STARTING_HASH_KEY, DynamoUtils.createAttributeValue(lease.getHashKeyRange().serializedStartingHashKey()));
result.put(ENDING_HASH_KEY, DynamoUtils.createAttributeValue(lease.getHashKeyRange().serializedEndingHashKey()));
}
return result;
}
@ -92,6 +98,12 @@ public class KinesisClientLeaseSerializer implements ILeaseSerializer<KinesisCli
);
}
final String startingHashKey, endingHashKey;
if (!Strings.isNullOrEmpty(startingHashKey = DynamoUtils.safeGetString(dynamoRecord, STARTING_HASH_KEY))
&& !Strings.isNullOrEmpty(endingHashKey = DynamoUtils.safeGetString(dynamoRecord, ENDING_HASH_KEY))) {
result.setHashKeyRange(HashKeyRangeForLease.deserialize(startingHashKey, endingHashKey));
}
return result;
}
@ -115,6 +127,19 @@ public class KinesisClientLeaseSerializer implements ILeaseSerializer<KinesisCli
return baseSerializer.getDynamoLeaseOwnerExpectation(lease);
}
@Override
public Map<String, ExpectedAttributeValue> getDynamoLeaseCheckpointExpectation(KinesisClientLease lease) {
Map<String, ExpectedAttributeValue> result = baseSerializer.getDynamoLeaseCheckpointExpectation(lease);
ExpectedAttributeValue eav;
if (!lease.getCheckpoint().equals(ExtendedSequenceNumber.SHARD_END)) {
eav = new ExpectedAttributeValue(DynamoUtils.createAttributeValue(ExtendedSequenceNumber.SHARD_END.getSequenceNumber()));
eav.setComparisonOperator(ComparisonOperator.NE);
result.put(CHECKPOINT_SEQUENCE_NUMBER_KEY, eav);
}
return result;
}
@Override
public Map<String, ExpectedAttributeValue> getDynamoNonexistantExpectation() {
return baseSerializer.getDynamoNonexistantExpectation();
@ -163,6 +188,11 @@ public class KinesisClientLeaseSerializer implements ILeaseSerializer<KinesisCli
result.put(CHILD_SHARD_IDS_KEY, new AttributeValueUpdate(DynamoUtils.createAttributeValue(lease.getChildShardIds()), AttributeAction.PUT));
}
if(lease.getHashKeyRange() != null) {
result.put(STARTING_HASH_KEY, new AttributeValueUpdate(DynamoUtils.createAttributeValue(lease.getHashKeyRange().serializedStartingHashKey()), AttributeAction.PUT));
result.put(ENDING_HASH_KEY, new AttributeValueUpdate(DynamoUtils.createAttributeValue(lease.getHashKeyRange().serializedEndingHashKey()), AttributeAction.PUT));
}
if (lease.getPendingCheckpoint() != null && !lease.getPendingCheckpoint().getSequenceNumber().isEmpty()) {
result.put(PENDING_CHECKPOINT_SEQUENCE_KEY, new AttributeValueUpdate(DynamoUtils.createAttributeValue(lease.getPendingCheckpoint().getSequenceNumber()), AttributeAction.PUT));
result.put(PENDING_CHECKPOINT_SUBSEQUENCE_KEY, new AttributeValueUpdate(DynamoUtils.createAttributeValue(lease.getPendingCheckpoint().getSubSequenceNumber()), AttributeAction.PUT));
@ -181,7 +211,10 @@ public class KinesisClientLeaseSerializer implements ILeaseSerializer<KinesisCli
switch (updateField) {
case CHILD_SHARDS:
// TODO: Implement update fields for child shards
if (!CollectionUtils.isNullOrEmpty(lease.getChildShardIds())) {
result.put(CHILD_SHARD_IDS_KEY, new AttributeValueUpdate(DynamoUtils.createAttributeValue(
lease.getChildShardIds()), AttributeAction.PUT));
}
break;
case HASH_KEY_RANGE:
if (lease.getHashKeyRange() != null) {


@ -15,6 +15,7 @@ package com.amazonaws.services.kinesis.leases.impl;
* limitations under the License.
*/
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShardInfo;
import com.amazonaws.services.kinesis.clientlibrary.proxies.IKinesisProxy;
import com.amazonaws.services.kinesis.clientlibrary.types.ExtendedSequenceNumber;
@ -81,10 +82,8 @@ public class LeaseCleanupManager {
@Getter
private volatile boolean isRunning = false;
private static LeaseCleanupManager instance;
/**
* Factory method to return a singleton instance of {@link LeaseCleanupManager}.
* Method to return a new instance of {@link LeaseCleanupManager}.
* @param kinesisProxy
* @param leaseManager
* @param deletionThreadPool
@ -96,17 +95,13 @@ public class LeaseCleanupManager {
* @param maxRecords
* @return
*/
public static LeaseCleanupManager createOrGetInstance(IKinesisProxy kinesisProxy, ILeaseManager leaseManager,
ScheduledExecutorService deletionThreadPool, IMetricsFactory metricsFactory,
boolean cleanupLeasesUponShardCompletion, long leaseCleanupIntervalMillis,
long completedLeaseCleanupIntervalMillis, long garbageLeaseCleanupIntervalMillis,
int maxRecords) {
if (instance == null) {
instance = new LeaseCleanupManager(kinesisProxy, leaseManager, deletionThreadPool, metricsFactory, cleanupLeasesUponShardCompletion,
leaseCleanupIntervalMillis, completedLeaseCleanupIntervalMillis, garbageLeaseCleanupIntervalMillis, maxRecords);
}
return instance;
public static LeaseCleanupManager newInstance(IKinesisProxy kinesisProxy, ILeaseManager leaseManager,
ScheduledExecutorService deletionThreadPool, IMetricsFactory metricsFactory,
boolean cleanupLeasesUponShardCompletion, long leaseCleanupIntervalMillis,
long completedLeaseCleanupIntervalMillis, long garbageLeaseCleanupIntervalMillis,
int maxRecords) {
return new LeaseCleanupManager(kinesisProxy, leaseManager, deletionThreadPool, metricsFactory, cleanupLeasesUponShardCompletion,
leaseCleanupIntervalMillis, completedLeaseCleanupIntervalMillis, garbageLeaseCleanupIntervalMillis, maxRecords);
}
/**
@ -126,6 +121,23 @@ public class LeaseCleanupManager {
}
}
/**
* Stops the lease cleanup thread, which is scheduled periodically as specified by
* {@link LeaseCleanupManager#leaseCleanupIntervalMillis}.
*/
public void shutdown() {
if (isRunning) {
LOG.info("Stopping the lease cleanup thread.");
completedLeaseStopwatch.stop();
garbageLeaseStopwatch.stop();
deletionThreadPool.shutdown();
isRunning = false;
} else {
LOG.info("Lease cleanup thread already stopped.");
}
}
/**
* Enqueues a lease for deletion without checking for duplicate entries. Use {@link #isEnqueuedForDeletion}
* to check for duplicate entries.
@ -181,6 +193,7 @@ public class LeaseCleanupManager {
boolean alreadyCheckedForGarbageCollection = false;
boolean wereChildShardsPresent = false;
boolean wasResourceNotFound = false;
String cleanupFailureReason = "";
try {
if (cleanupLeasesUponShardCompletion && timeToCheckForCompletedShard) {
@ -189,49 +202,57 @@ public class LeaseCleanupManager {
Set<String> childShardKeys = leaseFromDDB.getChildShardIds();
if (CollectionUtils.isNullOrEmpty(childShardKeys)) {
try {
// throws ResourceNotFoundException
childShardKeys = getChildShardsFromService(shardInfo);
if (CollectionUtils.isNullOrEmpty(childShardKeys)) {
LOG.error("No child shards returned from service for shard " + shardInfo.getShardId());
// If no child shards are found in DDB or from the service, do not delete the lease
throw new InvalidStateException("No child shards found for this supposedly " +
"closed shard in both local DDB and in service " + shardInfo.getShardId());
} else {
wereChildShardsPresent = true;
updateLeaseWithChildShards(leasePendingDeletion, childShardKeys);
}
} catch (ResourceNotFoundException e) {
throw e;
} finally {
// We rely on resource presence in the service for garbage collection. Since we already
// made a call to getChildShardsFromService, we now know whether the resource
// is present or not. In the latter case, we would throw ResourceNotFoundException, which is
// handled in the catch block.
alreadyCheckedForGarbageCollection = true;
}
} else {
wereChildShardsPresent = true;
}
try {
cleanedUpCompletedLease = cleanupLeaseForCompletedShard(lease, childShardKeys);
final CompletedShardResult completedShardResult = cleanupLeaseForCompletedShard(lease, childShardKeys);
cleanedUpCompletedLease = completedShardResult.cleanedUp();
cleanupFailureReason = completedShardResult.failureMsg();
} catch (Exception e) {
// Suppressing the exception here, so that we can attempt for garbage cleanup.
LOG.warn("Unable to cleanup lease for shard " + shardInfo.getShardId());
LOG.warn("Unable to cleanup lease for shard " + shardInfo.getShardId() + " due to " + e.getMessage());
}
} else {
LOG.info("Lease not present in lease table while cleaning the shard " + shardInfo.getShardId());
cleanedUpCompletedLease = true;
}
} else {
cleanupFailureReason = "Configuration/Interval condition not satisfied to execute lease cleanup this cycle";
}
if (!alreadyCheckedForGarbageCollection && timeToCheckForGarbageShard) {
try {
wereChildShardsPresent = !CollectionUtils
if (!cleanedUpCompletedLease && !alreadyCheckedForGarbageCollection && timeToCheckForGarbageShard) {
// throws ResourceNotFoundException
wereChildShardsPresent = !CollectionUtils
.isNullOrEmpty(getChildShardsFromService(shardInfo));
} catch (ResourceNotFoundException e) {
throw e;
}
}
} catch (ResourceNotFoundException e) {
wasResourceNotFound = true;
cleanedUpGarbageLease = cleanupLeaseForGarbageShard(lease);
cleanupFailureReason = cleanedUpGarbageLease ? "" : "DDB Lease Deletion Failed";
} catch (Exception e) {
LOG.warn("Unable to cleanup lease for shard " + shardInfo.getShardId() + " : " + e.getMessage());
cleanupFailureReason = e.getMessage();
}
return new LeaseCleanupResult(cleanedUpCompletedLease, cleanedUpGarbageLease, wereChildShardsPresent,
wasResourceNotFound);
wasResourceNotFound, cleanupFailureReason);
}
private Set<String> getChildShardsFromService(ShardInfo shardInfo) {
@ -239,12 +260,16 @@ public class LeaseCleanupManager {
return kinesisProxy.get(iterator, maxRecords).getChildShards().stream().map(c -> c.getShardId()).collect(Collectors.toSet());
}
// A lease whose shard no longer exists in the stream is safe to delete (this is known explicitly from
// ResourceNotFoundException being thrown when processing this shard).
private boolean cleanupLeaseForGarbageShard(KinesisClientLease lease) throws DependencyException, ProvisionedThroughputException, InvalidStateException {
LOG.info("Deleting lease " + lease.getLeaseKey() + " as it is not present in the stream.");
leaseManager.deleteLease(lease);
try {
leaseManager.deleteLease(lease);
} catch (Exception e) {
LOG.warn("Lease deletion failed for " + lease.getLeaseKey() + " due to " + e.getMessage());
return false;
}
return true;
}
@ -264,8 +289,9 @@ public class LeaseCleanupManager {
// We should only be deleting the current shard's lease if
// 1. All of its children are currently being processed, i.e. their checkpoint is not TRIM_HORIZON or AT_TIMESTAMP.
// 2. Its parent shard lease(s) have already been deleted.
private boolean cleanupLeaseForCompletedShard(KinesisClientLease lease, Set<String> childShardLeaseKeys)
private CompletedShardResult cleanupLeaseForCompletedShard(KinesisClientLease lease, Set<String> childShardLeaseKeys)
throws DependencyException, ProvisionedThroughputException, InvalidStateException, IllegalStateException {
final Set<String> processedChildShardLeaseKeys = new HashSet<>();
for (String childShardLeaseKey : childShardLeaseKeys) {
@ -281,14 +307,17 @@ public class LeaseCleanupManager {
}
}
if (!allParentShardLeasesDeleted(lease) || !Objects.equals(childShardLeaseKeys, processedChildShardLeaseKeys)) {
return false;
boolean parentShardsDeleted = allParentShardLeasesDeleted(lease);
boolean childrenStartedProcessing = Objects.equals(childShardLeaseKeys, processedChildShardLeaseKeys);
if (!parentShardsDeleted || !childrenStartedProcessing) {
return new CompletedShardResult(false, !parentShardsDeleted ? "Parent shard(s) not deleted yet" : "Child shard(s) yet to begin processing");
}
LOG.info("Deleting lease " + lease.getLeaseKey() + " as it has been completely processed and processing of child shard(s) has begun.");
leaseManager.deleteLease(lease);
return true;
return new CompletedShardResult(true, "");
}
private void updateLeaseWithChildShards(LeasePendingDeletion leasePendingDeletion, Set<String> childShardKeys)
@ -296,7 +325,7 @@ public class LeaseCleanupManager {
final KinesisClientLease updatedLease = leasePendingDeletion.lease();
updatedLease.setChildShardIds(childShardKeys);
leaseManager.updateLease(updatedLease);
leaseManager.updateLeaseWithMetaInfo(updatedLease, UpdateField.CHILD_SHARDS);
}
@VisibleForTesting
@ -364,9 +393,17 @@ public class LeaseCleanupManager {
boolean cleanedUpGarbageLease;
boolean wereChildShardsPresent;
boolean wasResourceNotFound;
String cleanupFailureReason;
public boolean leaseCleanedUp() {
return cleanedUpCompletedLease | cleanedUpGarbageLease;
}
}
}
@Value
@Accessors(fluent = true)
private static class CompletedShardResult {
boolean cleanedUp;
String failureMsg;
}
}
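The diff above replaces `cleanupLeaseForCompletedShard`'s plain boolean return with a small value object that also carries a failure reason, so `cleanupLease` can surface *why* a lease was not cleaned up. A plain-Java sketch of that pattern (the real `CompletedShardResult` is generated by Lombok's `@Value` with `@Accessors(fluent = true)`; the class and method names below are illustrative):

```java
// Plain-Java stand-in for the Lombok-generated CompletedShardResult.
final class ShardCleanupOutcome {
    private final boolean cleanedUp;
    private final String failureMsg;

    ShardCleanupOutcome(boolean cleanedUp, String failureMsg) {
        this.cleanedUp = cleanedUp;
        this.failureMsg = failureMsg;
    }

    // Fluent accessors, as produced by @Accessors(fluent = true).
    boolean cleanedUp() { return cleanedUp; }
    String failureMsg() { return failureMsg; }
}

public class CleanupResultDemo {
    // Mirrors the guard in cleanupLeaseForCompletedShard: cleanup proceeds only when
    // parent leases are gone AND all child shards have started processing; otherwise
    // the outcome records which precondition failed.
    static ShardCleanupOutcome evaluate(boolean parentShardsDeleted, boolean childrenStartedProcessing) {
        if (!parentShardsDeleted || !childrenStartedProcessing) {
            return new ShardCleanupOutcome(false,
                    !parentShardsDeleted ? "Parent shard(s) not deleted yet"
                                         : "Child shard(s) yet to begin processing");
        }
        return new ShardCleanupOutcome(true, "");
    }

    public static void main(String[] args) {
        System.out.println(evaluate(false, true).failureMsg()); // Parent shard(s) not deleted yet
        System.out.println(evaluate(true, true).cleanedUp());   // true
    }
}
```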


@ -20,6 +20,7 @@ import java.util.Map;
import java.util.concurrent.TimeUnit;
import com.amazonaws.services.dynamodbv2.model.BillingMode;
import com.amazonaws.services.dynamodbv2.model.ExpectedAttributeValue;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration;
import com.amazonaws.services.kinesis.leases.util.DynamoUtils;
import org.apache.commons.logging.Log;
@@ -582,7 +583,9 @@ public class LeaseManager<T extends Lease> implements ILeaseManager<T> {
UpdateItemRequest request = new UpdateItemRequest();
request.setTableName(table);
request.setKey(serializer.getDynamoHashKey(lease));
request.setExpected(serializer.getDynamoLeaseCounterExpectation(lease));
Map<String, ExpectedAttributeValue> expectations = serializer.getDynamoLeaseCounterExpectation(lease);
expectations.putAll(serializer.getDynamoLeaseCheckpointExpectation(lease));
request.setExpected(expectations);
Map<String, AttributeValueUpdate> updates = serializer.getDynamoLeaseCounterUpdate(lease);
updates.putAll(serializer.getDynamoUpdateLeaseUpdate(lease));
@@ -624,7 +627,6 @@ public class LeaseManager<T extends Lease> implements ILeaseManager<T> {
request.setExpected(serializer.getDynamoExistentExpectation(lease.getLeaseKey()));
Map<String, AttributeValueUpdate> updates = serializer.getDynamoUpdateLeaseUpdate(lease, updateField);
updates.putAll(serializer.getDynamoUpdateLeaseUpdate(lease));
request.setAttributeUpdates(updates);
try {


@@ -127,6 +127,11 @@ public class LeaseSerializer implements ILeaseSerializer<Lease> {
return result;
}
@Override
public Map<String, ExpectedAttributeValue> getDynamoLeaseCheckpointExpectation(final Lease lease) {
return new HashMap<>();
}
@Override
public Map<String, ExpectedAttributeValue> getDynamoNonexistantExpectation() {
Map<String, ExpectedAttributeValue> result = new HashMap<String, ExpectedAttributeValue>();


@@ -74,6 +74,14 @@ public interface ILeaseSerializer<T extends Lease> {
*/
public Map<String, ExpectedAttributeValue> getDynamoLeaseOwnerExpectation(T lease);
/**
* @param lease
* @return the attribute value map asserting that the checkpoint state is as expected.
*/
default Map<String, ExpectedAttributeValue> getDynamoLeaseCheckpointExpectation(T lease) {
throw new UnsupportedOperationException("DynamoLeaseCheckpointExpectation is not implemented");
}
/**
* @return the attribute value map asserting that a lease does not exist.
*/


@@ -14,8 +14,8 @@
*/
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
import static com.amazonaws.services.kinesis.clientlibrary.lib.worker.ConsumerStates.ConsumerState;
import static com.amazonaws.services.kinesis.clientlibrary.lib.worker.ConsumerStates.ShardConsumerState;
import static com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisConsumerStates.ConsumerState;
import static com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisConsumerStates.ShardConsumerState;
import static org.hamcrest.CoreMatchers.equalTo;
import static org.hamcrest.CoreMatchers.nullValue;
import static org.hamcrest.MatcherAssert.assertThat;
@@ -50,7 +50,7 @@ import com.amazonaws.services.kinesis.leases.interfaces.ILeaseManager;
public class ConsumerStatesTest {
@Mock
private ShardConsumer consumer;
private KinesisShardConsumer consumer;
@Mock
private StreamConfig streamConfig;
@Mock
@@ -251,9 +251,9 @@ public class ConsumerStatesTest {
equalTo((IRecordProcessorCheckpointer) recordProcessorCheckpointer)));
assertThat(task, shutdownReqTask(ShutdownNotification.class, "shutdownNotification", equalTo(shutdownNotification)));
assertThat(state.successTransition(), equalTo(ConsumerStates.SHUTDOWN_REQUEST_COMPLETION_STATE));
assertThat(state.successTransition(), equalTo(KinesisConsumerStates.SHUTDOWN_REQUEST_COMPLETION_STATE));
assertThat(state.shutdownTransition(ShutdownReason.REQUESTED),
equalTo(ConsumerStates.SHUTDOWN_REQUEST_COMPLETION_STATE));
equalTo(KinesisConsumerStates.SHUTDOWN_REQUEST_COMPLETION_STATE));
assertThat(state.shutdownTransition(ShutdownReason.ZOMBIE),
equalTo(ShardConsumerState.SHUTTING_DOWN.getConsumerState()));
assertThat(state.shutdownTransition(ShutdownReason.TERMINATE),
@@ -266,7 +266,7 @@ public class ConsumerStatesTest {
@Test
public void shutdownRequestCompleteStateTest() {
ConsumerState state = ConsumerStates.SHUTDOWN_REQUEST_COMPLETION_STATE;
ConsumerState state = KinesisConsumerStates.SHUTDOWN_REQUEST_COMPLETION_STATE;
assertThat(state.createTask(consumer), nullValue());
@@ -345,9 +345,9 @@ public class ConsumerStatesTest {
verify(shutdownNotification, never()).shutdownComplete();
}
static <ValueType> ReflectionPropertyMatcher<ShutdownTask, ValueType> shutdownTask(Class<ValueType> valueTypeClass,
static <ValueType> ReflectionPropertyMatcher<KinesisShutdownTask, ValueType> shutdownTask(Class<ValueType> valueTypeClass,
String propertyName, Matcher<ValueType> matcher) {
return taskWith(ShutdownTask.class, valueTypeClass, propertyName, matcher);
return taskWith(KinesisShutdownTask.class, valueTypeClass, propertyName, matcher);
}
static <ValueType> ReflectionPropertyMatcher<ShutdownNotificationTask, ValueType> shutdownReqTask(


@@ -49,7 +49,7 @@ public class GracefulShutdownCoordinatorTest {
@Mock
private Callable<GracefulShutdownContext> contextCallable;
@Mock
private ConcurrentMap<ShardInfo, ShardConsumer> shardInfoConsumerMap;
private ConcurrentMap<ShardInfo, IShardConsumer> shardInfoConsumerMap;
@Test
public void testAllShutdownCompletedAlready() throws Exception {


@@ -20,13 +20,16 @@ import static org.junit.Assert.assertTrue;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.fail;
import java.util.Arrays;
import java.util.Date;
import java.util.List;
import com.amazonaws.services.dynamodbv2.model.BillingMode;
import org.junit.Test;
import org.mockito.Mockito;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.arn.Arn;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.RegionUtils;
@@ -34,6 +37,7 @@ import com.amazonaws.services.cloudwatch.AmazonCloudWatchClient;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.kinesis.AmazonKinesisClient;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorFactory;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.SimpleRecordsFetcherFactory;
import com.amazonaws.services.kinesis.metrics.interfaces.MetricsLevel;
import com.google.common.collect.ImmutableSet;
@@ -46,8 +50,28 @@ public class KinesisClientLibConfigurationTest {
private static final long TEST_VALUE_LONG = 1000L;
private static final int TEST_VALUE_INT = 1000;
private static final int PARAMETER_COUNT = 6;
private static final String ACCOUNT_ID = "123456789012";
private static final String TEST_STRING = "TestString";
private static final Arn TEST_ARN = Arn.builder()
.withPartition("aws")
.withService("kinesis")
.withRegion("us-east-1")
.withAccountId(ACCOUNT_ID)
.withResource("stream/" + TEST_STRING)
.build();
/**
* Invalid streamArn due to an invalid service. This is a sample used for testing.
* @see KinesisClientLibConfigurationTest#testWithInvalidStreamArnsThrowsException() for more examples
*/
private static final Arn INVALID_TEST_ARN = Arn.builder()
.withPartition("aws")
.withService("dynamodb")
.withRegion("us-east-1")
.withAccountId(ACCOUNT_ID)
.withResource("stream/" + TEST_STRING)
.build();
private static final String ALTER_STRING = "AlterString";
// We don't want any of these tests to run checkpoint validation
@@ -62,32 +86,11 @@ public class KinesisClientLibConfigurationTest {
new KinesisClientLibConfiguration(TEST_STRING, TEST_STRING, null, TEST_STRING);
// Test constructor with all valid arguments.
config =
new KinesisClientLibConfiguration(TEST_STRING,
TEST_STRING,
TEST_STRING,
TEST_STRING,
InitialPositionInStream.LATEST,
null,
null,
null,
TEST_VALUE_LONG,
TEST_STRING,
TEST_VALUE_INT,
TEST_VALUE_LONG,
false,
TEST_VALUE_LONG,
TEST_VALUE_LONG,
true,
new ClientConfiguration(),
new ClientConfiguration(),
new ClientConfiguration(),
TEST_VALUE_LONG,
TEST_VALUE_LONG,
TEST_VALUE_INT,
skipCheckpointValidationValue,
null,
TEST_VALUE_LONG, BillingMode.PROVISIONED);
config = buildKinesisClientLibConfiguration(TEST_STRING);
// Test constructor with streamArn with all valid arguments.
config = buildKinesisClientLibConfiguration(TEST_ARN);
Assert.assertEquals(config.getStreamName(), TEST_STRING);
}
@Test
@@ -168,6 +171,12 @@ public class KinesisClientLibConfigurationTest {
}
intValues[i] = TEST_VALUE_INT;
}
// Test constructor with invalid streamArn
try {
config = buildKinesisClientLibConfiguration(INVALID_TEST_ARN);
} catch(IllegalArgumentException e) {
System.out.println(e.getMessage());
}
Assert.assertTrue("KCLConfiguration should return null when using negative arguments", config == null);
}
@@ -370,4 +379,111 @@ public class KinesisClientLibConfigurationTest {
config = config.withIgnoreUnexpectedChildShards(true);
assertTrue(config.shouldIgnoreUnexpectedChildShards());
}
@Test
public void testWithValidStreamArnsSucceed() {
List<String> validArnList = Arrays.asList(
"arn:aws:kinesis:us-east-1:123456789012:stream/123stream-name123",
"arn:aws-china:kinesis:us-east-2:123456789012:stream/stream-name",
"arn:aws-us-gov:kinesis:us-east-2:123456789012:stream/stream-name"
);
KinesisClientLibConfiguration config =
new KinesisClientLibConfiguration("TestApplication", "TestStream", null, "TestWorker");
for (final String arn : validArnList) {
config.withStreamArn(Arn.fromString(arn));
}
}
@Test
public void testWithInvalidStreamArnsThrowsException() {
List<String> invalidArnList = Arrays.asList(
"arn:abc:kinesis:us-east-1:123456789012:stream/stream-name", //invalid partition
"arn:aws:dynamnodb:us-east-1:123456789012:stream/stream-name", // Kinesis ARN, but with a non-Kinesis service
"arn:aws:kinesis::123456789012:stream/stream-name", // missing region
"arn:aws:kinesis:us-east-1::stream/stream-name", // missing account id
"arn:aws:kinesis:us-east-1:123456789:stream/stream-name", // account id not 12 digits
"arn:aws:kinesis:us-east-1:123456789abc:stream/stream-name", // 12char alphanumeric account id
"arn:aws:kinesis:us-east-1:123456789012:table/table-name", // incorrect resource type
"arn:aws:dynamodb:us-east-1:123456789012:table/myDynamoDBTable" // valid arn but not a stream
);
KinesisClientLibConfiguration config =
new KinesisClientLibConfiguration("TestApplication", "TestStream", null, "TestWorker");
for (final String arnString : invalidArnList) {
Arn arn = Arn.fromString(arnString);
try {
config.withStreamArn(arn);
fail("Arn " + arn + " should have thrown an IllegalArgumentException");
} catch (IllegalArgumentException e) {
// expected
}
}
}
private KinesisClientLibConfiguration buildKinesisClientLibConfiguration(Arn streamArn) {
return new KinesisClientLibConfiguration(TEST_STRING,
streamArn,
TEST_STRING,
TEST_STRING,
InitialPositionInStream.LATEST,
null,
null,
null,
TEST_VALUE_LONG,
TEST_STRING,
TEST_VALUE_INT,
TEST_VALUE_LONG,
false,
TEST_VALUE_LONG,
TEST_VALUE_LONG,
true,
new ClientConfiguration(),
new ClientConfiguration(),
new ClientConfiguration(),
TEST_VALUE_LONG,
TEST_VALUE_LONG,
TEST_VALUE_INT,
skipCheckpointValidationValue,
null,
TEST_VALUE_LONG, BillingMode.PROVISIONED,
new SimpleRecordsFetcherFactory(),
TEST_VALUE_LONG,
TEST_VALUE_LONG,
TEST_VALUE_LONG);
}
private KinesisClientLibConfiguration buildKinesisClientLibConfiguration(String streamName) {
return new KinesisClientLibConfiguration(TEST_STRING,
streamName,
TEST_STRING,
TEST_STRING,
InitialPositionInStream.LATEST,
null,
null,
null,
TEST_VALUE_LONG,
TEST_STRING,
TEST_VALUE_INT,
TEST_VALUE_LONG,
false,
TEST_VALUE_LONG,
TEST_VALUE_LONG,
true,
new ClientConfiguration(),
new ClientConfiguration(),
new ClientConfiguration(),
TEST_VALUE_LONG,
TEST_VALUE_LONG,
TEST_VALUE_INT,
skipCheckpointValidationValue,
null,
TEST_VALUE_LONG, BillingMode.PROVISIONED,
new SimpleRecordsFetcherFactory(),
TEST_VALUE_LONG,
TEST_VALUE_LONG,
TEST_VALUE_LONG);
}
}


@@ -79,7 +79,7 @@ public class KinesisClientLibLeaseCoordinatorTest {
leaseCoordinator.initialize();
}
@Test(expected = KinesisClientLibIOException.class)
@Test(expected = com.amazonaws.services.kinesis.clientlibrary.exceptions.InvalidStateException.class)
public void testGetCheckpointObjectWithNoLease()
throws DependencyException, ProvisionedThroughputException, IllegalStateException, InvalidStateException,
KinesisClientLibException {


@@ -50,6 +50,7 @@ public class PeriodicShardSyncManagerTest {
private static final String WORKER_ID = "workerId";
public static final long LEASES_RECOVERY_AUDITOR_EXECUTION_FREQUENCY_MILLIS = 2 * 60 * 1000L;
public static final int LEASES_RECOVERY_AUDITOR_INCONSISTENCY_CONFIDENCE_THRESHOLD = 3;
private static final int MAX_DEPTH_WITH_IN_PROGRESS_PARENTS = 1;
/** Manager for PERIODIC shard sync strategy */
private PeriodicShardSyncManager periodicShardSyncManager;
@@ -475,7 +476,7 @@ public class PeriodicShardSyncManagerTest {
for (int i = 0; i < 1000; i++) {
int maxInitialLeaseCount = 100;
List<KinesisClientLease> leases = generateInitialLeases(maxInitialLeaseCount);
reshard(leases, 5, ReshardType.MERGE, maxInitialLeaseCount, true);
reshard(leases, MAX_DEPTH_WITH_IN_PROGRESS_PARENTS, ReshardType.MERGE, maxInitialLeaseCount, true);
Collections.shuffle(leases);
Assert.assertFalse(periodicShardSyncManager.hasHoleInLeases(leases).isPresent());
Assert.assertFalse(auditorPeriodicShardSyncManager.hasHoleInLeases(leases).isPresent());
@@ -487,7 +488,7 @@ public class PeriodicShardSyncManagerTest {
for (int i = 0; i < 1000; i++) {
int maxInitialLeaseCount = 100;
List<KinesisClientLease> leases = generateInitialLeases(maxInitialLeaseCount);
reshard(leases, 5, ReshardType.ANY, maxInitialLeaseCount, true);
reshard(leases, MAX_DEPTH_WITH_IN_PROGRESS_PARENTS, ReshardType.ANY, maxInitialLeaseCount, true);
Collections.shuffle(leases);
Assert.assertFalse(periodicShardSyncManager.hasHoleInLeases(leases).isPresent());
Assert.assertFalse(auditorPeriodicShardSyncManager.hasHoleInLeases(leases).isPresent());


@@ -54,8 +54,6 @@ import java.util.concurrent.Future;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.TimeUnit;
import com.amazonaws.services.kinesis.leases.impl.LeaseCleanupManager;
import com.amazonaws.services.kinesis.leases.impl.LeaseManager;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.hamcrest.Description;
@@ -64,7 +62,6 @@ import org.hamcrest.TypeSafeMatcher;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;
@@ -88,7 +85,7 @@ import com.amazonaws.services.kinesis.model.Shard;
import com.amazonaws.services.kinesis.model.ShardIteratorType;
/**
* Unit tests of {@link ShardConsumer}.
* Unit tests of {@link KinesisShardConsumer}.
*/
@RunWith(MockitoJUnitRunner.class)
public class ShardConsumerTest {
@@ -163,8 +160,8 @@ public class ShardConsumerTest {
callProcessRecordsForEmptyRecordList,
skipCheckpointValidationValue, INITIAL_POSITION_LATEST);
ShardConsumer consumer =
new ShardConsumer(shardInfo,
KinesisShardConsumer consumer =
new KinesisShardConsumer(shardInfo,
streamConfig,
checkpoint,
processor,
@@ -178,19 +175,19 @@ public class ShardConsumerTest {
config,
shardSyncer,
shardSyncStrategy);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
consumer.consumeShard(); // initialize
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
consumer.consumeShard(); // initialize
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.INITIALIZING)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.INITIALIZING)));
consumer.consumeShard(); // initialize
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.INITIALIZING)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.INITIALIZING)));
consumer.consumeShard(); // initialize
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.INITIALIZING)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.INITIALIZING)));
}
/**
@@ -213,8 +210,8 @@ public class ShardConsumerTest {
callProcessRecordsForEmptyRecordList,
skipCheckpointValidationValue, INITIAL_POSITION_LATEST);
ShardConsumer consumer =
new ShardConsumer(shardInfo,
KinesisShardConsumer consumer =
new KinesisShardConsumer(shardInfo,
streamConfig,
checkpoint,
processor,
@@ -229,23 +226,69 @@ public class ShardConsumerTest {
shardSyncer,
shardSyncStrategy);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
consumer.consumeShard(); // initialize
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
doThrow(new RejectedExecutionException()).when(spyExecutorService).submit(any(InitializeTask.class));
consumer.consumeShard(); // initialize
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.INITIALIZING)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.INITIALIZING)));
consumer.consumeShard(); // initialize
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.INITIALIZING)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.INITIALIZING)));
consumer.consumeShard(); // initialize
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.INITIALIZING)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.INITIALIZING)));
}
@Test
public void testInitializationStateTransitionsToShutdownOnLeaseNotFound() throws Exception {
ShardInfo shardInfo = new ShardInfo("s-0-0", "testToken", null, ExtendedSequenceNumber.TRIM_HORIZON);
ICheckpoint checkpoint = new KinesisClientLibLeaseCoordinator(leaseManager, "", 0, 0);
when(leaseManager.getLease(anyString())).thenReturn(null);
when(leaseCoordinator.getLeaseManager()).thenReturn(leaseManager);
StreamConfig streamConfig =
new StreamConfig(streamProxy,
1,
10,
callProcessRecordsForEmptyRecordList,
skipCheckpointValidationValue, INITIAL_POSITION_LATEST);
KinesisShardConsumer consumer =
new KinesisShardConsumer(shardInfo,
streamConfig,
checkpoint,
processor,
leaseCoordinator,
parentShardPollIntervalMillis,
cleanupLeasesOfCompletedShards,
executorService,
metricsFactory,
taskBackoffTimeMillis,
KinesisClientLibConfiguration.DEFAULT_SKIP_SHARD_SYNC_AT_STARTUP_IF_LEASES_EXIST,
config,
shardSyncer,
shardSyncStrategy);
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
consumer.consumeShard();
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
consumer.consumeShard();
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.INITIALIZING)));
consumer.consumeShard();
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.SHUTTING_DOWN)));
consumer.consumeShard();
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.SHUTDOWN_COMPLETE)));
}
@SuppressWarnings("unchecked")
@Test
public final void testRecordProcessorThrowable() throws Exception {
@@ -257,8 +300,8 @@ public class ShardConsumerTest {
callProcessRecordsForEmptyRecordList,
skipCheckpointValidationValue, INITIAL_POSITION_LATEST);
ShardConsumer consumer =
new ShardConsumer(shardInfo,
KinesisShardConsumer consumer =
new KinesisShardConsumer(shardInfo,
streamConfig,
checkpoint,
processor,
@@ -281,10 +324,10 @@ public class ShardConsumerTest {
when(checkpoint.getCheckpointObject(anyString())).thenReturn(
new Checkpoint(checkpointSequenceNumber, pendingCheckpointSequenceNumber));
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
consumer.consumeShard(); // submit BlockOnParentShardTask
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
verify(processor, times(0)).initialize(any(InitializationInput.class));
// Throw Error when IRecordProcessor.initialize() is invoked.
@@ -292,7 +335,7 @@ public class ShardConsumerTest {
consumer.consumeShard(); // submit InitializeTask
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.INITIALIZING)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.INITIALIZING)));
verify(processor, times(1)).initialize(argThat(
initializationInputMatcher(checkpointSequenceNumber, pendingCheckpointSequenceNumber)));
@@ -304,7 +347,7 @@ public class ShardConsumerTest {
assertThat(e.getCause(), instanceOf(ExecutionException.class));
}
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.INITIALIZING)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.INITIALIZING)));
verify(processor, times(1)).initialize(argThat(
initializationInputMatcher(checkpointSequenceNumber, pendingCheckpointSequenceNumber)));
@@ -312,7 +355,7 @@ public class ShardConsumerTest {
consumer.consumeShard(); // submit InitializeTask again.
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.INITIALIZING)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.INITIALIZING)));
verify(processor, times(2)).initialize(argThat(
initializationInputMatcher(checkpointSequenceNumber, pendingCheckpointSequenceNumber)));
verify(processor, times(2)).initialize(any(InitializationInput.class)); // no other calls with different args
@@ -320,11 +363,11 @@ public class ShardConsumerTest {
// Checking the status of submitted InitializeTask from above should pass.
consumer.consumeShard();
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.PROCESSING)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.PROCESSING)));
}
/**
* Test method for {@link ShardConsumer#consumeShard()}
* Test method for {@link KinesisShardConsumer#consumeShard()}
*/
@Test
public final void testConsumeShard() throws Exception {
@@ -377,8 +420,8 @@ public class ShardConsumerTest {
any(IMetricsFactory.class), anyInt()))
.thenReturn(getRecordsCache);
ShardConsumer consumer =
new ShardConsumer(shardInfo,
KinesisShardConsumer consumer =
new KinesisShardConsumer(shardInfo,
streamConfig,
checkpoint,
processor,
@@ -397,11 +440,11 @@ public class ShardConsumerTest {
shardSyncer,
shardSyncStrategy);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
consumer.consumeShard(); // check on parent shards
Thread.sleep(50L);
consumer.consumeShard(); // start initialization
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.INITIALIZING)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.INITIALIZING)));
consumer.consumeShard(); // initialize
processor.getInitializeLatch().await(5, TimeUnit.SECONDS);
verify(getRecordsCache).start();
@@ -411,7 +454,7 @@ public class ShardConsumerTest {
boolean newTaskSubmitted = consumer.consumeShard();
if (newTaskSubmitted) {
LOG.debug("New processing task was submitted, call # " + i);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.PROCESSING)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.PROCESSING)));
// CHECKSTYLE:IGNORE ModifiedControlVariable FOR NEXT 1 LINES
i += maxRecords;
}
@@ -426,21 +469,21 @@ public class ShardConsumerTest {
assertThat(processor.getNotifyShutdownLatch().await(1, TimeUnit.SECONDS), is(true));
Thread.sleep(50);
assertThat(consumer.getShutdownReason(), equalTo(ShutdownReason.REQUESTED));
assertThat(consumer.getCurrentState(), equalTo(ConsumerStates.ShardConsumerState.SHUTDOWN_REQUESTED));
assertThat(consumer.getCurrentState(), equalTo(KinesisConsumerStates.ShardConsumerState.SHUTDOWN_REQUESTED));
verify(shutdownNotification).shutdownNotificationComplete();
assertThat(processor.isShutdownNotificationCalled(), equalTo(true));
consumer.consumeShard();
Thread.sleep(50);
assertThat(consumer.getCurrentState(), equalTo(ConsumerStates.ShardConsumerState.SHUTDOWN_REQUESTED));
assertThat(consumer.getCurrentState(), equalTo(KinesisConsumerStates.ShardConsumerState.SHUTDOWN_REQUESTED));
consumer.beginShutdown();
Thread.sleep(50L);
assertThat(consumer.getShutdownReason(), equalTo(ShutdownReason.ZOMBIE));
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.SHUTTING_DOWN)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.SHUTTING_DOWN)));
consumer.beginShutdown();
consumer.consumeShard();
verify(shutdownNotification, atLeastOnce()).shutdownComplete();
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.SHUTDOWN_COMPLETE)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.SHUTDOWN_COMPLETE)));
assertThat(processor.getShutdownReason(), is(equalTo(ShutdownReason.ZOMBIE)));
verify(getRecordsCache).shutdown();
@@ -481,8 +524,8 @@ public class ShardConsumerTest {
when(recordProcessorCheckpointer.getLastCheckpointValue()).thenReturn(ExtendedSequenceNumber.SHARD_END);
when(streamConfig.getStreamProxy()).thenReturn(streamProxy);
final ShardConsumer consumer =
new ShardConsumer(shardInfo,
final KinesisShardConsumer consumer =
new KinesisShardConsumer(shardInfo,
streamConfig,
checkpoint,
processor,
@@ -501,21 +544,21 @@ public class ShardConsumerTest {
shardSyncer,
shardSyncStrategy);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
verify(parentLease, times(0)).getCheckpoint();
consumer.consumeShard(); // check on parent shards
Thread.sleep(parentShardPollIntervalMillis * 2);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
verify(parentLease, times(1)).getCheckpoint();
consumer.notifyShutdownRequested(shutdownNotification);
verify(shutdownNotification, times(0)).shutdownComplete();
assertThat(consumer.getShutdownReason(), equalTo(ShutdownReason.REQUESTED));
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
consumer.consumeShard();
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.SHUTTING_DOWN)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.SHUTTING_DOWN)));
Thread.sleep(50L);
consumer.beginShutdown();
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.SHUTDOWN_COMPLETE)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.SHUTDOWN_COMPLETE)));
assertThat(consumer.isShutdown(), is(true));
verify(shutdownNotification, times(1)).shutdownComplete();
consumer.beginShutdown();
@@ -540,7 +583,7 @@ public class ShardConsumerTest {
}
/**
* Test method for {@link ShardConsumer#consumeShard()} that ensures a transient error thrown from the record
* Test method for {@link KinesisShardConsumer#consumeShard()} that ensures a transient error thrown from the record
* processor's shutdown method with reason zombie will be retried.
*/
@Test
@@ -587,7 +630,9 @@ public class ShardConsumerTest {
parentShardIds.add("parentShardId");
KinesisClientLease currentLease = createLease(streamShardId, "leaseOwner", parentShardIds);
currentLease.setCheckpoint(new ExtendedSequenceNumber("testSequenceNumbeer"));
when(leaseManager.getLease(streamShardId)).thenReturn(currentLease);
KinesisClientLease currentLease1 = createLease(streamShardId, "leaseOwner", parentShardIds);
currentLease1.setCheckpoint(ExtendedSequenceNumber.SHARD_END);
when(leaseManager.getLease(streamShardId)).thenReturn(currentLease, currentLease, currentLease1);
when(leaseCoordinator.getCurrentlyHeldLease(shardInfo.getShardId())).thenReturn(currentLease);
RecordProcessorCheckpointer recordProcessorCheckpointer = new RecordProcessorCheckpointer(
@@ -601,8 +646,8 @@ public class ShardConsumerTest {
metricsFactory
);
ShardConsumer consumer =
new ShardConsumer(shardInfo,
KinesisShardConsumer consumer =
new KinesisShardConsumer(shardInfo,
streamConfig,
checkpoint,
processor,
@@ -622,11 +667,11 @@ public class ShardConsumerTest {
shardSyncStrategy);
when(leaseCoordinator.updateLease(any(KinesisClientLease.class), any(UUID.class))).thenReturn(true);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
consumer.consumeShard(); // check on parent shards
Thread.sleep(50L);
consumer.consumeShard(); // start initialization
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.INITIALIZING)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.INITIALIZING)));
consumer.consumeShard(); // initialize
processor.getInitializeLatch().await(5, TimeUnit.SECONDS);
verify(getRecordsCache).start();
@@ -636,7 +681,7 @@ public class ShardConsumerTest {
boolean newTaskSubmitted = consumer.consumeShard();
if (newTaskSubmitted) {
LOG.debug("New processing task was submitted, call # " + i);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.PROCESSING)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.PROCESSING)));
// CHECKSTYLE:IGNORE ModifiedControlVariable FOR NEXT 1 LINES
i += maxRecords;
}
@@ -664,12 +709,12 @@ public class ShardConsumerTest {
// Wait for shutdown complete now that terminate shutdown is successful
for (int i = 0; i < 100; i++) {
consumer.consumeShard();
if (consumer.getCurrentState() == ConsumerStates.ShardConsumerState.SHUTDOWN_COMPLETE) {
if (consumer.getCurrentState() == KinesisConsumerStates.ShardConsumerState.SHUTDOWN_COMPLETE) {
break;
}
Thread.sleep(50L);
}
assertThat(consumer.getCurrentState(), equalTo(ConsumerStates.ShardConsumerState.SHUTDOWN_COMPLETE));
assertThat(consumer.getCurrentState(), equalTo(KinesisConsumerStates.ShardConsumerState.SHUTDOWN_COMPLETE));
assertThat(processor.getShutdownReason(), is(equalTo(ShutdownReason.TERMINATE)));
@@ -687,7 +732,7 @@ public class ShardConsumerTest {
/**
* Test method for {@link ShardConsumer#consumeShard()} that ensures the shardConsumer gets shutdown with shutdown
* Test method for {@link KinesisShardConsumer#consumeShard()} that ensures the shardConsumer gets shutdown with shutdown
* reason TERMINATE when the shard end is reached.
*/
@Test
@@ -714,7 +759,10 @@ public class ShardConsumerTest {
parentShardIds.add("parentShardId");
KinesisClientLease currentLease = createLease(streamShardId, "leaseOwner", parentShardIds);
currentLease.setCheckpoint(new ExtendedSequenceNumber("testSequenceNumber"));
when(leaseManager.getLease(streamShardId)).thenReturn(currentLease);
KinesisClientLease currentLease1 = createLease(streamShardId, "leaseOwner", parentShardIds);
currentLease1.setCheckpoint(ExtendedSequenceNumber.SHARD_END);
when(leaseManager.getLease(streamShardId)).thenReturn(currentLease, currentLease, currentLease1);
when(leaseCoordinator.getLeaseManager()).thenReturn(leaseManager);
TransientShutdownErrorTestStreamlet processor = new TransientShutdownErrorTestStreamlet();
@@ -747,8 +795,8 @@ public class ShardConsumerTest {
metricsFactory
);
ShardConsumer consumer =
new ShardConsumer(shardInfo,
KinesisShardConsumer consumer =
new KinesisShardConsumer(shardInfo,
streamConfig,
checkpoint,
processor,
@@ -770,11 +818,11 @@ public class ShardConsumerTest {
when(leaseCoordinator.getCurrentlyHeldLease(shardInfo.getShardId())).thenReturn(currentLease);
when(leaseCoordinator.updateLease(any(KinesisClientLease.class), any(UUID.class))).thenReturn(true);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
consumer.consumeShard(); // check on parent shards
Thread.sleep(50L);
consumer.consumeShard(); // start initialization
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.INITIALIZING)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.INITIALIZING)));
consumer.consumeShard(); // initialize
processor.getInitializeLatch().await(5, TimeUnit.SECONDS);
verify(getRecordsCache).start();
@@ -784,7 +832,7 @@ public class ShardConsumerTest {
boolean newTaskSubmitted = consumer.consumeShard();
if (newTaskSubmitted) {
LOG.debug("New processing task was submitted, call # " + i);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.PROCESSING)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.PROCESSING)));
// CHECKSTYLE:IGNORE ModifiedControlVariable FOR NEXT 1 LINES
i += maxRecords;
}
@@ -812,12 +860,12 @@ public class ShardConsumerTest {
// Wait for shutdown complete now that terminate shutdown is successful
for (int i = 0; i < 100; i++) {
consumer.consumeShard();
if (consumer.getCurrentState() == ConsumerStates.ShardConsumerState.SHUTDOWN_COMPLETE) {
if (consumer.getCurrentState() == KinesisConsumerStates.ShardConsumerState.SHUTDOWN_COMPLETE) {
break;
}
Thread.sleep(50L);
}
assertThat(consumer.getCurrentState(), equalTo(ConsumerStates.ShardConsumerState.SHUTDOWN_COMPLETE));
assertThat(consumer.getCurrentState(), equalTo(KinesisConsumerStates.ShardConsumerState.SHUTDOWN_COMPLETE));
assertThat(processor.getShutdownReason(), is(equalTo(ShutdownReason.TERMINATE)));
@@ -833,7 +881,7 @@ public class ShardConsumerTest {
}
/**
* Test method for {@link ShardConsumer#consumeShard()} that starts from initial position of type AT_TIMESTAMP.
* Test method for {@link KinesisShardConsumer#consumeShard()} that starts from initial position of type AT_TIMESTAMP.
*/
@Test
public final void testConsumeShardWithInitialPositionAtTimestamp() throws Exception {
@@ -890,8 +938,8 @@ public class ShardConsumerTest {
any(IMetricsFactory.class), anyInt()))
.thenReturn(getRecordsCache);
ShardConsumer consumer =
new ShardConsumer(shardInfo,
KinesisShardConsumer consumer =
new KinesisShardConsumer(shardInfo,
streamConfig,
checkpoint,
processor,
@@ -910,11 +958,11 @@ public class ShardConsumerTest {
shardSyncer,
shardSyncStrategy);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
consumer.consumeShard(); // check on parent shards
Thread.sleep(50L);
consumer.consumeShard(); // start initialization
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.INITIALIZING)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.INITIALIZING)));
consumer.consumeShard(); // initialize
Thread.sleep(50L);
@@ -925,7 +973,7 @@ public class ShardConsumerTest {
boolean newTaskSubmitted = consumer.consumeShard();
if (newTaskSubmitted) {
LOG.debug("New processing task was submitted, call # " + i);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.PROCESSING)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.PROCESSING)));
// CHECKSTYLE:IGNORE ModifiedControlVariable FOR NEXT 1 LINES
i += maxRecords;
}
@@ -937,9 +985,9 @@ public class ShardConsumerTest {
assertThat(processor.getShutdownReason(), nullValue());
consumer.beginShutdown();
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.SHUTTING_DOWN)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.SHUTTING_DOWN)));
consumer.beginShutdown();
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.SHUTDOWN_COMPLETE)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.SHUTDOWN_COMPLETE)));
assertThat(processor.getShutdownReason(), is(equalTo(ShutdownReason.ZOMBIE)));
executorService.shutdown();
@@ -966,8 +1014,8 @@ public class ShardConsumerTest {
callProcessRecordsForEmptyRecordList,
skipCheckpointValidationValue, INITIAL_POSITION_LATEST);
ShardConsumer consumer =
new ShardConsumer(shardInfo,
KinesisShardConsumer consumer =
new KinesisShardConsumer(shardInfo,
streamConfig,
checkpoint,
processor,
@@ -993,22 +1041,22 @@ public class ShardConsumerTest {
when(checkpoint.getCheckpointObject(anyString())).thenReturn(
new Checkpoint(checkpointSequenceNumber, pendingCheckpointSequenceNumber));
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
consumer.consumeShard(); // submit BlockOnParentShardTask
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.WAITING_ON_PARENT_SHARDS)));
verify(processor, times(0)).initialize(any(InitializationInput.class));
consumer.consumeShard(); // submit InitializeTask
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.INITIALIZING)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.INITIALIZING)));
verify(processor, times(1)).initialize(argThat(
initializationInputMatcher(checkpointSequenceNumber, pendingCheckpointSequenceNumber)));
verify(processor, times(1)).initialize(any(InitializationInput.class)); // no other calls with different args
consumer.consumeShard();
Thread.sleep(50L);
assertThat(consumer.getCurrentState(), is(equalTo(ConsumerStates.ShardConsumerState.PROCESSING)));
assertThat(consumer.getCurrentState(), is(equalTo(KinesisConsumerStates.ShardConsumerState.PROCESSING)));
}
@Test
@@ -1021,8 +1069,8 @@ public class ShardConsumerTest {
callProcessRecordsForEmptyRecordList,
skipCheckpointValidationValue, INITIAL_POSITION_LATEST);
ShardConsumer shardConsumer =
new ShardConsumer(shardInfo,
KinesisShardConsumer shardConsumer =
new KinesisShardConsumer(shardInfo,
streamConfig,
checkpoint,
processor,
@@ -1053,8 +1101,8 @@ public class ShardConsumerTest {
callProcessRecordsForEmptyRecordList,
skipCheckpointValidationValue, INITIAL_POSITION_LATEST);
ShardConsumer shardConsumer =
new ShardConsumer(shardInfo,
KinesisShardConsumer shardConsumer =
new KinesisShardConsumer(shardInfo,
streamConfig,
checkpoint,
processor,
@@ -1096,7 +1144,7 @@ public class ShardConsumerTest {
skipCheckpointValidationValue,
INITIAL_POSITION_LATEST);
ShardConsumer shardConsumer = new ShardConsumer(
KinesisShardConsumer shardConsumer = new KinesisShardConsumer(
shardInfo,
streamConfig,
checkpoint,


@@ -42,6 +42,7 @@ import com.amazonaws.services.kinesis.leases.impl.KinesisClientLease;
import com.amazonaws.services.kinesis.leases.impl.KinesisClientLeaseManager;
import com.amazonaws.services.kinesis.leases.interfaces.IKinesisClientLeaseManager;
import com.amazonaws.services.kinesis.model.StreamStatus;
import com.amazonaws.services.kinesis.model.LimitExceededException;
import static junit.framework.TestCase.fail;
@@ -58,6 +59,8 @@ public class ShardSyncTaskIntegrationTest {
private IKinesisProxy kinesisProxy;
private final KinesisShardSyncer shardSyncer = new KinesisShardSyncer(new KinesisLeaseCleanupValidator());
private static final int retryBackoffMillis = 1000;
/**
* @throws java.lang.Exception
*/
@@ -71,9 +74,13 @@ public class ShardSyncTaskIntegrationTest {
} catch (AmazonServiceException ase) {
}
StreamStatus status;
StreamStatus status = null;
do {
status = StreamStatus.fromValue(kinesis.describeStream(STREAM_NAME).getStreamDescription().getStreamStatus());
try {
status = StreamStatus.fromValue(kinesis.describeStream(STREAM_NAME).getStreamDescription().getStreamStatus());
} catch (LimitExceededException e) {
Thread.sleep(retryBackoffMillis + (long) (Math.random() * 100));
}
} while (status != StreamStatus.ACTIVE);
}
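The backoff added to this setup loop retries DescribeStream with a fixed delay plus a small random jitter whenever Kinesis throttles the call. A minimal standalone sketch of that pattern follows; the class and names (`BackoffRetry`, `retryUntilPresent`, `MAX_ATTEMPTS`) are illustrative only and not part of the KCL source, and the backoff constant is shortened from the test's 1000 ms so the demo runs quickly:

```java
import java.util.Random;
import java.util.function.Supplier;

public class BackoffRetry {
    // The integration test uses 1000 ms; shortened here so the demo finishes fast.
    static final long RETRY_BACKOFF_MILLIS = 20;
    static final long MAX_JITTER_MILLIS = 100;

    /** Sleep duration for one retry: fixed backoff plus random jitter, as in the diff. */
    static long backoffWithJitter(Random random) {
        return RETRY_BACKOFF_MILLIS + (long) (random.nextDouble() * MAX_JITTER_MILLIS);
    }

    /** Retry the call until it yields a non-null value, bounding attempts instead of looping forever. */
    static <T> T retryUntilPresent(Supplier<T> call, int maxAttempts) throws InterruptedException {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            T result = call.get();
            if (result != null) {
                return result;
            }
            Thread.sleep(backoffWithJitter(new Random()));
        }
        throw new IllegalStateException("gave up after " + maxAttempts + " attempts");
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate throttling: the first two calls return null, the third succeeds.
        final int[] calls = {0};
        String status = retryUntilPresent(() -> ++calls[0] < 3 ? null : "ACTIVE", 10);
        System.out.println(status + " after " + calls[0] + " calls"); // prints "ACTIVE after 3 calls"
    }
}
```

Bounding the attempts is a deliberate difference from the test's open-ended `do/while`: in a setup method an unbounded loop can hang the whole suite if the stream never becomes ACTIVE.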


@@ -93,6 +93,8 @@ public class ShardSyncerTest {
private LeaseManager<KinesisClientLease> leaseManager = new KinesisClientLeaseManager("tempTestTable", ddbClient, KinesisClientLibConfiguration.DEFAULT_DDB_BILLING_MODE);
protected static final KinesisLeaseCleanupValidator leaseCleanupValidator = new KinesisLeaseCleanupValidator();
private static final KinesisShardSyncer shardSyncer = new KinesisShardSyncer(leaseCleanupValidator);
private static final HashKeyRange hashKeyRange = new HashKeyRange().withStartingHashKey("0").withEndingHashKey("10");
/**
* Old/Obsolete max value of a sequence number (2^128 -1).
*/
@@ -154,10 +156,10 @@ public class ShardSyncerTest {
SequenceNumberRange sequenceRange = ShardObjectHelper.newSequenceNumberRange("342980", null);
String shardId0 = "shardId-0";
shards.add(ShardObjectHelper.newShard(shardId0, null, null, sequenceRange));
shards.add(ShardObjectHelper.newShard(shardId0, null, null, sequenceRange, hashKeyRange));
String shardId1 = "shardId-1";
shards.add(ShardObjectHelper.newShard(shardId1, null, null, sequenceRange));
shards.add(ShardObjectHelper.newShard(shardId1, null, null, sequenceRange, hashKeyRange));
final LeaseSynchronizer leaseSynchronizer = getLeaseSynchronizer(shards, currentLeases);
@@ -183,16 +185,16 @@ public class ShardSyncerTest {
SequenceNumberRange sequenceRange = ShardObjectHelper.newSequenceNumberRange("342980", null);
String shardId0 = "shardId-0";
shards.add(ShardObjectHelper.newShard(shardId0, null, null, sequenceRange));
shards.add(ShardObjectHelper.newShard(shardId0, null, null, sequenceRange, hashKeyRange));
String shardId1 = "shardId-1";
shards.add(ShardObjectHelper.newShard(shardId1, null, null, sequenceRange));
shards.add(ShardObjectHelper.newShard(shardId1, null, null, sequenceRange, hashKeyRange));
String shardId2 = "shardId-2";
shards.add(ShardObjectHelper.newShard(shardId2, shardId1, null, sequenceRange));
shards.add(ShardObjectHelper.newShard(shardId2, shardId1, null, sequenceRange, hashKeyRange));
String shardIdWithLease = "shardId-3";
shards.add(ShardObjectHelper.newShard(shardIdWithLease, shardIdWithLease, null, sequenceRange));
shards.add(ShardObjectHelper.newShard(shardIdWithLease, shardIdWithLease, null, sequenceRange, hashKeyRange));
currentLeases.add(newLease(shardIdWithLease));
@@ -699,9 +701,9 @@ public class ShardSyncerTest {
SequenceNumberRange sequenceRange = ShardObjectHelper.newSequenceNumberRange("342980", null);
String shardId0 = "shardId-0";
shards.add(ShardObjectHelper.newShard(shardId0, null, null, sequenceRange));
shards.add(ShardObjectHelper.newShard(shardId0, null, null, sequenceRange, hashKeyRange));
String shardId1 = "shardId-1";
shards.add(ShardObjectHelper.newShard(shardId1, null, null, sequenceRange));
shards.add(ShardObjectHelper.newShard(shardId1, null, null, sequenceRange, hashKeyRange));
File dataFile = KinesisLocalFileDataCreator.generateTempDataFile(shards, 2, "testBootstrap1");
dataFile.deleteOnExit();
IKinesisProxy kinesisProxy = new KinesisLocalFileProxy(dataFile.getAbsolutePath());
@@ -731,10 +733,10 @@ public class ShardSyncerTest {
SequenceNumberRange sequenceRange = ShardObjectHelper.newSequenceNumberRange("342980", null);
String shardId0 = "shardId-0";
shards.add(ShardObjectHelper.newShard(shardId0, null, null, sequenceRange));
shards.add(ShardObjectHelper.newShard(shardId0, null, null, sequenceRange, hashKeyRange));
String shardId1 = "shardId-1";
shards.add(ShardObjectHelper.newShard(shardId1, null, null, sequenceRange));
shards.add(ShardObjectHelper.newShard(shardId1, null, null, sequenceRange, hashKeyRange));
Set<InitialPositionInStreamExtended> initialPositions = new HashSet<InitialPositionInStreamExtended>();
initialPositions.add(INITIAL_POSITION_LATEST);
@@ -769,17 +771,20 @@ public class ShardSyncerTest {
shardsWithoutLeases.add(ShardObjectHelper.newShard("shardId-0",
null,
null,
ShardObjectHelper.newSequenceNumberRange("303", "404")));
ShardObjectHelper.newSequenceNumberRange("303", "404"),
hashKeyRange));
final String lastShardId = "shardId-1";
shardsWithoutLeases.add(ShardObjectHelper.newShard(lastShardId,
null,
null,
ShardObjectHelper.newSequenceNumberRange("405", null)));
ShardObjectHelper.newSequenceNumberRange("405", null),
hashKeyRange));
shardsWithLeases.add(ShardObjectHelper.newShard("shardId-2",
null,
null,
ShardObjectHelper.newSequenceNumberRange("202", "302")));
ShardObjectHelper.newSequenceNumberRange("202", "302"),
hashKeyRange));
currentLeases.add(newLease("shardId-2"));
final List<Shard> allShards =
@@ -805,12 +810,14 @@ public class ShardSyncerTest {
shards.add(ShardObjectHelper.newShard(firstShardId,
null,
null,
ShardObjectHelper.newSequenceNumberRange("303", "404")));
ShardObjectHelper.newSequenceNumberRange("303", "404"),
hashKeyRange));
final String lastShardId = "shardId-1";
shards.add(ShardObjectHelper.newShard(lastShardId,
null,
null,
ShardObjectHelper.newSequenceNumberRange("405", null)));
ShardObjectHelper.newSequenceNumberRange("405", null),
hashKeyRange));
final LeaseSynchronizer leaseSynchronizer = getLeaseSynchronizer(shards, currentLeases);
@@ -1969,14 +1976,14 @@ public class ShardSyncerTest {
Map<String, Shard> kinesisShards = new HashMap<String, Shard>();
String parentShardId = "shardId-parent";
kinesisShards.put(parentShardId, ShardObjectHelper.newShard(parentShardId, null, null, null));
kinesisShards.put(parentShardId, ShardObjectHelper.newShard(parentShardId, null, null, null, hashKeyRange));
shardIdsOfCurrentLeases.add(parentShardId);
String adjacentParentShardId = "shardId-adjacentParent";
kinesisShards.put(adjacentParentShardId, ShardObjectHelper.newShard(adjacentParentShardId, null, null, null));
kinesisShards.put(adjacentParentShardId, ShardObjectHelper.newShard(adjacentParentShardId, null, null, null, hashKeyRange));
String shardId = "shardId-9-1";
Shard shard = ShardObjectHelper.newShard(shardId, parentShardId, adjacentParentShardId, null);
Shard shard = ShardObjectHelper.newShard(shardId, parentShardId, adjacentParentShardId, null, hashKeyRange);
kinesisShards.put(shardId, shard);
final MemoizationContext memoizationContext = new MemoizationContext();
@@ -2097,6 +2104,7 @@ public class ShardSyncerTest {
String adjacentParentShardId = "shardId-adjacentParent";
shard.setParentShardId(parentShardId);
shard.setAdjacentParentShardId(adjacentParentShardId);
shard.setHashKeyRange(hashKeyRange);
KinesisClientLease lease = shardSyncer.newKCLLease(shard);
Assert.assertEquals(shardId, lease.getLeaseKey());


@@ -19,6 +19,7 @@ import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;
import static org.mockito.Matchers.any;
import static org.mockito.Matchers.anyString;
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.doNothing;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.never;
@@ -65,7 +66,7 @@ import com.amazonaws.services.kinesis.leases.interfaces.ILeaseManager;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;
import static com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShutdownTask.RETRY_RANDOM_MAX_RANGE;
import static com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisShutdownTask.RETRY_RANDOM_MAX_RANGE;
/**
*
@@ -138,7 +139,7 @@ public class ShutdownTaskTest {
}
/**
* Test method for {@link ShutdownTask#call()}.
* Test method for {@link KinesisShutdownTask#call()}.
*/
@Test
public final void testCallWhenApplicationDoesNotCheckpoint() {
@@ -147,7 +148,7 @@ public class ShutdownTaskTest {
when(leaseCoordinator.getLeaseManager()).thenReturn(leaseManager);
boolean cleanupLeasesOfCompletedShards = false;
boolean ignoreUnexpectedChildShards = false;
ShutdownTask task = new ShutdownTask(defaultShardInfo,
KinesisShutdownTask task = new KinesisShutdownTask(defaultShardInfo,
defaultRecordProcessor,
checkpointer,
ShutdownReason.TERMINATE,
@@ -170,7 +171,7 @@ public class ShutdownTaskTest {
}
/**
* Test method for {@link ShutdownTask#call()}.
* Test method for {@link KinesisShutdownTask#call()}.
*/
@Test
public final void testCallWhenCreatingLeaseThrows() throws Exception {
@@ -182,7 +183,7 @@ public class ShutdownTaskTest {
final String exceptionMessage = "InvalidStateException is thrown.";
when(leaseManager.createLeaseIfNotExists(any(KinesisClientLease.class))).thenThrow(new InvalidStateException(exceptionMessage));
ShutdownTask task = new ShutdownTask(defaultShardInfo,
KinesisShutdownTask task = new KinesisShutdownTask(defaultShardInfo,
defaultRecordProcessor,
checkpointer,
ShutdownReason.TERMINATE,
@@ -211,16 +212,21 @@ public class ShutdownTaskTest {
boolean cleanupLeasesOfCompletedShards = false;
boolean ignoreUnexpectedChildShards = false;
KinesisClientLease currentLease = createLease(defaultShardId, "leaseOwner", Collections.emptyList());
currentLease.setCheckpoint(new ExtendedSequenceNumber("3298"));
KinesisClientLease currentLease1 = createLease(defaultShardId, "leaseOwner", Collections.emptyList());
currentLease1.setCheckpoint(new ExtendedSequenceNumber("3298"));
KinesisClientLease currentLease2 = createLease(defaultShardId, "leaseOwner", Collections.emptyList());
currentLease2.setCheckpoint(ExtendedSequenceNumber.SHARD_END);
KinesisClientLease adjacentParentLease = createLease("ShardId-1", "leaseOwner", Collections.emptyList());
when(leaseCoordinator.getCurrentlyHeldLease(defaultShardId)).thenReturn( currentLease);
when(leaseManager.getLease(defaultShardId)).thenReturn(currentLease);
when(leaseCoordinator.getCurrentlyHeldLease(defaultShardId)).thenReturn( currentLease1);
// Returned 6 times during the failure-mode parent lease lookups, then twice during the actual execution
when(leaseManager.getLease(defaultShardId)).thenReturn(currentLease1, currentLease1, currentLease1, currentLease1,
currentLease1, currentLease1, currentLease1, currentLease2);
when(leaseManager.getLease("ShardId-1")).thenReturn(null, null, null, null, null, adjacentParentLease);
// Make the first 5 attempts with partial parent info in the lease table
for (int i = 0; i < 5; i++) {
ShutdownTask task = spy(new ShutdownTask(defaultShardInfo,
KinesisShutdownTask task = spy(new KinesisShutdownTask(defaultShardInfo,
defaultRecordProcessor,
checkpointer,
ShutdownReason.TERMINATE,
@@ -246,7 +252,7 @@ public class ShutdownTaskTest {
}
// Make the next attempt with complete parent info in the lease table
ShutdownTask task = spy(new ShutdownTask(defaultShardInfo,
KinesisShutdownTask task = spy(new KinesisShutdownTask(defaultShardInfo,
defaultRecordProcessor,
checkpointer,
ShutdownReason.TERMINATE,
@@ -267,7 +273,7 @@ public class ShutdownTaskTest {
verify(task, never()).isOneInNProbability(RETRY_RANDOM_MAX_RANGE);
verify(getRecordsCache).shutdown();
verify(defaultRecordProcessor).shutdown(any(ShutdownInput.class));
verify(leaseCoordinator, never()).dropLease(currentLease);
verify(leaseCoordinator, never()).dropLease(currentLease1);
}
@Test
@@ -284,7 +290,7 @@ public class ShutdownTaskTest {
when(leaseManager.getLease("ShardId-1")).thenReturn(null, null, null, null, null, null, null, null, null, null, null);
for (int i = 0; i < 10; i++) {
ShutdownTask task = spy(new ShutdownTask(defaultShardInfo,
KinesisShutdownTask task = spy(new KinesisShutdownTask(defaultShardInfo,
defaultRecordProcessor,
checkpointer,
ShutdownReason.TERMINATE,
@@ -309,7 +315,7 @@ public class ShutdownTaskTest {
verify(defaultRecordProcessor, never()).shutdown(any(ShutdownInput.class));
}
ShutdownTask task = spy(new ShutdownTask(defaultShardInfo,
KinesisShutdownTask task = spy(new KinesisShutdownTask(defaultShardInfo,
defaultRecordProcessor,
checkpointer,
ShutdownReason.TERMINATE,
@@ -337,10 +343,15 @@ public class ShutdownTaskTest {
public final void testCallWhenShardEnd() throws Exception {
RecordProcessorCheckpointer checkpointer = mock(RecordProcessorCheckpointer.class);
when(checkpointer.getLastCheckpointValue()).thenReturn(ExtendedSequenceNumber.SHARD_END);
final KinesisClientLease parentLease1 = createLease(defaultShardId, "leaseOwner", Collections.emptyList());
parentLease1.setCheckpoint(new ExtendedSequenceNumber("3298"));
final KinesisClientLease parentLease2 = createLease(defaultShardId, "leaseOwner", Collections.emptyList());
parentLease2.setCheckpoint(ExtendedSequenceNumber.SHARD_END);
when(leaseManager.getLease(defaultShardId)).thenReturn(parentLease1).thenReturn(parentLease2);
boolean cleanupLeasesOfCompletedShards = false;
boolean ignoreUnexpectedChildShards = false;
ShutdownTask task = new ShutdownTask(defaultShardInfo,
KinesisShutdownTask task = new KinesisShutdownTask(defaultShardInfo,
defaultRecordProcessor,
checkpointer,
ShutdownReason.TERMINATE,
@@ -374,7 +385,7 @@ public class ShutdownTaskTest {
boolean cleanupLeasesOfCompletedShards = false;
boolean ignoreUnexpectedChildShards = false;
ShutdownTask task = new ShutdownTask(shardInfo,
KinesisShutdownTask task = new KinesisShutdownTask(shardInfo,
defaultRecordProcessor,
checkpointer,
ShutdownReason.TERMINATE,
@@ -404,7 +415,7 @@ public class ShutdownTaskTest {
boolean cleanupLeasesOfCompletedShards = false;
boolean ignoreUnexpectedChildShards = false;
ShutdownTask task = new ShutdownTask(defaultShardInfo,
KinesisShutdownTask task = new KinesisShutdownTask(defaultShardInfo,
defaultRecordProcessor,
checkpointer,
ShutdownReason.ZOMBIE,
@@ -427,12 +438,12 @@ public class ShutdownTaskTest {
}
/**
* Test method for {@link ShutdownTask#getTaskType()}.
* Test method for {@link KinesisShutdownTask#getTaskType()}.
*/
@Test
public final void testGetTaskType() {
KinesisClientLibLeaseCoordinator leaseCoordinator = mock(KinesisClientLibLeaseCoordinator.class);
ShutdownTask task = new ShutdownTask(null, null, null, null,
KinesisShutdownTask task = new KinesisShutdownTask(null, null, null, null,
null, null, false,
false, leaseCoordinator, 0,
getRecordsCache, shardSyncer, shardSyncStrategy, Collections.emptyList(), leaseCleanupManager);


@@ -186,7 +186,7 @@ public class WorkerTest {
@Mock
private IRecordProcessor v2RecordProcessor;
@Mock
private ShardConsumer shardConsumer;
private IShardConsumer shardConsumer;
@Mock
private Future<TaskResult> taskFuture;
@Mock
@@ -204,6 +204,10 @@ public class WorkerTest {
when(config.getRecordsFetcherFactory()).thenReturn(recordsFetcherFactory);
when(leaseCoordinator.getLeaseManager()).thenReturn(mock(ILeaseManager.class));
when(streamConfig.getStreamProxy()).thenReturn(kinesisProxy);
Worker.MIN_WAIT_TIME_FOR_LEASE_TABLE_CHECK_MILLIS = 10;
Worker.MAX_WAIT_TIME_FOR_LEASE_TABLE_CHECK_MILLIS = 50;
Worker.LEASE_TABLE_CHECK_FREQUENCY_MILLIS = 10;
}
// CHECKSTYLE:IGNORE AnonInnerLengthCheck FOR NEXT 50 LINES
@@ -293,13 +297,13 @@ public class WorkerTest {
KinesisClientLibConfiguration.DEFAULT_SKIP_SHARD_SYNC_AT_STARTUP_IF_LEASES_EXIST,
shardPrioritization);
ShardInfo shardInfo = new ShardInfo(dummyKinesisShardId, testConcurrencyToken, null, ExtendedSequenceNumber.TRIM_HORIZON);
ShardConsumer consumer = worker.createOrGetShardConsumer(shardInfo, streamletFactory);
IShardConsumer consumer = worker.createOrGetShardConsumer(shardInfo, streamletFactory);
Assert.assertNotNull(consumer);
ShardConsumer consumer2 = worker.createOrGetShardConsumer(shardInfo, streamletFactory);
IShardConsumer consumer2 = worker.createOrGetShardConsumer(shardInfo, streamletFactory);
Assert.assertSame(consumer, consumer2);
ShardInfo shardInfoWithSameShardIdButDifferentConcurrencyToken =
new ShardInfo(dummyKinesisShardId, anotherConcurrencyToken, null, ExtendedSequenceNumber.TRIM_HORIZON);
ShardConsumer consumer3 =
IShardConsumer consumer3 =
worker.createOrGetShardConsumer(shardInfoWithSameShardIdButDifferentConcurrencyToken, streamletFactory);
Assert.assertNotNull(consumer3);
Assert.assertNotSame(consumer3, consumer);
@@ -415,10 +419,10 @@ public class WorkerTest {
new ShardInfo(dummyKinesisShardId, anotherConcurrencyToken, null, ExtendedSequenceNumber.TRIM_HORIZON);
ShardInfo shardInfo2 = new ShardInfo(anotherDummyKinesisShardId, concurrencyToken, null, ExtendedSequenceNumber.TRIM_HORIZON);
ShardConsumer consumerOfShardInfo1 = worker.createOrGetShardConsumer(shardInfo1, streamletFactory);
ShardConsumer consumerOfDuplicateOfShardInfo1ButWithAnotherConcurrencyToken =
IShardConsumer consumerOfShardInfo1 = worker.createOrGetShardConsumer(shardInfo1, streamletFactory);
IShardConsumer consumerOfDuplicateOfShardInfo1ButWithAnotherConcurrencyToken =
worker.createOrGetShardConsumer(duplicateOfShardInfo1ButWithAnotherConcurrencyToken, streamletFactory);
ShardConsumer consumerOfShardInfo2 = worker.createOrGetShardConsumer(shardInfo2, streamletFactory);
IShardConsumer consumerOfShardInfo2 = worker.createOrGetShardConsumer(shardInfo2, streamletFactory);
Set<ShardInfo> assignedShards = new HashSet<ShardInfo>();
assignedShards.add(shardInfo1);
@@ -1215,11 +1219,11 @@ public class WorkerTest {
false,
shardPrioritization);
final Map<ShardInfo, ShardConsumer> shardInfoShardConsumerMap = worker.getShardInfoShardConsumerMap();
final Map<ShardInfo, IShardConsumer> shardInfoShardConsumerMap = worker.getShardInfoShardConsumerMap();
final ShardInfo completedShardInfo = KinesisClientLibLeaseCoordinator.convertLeaseToAssignment(completedLease);
final ShardConsumer completedShardConsumer = mock(ShardConsumer.class);
final KinesisShardConsumer completedShardConsumer = mock(KinesisShardConsumer.class);
shardInfoShardConsumerMap.put(completedShardInfo, completedShardConsumer);
when(completedShardConsumer.getCurrentState()).thenReturn(ConsumerStates.ShardConsumerState.SHUTDOWN_COMPLETE);
when(completedShardConsumer.getCurrentState()).thenReturn(KinesisConsumerStates.ShardConsumerState.SHUTDOWN_COMPLETE);
Callable<GracefulShutdownContext> callable = worker.createWorkerShutdownCallable();
assertThat(worker.hasGracefulShutdownStarted(), equalTo(false));
@@ -1334,11 +1338,11 @@ public class WorkerTest {
verify(executorService).submit(argThat(both(isA(MetricsCollectingTaskDecorator.class))
.and(TaskTypeMatcher.isOfType(TaskType.SHUTDOWN)).and(ReflectionFieldMatcher
.withField(ShutdownTask.class, "shardInfo", equalTo(shardInfo1)))));
.withField(KinesisShutdownTask.class, "shardInfo", equalTo(shardInfo1)))));
verify(executorService, never()).submit(argThat(both(isA(MetricsCollectingTaskDecorator.class))
.and(TaskTypeMatcher.isOfType(TaskType.SHUTDOWN)).and(ReflectionFieldMatcher
.withField(ShutdownTask.class, "shardInfo", equalTo(shardInfo2)))));
.withField(KinesisShutdownTask.class, "shardInfo", equalTo(shardInfo2)))));
}
@@ -1447,11 +1451,11 @@ public class WorkerTest {
verify(executorService, never()).submit(argThat(both(isA(MetricsCollectingTaskDecorator.class))
.and(TaskTypeMatcher.isOfType(TaskType.SHUTDOWN)).and(ReflectionFieldMatcher
.withField(ShutdownTask.class, "shardInfo", equalTo(shardInfo1)))));
.withField(KinesisShutdownTask.class, "shardInfo", equalTo(shardInfo1)))));
verify(executorService, never()).submit(argThat(both(isA(MetricsCollectingTaskDecorator.class))
.and(TaskTypeMatcher.isOfType(TaskType.SHUTDOWN)).and(ReflectionFieldMatcher
.withField(ShutdownTask.class, "shardInfo", equalTo(shardInfo2)))));
.withField(KinesisShutdownTask.class, "shardInfo", equalTo(shardInfo2)))));
@@ -2009,19 +2013,19 @@ public class WorkerTest {
@Override
protected boolean matchesSafely(MetricsCollectingTaskDecorator item, Description mismatchDescription) {
return Condition.matched(item, mismatchDescription)
.and(new Condition.Step<MetricsCollectingTaskDecorator, ShutdownTask>() {
.and(new Condition.Step<MetricsCollectingTaskDecorator, KinesisShutdownTask>() {
@Override
public Condition<ShutdownTask> apply(MetricsCollectingTaskDecorator value,
public Condition<KinesisShutdownTask> apply(MetricsCollectingTaskDecorator value,
Description mismatch) {
if (!(value.getOther() instanceof ShutdownTask)) {
if (!(value.getOther() instanceof KinesisShutdownTask)) {
mismatch.appendText("Wrapped task isn't a shutdown task");
return Condition.notMatched();
}
return Condition.matched((ShutdownTask) value.getOther(), mismatch);
return Condition.matched((KinesisShutdownTask) value.getOther(), mismatch);
}
}).and(new Condition.Step<ShutdownTask, ShutdownReason>() {
}).and(new Condition.Step<KinesisShutdownTask, ShutdownReason>() {
@Override
public Condition<ShutdownReason> apply(ShutdownTask value, Description mismatch) {
public Condition<ShutdownReason> apply(KinesisShutdownTask value, Description mismatch) {
return Condition.matched(value.getReason(), mismatch);
}
}).matching(matcher);


@@ -35,6 +35,7 @@ import java.util.Map;
import java.util.Set;
import com.amazonaws.services.kinesis.model.ChildShard;
import com.amazonaws.services.kinesis.model.HashKeyRange;
import com.amazonaws.services.kinesis.model.ShardFilter;
import com.amazonaws.util.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
@@ -408,11 +409,13 @@ public class KinesisLocalFileProxy implements IKinesisProxy {
ChildShard leftChild = new ChildShard();
leftChild.setShardId("shardId-1");
leftChild.setParentShards(parentShards);
leftChild.setHashKeyRange(new HashKeyRange().withStartingHashKey("0").withEndingHashKey("10"));
childShards.add(leftChild);
ChildShard rightChild = new ChildShard();
rightChild.setShardId("shardId-2");
rightChild.setParentShards(parentShards);
rightChild.setHashKeyRange(new HashKeyRange().withStartingHashKey("11").withEndingHashKey(MAX_HASHKEY_VALUE.toString()));
childShards.add(rightChild);
return childShards;
}


@@ -24,6 +24,7 @@ import static org.hamcrest.Matchers.nullValue;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertThat;
import static org.junit.Assert.fail;
import static org.mockito.Matchers.any;
import static org.mockito.Matchers.argThat;
import static org.mockito.Mockito.doReturn;
@@ -47,6 +48,7 @@ import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;
import java.util.stream.Collectors;
import lombok.Builder;
import org.apache.commons.lang3.StringUtils;
import org.hamcrest.Description;
import org.hamcrest.TypeSafeDiagnosingMatcher;
@@ -58,6 +60,7 @@ import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.arn.Arn;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.services.dynamodbv2.streamsadapter.AmazonDynamoDBStreamsAdapterClient;
import com.amazonaws.services.dynamodbv2.streamsadapter.AmazonDynamoDBStreamsAdapterClientChild;
@@ -76,11 +79,17 @@ import com.amazonaws.services.kinesis.model.ShardIteratorType;
import com.amazonaws.services.kinesis.model.StreamDescription;
import com.amazonaws.services.kinesis.model.StreamStatus;
import lombok.AllArgsConstructor;
@RunWith(MockitoJUnitRunner.class)
public class KinesisProxyTest {
private static final String TEST_STRING = "TestString";
private static final String ACCOUNT_ID = "123456789012";
private static final Arn TEST_ARN = Arn.builder()
.withPartition("aws")
.withService("kinesis")
.withRegion("us-east-1")
.withAccountId(ACCOUNT_ID)
.withResource("stream/" + TEST_STRING)
.build();
private static final long DESCRIBE_STREAM_BACKOFF_TIME = 10L;
private static final long LIST_SHARDS_BACKOFF_TIME = 10L;
private static final int DESCRIBE_STREAM_RETRY_TIMES = 3;
@@ -92,6 +101,7 @@ public class KinesisProxyTest {
private static final String SHARD_4 = "shard-4";
private static final String NOT_CACHED_SHARD = "ShardId-0005";
private static final String NEVER_PRESENT_SHARD = "ShardId-0010";
private static final String REQUEST_ID = "requestId";
@Mock
private AmazonKinesis mockClient;
@@ -130,6 +140,7 @@
public void setUpTest() {
// Set up kinesis ddbProxy
when(config.getStreamName()).thenReturn(TEST_STRING);
when(config.getStreamArn()).thenReturn(TEST_ARN);
when(config.getListShardsBackoffTimeInMillis()).thenReturn(LIST_SHARDS_BACKOFF_TIME);
when(config.getMaxListShardsRetryAttempts()).thenReturn(LIST_SHARDS_RETRY_TIMES);
when(config.getKinesisCredentialsProvider()).thenReturn(mockCredentialsProvider);
@@ -161,7 +172,8 @@
// Second call describeStream returning response with rest shards.
DescribeStreamResult responseWithMoreData = createGetStreamInfoResponse(shards.subList(0, 2), true);
DescribeStreamResult responseFinal = createGetStreamInfoResponse(shards.subList(2, shards.size()), false);
doReturn(responseWithMoreData).when(mockDDBStreamClient).describeStream(argThat(new IsRequestWithStartShardId(null)));
IsRequestWithStartShardId requestMatcher = IsRequestWithStartShardId.builder().streamName(TEST_STRING).build();
doReturn(responseWithMoreData).when(mockDDBStreamClient).describeStream(argThat(requestMatcher));
doReturn(responseFinal).when(mockDDBStreamClient)
.describeStream(argThat(new OldIsRequestWithStartShardId(shards.get(1).getShardId())));
@@ -249,54 +261,6 @@
ddbProxy.getShardList();
}
@Test
public void testGetStreamInfoStoresOffset() throws Exception {
when(describeStreamResult.getStreamDescription()).thenReturn(streamDescription);
when(streamDescription.getStreamStatus()).thenReturn(StreamStatus.ACTIVE.name());
Shard shard1 = mock(Shard.class);
Shard shard2 = mock(Shard.class);
Shard shard3 = mock(Shard.class);
List<Shard> shardList1 = Collections.singletonList(shard1);
List<Shard> shardList2 = Collections.singletonList(shard2);
List<Shard> shardList3 = Collections.singletonList(shard3);
String shardId1 = "ShardId-0001";
String shardId2 = "ShardId-0002";
String shardId3 = "ShardId-0003";
when(shard1.getShardId()).thenReturn(shardId1);
when(shard2.getShardId()).thenReturn(shardId2);
when(shard3.getShardId()).thenReturn(shardId3);
when(streamDescription.getShards()).thenReturn(shardList1).thenReturn(shardList2).thenReturn(shardList3);
when(streamDescription.isHasMoreShards()).thenReturn(true, true, false);
when(mockDDBStreamClient.describeStream(argThat(describeWithoutShardId()))).thenReturn(describeStreamResult);
when(mockDDBStreamClient.describeStream(argThat(describeWithShardId(shardId1))))
.thenThrow(new LimitExceededException("1"), new LimitExceededException("2"),
new LimitExceededException("3"))
.thenReturn(describeStreamResult);
when(mockDDBStreamClient.describeStream(argThat(describeWithShardId(shardId2)))).thenReturn(describeStreamResult);
boolean limitExceeded = false;
try {
ddbProxy.getShardList();
} catch (LimitExceededException le) {
limitExceeded = true;
}
assertThat(limitExceeded, equalTo(true));
List<Shard> actualShards = ddbProxy.getShardList();
List<Shard> expectedShards = Arrays.asList(shard1, shard2, shard3);
assertThat(actualShards, equalTo(expectedShards));
verify(mockDDBStreamClient).describeStream(argThat(describeWithoutShardId()));
verify(mockDDBStreamClient, times(4)).describeStream(argThat(describeWithShardId(shardId1)));
verify(mockDDBStreamClient).describeStream(argThat(describeWithShardId(shardId2)));
}
@Test
public void testListShardsWithMoreDataAvailable() {
ListShardsResult responseWithMoreData = new ListShardsResult().withShards(shards.subList(0, 2)).withNextToken(NEXT_TOKEN);
@@ -356,7 +320,8 @@
public void testGetShardListWithDDBChildClient() {
DescribeStreamResult responseWithMoreData = createGetStreamInfoResponse(shards.subList(0, 2), true);
DescribeStreamResult responseFinal = createGetStreamInfoResponse(shards.subList(2, shards.size()), false);
doReturn(responseWithMoreData).when(mockDDBChildClient).describeStream(argThat(new IsRequestWithStartShardId(null)));
IsRequestWithStartShardId requestMatcher = IsRequestWithStartShardId.builder().streamName(TEST_STRING).build();
doReturn(responseWithMoreData).when(mockDDBChildClient).describeStream(argThat(requestMatcher));
doReturn(responseFinal).when(mockDDBChildClient)
.describeStream(argThat(new OldIsRequestWithStartShardId(shards.get(1).getShardId())));
@@ -483,6 +448,47 @@
verify(mockClient).listShards(any());
}
/**
* Tests that if we fail halfway through a listShards call, we fail gracefully and subsequent calls are not
* affected by the failure of the first request.
*/
@Test
public void testNoDuplicateShardsInPartialFailure() {
proxy.setCachedShardMap(null);
ListShardsResult firstPage = new ListShardsResult().withShards(shards.subList(0, 2)).withNextToken(NEXT_TOKEN);
ListShardsResult lastPage = new ListShardsResult().withShards(shards.subList(2, shards.size())).withNextToken(null);
when(mockClient.listShards(any()))
.thenReturn(firstPage).thenThrow(new RuntimeException("Failed!"))
.thenReturn(firstPage).thenReturn(lastPage);
try {
proxy.getShardList();
fail("First ListShards call should have failed!");
} catch (Exception e) {
// Do nothing
}
assertEquals(shards, proxy.getShardList());
}
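The failure mode this test guards against is a partially populated accumulator surviving a failed pagination: if shards from a successful first page were published (or cached) before a later page failed, the retry would re-add them and produce duplicates. A minimal, self-contained sketch of that idea, with hypothetical names standing in for the proxy's actual internals:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: publish the shard list only after all pages succeed,
// so a mid-pagination failure leaves no partial state behind.
public class PartialFailureSketch {
    private List<String> cachedShards; // published only on complete success

    // Each inner list is one ListShards page; a null page simulates a failure.
    public List<String> listAllShards(List<List<String>> pages) {
        List<String> accumulator = new ArrayList<>();
        for (List<String> page : pages) {
            if (page == null) {
                // Discard the partial accumulator; the cache stays untouched,
                // so a retry starts from a clean slate with no duplicates.
                throw new RuntimeException("ListShards failed mid-pagination");
            }
            accumulator.addAll(page);
        }
        cachedShards = accumulator;
        return cachedShards;
    }

    public static void main(String[] args) {
        PartialFailureSketch proxy = new PartialFailureSketch();
        try {
            proxy.listAllShards(Arrays.asList(Arrays.asList("shard-1", "shard-2"), null));
        } catch (RuntimeException expected) {
            // first call fails mid-pagination
        }
        // The retry is unaffected by the earlier failure.
        System.out.println(proxy.listAllShards(Arrays.asList(
                Arrays.asList("shard-1", "shard-2"), Arrays.asList("shard-3"))));
    }
}
```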
/**
* Tests that if we receive any duplicate shard responses from the service during a shard sync, we dedup the response
* and continue gracefully.
*/
@Test
public void testDuplicateShardResponseDedupedGracefully() {
proxy.setCachedShardMap(null);
List<Shard> duplicateShards = new ArrayList<>(shards);
duplicateShards.addAll(shards);
ListShardsResult pageOfShards = new ListShardsResult().withShards(duplicateShards).withNextToken(null);
when(mockClient.listShards(any())).thenReturn(pageOfShards);
proxy.getShardList();
assertEquals(shards, proxy.getShardList());
}
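The dedup behavior exercised above amounts to collapsing repeated shards while preserving their first-seen order. A minimal sketch of one way to do that; this is illustrative only, not the proxy's actual implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;

// Hypothetical sketch: dedup a ListShards response gracefully.
public class ShardDedupSketch {
    static List<String> dedup(List<String> shardIds) {
        // LinkedHashSet drops repeats but keeps first-seen order.
        return new ArrayList<>(new LinkedHashSet<>(shardIds));
    }

    public static void main(String[] args) {
        List<String> response = Arrays.asList("shard-1", "shard-2", "shard-1", "shard-2");
        System.out.println(dedup(response)); // [shard-1, shard-2]
    }
}
```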
private void mockListShardsForSingleResponse(List<Shard> shards) {
when(mockClient.listShards(any())).thenReturn(listShardsResult);
when(listShardsResult.getShards()).thenReturn(shards);
@@ -503,37 +509,61 @@
return response;
}
private IsRequestWithStartShardId describeWithoutShardId() {
return describeWithShardId(null);
}
private IsRequestWithStartShardId describeWithShardId(String shardId) {
return new IsRequestWithStartShardId(shardId);
return IsRequestWithStartShardId.builder()
.streamName(TEST_STRING)
.streamArn(TEST_ARN)
.shardId(shardId)
.build();
}
@Builder
private static class IsRequestWithStartShardId extends TypeSafeDiagnosingMatcher<DescribeStreamRequest> {
private final String streamName;
private final Arn streamArn;
private final String shardId;
public IsRequestWithStartShardId(String shardId) {
this.shardId = shardId;
}
@Override
protected boolean matchesSafely(DescribeStreamRequest item, Description mismatchDescription) {
boolean matches = true;
if (streamName == null) {
if (item.getStreamName() != null) {
mismatchDescription.appendText("Expected streamName of null, but was ")
.appendValue(item.getStreamName());
matches = false;
}
} else if (!streamName.equals(item.getStreamName())) {
mismatchDescription.appendValue(streamName).appendText(" doesn't match expected ")
.appendValue(item.getStreamName());
matches = false;
}
if (streamArn == null) {
if (item.getStreamARN() != null) {
mismatchDescription.appendText("Expected streamArn of null, but was ")
.appendValue(item.getStreamARN());
matches = false;
}
} else if (!streamArn.equals(Arn.fromString(item.getStreamARN()))) {
mismatchDescription.appendValue(streamArn).appendText(" doesn't match expected ")
.appendValue(item.getStreamARN());
matches = false;
}
if (shardId == null) {
if (item.getExclusiveStartShardId() != null) {
mismatchDescription.appendText("Expected starting shard id of null, but was ")
.appendValue(item.getExclusiveStartShardId());
return false;
matches = false;
}
} else if (!shardId.equals(item.getExclusiveStartShardId())) {
mismatchDescription.appendValue(shardId).appendText(" doesn't match expected ")
.appendValue(item.getExclusiveStartShardId());
return false;
matches = false;
}
return true;
return matches;
}
@Override
@@ -562,49 +592,87 @@
}
private static ListShardsRequestMatcher initialListShardsRequestMatcher() {
return new ListShardsRequestMatcher(null, null);
return ListShardsRequestMatcher.builder()
.streamName(TEST_STRING)
.streamArn(TEST_ARN)
.build();
}
private static ListShardsRequestMatcher listShardsNextToken(final String nextToken) {
return new ListShardsRequestMatcher(null, nextToken);
return ListShardsRequestMatcher.builder()
.nextToken(nextToken)
.build();
}
@AllArgsConstructor
@Builder
private static class ListShardsRequestMatcher extends TypeSafeDiagnosingMatcher<ListShardsRequest> {
private final String streamName;
private final Arn streamArn;
private final String shardId;
private final String nextToken;
@Override
protected boolean matchesSafely(final ListShardsRequest listShardsRequest, final Description description) {
boolean matches = true;
if (streamName == null) {
if (StringUtils.isNotEmpty(listShardsRequest.getStreamName())) {
description.appendText("Expected streamName to be null, but was ")
.appendValue(listShardsRequest.getStreamName());
matches = false;
}
} else {
if (!streamName.equals(listShardsRequest.getStreamName())) {
description.appendText("Expected streamName: ").appendValue(streamName)
.appendText(" doesn't match actual streamName: ")
.appendValue(listShardsRequest.getStreamName());
matches = false;
}
}
if (streamArn == null) {
if (StringUtils.isNotEmpty(listShardsRequest.getStreamARN())) {
description.appendText("Expected streamArn to be null, but was ")
.appendValue(listShardsRequest.getStreamARN());
matches = false;
}
} else {
if (!streamArn.equals(Arn.fromString(listShardsRequest.getStreamARN()))) {
description.appendText("Expected streamArn: ").appendValue(streamArn)
.appendText(" doesn't match actual streamArn: ")
.appendValue(listShardsRequest.getStreamARN());
matches = false;
}
}
if (shardId == null) {
if (StringUtils.isNotEmpty(listShardsRequest.getExclusiveStartShardId())) {
description.appendText("Expected ExclusiveStartShardId to be null, but was ")
.appendValue(listShardsRequest.getExclusiveStartShardId());
return false;
matches = false;
}
} else {
if (!shardId.equals(listShardsRequest.getExclusiveStartShardId())) {
description.appendText("Expected shardId: ").appendValue(shardId)
.appendText(" doesn't match actual shardId: ")
.appendValue(listShardsRequest.getExclusiveStartShardId());
return false;
matches = false;
}
}
if (StringUtils.isNotEmpty(listShardsRequest.getNextToken())) {
if (StringUtils.isNotEmpty(listShardsRequest.getStreamName()) || StringUtils.isNotEmpty(listShardsRequest.getExclusiveStartShardId())) {
return false;
matches = false;
}
if (!listShardsRequest.getNextToken().equals(nextToken)) {
description.appendText("Found nextToken: ").appendValue(listShardsRequest.getNextToken())
.appendText(" when it was supposed to be null.");
return false;
matches = false;
}
} else {
return nextToken == null;
}
return true;
return matches;
}
@Override


@@ -163,6 +163,7 @@ public class KinesisLocalFileDataCreator {
HashKeyRange hashKeyRange = new HashKeyRange();
hashKeyRange.setStartingHashKey(hashKeyRangeStart.toString());
hashKeyRange.setEndingHashKey(hashKeyRangeEnd.toString());
shard.setHashKeyRange(hashKeyRange);
shards.add(shard);
}


@@ -93,6 +93,18 @@ public class LeaseCleanupManagerTest {
leaseCleanupManager.start();
}
/**
* Tests that subsequent calls to shutdown {@link LeaseCleanupManager} are handled gracefully.
*/
@Test
public final void testSubsequentShutdowns() {
leaseCleanupManager.start();
Assert.assertTrue(leaseCleanupManager.isRunning());
leaseCleanupManager.shutdown();
Assert.assertFalse(leaseCleanupManager.isRunning());
leaseCleanupManager.shutdown();
}
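The property this test pins down is that `shutdown()` is idempotent: calling it on an already-stopped manager must not throw. A common way to get that property is to guard the teardown with an atomic flag, sketched below with hypothetical names (the real `LeaseCleanupManager` also manages an executor, which this omits):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: an idempotent start/shutdown lifecycle.
public class IdempotentLifecycleSketch {
    private final AtomicBoolean running = new AtomicBoolean(false);

    public void start() {
        running.set(true);
    }

    public void shutdown() {
        // compareAndSet ensures teardown runs at most once,
        // even if shutdown() is called repeatedly or concurrently.
        if (running.compareAndSet(true, false)) {
            // release resources exactly once here
        }
    }

    public boolean isRunning() {
        return running.get();
    }

    public static void main(String[] args) {
        IdempotentLifecycleSketch manager = new IdempotentLifecycleSketch();
        manager.start();
        System.out.println(manager.isRunning()); // true
        manager.shutdown();
        manager.shutdown(); // safe no-op
        System.out.println(manager.isRunning()); // false
    }
}
```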
/**
* Tests that when both child shard leases are present, we are able to delete the parent shard for the completed
* shard case.