'Version 1.0.0 of the Amazon Kinesis Client Library'
parent 12646f1f3b
commit ce9054cb1b
86 changed files with 10086 additions and 3 deletions
2 .gitignore vendored Normal file
@@ -0,0 +1,2 @@
target/
AwsCredentials.properties
40 LICENSE.txt Normal file
@@ -0,0 +1,40 @@

Amazon Software License

This Amazon Software License (“License”) governs your use, reproduction, and distribution of the accompanying software as specified below.

1. Definitions

“Licensor” means any person or entity that distributes its Work.

“Software” means the original work of authorship made available under this License.

“Work” means the Software and any additions to or derivative works of the Software that are made available under this License.

The terms “reproduce,” “reproduction,” “derivative works,” and “distribution” have the meaning as provided under U.S. copyright law; provided, however, that for the purposes of this License, derivative works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work.

Works, including the Software, are “made available” under this License by including in or with the Work either (a) a copyright notice referencing the applicability of this License to the Work, or (b) a copy of this License.

2. License Grants

2.1 Copyright Grant. Subject to the terms and conditions of this License, each Licensor grants to you a perpetual, worldwide, non-exclusive, royalty-free, copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense and distribute its Work and any resulting derivative works in any form.

2.2 Patent Grant. Subject to the terms and conditions of this License, each Licensor grants to you a perpetual, worldwide, non-exclusive, royalty-free patent license to make, have made, use, sell, offer for sale, import, and otherwise transfer its Work, in whole or in part. The foregoing license applies only to the patent claims licensable by Licensor that would be infringed by Licensor’s Work (or portion thereof) individually and excluding any combinations with any other materials or technology.

3. Limitations

3.1 Redistribution. You may reproduce or distribute the Work only if (a) you do so under this License, (b) you include a complete copy of this License with your distribution, and (c) you retain without modification any copyright, patent, trademark, or attribution notices that are present in the Work.

3.2 Derivative Works. You may specify that additional or different terms apply to the use, reproduction, and distribution of your derivative works of the Work (“Your Terms”) only if (a) Your Terms provide that the use limitation in Section 3.3 applies to your derivative works, and (b) you identify the specific derivative works that are subject to Your Terms. Notwithstanding Your Terms, this License (including the redistribution requirements in Section 3.1) will continue to apply to the Work itself.

3.3 Use Limitation. The Work and any derivative works thereof only may be used or intended for use with the web services, computing platforms or applications provided by Amazon.com, Inc. or its affiliates, including Amazon Web Services, Inc.

3.4 Patent Claims. If you bring or threaten to bring a patent claim against any Licensor (including any claim, cross-claim or counterclaim in a lawsuit) to enforce any patents that you allege are infringed by any Work, then your rights under this License from such Licensor (including the grants in Sections 2.1 and 2.2) will terminate immediately.

3.5 Trademarks. This License does not grant any rights to use any Licensor’s or its affiliates’ names, logos, or trademarks, except as necessary to reproduce the notices described in this License.

3.6 Termination. If you violate any term of this License, then your rights under this License (including the grants in Sections 2.1 and 2.2) will terminate immediately.

4. Disclaimer of Warranty.

THE WORK IS PROVIDED “AS IS” WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR NON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER THIS LICENSE. SOME STATES’ CONSUMER LAWS DO NOT ALLOW EXCLUSION OF AN IMPLIED WARRANTY, SO THIS DISCLAIMER MAY NOT APPLY TO YOU.

5. Limitation of Liability.

EXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE SHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF OR RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK (INCLUDING BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION, LOST PROFITS OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER COMMERCIAL DAMAGES OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
33 META-INF/MANIFEST.MF Normal file
@@ -0,0 +1,33 @@
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Amazon Kinesis Client Library for Java
Bundle-SymbolicName: com.amazonaws.kinesisclientlibrary;singleton:=true
Bundle-Version: 1.0.0
Bundle-Vendor: Amazon Technologies, Inc
Bundle-RequiredExecutionEnvironment: JavaSE-1.7
Require-Bundle: org.apache.commons.codec;bundle-version="1.3.0",
 org.apache.commons.logging;bundle-version="1.1.1";visibility:=reexport,
 com.fasterxml.jackson.core.jackson-databind;bundle-version="2.1.1",
 com.fasterxml.jackson.core.jackson-core;bundle-version="2.1.1",
 com.fasterxml.jackson.core.jackson-annotations;bundle-version="2.1.1",
 org.apache.httpcomponents.httpcore;bundle-version="4.2.0",
 org.apache.httpcomponents.httpclient;bundle-version="4.2.0",
 com.amazonaws.sdk;bundle-version="1.6.9"
Export-Package: com.amazonaws.services.kinesis,
 com.amazonaws.services.kinesis.clientlibrary,
 com.amazonaws.services.kinesis.clientlibrary.exceptions,
 com.amazonaws.services.kinesis.clientlibrary.exceptions.internal,
 com.amazonaws.services.kinesis.clientlibrary.interfaces,
 com.amazonaws.services.kinesis.clientlibrary.types,
 com.amazonaws.services.kinesis.clientlibrary.proxies,
 com.amazonaws.services.kinesis.clientlibrary.lib,
 com.amazonaws.services.kinesis.clientlibrary.lib.checkpoint,
 com.amazonaws.services.kinesis.clientlibrary.lib.worker,
 com.amazonaws.services.kinesis.leases,
 com.amazonaws.services.kinesis.leases.exceptions,
 com.amazonaws.services.kinesis.leases.impl,
 com.amazonaws.services.kinesis.leases.interfaces,
 com.amazonaws.services.kinesis.leases.util,
 com.amazonaws.services.kinesis.metrics,
 com.amazonaws.services.kinesis.metrics.impl,
 com.amazonaws.services.kinesis.metrics.interfaces
3 NOTICE.txt Normal file
@@ -0,0 +1,3 @@
AmazonKinesisClientLibrary
Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.

35 README.md
@@ -1,4 +1,33 @@
-amazon-kinesis-client
-=====================
+# Amazon Kinesis Client Library for Java
+
+The **Amazon Kinesis Client Library for Java** enables Java developers to easily consume and process data from [Amazon Kinesis][kinesis].
+
+* [Kinesis Product Page][kinesis]
+* [Forum][kinesis-forum]
+* [Issues][kinesis-client-library-issues]
+
+## Features
+
+* Provides an easy-to-use programming model for processing data using Amazon Kinesis
+* Helps with scale-out and fault-tolerant processing
+
+## Getting Started
+
+1. **Sign up for AWS** — Before you begin, you need an AWS account. For more information about creating an AWS account and retrieving your AWS credentials, see [AWS Account and Credentials][docs-signup] in the AWS SDK for Java Developer Guide.
+1. **Sign up for Amazon Kinesis** — Go to the Amazon Kinesis console to sign up for the service and create an Amazon Kinesis stream. For more information, see [Create an Amazon Kinesis Stream][kinesis-guide-create] in the Amazon Kinesis Developer Guide.
+1. **Minimum requirements** — To use the Amazon Kinesis Client Library, you'll need **Java 1.7+**. For more information about Amazon Kinesis Client Library requirements, see [Before You Begin][kinesis-guide-begin] in the Amazon Kinesis Developer Guide.
+1. **Using the Amazon Kinesis Client Library** — The best way to get familiar with the Amazon Kinesis Client Library is to read [Developing Record Consumer Applications][kinesis-guide-applications] in the Amazon Kinesis Developer Guide.
+
+## Building from Source
+
+After you've downloaded the code from GitHub, you can build it using Maven. To disable GPG signing in the build, use this command: `mvn clean install -Dgpg.skip=true`
+
+[kinesis]: http://aws.amazon.com/kinesis
+[kinesis-forum]: http://developer.amazonwebservices.com/connect/forum.jspa?forumID=169
+[kinesis-client-library-issues]: https://github.com/awslabs/amazon-kinesis-client/issues
+[docs-signup]: http://docs.aws.amazon.com/AWSSdkDocsJava/latest/DeveloperGuide/java-dg-setup.html
+[kinesis-guide]: http://docs.aws.amazon.com/kinesis/latest/dev/introduction.html
+[kinesis-guide-begin]: http://docs.aws.amazon.com/kinesis/latest/dev/before-you-begin.html
+[kinesis-guide-create]: http://docs.aws.amazon.com/kinesis/latest/dev/step-one-create-stream.html
+[kinesis-guide-applications]: http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-record-processor-app.html
 
-Client library for Amazon Kinesis
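Editorial note: as a rough illustration of what the "Getting Started" steps lead to, the sketch below constructs the KinesisClientLibConfiguration class added later in this commit, using its four-argument constructor (application name, stream name, credentials provider, worker id). The application and stream names are placeholders, and DefaultAWSCredentialsProviderChain is assumed to be available from the aws-java-sdk dependency declared in pom.xml; this is a minimal sketch, not the library's official sample.

import java.net.InetAddress;
import java.util.UUID;

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration;

public class SampleConfigFactory {

    // Hypothetical names; replace with your own application and stream.
    private static final String APPLICATION_NAME = "SampleKinesisApplication";
    private static final String STREAM_NAME = "SampleStream";

    public static KinesisClientLibConfiguration create() throws Exception {
        // The worker id just needs to distinguish workers/processes of the same application.
        String workerId = InetAddress.getLocalHost().getCanonicalHostName() + ":" + UUID.randomUUID();
        return new KinesisClientLibConfiguration(APPLICATION_NAME,
                STREAM_NAME,
                new DefaultAWSCredentialsProviderChain(),
                workerId);
    }
}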
10 build.properties Normal file
@@ -0,0 +1,10 @@
source.. = src/main/java,\
           src/main/resources
output.. = bin/

bin.includes = LICENSE.txt,\
               NOTICE.txt,\
               META-INF/,\
               .

jre.compilation.profile = JavaSE-1.7
120 pom.xml Normal file
@@ -0,0 +1,120 @@
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.amazonaws</groupId>
    <artifactId>amazon-kinesis-client</artifactId>
    <packaging>jar</packaging>
    <name>Amazon Kinesis Client Library for Java</name>
    <version>1.0.0</version>
    <description>
    </description>
    <url>https://aws.amazon.com/kinesis</url>

    <scm>
        <url>https://github.com/awslabs/amazon-kinesis-client.git</url>
    </scm>

    <licenses>
        <license>
            <name>Amazon Software License</name>
            <url>https://aws.amazon.com/asl</url>
            <distribution>repo</distribution>
        </license>
    </licenses>

    <properties>
        <aws-java-sdk.version>1.6.9.1</aws-java-sdk.version>
        <jackson.version>2.1.1</jackson.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk</artifactId>
            <version>${aws-java-sdk.version}</version>
        </dependency>
        <dependency>
            <groupId>commons-logging</groupId>
            <artifactId>commons-logging</artifactId>
            <version>1.1.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpclient</artifactId>
            <version>4.2</version>
        </dependency>
        <dependency>
            <groupId>commons-codec</groupId>
            <artifactId>commons-codec</artifactId>
            <version>1.3</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-core</artifactId>
            <version>${jackson.version}</version>
            <type>jar</type>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
            <version>${jackson.version}</version>
            <type>jar</type>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-annotations</artifactId>
            <version>${jackson.version}</version>
            <type>jar</type>
            <scope>compile</scope>
        </dependency>
    </dependencies>

    <developers>
        <developer>
            <id>amazonwebservices</id>
            <organization>Amazon Web Services</organization>
            <organizationUrl>https://aws.amazon.com</organizationUrl>
            <roles>
                <role>developer</role>
            </roles>
        </developer>
    </developers>

    <build>
        <pluginManagement>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <configuration>
                        <source>1.7</source>
                        <target>1.7</target>
                        <encoding>UTF-8</encoding>
                    </configuration>
                </plugin>
            </plugins>
        </pluginManagement>

        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-gpg-plugin</artifactId>
                <executions>
                    <execution>
                        <id>sign-artifacts</id>
                        <phase>verify</phase>
                        <goals>
                            <goal>sign</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
40 src/main/java/com/amazonaws/services/kinesis/clientlibrary/exceptions/InvalidStateException.java Normal file
@@ -0,0 +1,40 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.clientlibrary.exceptions;

/**
 * This is thrown when the Amazon Kinesis Client Library encounters issues with its internal state (e.g. DynamoDB table
 * is not found).
 */
public class InvalidStateException extends KinesisClientLibNonRetryableException {

    private static final long serialVersionUID = 1L;

    /**
     * @param message provides more details about the cause and potential ways to debug/address.
     */
    public InvalidStateException(String message) {
        super(message);
    }

    /**
     * @param message provides more details about the cause and potential ways to debug/address.
     * @param e Cause of the exception
     */
    public InvalidStateException(String message, Exception e) {
        super(message, e);
    }

}
41 src/main/java/com/amazonaws/services/kinesis/clientlibrary/exceptions/KinesisClientLibDependencyException.java Normal file
@@ -0,0 +1,41 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.clientlibrary.exceptions;

/**
 * This is thrown when the Amazon Kinesis Client Library encounters issues talking to its dependencies
 * (e.g. fetching data from Kinesis, DynamoDB table reads/writes, emitting metrics to CloudWatch).
 *
 */
public class KinesisClientLibDependencyException extends KinesisClientLibRetryableException {

    private static final long serialVersionUID = 1L;

    /**
     * @param message provides more details about the cause and potential ways to debug/address.
     */
    public KinesisClientLibDependencyException(String message) {
        super(message);
    }

    /**
     * @param message provides more details about the cause and potential ways to debug/address.
     * @param e Cause of the exception
     */
    public KinesisClientLibDependencyException(String message, Exception e) {
        super(message, e);
    }

}
45 src/main/java/com/amazonaws/services/kinesis/clientlibrary/exceptions/KinesisClientLibException.java Normal file
@@ -0,0 +1,45 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.clientlibrary.exceptions;

/**
 * Abstract class for exceptions of the Amazon Kinesis Client Library.
 * This exception has two subclasses:
 * 1. KinesisClientLibNonRetryableException
 * 2. KinesisClientLibRetryableException.
 */
public abstract class KinesisClientLibException extends Exception {
    private static final long serialVersionUID = 1L;

    /**
     * Constructor.
     *
     * @param message Message with details of the exception.
     */
    public KinesisClientLibException(String message) {
        super(message);
    }

    /**
     * Constructor.
     *
     * @param message Message with details of the exception.
     * @param cause Cause.
     */
    public KinesisClientLibException(String message, Throwable cause) {
        super(message, cause);
    }

}
43 src/main/java/com/amazonaws/services/kinesis/clientlibrary/exceptions/KinesisClientLibNonRetryableException.java Normal file
@@ -0,0 +1,43 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.clientlibrary.exceptions;

/**
 * Non-retryable exceptions. Simply retrying the same request/operation is not expected to succeed.
 *
 */
public abstract class KinesisClientLibNonRetryableException extends KinesisClientLibException {

    private static final long serialVersionUID = 1L;

    /**
     * Constructor.
     *
     * @param message Message.
     */
    public KinesisClientLibNonRetryableException(String message) {
        super(message);
    }

    /**
     * Constructor.
     *
     * @param message Message.
     * @param e Cause.
     */
    public KinesisClientLibNonRetryableException(String message, Exception e) {
        super(message, e);
    }
}
41 src/main/java/com/amazonaws/services/kinesis/clientlibrary/exceptions/KinesisClientLibRetryableException.java Normal file
@@ -0,0 +1,41 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.clientlibrary.exceptions;

/**
 * Retryable exceptions (e.g. transient errors). The request/operation is expected to succeed upon (back off and) retry.
 */
public abstract class KinesisClientLibRetryableException extends RuntimeException {
    private static final long serialVersionUID = 1L;

    /**
     * Constructor.
     *
     * @param message Message with details about the exception.
     */
    public KinesisClientLibRetryableException(String message) {
        super(message);
    }

    /**
     * Constructor.
     *
     * @param message Message with details about the exception.
     * @param e Cause.
     */
    public KinesisClientLibRetryableException(String message, Exception e) {
        super(message, e);
    }
}
39 src/main/java/com/amazonaws/services/kinesis/clientlibrary/exceptions/ShutdownException.java Normal file
@@ -0,0 +1,39 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.clientlibrary.exceptions;

/**
 * Thrown when the RecordProcessor instance has been shut down (e.g. when it subsequently attempts a checkpoint).
 */
public class ShutdownException extends KinesisClientLibNonRetryableException {

    private static final long serialVersionUID = 1L;

    /**
     * @param message provides more details about the cause and potential ways to debug/address.
     */
    public ShutdownException(String message) {
        super(message);
    }

    /**
     * @param message provides more details about the cause and potential ways to debug/address.
     * @param e Cause of the exception
     */
    public ShutdownException(String message, Exception e) {
        super(message, e);
    }

}
39 src/main/java/com/amazonaws/services/kinesis/clientlibrary/exceptions/ThrottlingException.java Normal file
@@ -0,0 +1,39 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.clientlibrary.exceptions;

/**
 * Thrown when requests are throttled by a service (e.g. DynamoDB when storing a checkpoint).
 */
public class ThrottlingException extends KinesisClientLibRetryableException {

    private static final long serialVersionUID = 1L;

    /**
     * @param message Message about what was throttled and any guidance we can provide.
     */
    public ThrottlingException(String message) {
        super(message);
    }

    /**
     * @param message provides more details about the cause and potential ways to debug/address.
     * @param e Underlying cause of the exception.
     */
    public ThrottlingException(String message, Exception e) {
        super(message, e);
    }

}
48 src/main/java/com/amazonaws/services/kinesis/clientlibrary/exceptions/internal/BlockedOnParentShardException.java Normal file
@@ -0,0 +1,48 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */

package com.amazonaws.services.kinesis.clientlibrary.exceptions.internal;

import com.amazonaws.services.kinesis.clientlibrary.exceptions.KinesisClientLibRetryableException;

/**
 * Used internally in the Amazon Kinesis Client Library. Indicates that we cannot start processing data for a shard
 * because the data from the parent shard has not been completely processed (yet).
 */
public class BlockedOnParentShardException extends KinesisClientLibRetryableException {

    private static final long serialVersionUID = 1L;

    /**
     * Constructor.
     *
     * @param message Error message.
     */
    public BlockedOnParentShardException(String message) {
        super(message);
    }

    /**
     * Constructor.
     *
     * @param message Error message.
     * @param e Cause of the exception.
     */
    public BlockedOnParentShardException(String message, Exception e) {
        super(message, e);
    }

}
44 src/main/java/com/amazonaws/services/kinesis/clientlibrary/exceptions/internal/KinesisClientLibIOException.java Normal file
@@ -0,0 +1,44 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.clientlibrary.exceptions.internal;

import com.amazonaws.services.kinesis.clientlibrary.exceptions.KinesisClientLibRetryableException;

/**
 * Thrown when we encounter issues when reading/writing information (e.g. shard information from Kinesis may not be
 * current/complete).
 */
public class KinesisClientLibIOException extends KinesisClientLibRetryableException {
    private static final long serialVersionUID = 1L;

    /**
     * Constructor.
     *
     * @param message Error message.
     */
    public KinesisClientLibIOException(String message) {
        super(message);
    }

    /**
     * Constructor.
     *
     * @param message Error message.
     * @param e Cause.
     */
    public KinesisClientLibIOException(String message, Exception e) {
        super(message, e);
    }
}
48 src/main/java/com/amazonaws/services/kinesis/clientlibrary/interfaces/ICheckpoint.java Normal file
@@ -0,0 +1,48 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.clientlibrary.interfaces;

import com.amazonaws.services.kinesis.clientlibrary.exceptions.KinesisClientLibException;

/**
 * Interface for checkpoint trackers.
 */
public interface ICheckpoint {

    /**
     * Record a checkpoint for a shard (e.g. sequence number of last record processed by application).
     * Upon failover, record processing is resumed from this point.
     *
     * @param shardId Checkpoint is specified for this shard.
     * @param checkpointValue Value of the checkpoint (e.g. Kinesis sequence number)
     * @param concurrencyToken Used with conditional writes to prevent stale updates
     *        (e.g. if there was a fail over to a different record processor, we don't want to
     *        overwrite its checkpoint)
     * @throws KinesisClientLibException Thrown if we were unable to save the checkpoint
     */
    void setCheckpoint(String shardId, String checkpointValue, String concurrencyToken)
        throws KinesisClientLibException;

    /**
     * Get the current checkpoint stored for the specified shard. Useful for checking that the parent shard
     * has been completely processed before we start processing the child shard.
     *
     * @param shardId Current checkpoint for this shard is fetched
     * @return Current checkpoint for this shard, null if there is no record for this shard.
     * @throws KinesisClientLibException Thrown if we are unable to fetch the checkpoint
     */
    String getCheckpoint(String shardId) throws KinesisClientLibException;

}
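Editorial note: the library's own ICheckpoint implementations (e.g. the DynamoDB-backed tracker) are not part of this excerpt, so the following is a hypothetical in-memory sketch of the interface above, useful mainly for unit tests; the class name and the decision to ignore the concurrency token are assumptions, not library behavior.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import com.amazonaws.services.kinesis.clientlibrary.exceptions.KinesisClientLibException;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.ICheckpoint;

/**
 * Hypothetical in-memory checkpoint tracker implementing the ICheckpoint interface shown above.
 */
public class InMemoryCheckpoint implements ICheckpoint {

    private final Map<String, String> checkpoints = new ConcurrentHashMap<String, String>();

    @Override
    public void setCheckpoint(String shardId, String checkpointValue, String concurrencyToken)
        throws KinesisClientLibException {
        // Ignores the concurrency token; a real implementation would use it for conditional writes.
        checkpoints.put(shardId, checkpointValue);
    }

    @Override
    public String getCheckpoint(String shardId) throws KinesisClientLibException {
        // Returns null when no checkpoint has been recorded for the shard, as the javadoc specifies.
        return checkpoints.get(shardId);
    }
}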
62 src/main/java/com/amazonaws/services/kinesis/clientlibrary/interfaces/IRecordProcessor.java Normal file
@@ -0,0 +1,62 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.clientlibrary.interfaces;

import java.util.List;

import com.amazonaws.services.kinesis.model.Record;
import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownReason;

/**
 * The Amazon Kinesis Client Library will instantiate record processors to process data records fetched from Amazon
 * Kinesis.
 */
public interface IRecordProcessor {

    /**
     * Invoked by the Amazon Kinesis Client Library before data records are delivered to the RecordProcessor instance
     * (via processRecords).
     *
     * @param shardId The record processor will be responsible for processing records of this shard.
     */
    void initialize(String shardId);

    /**
     * Process data records. The Amazon Kinesis Client Library will invoke this method to deliver data records to the
     * application.
     * Upon fail over, the new instance will get records with sequence number > checkpoint position
     * for each partition key.
     *
     * @param records Data records to be processed
     * @param checkpointer RecordProcessor should use this instance to checkpoint their progress.
     */
    void processRecords(List<Record> records, IRecordProcessorCheckpointer checkpointer);

    /**
     * Invoked by the Amazon Kinesis Client Library to indicate it will no longer send data records to this
     * RecordProcessor instance. The reason parameter indicates:
     * a/ ShutdownReason.TERMINATE - The shard has been closed and there will not be any more records to process. The
     * record processor should checkpoint (after doing any housekeeping) to acknowledge that it has successfully
     * completed processing all records in this shard.
     * b/ ShutdownReason.ZOMBIE: A fail over has occurred and a different record processor is (or will be) responsible
     * for processing records.
     *
     * @param checkpointer RecordProcessor should use this instance to checkpoint.
     * @param reason Reason for the shutdown (ShutdownReason.TERMINATE indicates the shard is closed and there are no
     *        more records to process. ShutdownReason.ZOMBIE indicates a fail over has occurred).
     */
    void shutdown(IRecordProcessorCheckpointer checkpointer, ShutdownReason reason);

}
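Editorial note: the interface above is the core extension point of the library, so a minimal sketch of an implementation may help. Everything it uses (initialize, processRecords, shutdown, IRecordProcessorCheckpointer.checkpoint(), ShutdownReason.TERMINATE, Record) appears elsewhere in this commit or in the AWS SDK Kinesis model; the class name and the checkpoint-every-batch policy are illustrative assumptions, and production code would checkpoint less often, as the IRecordProcessorCheckpointer javadoc in this commit advises.

import java.util.List;

import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessor;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorCheckpointer;
import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownReason;
import com.amazonaws.services.kinesis.model.Record;

public class LoggingRecordProcessor implements IRecordProcessor {

    private String shardId;

    @Override
    public void initialize(String shardId) {
        // Remember which shard this processor instance owns.
        this.shardId = shardId;
    }

    @Override
    public void processRecords(List<Record> records, IRecordProcessorCheckpointer checkpointer) {
        for (Record record : records) {
            // Application-specific processing would go here; this sketch only reports record sizes.
            System.out.println("Shard " + shardId + ": record with "
                    + record.getData().remaining() + " bytes of data");
        }
        try {
            // Checkpoint after the batch for simplicity; periodic checkpointing is preferable in practice.
            checkpointer.checkpoint();
        } catch (Exception e) {
            System.err.println("Checkpoint failed: " + e);
        }
    }

    @Override
    public void shutdown(IRecordProcessorCheckpointer checkpointer, ShutdownReason reason) {
        if (reason == ShutdownReason.TERMINATE) {
            try {
                // The shard is closed; checkpoint to acknowledge that all of its records were processed.
                checkpointer.checkpoint();
            } catch (Exception e) {
                System.err.println("Checkpoint at shutdown failed: " + e);
            }
        }
    }
}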
50 src/main/java/com/amazonaws/services/kinesis/clientlibrary/interfaces/IRecordProcessorCheckpointer.java Normal file
@@ -0,0 +1,50 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.clientlibrary.interfaces;

import com.amazonaws.services.kinesis.clientlibrary.exceptions.InvalidStateException;
import com.amazonaws.services.kinesis.clientlibrary.exceptions.KinesisClientLibDependencyException;
import com.amazonaws.services.kinesis.clientlibrary.exceptions.ShutdownException;
import com.amazonaws.services.kinesis.clientlibrary.exceptions.ThrottlingException;

/**
 * Used by RecordProcessors when they want to checkpoint their progress.
 * The Amazon Kinesis Client Library will pass an object implementing this interface to RecordProcessors, so they can
 * checkpoint their progress.
 */
public interface IRecordProcessorCheckpointer {

    /**
     * This method will checkpoint the progress at the last data record that was delivered to the record processor.
     * Upon fail over (after a successful checkpoint() call), the new/replacement RecordProcessor instance
     * will receive data records whose sequenceNumber > checkpoint position (for each partition key).
     * In steady state, applications should checkpoint periodically (e.g. once every 5 minutes).
     * Calling this API too frequently can slow down the application (because it puts pressure on the underlying
     * checkpoint storage layer).
     *
     * @throws ThrottlingException Can't store checkpoint. Can be caused by checkpointing too frequently.
     *         Consider increasing the throughput/capacity of the checkpoint store or reducing checkpoint frequency.
     * @throws ShutdownException The record processor instance has been shutdown. Another instance may have
     *         started processing some of these records already.
     *         The application should abort processing via this RecordProcessor instance.
     * @throws InvalidStateException Can't store checkpoint.
     *         Unable to store the checkpoint in the DynamoDB table (e.g. table doesn't exist).
     * @throws KinesisClientLibDependencyException Encountered an issue when storing the checkpoint. The application can
     *         backoff and retry.
     */
    void checkpoint()
        throws KinesisClientLibDependencyException, InvalidStateException, ThrottlingException, ShutdownException;

}
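Editorial note: the checkpoint() javadoc above distinguishes retryable failures (ThrottlingException, KinesisClientLibDependencyException) from non-retryable ones (ShutdownException, InvalidStateException). The helper below is a minimal sketch of that back-off-and-retry advice; the class name, attempt count, and fixed back-off are assumptions, not part of the library.

import com.amazonaws.services.kinesis.clientlibrary.exceptions.InvalidStateException;
import com.amazonaws.services.kinesis.clientlibrary.exceptions.KinesisClientLibDependencyException;
import com.amazonaws.services.kinesis.clientlibrary.exceptions.ShutdownException;
import com.amazonaws.services.kinesis.clientlibrary.exceptions.ThrottlingException;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorCheckpointer;

public final class CheckpointHelper {

    private CheckpointHelper() {
    }

    /**
     * Attempts a checkpoint, backing off and retrying on the retryable failures documented above,
     * and giving up immediately on the non-retryable ones.
     */
    public static void checkpointWithRetries(IRecordProcessorCheckpointer checkpointer,
            int maxAttempts,
            long backoffMillis) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                checkpointer.checkpoint();
                return;
            } catch (ShutdownException e) {
                // Another instance may already own the shard; stop processing via this instance.
                return;
            } catch (InvalidStateException e) {
                // E.g. the DynamoDB table backing checkpoints does not exist; retrying won't help.
                return;
            } catch (ThrottlingException | KinesisClientLibDependencyException e) {
                // Transient failure; back off before the next attempt.
                try {
                    Thread.sleep(backoffMillis);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
    }
}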
30 src/main/java/com/amazonaws/services/kinesis/clientlibrary/interfaces/IRecordProcessorFactory.java Normal file
@@ -0,0 +1,30 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.clientlibrary.interfaces;

/**
 * The Amazon Kinesis Client Library will use this to instantiate a record processor per shard.
 * Clients may choose to create separate instantiations, or re-use instantiations.
 */
public interface IRecordProcessorFactory {

    /**
     * Returns a record processor to be used for processing data records for an (assigned) shard.
     *
     * @return Returns a processor object.
     */
    IRecordProcessor createProcessor();

}
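Editorial note: a factory implementation is typically a one-liner; the sketch below pairs with the hypothetical LoggingRecordProcessor sketched earlier and is likewise illustrative, not part of this commit.

import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessor;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorFactory;

public class LoggingRecordProcessorFactory implements IRecordProcessorFactory {

    @Override
    public IRecordProcessor createProcessor() {
        // A fresh processor per shard keeps per-shard state (e.g. the shard id) isolated.
        return new LoggingRecordProcessor();
    }
}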
35 src/main/java/com/amazonaws/services/kinesis/clientlibrary/lib/checkpoint/SentinelCheckpoint.java Normal file
@@ -0,0 +1,35 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.clientlibrary.lib.checkpoint;

/**
 * Enumeration of the sentinel values of checkpoints.
 * Used during initialization of ShardConsumers to determine the starting point
 * in the shard and to flag that a shard has been completely processed.
 */
public enum SentinelCheckpoint {
    /**
     * Start from the first available record in the shard.
     */
    TRIM_HORIZON,
    /**
     * Start from the latest record in the shard.
     */
    LATEST,
    /**
     * We've completely processed all records in this shard.
     */
    SHARD_END;
}
108 src/main/java/com/amazonaws/services/kinesis/clientlibrary/lib/worker/BlockOnParentShardTask.java Normal file
@@ -0,0 +1,108 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import com.amazonaws.services.kinesis.clientlibrary.exceptions.internal.BlockedOnParentShardException;
import com.amazonaws.services.kinesis.clientlibrary.lib.checkpoint.SentinelCheckpoint;
import com.amazonaws.services.kinesis.leases.impl.KinesisClientLease;
import com.amazonaws.services.kinesis.leases.interfaces.ILeaseManager;

/**
 * Task to block until processing of all data records in the parent shard(s) is completed.
 * We check if we have checkpoint(s) for the parent shard(s).
 * If a checkpoint for a parent shard is found, we poll and wait until the checkpoint value is SHARD_END
 * (application has checkpointed after processing all records in the shard).
 * If we don't find a checkpoint for the parent shard(s), we assume they have been trimmed and directly
 * proceed with processing data from the shard.
 */
class BlockOnParentShardTask implements ITask {

    private static final Log LOG = LogFactory.getLog(BlockOnParentShardTask.class);
    private final ShardInfo shardInfo;
    private final ILeaseManager<KinesisClientLease> leaseManager;

    private final TaskType taskType = TaskType.BLOCK_ON_PARENT_SHARDS;
    // Sleep for this duration if the parent shards have not completed processing, or we encounter an exception.
    private final long parentShardPollIntervalMillis;

    /**
     * @param shardInfo Information about the shard we are working on
     * @param leaseManager Used to fetch the lease and checkpoint info for parent shards
     * @param parentShardPollIntervalMillis Sleep time if the parent shard has not completed processing
     */
    BlockOnParentShardTask(ShardInfo shardInfo,
            ILeaseManager<KinesisClientLease> leaseManager,
            long parentShardPollIntervalMillis) {
        this.shardInfo = shardInfo;
        this.leaseManager = leaseManager;
        this.parentShardPollIntervalMillis = parentShardPollIntervalMillis;
    }

    /* (non-Javadoc)
     * @see com.amazonaws.services.kinesis.clientlibrary.lib.worker.ITask#call()
     */
    @Override
    public TaskResult call() {
        Exception exception = null;

        try {
            boolean blockedOnParentShard = false;
            for (String shardId : shardInfo.getParentShardIds()) {
                KinesisClientLease lease = leaseManager.getLease(shardId);
                if (lease != null) {
                    String checkpoint = lease.getCheckpoint();
                    if ((checkpoint == null) || (!checkpoint.equals(SentinelCheckpoint.SHARD_END.toString()))) {
                        LOG.debug("Shard " + shardId + " is not yet done. Its current checkpoint is " + checkpoint);
                        blockedOnParentShard = true;
                        exception = new BlockedOnParentShardException("Parent shard not yet done");
                        break;
                    } else {
                        LOG.debug("Shard " + shardId + " has been completely processed.");
                    }
                } else {
                    LOG.info("No lease found for shard " + shardId + ". Not blocking on completion of this shard.");
                }
            }

            if (!blockedOnParentShard) {
                LOG.info("No need to block on parents " + shardInfo.getParentShardIds() + " of shard "
                        + shardInfo.getShardId());
                return new TaskResult(null);
            }
        } catch (Exception e) {
            LOG.error("Caught exception when checking for parent shard checkpoint", e);
            exception = e;
        }
        try {
            Thread.sleep(parentShardPollIntervalMillis);
        } catch (InterruptedException e) {
            LOG.error("Sleep interrupted when waiting on parent shard(s) of " + shardInfo.getShardId(), e);
        }

        return new TaskResult(exception);
    }

    /* (non-Javadoc)
     * @see com.amazonaws.services.kinesis.clientlibrary.lib.worker.ITask#getTaskType()
     */
    @Override
    public TaskType getTaskType() {
        return taskType;
    }

}
38 src/main/java/com/amazonaws/services/kinesis/clientlibrary/lib/worker/ITask.java Normal file
@@ -0,0 +1,38 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;

import java.util.concurrent.Callable;

/**
 * Interface for shard processing tasks.
 * A task may execute an application callback (e.g. initialize, process, shutdown).
 */
interface ITask extends Callable<TaskResult> {

    /**
     * Perform task logic.
     * E.g. perform set up (e.g. fetch records) and invoke a callback (e.g. processRecords() API).
     *
     * @return TaskResult (captures any exceptions encountered during execution of the task)
     */
    TaskResult call();

    /**
     * @return TaskType
     */
    TaskType getTaskType();

}
32 src/main/java/com/amazonaws/services/kinesis/clientlibrary/lib/worker/InitialPositionInStream.java Normal file
@@ -0,0 +1,32 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;

/**
 * Used to specify the position in the stream where a new application should start from.
 * This is used during initial application bootstrap (when a checkpoint doesn't exist for a shard or its parents).
 */
public enum InitialPositionInStream {

    /**
     * Start after the most recent data record (fetch new data).
     */
    LATEST,

    /**
     * Start from the oldest available data record.
     */
    TRIM_HORIZON;
}
105 src/main/java/com/amazonaws/services/kinesis/clientlibrary/lib/worker/InitializeTask.java Normal file
@@ -0,0 +1,105 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import com.amazonaws.services.kinesis.clientlibrary.interfaces.ICheckpoint;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessor;

/**
 * Task for initializing shard position and invoking the RecordProcessor initialize() API.
 */
class InitializeTask implements ITask {

    private static final Log LOG = LogFactory.getLog(InitializeTask.class);
    private final ShardInfo shardInfo;
    private final IRecordProcessor recordProcessor;
    private final KinesisDataFetcher dataFetcher;
    private final TaskType taskType = TaskType.INITIALIZE;
    private final ICheckpoint checkpoint;
    private final RecordProcessorCheckpointer recordProcessorCheckpointer;
    // Back off for this interval if we encounter a problem (exception)
    private final long backoffTimeMillis;

    /**
     * Constructor.
     */
    InitializeTask(ShardInfo shardInfo,
            IRecordProcessor recordProcessor,
            ICheckpoint checkpoint,
            RecordProcessorCheckpointer recordProcessorCheckpointer,
            KinesisDataFetcher dataFetcher,
            long backoffTimeMillis) {
        this.shardInfo = shardInfo;
        this.recordProcessor = recordProcessor;
        this.checkpoint = checkpoint;
        this.recordProcessorCheckpointer = recordProcessorCheckpointer;
        this.dataFetcher = dataFetcher;
        this.backoffTimeMillis = backoffTimeMillis;
    }

    /* Initializes the data fetcher (position in shard) and invokes the RecordProcessor initialize() API.
     * (non-Javadoc)
     * @see com.amazonaws.services.kinesis.clientlibrary.lib.worker.ITask#call()
     */
    @Override
    public TaskResult call() {
        boolean applicationException = false;
        Exception exception = null;

        try {
            LOG.debug("Initializing ShardId " + shardInfo.getShardId());
            String initialCheckpoint = checkpoint.getCheckpoint(shardInfo.getShardId());
            dataFetcher.initialize(initialCheckpoint);
            recordProcessorCheckpointer.setSequenceNumber(initialCheckpoint);
            try {
                LOG.debug("Calling the record processor initialize().");
                recordProcessor.initialize(shardInfo.getShardId());
                LOG.debug("Record processor initialize() completed.");
            } catch (Exception e) {
                applicationException = true;
                throw e;
            }

            return new TaskResult(null);
        } catch (Exception e) {
            if (applicationException) {
                LOG.error("Application initialize() threw exception: ", e);
            } else {
                LOG.error("Caught exception: ", e);
            }
            exception = e;
            // backoff if we encounter an exception.
            try {
                Thread.sleep(this.backoffTimeMillis);
            } catch (InterruptedException ie) {
                LOG.debug("Interrupted sleep", ie);
            }
        }

        return new TaskResult(exception);
    }

    /* (non-Javadoc)
     * @see com.amazonaws.services.kinesis.clientlibrary.lib.worker.ITask#getTaskType()
     */
    @Override
    public TaskType getTaskType() {
        return taskType;
    }

}
@ -0,0 +1,598 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
|
||||
|
||||
import com.amazonaws.ClientConfiguration;
|
||||
import com.amazonaws.auth.AWSCredentialsProvider;
|
||||
|
||||
/**
|
||||
* Configuration for the Amazon Kinesis Client Library.
|
||||
*/
|
||||
public class KinesisClientLibConfiguration {
|
||||
|
||||
private static final long EPSILON_MS = 25;
|
||||
|
||||
/**
|
||||
* Fail over time in milliseconds. A worker which does not renew it's lease within this time interval
|
||||
* will be regarded as having problems and it's shards will be assigned to other workers.
|
||||
* For applications that have a large number of shards, this msy be set to a higher number to reduce
|
||||
* the number of DynamoDB IOPS required for tracking leases.
|
||||
*/
|
||||
public static final long DEFAULT_FAILOVER_TIME_MILLIS = 10000L;
|
||||
|
||||
/**
|
||||
* Max records to fetch from Kinesis in a single GetRecords call.
|
||||
*/
|
||||
public static final int DEFAULT_MAX_RECORDS = 10000;
|
||||
|
||||
/**
|
||||
* Idle time between record reads in milliseconds.
|
||||
*/
|
||||
public static final long DEFAULT_IDLETIME_BETWEEN_READS_MILLIS = 1000L;
|
||||
|
||||
/**
|
||||
* Don't call processRecords() on the record processor for empty record lists.
|
||||
*/
|
||||
public static final boolean DEFAULT_DONT_CALL_PROCESS_RECORDS_FOR_EMPTY_RECORD_LIST = false;
|
||||
|
||||
/**
|
||||
* Interval in milliseconds between polling to check for parent shard completion.
|
||||
* Polling frequently will take up more DynamoDB IOPS (when there are leases for shards waiting on
|
||||
* completion of parent shards).
|
||||
*/
|
||||
public static final long DEFAULT_PARENT_SHARD_POLL_INTERVAL_MILLIS = 10000L;
|
||||
|
||||
/**
|
||||
* Shard sync interval in milliseconds - e.g. wait for this long between shard sync tasks.
|
||||
*/
|
||||
public static final long DEFAULT_SHARD_SYNC_INTERVAL_MILLIS = 60000L;
|
||||
|
||||
/**
|
||||
* Cleanup leases upon shards completion (don't wait until they expire in Kinesis).
|
||||
* Keeping leases takes some tracking/resources (e.g. they need to be renewed, assigned), so by default we try
|
||||
* to delete the ones we don't need any longer.
|
||||
*/
|
||||
public static final boolean DEFAULT_CLEANUP_LEASES_UPON_SHARDS_COMPLETION = true;
|
||||
|
||||
/**
|
||||
* Backoff time in milliseconds for Amazon Kinesis Client Library tasks (in the event of failures).
|
||||
*/
|
||||
public static final long DEFAULT_TASK_BACKOFF_TIME_MILLIS = 500L;
|
||||
|
||||
/**
|
||||
* Buffer metrics for at most this long before publishing to CloudWatch.
|
||||
*/
|
||||
public static final long DEFAULT_METRICS_BUFFER_TIME_MILLIS = 10000L;
|
||||
|
||||
/**
|
||||
* Buffer at most this many metrics before publishing to CloudWatch.
|
||||
*/
|
||||
public static final int DEFAULT_METRICS_MAX_QUEUE_SIZE = 10000;
|
||||
|
||||
/**
|
||||
* User agent set when Amazon Kinesis Client Library makes AWS requests.
|
||||
*/
|
||||
public static final String KINESIS_CLIENT_LIB_USER_AGENT = "amazon-kinesis-client-library-java-1.0.0";
|
||||
|
||||
private String applicationName;
|
||||
private String streamName;
|
||||
private String kinesisEndpoint;
|
||||
private InitialPositionInStream initialPositionInStream;
|
||||
private AWSCredentialsProvider kinesisCredentialsProvider;
|
||||
private AWSCredentialsProvider dynamoDBCredentialsProvider;
|
||||
private AWSCredentialsProvider cloudWatchCredentialsProvider;
|
||||
private long failoverTimeMillis;
|
||||
private String workerIdentifier;
|
||||
private long shardSyncIntervalMillis;
|
||||
private int maxRecords;
|
||||
private long idleTimeBetweenReadsInMillis;
|
||||
// Enables applications flush/checkpoint (if they have some data "in progress", but don't get new data for while)
|
||||
private boolean callProcessRecordsEvenForEmptyRecordList;
|
||||
private long parentShardPollIntervalMillis;
|
||||
private boolean cleanupLeasesUponShardCompletion;
|
||||
private ClientConfiguration kinesisClientConfig;
|
||||
private ClientConfiguration dynamoDBClientConfig;
|
||||
private ClientConfiguration cloudWatchClientConfig;
|
||||
private long taskBackoffTimeMillis;
|
||||
private long metricsBufferTimeMillis;
|
||||
private int metricsMaxQueueSize;
|
||||
|
||||
/**
|
||||
* Constructor.
|
||||
* @param applicationName Name of the Amazon Kinesis application.
|
||||
* By default the application name is included in the user agent string used to make AWS requests. This
|
||||
* can assist with troubleshooting (e.g. distinguish requests made by separate applications).
|
||||
* @param streamName Name of the Kinesis stream
|
||||
* @param credentialsProvider Provides credentials used to sign AWS requests
|
||||
* @param workerId Used to distinguish different workers/processes of a Kinesis application
|
||||
*/
|
||||
public KinesisClientLibConfiguration(String applicationName,
|
||||
String streamName,
|
||||
AWSCredentialsProvider credentialsProvider,
|
||||
String workerId) {
|
||||
this(applicationName, streamName, credentialsProvider, credentialsProvider, credentialsProvider, workerId);
|
||||
}
|
||||
|
||||
/**
|
||||
* Constructor.
|
||||
* @param applicationName Name of the Amazon Kinesis application
|
||||
* By default the application name is included in the user agent string used to make AWS requests. This
|
||||
* can assist with troubleshooting (e.g. distinguish requests made by separate applications).
|
||||
* @param streamName Name of the Kinesis stream
|
||||
* @param kinesisCredentialsProvider Provides credentials used to access Kinesis
|
||||
* @param dynamoDBCredentialsProvider Provides credentials used to access DynamoDB
|
||||
* @param cloudWatchCredentialsProvider Provides credentials used to access CloudWatch
|
||||
* @param workerId Used to distinguish different workers/processes of a Kinesis application
|
||||
*/
|
||||
public KinesisClientLibConfiguration(String applicationName,
|
||||
String streamName,
|
||||
AWSCredentialsProvider kinesisCredentialsProvider,
|
||||
AWSCredentialsProvider dynamoDBCredentialsProvider,
|
||||
AWSCredentialsProvider cloudWatchCredentialsProvider,
|
||||
String workerId) {
|
||||
this(applicationName, streamName, null, InitialPositionInStream.LATEST, kinesisCredentialsProvider,
|
||||
dynamoDBCredentialsProvider, cloudWatchCredentialsProvider, DEFAULT_FAILOVER_TIME_MILLIS, workerId,
|
||||
DEFAULT_MAX_RECORDS, DEFAULT_IDLETIME_BETWEEN_READS_MILLIS,
|
||||
DEFAULT_DONT_CALL_PROCESS_RECORDS_FOR_EMPTY_RECORD_LIST, DEFAULT_PARENT_SHARD_POLL_INTERVAL_MILLIS,
|
||||
DEFAULT_SHARD_SYNC_INTERVAL_MILLIS, DEFAULT_CLEANUP_LEASES_UPON_SHARDS_COMPLETION,
|
||||
new ClientConfiguration(), new ClientConfiguration(), new ClientConfiguration(),
|
||||
DEFAULT_TASK_BACKOFF_TIME_MILLIS, DEFAULT_METRICS_BUFFER_TIME_MILLIS,
|
||||
DEFAULT_METRICS_MAX_QUEUE_SIZE);
|
||||
}
|
||||
|
||||
/**
|
||||
* @param applicationName Name of the Kinesis application
|
||||
* By default the application name is included in the user agent string used to make AWS requests. This
|
||||
* can assist with troubleshooting (e.g. distinguish requests made by separate applications).
|
||||
* @param streamName Name of the Kinesis stream
|
||||
* @param kinesisEndpoint Kinesis endpoint
|
||||
* @param initialPositionInStream One of LATEST or TRIM_HORIZON. The Amazon Kinesis Client Library will start fetching
|
||||
* records from that location in the stream when an application starts up for the first time and there
|
||||
* are no checkpoints. If there are checkpoints, then we start from the checkpoint position.
|
||||
* @param kinesisCredentialsProvider Provides credentials used to access Kinesis
|
||||
* @param dynamoDBCredentialsProvider Provides credentials used to access DynamoDB
|
||||
* @param cloudWatchCredentialsProvider Provides credentials used to access CloudWatch
|
||||
* @param failoverTimeMillis Lease duration (leases not renewed within this period will be claimed by others)
|
||||
* @param workerId Used to distinguish different workers/processes of a Kinesis application
|
||||
* @param maxRecords Max records to read per Kinesis getRecords() call
|
||||
* @param idleTimeBetweenReadsInMillis Idle time between calls to fetch data from Kinesis
|
||||
* @param callProcessRecordsEvenForEmptyRecordList Call the IRecordProcessor::processRecords() API even if
|
||||
* GetRecords returned an empty record list.
|
||||
* @param parentShardPollIntervalMillis Wait for this long between polls to check if parent shards are done
|
||||
* @param shardSyncIntervalMillis Time between tasks to sync leases and Kinesis shards
|
||||
* @param cleanupTerminatedShardsBeforeExpiry Clean up shards we've finished processing (don't wait for expiration
|
||||
* in Kinesis)
|
||||
* @param kinesisClientConfig Client Configuration used by Kinesis client
|
||||
* @param dynamoDBClientConfig Client Configuration used by DynamoDB client
|
||||
* @param cloudWatchClientConfig Client Configuration used by CloudWatch client
|
||||
* @param taskBackoffTimeMillis Backoff period when tasks encounter an exception
|
||||
* @param metricsBufferTimeMillis Metrics are buffered for at most this long before publishing to CloudWatch
|
||||
* @param metricsMaxQueueSize Max number of metrics to buffer before publishing to CloudWatch
|
||||
*
|
||||
*/
|
||||
// CHECKSTYLE:IGNORE HiddenFieldCheck FOR NEXT 25 LINES
|
||||
// CHECKSTYLE:IGNORE ParameterNumber FOR NEXT 25 LINES
|
||||
public KinesisClientLibConfiguration(String applicationName,
|
||||
String streamName,
|
||||
String kinesisEndpoint,
|
||||
InitialPositionInStream initialPositionInStream,
|
||||
AWSCredentialsProvider kinesisCredentialsProvider,
|
||||
AWSCredentialsProvider dynamoDBCredentialsProvider,
|
||||
AWSCredentialsProvider cloudWatchCredentialsProvider,
|
||||
long failoverTimeMillis,
|
||||
String workerId,
|
||||
int maxRecords,
|
||||
long idleTimeBetweenReadsInMillis,
|
||||
boolean callProcessRecordsEvenForEmptyRecordList,
|
||||
long parentShardPollIntervalMillis,
|
||||
long shardSyncIntervalMillis,
|
||||
boolean cleanupTerminatedShardsBeforeExpiry,
|
||||
ClientConfiguration kinesisClientConfig,
|
||||
ClientConfiguration dynamoDBClientConfig,
|
||||
ClientConfiguration cloudWatchClientConfig,
|
||||
long taskBackoffTimeMillis,
|
||||
long metricsBufferTimeMillis,
|
||||
int metricsMaxQueueSize) {
|
||||
// Check that the following values are greater than zero
|
||||
checkIsValuePositive("FailoverTimeMillis", failoverTimeMillis);
|
||||
checkIsValuePositive("IdleTimeBetweenReadsInMillis", idleTimeBetweenReadsInMillis);
|
||||
checkIsValuePositive("ParentShardPollIntervalMillis", parentShardPollIntervalMillis);
|
||||
checkIsValuePositive("ShardSyncIntervalMillis", shardSyncIntervalMillis);
|
||||
checkIsValuePositive("MaxRecords", (long) maxRecords);
|
||||
checkIsValuePositive("TaskBackoffTimeMillis", taskBackoffTimeMillis);
|
||||
checkIsValuePositive("MetricsBufferTimeMills", metricsBufferTimeMillis);
|
||||
checkIsValuePositive("MetricsMaxQueueSize", (long) metricsMaxQueueSize);
|
||||
this.applicationName = applicationName;
|
||||
this.streamName = streamName;
|
||||
this.kinesisEndpoint = kinesisEndpoint;
|
||||
this.initialPositionInStream = initialPositionInStream;
|
||||
this.kinesisCredentialsProvider = kinesisCredentialsProvider;
|
||||
this.dynamoDBCredentialsProvider = dynamoDBCredentialsProvider;
|
||||
this.cloudWatchCredentialsProvider = cloudWatchCredentialsProvider;
|
||||
this.failoverTimeMillis = failoverTimeMillis;
|
||||
this.maxRecords = maxRecords;
|
||||
this.idleTimeBetweenReadsInMillis = idleTimeBetweenReadsInMillis;
|
||||
this.callProcessRecordsEvenForEmptyRecordList = callProcessRecordsEvenForEmptyRecordList;
|
||||
this.parentShardPollIntervalMillis = parentShardPollIntervalMillis;
|
||||
this.shardSyncIntervalMillis = shardSyncIntervalMillis;
|
||||
this.cleanupLeasesUponShardCompletion = cleanupTerminatedShardsBeforeExpiry;
|
||||
this.workerIdentifier = workerId;
|
||||
this.kinesisClientConfig =
|
||||
checkAndAppendKinesisClientLibUserAgent(kinesisClientConfig);
|
||||
this.dynamoDBClientConfig =
|
||||
checkAndAppendKinesisClientLibUserAgent(dynamoDBClientConfig);
|
||||
this.cloudWatchClientConfig =
|
||||
checkAndAppendKinesisClientLibUserAgent(cloudWatchClientConfig);
|
||||
this.taskBackoffTimeMillis = taskBackoffTimeMillis;
|
||||
this.metricsBufferTimeMillis = metricsBufferTimeMillis;
|
||||
this.metricsMaxQueueSize = metricsMaxQueueSize;
|
||||
}
|
||||
|
||||
// Check if value is positive, otherwise throw an exception
|
||||
private void checkIsValuePositive(String key, long value) {
|
||||
if (value <= 0) {
|
||||
throw new IllegalArgumentException("Value of " + key
|
||||
+ " should be positive, but current value is " + value);
|
||||
}
|
||||
}
|
||||
|
||||
// Check if user agent in configuration is the default agent.
|
||||
// If so, replace it with application name plus KINESIS_CLIENT_LIB_USER_AGENT.
|
||||
// If not, append KINESIS_CLIENT_LIB_USER_AGENT to the end.
|
||||
private ClientConfiguration checkAndAppendKinesisClientLibUserAgent(ClientConfiguration config) {
|
||||
String existingUserAgent = config.getUserAgent();
|
||||
if (existingUserAgent.equals(ClientConfiguration.DEFAULT_USER_AGENT)) {
|
||||
existingUserAgent = applicationName;
|
||||
}
|
||||
if (!existingUserAgent.contains(KINESIS_CLIENT_LIB_USER_AGENT)) {
|
||||
existingUserAgent += "," + KINESIS_CLIENT_LIB_USER_AGENT;
|
||||
}
|
||||
config.setUserAgent(existingUserAgent);
|
||||
return config;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return Name of the application
|
||||
*/
|
||||
public String getApplicationName() {
|
||||
return applicationName;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return Time within which a worker should renew a lease (else it is assumed dead)
|
||||
*/
|
||||
public long getFailoverTimeMillis() {
|
||||
return failoverTimeMillis;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return Credentials provider used to access Kinesis
|
||||
*/
|
||||
public AWSCredentialsProvider getKinesisCredentialsProvider() {
|
||||
return kinesisCredentialsProvider;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return Credentials provider used to access DynamoDB
|
||||
*/
|
||||
public AWSCredentialsProvider getDynamoDBCredentialsProvider() {
|
||||
return dynamoDBCredentialsProvider;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return Credentials provider used to access CloudWatch
|
||||
*/
|
||||
public AWSCredentialsProvider getCloudWatchCredentialsProvider() {
|
||||
return cloudWatchCredentialsProvider;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return workerIdentifier
|
||||
*/
|
||||
public String getWorkerIdentifier() {
|
||||
return workerIdentifier;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return the shardSyncIntervalMillis
|
||||
*/
|
||||
public long getShardSyncIntervalMillis() {
|
||||
return shardSyncIntervalMillis;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return Max records to fetch per Kinesis getRecords call
|
||||
*/
|
||||
public int getMaxRecords() {
|
||||
return maxRecords;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return Idle time between calls to fetch data from Kinesis
|
||||
*/
|
||||
public long getIdleTimeBetweenReadsInMillis() {
|
||||
return idleTimeBetweenReadsInMillis;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return true if processRecords() should be called even for empty record lists
|
||||
*/
|
||||
boolean shouldCallProcessRecordsEvenForEmptyRecordList() {
|
||||
return callProcessRecordsEvenForEmptyRecordList;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return Epsilon milliseconds (used for lease timing margins)
|
||||
*/
|
||||
public long getEpsilonMillis() {
|
||||
return EPSILON_MS;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return stream name
|
||||
*/
|
||||
public String getStreamName() {
|
||||
return streamName;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return Kinesis endpoint
|
||||
*/
|
||||
public String getKinesisEndpoint() {
|
||||
return kinesisEndpoint;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return the initialPositionInStream
|
||||
*/
|
||||
public InitialPositionInStream getInitialPositionInStream() {
|
||||
return initialPositionInStream;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return interval between polls for parent shard completion
|
||||
*/
|
||||
public long getParentShardPollIntervalMillis() {
|
||||
return parentShardPollIntervalMillis;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return Kinesis client configuration
|
||||
*/
|
||||
public ClientConfiguration getKinesisClientConfiguration() {
|
||||
return kinesisClientConfig;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return DynamoDB client configuration
|
||||
*/
|
||||
public ClientConfiguration getDynamoDBClientConfiguration() {
|
||||
return dynamoDBClientConfig;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return CloudWatch client configuration
|
||||
*/
|
||||
public ClientConfiguration getCloudWatchClientConfiguration() {
|
||||
return cloudWatchClientConfig;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return backoff time when tasks encounter exceptions
|
||||
*/
|
||||
public long getTaskBackoffTimeMillis() {
|
||||
return taskBackoffTimeMillis;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return Metrics are buffered for at most this long before publishing to CloudWatch
|
||||
*/
|
||||
public long getMetricsBufferTimeMillis() {
|
||||
return metricsBufferTimeMillis;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return Max number of metrics to buffer before publishing to CloudWatch
|
||||
*/
|
||||
public int getMetricsMaxQueueSize() {
|
||||
return metricsMaxQueueSize;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return true if we should clean up leases of shards after processing is complete (don't wait for expiration)
|
||||
*/
|
||||
public boolean shouldCleanupLeasesUponShardCompletion() {
|
||||
return cleanupLeasesUponShardCompletion;
|
||||
}
|
||||
|
||||
// CHECKSTYLE:IGNORE HiddenFieldCheck FOR NEXT 180 LINES
|
||||
/**
|
||||
* @param kinesisEndpoint Kinesis endpoint
|
||||
* @return KinesisClientLibConfiguration
|
||||
*/
|
||||
public KinesisClientLibConfiguration withKinesisEndpoint(String kinesisEndpoint) {
|
||||
this.kinesisEndpoint = kinesisEndpoint;
|
||||
return this;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param initialPositionInStream One of LATEST or TRIM_HORIZON. The Amazon Kinesis Client Library will start
|
||||
* fetching records from this position when the application starts up if there are no checkpoints. If there
|
||||
* are checkpoints, we will process records from the checkpoint position.
|
||||
* @return KinesisClientLibConfiguration
|
||||
*/
|
||||
public KinesisClientLibConfiguration withInitialPositionInStream(InitialPositionInStream initialPositionInStream) {
|
||||
this.initialPositionInStream = initialPositionInStream;
|
||||
return this;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param failoverTimeMillis Lease duration (leases not renewed within this period will be claimed by others)
|
||||
* @return KinesisClientLibConfiguration
|
||||
*/
|
||||
public KinesisClientLibConfiguration withFailoverTimeMillis(long failoverTimeMillis) {
|
||||
checkIsValuePositive("FailoverTimeMillis", failoverTimeMillis);
|
||||
this.failoverTimeMillis = failoverTimeMillis;
|
||||
return this;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param shardSyncIntervalMillis Time between tasks to sync leases and Kinesis shards
|
||||
* @return KinesisClientLibConfiguration
|
||||
*/
|
||||
public KinesisClientLibConfiguration withShardSyncIntervalMillis(long shardSyncIntervalMillis) {
|
||||
checkIsValuePositive("ShardSyncIntervalMillis", shardSyncIntervalMillis);
|
||||
this.shardSyncIntervalMillis = shardSyncIntervalMillis;
|
||||
return this;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param maxRecords Max records to fetch in a Kinesis getRecords() call
|
||||
* @return KinesisClientLibConfiguration
|
||||
*/
|
||||
public KinesisClientLibConfiguration withMaxRecords(int maxRecords) {
|
||||
checkIsValuePositive("MaxRecords", (long) maxRecords);
|
||||
this.maxRecords = maxRecords;
|
||||
return this;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param idleTimeBetweenReadsInMillis Idle time between calls to fetch data from Kinesis
|
||||
* @return KinesisClientLibConfiguration
|
||||
*/
|
||||
public KinesisClientLibConfiguration withIdleTimeBetweenReadsInMillis(long idleTimeBetweenReadsInMillis) {
|
||||
checkIsValuePositive("IdleTimeBetweenReadsInMillis", idleTimeBetweenReadsInMillis);
|
||||
this.idleTimeBetweenReadsInMillis = idleTimeBetweenReadsInMillis;
|
||||
return this;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param callProcessRecordsEvenForEmptyRecordList Call the RecordProcessor::processRecords() API even if
|
||||
* GetRecords returned an empty record list
|
||||
* @return KinesisClientLibConfiguration
|
||||
*/
|
||||
public KinesisClientLibConfiguration withCallProcessRecordsEvenForEmptyRecordList(
|
||||
boolean callProcessRecordsEvenForEmptyRecordList) {
|
||||
this.callProcessRecordsEvenForEmptyRecordList = callProcessRecordsEvenForEmptyRecordList;
|
||||
return this;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param parentShardPollIntervalMillis Wait for this long between polls to check if parent shards are done
|
||||
* @return KinesisClientLibConfiguration
|
||||
*/
|
||||
public KinesisClientLibConfiguration withParentShardPollIntervalMillis(long parentShardPollIntervalMillis) {
|
||||
checkIsValuePositive("ParentShardPollIntervalMillis", parentShardPollIntervalMillis);
|
||||
this.parentShardPollIntervalMillis = parentShardPollIntervalMillis;
|
||||
return this;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param cleanupLeasesUponShardCompletion Clean up shards we've finished processing (don't wait for expiration
|
||||
* in Kinesis)
|
||||
* @return KinesisClientLibConfiguration
|
||||
*/
|
||||
public KinesisClientLibConfiguration withCleanupLeasesUponShardCompletion(
|
||||
boolean cleanupLeasesUponShardCompletion) {
|
||||
this.cleanupLeasesUponShardCompletion = cleanupLeasesUponShardCompletion;
|
||||
return this;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param clientConfig Common client configuration used by Kinesis/DynamoDB/CloudWatch client
|
||||
* @return KinesisClientLibConfiguration
|
||||
*/
|
||||
public KinesisClientLibConfiguration withCommonClientConfig(ClientConfiguration clientConfig) {
|
||||
ClientConfiguration tempClientConfig =
|
||||
checkAndAppendKinesisClientLibUserAgent(clientConfig);
|
||||
this.kinesisClientConfig = tempClientConfig;
|
||||
this.dynamoDBClientConfig = tempClientConfig;
|
||||
this.cloudWatchClientConfig = tempClientConfig;
|
||||
return this;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param kinesisClientConfig Client configuration used by Kinesis client
|
||||
* @return KinesisClientLibConfiguration
|
||||
*/
|
||||
public KinesisClientLibConfiguration withKinesisClientConfig(ClientConfiguration kinesisClientConfig) {
|
||||
this.kinesisClientConfig =
|
||||
checkAndAppendKinesisClientLibUserAgent(kinesisClientConfig);
|
||||
return this;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param dynamoDBClientConfig Client configuration used by DynamoDB client
|
||||
* @return KinesisClientLibConfiguration
|
||||
*/
|
||||
public KinesisClientLibConfiguration withDynamoDBClientConfig(ClientConfiguration dynamoDBClientConfig) {
|
||||
this.dynamoDBClientConfig =
|
||||
checkAndAppendKinesisClientLibUserAgent(dynamoDBClientConfig);
|
||||
return this;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param cloudWatchClientConfig Client configuration used by CloudWatch client
|
||||
* @return KinesisClientLibConfiguration
|
||||
*/
|
||||
public KinesisClientLibConfiguration withCloudWatchClientConfig(ClientConfiguration cloudWatchClientConfig) {
|
||||
this.cloudWatchClientConfig =
|
||||
checkAndAppendKinesisClientLibUserAgent(cloudWatchClientConfig);
|
||||
return this;
|
||||
}
|
||||
|
||||
/**
|
||||
* Override the default user agent (application name).
|
||||
* @param userAgent User agent to use in AWS requests
|
||||
* @return KinesisClientLibConfiguration
|
||||
*/
|
||||
public KinesisClientLibConfiguration withUserAgent(String userAgent) {
|
||||
String customizedUserAgent = userAgent + "," + KINESIS_CLIENT_LIB_USER_AGENT;
|
||||
this.kinesisClientConfig.setUserAgent(customizedUserAgent);
|
||||
this.dynamoDBClientConfig.setUserAgent(customizedUserAgent);
|
||||
this.cloudWatchClientConfig.setUserAgent(customizedUserAgent);
|
||||
return this;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param taskBackoffTimeMillis Backoff period when tasks encounter an exception
|
||||
* @return KinesisClientLibConfiguration
|
||||
*/
|
||||
public KinesisClientLibConfiguration withTaskBackoffTimeMillis(long taskBackoffTimeMillis) {
|
||||
checkIsValuePositive("TaskBackoffTimeMillis", taskBackoffTimeMillis);
|
||||
this.taskBackoffTimeMillis = taskBackoffTimeMillis;
|
||||
return this;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param metricsBufferTimeMillis Metrics are buffered for at most this long before publishing to CloudWatch
|
||||
* @return KinesisClientLibConfiguration
|
||||
*/
|
||||
public KinesisClientLibConfiguration withMetricsBufferTimeMillis(long metricsBufferTimeMillis) {
|
||||
checkIsValuePositive("MetricsBufferTimeMillis", metricsBufferTimeMillis);
|
||||
this.metricsBufferTimeMillis = metricsBufferTimeMillis;
|
||||
return this;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param metricsMaxQueueSize Max number of metrics to buffer before publishing to CloudWatch
|
||||
* @return KinesisClientLibConfiguration
|
||||
*/
|
||||
public KinesisClientLibConfiguration withMetricsMaxQueueSize(int metricsMaxQueueSize) {
|
||||
checkIsValuePositive("MetricsMaxQueueSize", (long) metricsMaxQueueSize);
|
||||
this.metricsMaxQueueSize = metricsMaxQueueSize;
|
||||
return this;
|
||||
}
|
||||
}
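For orientation, here is a minimal usage sketch of the configuration class above. The application name, stream name, worker id, and tuning values are illustrative assumptions, the credentials come from the AWS SDK's DefaultAWSCredentialsProviderChain, and only the fluent withX setters defined in this file are used:

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration;

public class ConfigurationSketch {
    public static void main(String[] args) {
        // One provider chain is reused for Kinesis, DynamoDB and CloudWatch via the 4-arg constructor.
        KinesisClientLibConfiguration config =
                new KinesisClientLibConfiguration("SampleApp",      // application name (illustrative)
                        "sample-stream",                            // Kinesis stream name (illustrative)
                        new DefaultAWSCredentialsProviderChain(),
                        "worker-1")                                 // worker id (illustrative)
                .withInitialPositionInStream(InitialPositionInStream.TRIM_HORIZON)
                .withMaxRecords(1000)
                .withIdleTimeBetweenReadsInMillis(1500L)
                .withFailoverTimeMillis(10000L)
                .withMetricsBufferTimeMillis(10000L);
        // The KCL user agent suffix is appended automatically by checkAndAppendKinesisClientLibUserAgent.
        System.out.println("User agent: " + config.getKinesisClientConfiguration().getUserAgent());
    }
}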
|
||||
|
|
@@ -0,0 +1,216 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
|
||||
|
||||
import java.util.Collection;
|
||||
import java.util.LinkedList;
|
||||
import java.util.List;
|
||||
import java.util.Set;
|
||||
import java.util.UUID;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
|
||||
import com.amazonaws.services.kinesis.clientlibrary.exceptions.KinesisClientLibDependencyException;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.exceptions.KinesisClientLibException;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.exceptions.ShutdownException;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.exceptions.ThrottlingException;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.exceptions.internal.KinesisClientLibIOException;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.interfaces.ICheckpoint;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.DependencyException;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.InvalidStateException;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.ProvisionedThroughputException;
|
||||
import com.amazonaws.services.kinesis.leases.impl.LeaseCoordinator;
|
||||
import com.amazonaws.services.kinesis.leases.impl.KinesisClientLease;
|
||||
import com.amazonaws.services.kinesis.leases.interfaces.ILeaseManager;
|
||||
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsFactory;
|
||||
|
||||
/**
|
||||
* This class is used to coordinate/manage leases owned by this worker process and to get/set checkpoints.
|
||||
*/
|
||||
class KinesisClientLibLeaseCoordinator extends LeaseCoordinator<KinesisClientLease> implements ICheckpoint {
|
||||
|
||||
private static final Log LOG = LogFactory.getLog(KinesisClientLibLeaseCoordinator.class);
|
||||
private final ILeaseManager<KinesisClientLease> leaseManager;
|
||||
private final long initialLeaseTableReadCapacity = 10L;
|
||||
private final long initialLeaseTableWriteCapacity = 10L;
|
||||
|
||||
/**
|
||||
* @param leaseManager Lease manager which provides CRUD lease operations.
|
||||
* @param workerIdentifier Used to identify this worker process
|
||||
* @param leaseDurationMillis Duration of a lease in milliseconds
|
||||
* @param epsilonMillis Delta for timing operations (e.g. checking lease expiry)
|
||||
*/
|
||||
public KinesisClientLibLeaseCoordinator(ILeaseManager<KinesisClientLease> leaseManager,
|
||||
String workerIdentifier,
|
||||
long leaseDurationMillis,
|
||||
long epsilonMillis) {
|
||||
super(leaseManager, workerIdentifier, leaseDurationMillis, epsilonMillis);
|
||||
this.leaseManager = leaseManager;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param leaseManager Lease manager which provides CRUD lease operations.
|
||||
* @param workerIdentifier Used to identify this worker process
|
||||
* @param leaseDurationMillis Duration of a lease in milliseconds
|
||||
* @param epsilonMillis Delta for timing operations (e.g. checking lease expiry)
|
||||
* @param metricsFactory Metrics factory used to emit metrics
|
||||
*/
|
||||
public KinesisClientLibLeaseCoordinator(ILeaseManager<KinesisClientLease> leaseManager,
|
||||
String workerIdentifier,
|
||||
long leaseDurationMillis,
|
||||
long epsilonMillis,
|
||||
IMetricsFactory metricsFactory) {
|
||||
super(leaseManager, workerIdentifier, leaseDurationMillis, epsilonMillis, metricsFactory);
|
||||
this.leaseManager = leaseManager;
|
||||
}
|
||||
|
||||
/**
|
||||
* Sets the checkpoint for a shard and updates ownerSwitchesSinceCheckpoint.
|
||||
*
|
||||
* @param shardId shardId to update the checkpoint for
|
||||
* @param checkpoint checkpoint value to set
|
||||
* @param concurrencyToken obtained by calling Lease.getConcurrencyToken for a currently held lease
|
||||
*
|
||||
* @return true if checkpoint update succeeded, false otherwise
|
||||
*
|
||||
* @throws InvalidStateException if lease table does not exist
|
||||
* @throws ProvisionedThroughputException if DynamoDB update fails due to lack of capacity
|
||||
* @throws DependencyException if DynamoDB update fails in an unexpected way
|
||||
*/
|
||||
boolean setCheckpoint(String shardId, String checkpoint, UUID concurrencyToken)
|
||||
throws DependencyException, InvalidStateException, ProvisionedThroughputException {
|
||||
KinesisClientLease lease = getCurrentlyHeldLease(shardId);
|
||||
if (lease == null) {
|
||||
LOG.info(String.format(
|
||||
"Worker %s could not update checkpoint for shard %s because it does not hold the lease",
|
||||
getWorkerIdentifier(),
|
||||
shardId));
|
||||
return false;
|
||||
}
|
||||
|
||||
lease.setCheckpoint(checkpoint);
|
||||
lease.setOwnerSwitchesSinceCheckpoint(0L);
|
||||
|
||||
return updateLease(lease, concurrencyToken);
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public void setCheckpoint(String shardId, String checkpointValue, String concurrencyToken)
|
||||
throws KinesisClientLibException {
|
||||
try {
|
||||
boolean wasSuccessful = setCheckpoint(shardId, checkpointValue, UUID.fromString(concurrencyToken));
|
||||
if (!wasSuccessful) {
|
||||
throw new ShutdownException("Can't update checkpoint - instance doesn't hold the lease for this shard");
|
||||
}
|
||||
} catch (ProvisionedThroughputException e) {
|
||||
throw new ThrottlingException("Got throttled while updating checkpoint.", e);
|
||||
} catch (InvalidStateException e) {
|
||||
String message = "Unable to save checkpoint for shardId " + shardId;
|
||||
LOG.error(message, e);
|
||||
throw new com.amazonaws.services.kinesis.clientlibrary.exceptions.InvalidStateException(message, e);
|
||||
} catch (DependencyException e) {
|
||||
throw new KinesisClientLibDependencyException("Unable to save checkpoint for shardId " + shardId, e);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public String getCheckpoint(String shardId) throws KinesisClientLibException {
|
||||
try {
|
||||
return leaseManager.getLease(shardId).getCheckpoint();
|
||||
} catch (DependencyException | InvalidStateException | ProvisionedThroughputException e) {
|
||||
String message = "Unable to fetch checkpoint for shardId " + shardId;
|
||||
LOG.error(message, e);
|
||||
throw new KinesisClientLibIOException(message, e);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* @return Current shard/lease assignments
|
||||
*/
|
||||
public List<ShardInfo> getCurrentAssignments() {
|
||||
List<ShardInfo> assignments = new LinkedList<ShardInfo>();
|
||||
Collection<KinesisClientLease> leases = getAssignments();
|
||||
if ((leases != null) && (!leases.isEmpty())) {
|
||||
for (KinesisClientLease lease : leases) {
|
||||
Set<String> parentShardIds = lease.getParentShardIds();
|
||||
ShardInfo assignment =
|
||||
new ShardInfo(
|
||||
lease.getLeaseKey(),
|
||||
lease.getConcurrencyToken().toString(),
|
||||
parentShardIds);
|
||||
assignments.add(assignment);
|
||||
}
|
||||
}
|
||||
return assignments;
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize the lease coordinator (create the lease table if needed).
|
||||
* @throws DependencyException
|
||||
* @throws ProvisionedThroughputException
|
||||
*/
|
||||
void initialize() throws ProvisionedThroughputException, DependencyException, IllegalStateException {
|
||||
final boolean newTableCreated =
|
||||
leaseManager.createLeaseTableIfNotExists(initialLeaseTableReadCapacity, initialLeaseTableWriteCapacity);
|
||||
if (newTableCreated) {
|
||||
LOG.info("Created new lease table for coordinator");
|
||||
}
|
||||
// Need to wait for table in active state.
|
||||
final long secondsBetweenPolls = 10L;
|
||||
final long timeoutSeconds = 600L;
|
||||
final boolean isTableActive = leaseManager.waitUntilLeaseTableExists(secondsBetweenPolls, timeoutSeconds);
|
||||
if (!isTableActive) {
|
||||
throw new DependencyException(new IllegalStateException("Creating table timeout"));
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Package access for testing.
|
||||
*
|
||||
* @throws DependencyException
|
||||
* @throws InvalidStateException
|
||||
*/
|
||||
void runLeaseTaker() throws DependencyException, InvalidStateException {
|
||||
super.runTaker();
|
||||
}
|
||||
|
||||
/**
|
||||
* Package access for testing.
|
||||
*
|
||||
* @throws DependencyException
|
||||
* @throws InvalidStateException
|
||||
*/
|
||||
void runLeaseRenewer() throws DependencyException, InvalidStateException {
|
||||
super.runRenewer();
|
||||
}
|
||||
|
||||
/**
|
||||
* Used to get information about leases for Kinesis shards (e.g. sync shards and leases, check on parent shard
|
||||
* completion).
|
||||
*
|
||||
* @return LeaseManager
|
||||
*/
|
||||
ILeaseManager<KinesisClientLease> getLeaseManager() {
|
||||
return leaseManager;
|
||||
}
|
||||
|
||||
}
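As an orientation aid, a hedged sketch of the coordinator's lifecycle follows: create/wait for the lease table, take leases, renew them, then inspect assignments. Since initialize(), runLeaseTaker(), and runLeaseRenewer() are package-private, this hypothetical helper is assumed to live in the same lib.worker package; the lease manager is assumed to be supplied by the caller, and the worker id and timing values are illustrative:

// Hypothetical helper, assumed to be in the same package as the coordinator.
static void bootstrapLeases(ILeaseManager<KinesisClientLease> leaseManager) throws Exception {
    KinesisClientLibLeaseCoordinator coordinator =
            new KinesisClientLibLeaseCoordinator(leaseManager, "worker-1", 10000L, 25L);
    coordinator.initialize();       // create the lease table if needed and wait until it is active
    coordinator.runLeaseTaker();    // take leases that are unowned or whose owners have failed over
    coordinator.runLeaseRenewer();  // renew the leases this worker currently holds
    for (ShardInfo shard : coordinator.getCurrentAssignments()) {
        // Each assignment carries the shard id, a concurrency token, and any parent shard ids.
        System.out.println("Holding lease for shard " + shard.getShardId());
    }
}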
|
||||
|
|
@@ -0,0 +1,163 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
|
||||
|
||||
import java.util.List;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
|
||||
import com.amazonaws.services.kinesis.model.GetRecordsResult;
|
||||
import com.amazonaws.services.kinesis.model.Record;
|
||||
import com.amazonaws.services.kinesis.model.ResourceNotFoundException;
|
||||
import com.amazonaws.services.kinesis.model.ShardIteratorType;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.lib.checkpoint.SentinelCheckpoint;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.proxies.IKinesisProxy;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.proxies.MetricsCollectingKinesisProxyDecorator;
|
||||
/**
|
||||
* Used to get data from Amazon Kinesis. Tracks iterator state internally.
|
||||
*/
|
||||
class KinesisDataFetcher {
|
||||
|
||||
private static final Log LOG = LogFactory.getLog(KinesisDataFetcher.class);
|
||||
|
||||
private String nextIterator;
|
||||
private IKinesisProxy kinesisProxy;
|
||||
private final String shardId;
|
||||
private boolean isShardEndReached;
|
||||
private boolean isInitialized;
|
||||
|
||||
/**
|
||||
*
|
||||
* @param kinesisProxy Kinesis proxy
|
||||
* @param shardInfo Shard information for the shard whose data records will be fetched
|
||||
*/
|
||||
public KinesisDataFetcher(IKinesisProxy kinesisProxy, ShardInfo shardInfo) {
|
||||
this.shardId = shardInfo.getShardId();
|
||||
this.kinesisProxy =
|
||||
new MetricsCollectingKinesisProxyDecorator("KinesisDataFetcher", kinesisProxy, this.shardId);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get records from the current position in the stream (up to maxRecords).
|
||||
*
|
||||
* @param maxRecords Max records to fetch
|
||||
* @return list of records of up to maxRecords size
|
||||
*/
|
||||
public List<Record> getRecords(int maxRecords) {
|
||||
if (!isInitialized) {
|
||||
throw new IllegalArgumentException("KinesisDataFetcher.getRecords called before initialization.");
|
||||
}
|
||||
|
||||
List<Record> records = null;
|
||||
GetRecordsResult response = null;
|
||||
if (nextIterator != null) {
|
||||
try {
|
||||
response = kinesisProxy.get(nextIterator, maxRecords);
|
||||
records = response.getRecords();
|
||||
nextIterator = response.getNextShardIterator();
|
||||
} catch (ResourceNotFoundException e) {
|
||||
LOG.info("Caught ResourceNotFoundException when fetching records for shard " + shardId);
|
||||
nextIterator = null;
|
||||
}
|
||||
if (nextIterator == null) {
|
||||
isShardEndReached = true;
|
||||
}
|
||||
} else {
|
||||
isShardEndReached = true;
|
||||
}
|
||||
|
||||
return records;
|
||||
}
|
||||
|
||||
/**
|
||||
* Initializes this KinesisDataFetcher's iterator based on the checkpoint.
|
||||
* @param initialCheckpoint Current checkpoint for this shard.
|
||||
*
|
||||
*/
|
||||
public void initialize(String initialCheckpoint) {
|
||||
|
||||
LOG.info("Initializing shard " + shardId + " with " + initialCheckpoint);
|
||||
advanceIteratorAfter(initialCheckpoint);
|
||||
isInitialized = true;
|
||||
}
|
||||
|
||||
/**
|
||||
* Advances this KinesisDataFetcher's internal iterator to be after the passed-in sequence number.
|
||||
*
|
||||
* @param sequenceNumber advance the iterator to the first record after this sequence number.
|
||||
*/
|
||||
private void advanceIteratorAfterSequenceNumber(String sequenceNumber) {
|
||||
nextIterator = getIterator(ShardIteratorType.AFTER_SEQUENCE_NUMBER.toString(), sequenceNumber);
|
||||
}
|
||||
|
||||
/**
|
||||
* Advances this KinesisDataFetcher's internal iterator to be after the passed-in sequence number, or positions it according to a sentinel checkpoint value (LATEST, TRIM_HORIZON, or SHARD_END).
|
||||
*
|
||||
* @param sequenceNumber advance the iterator to the first record after this sequence number.
|
||||
*/
|
||||
void advanceIteratorAfter(String sequenceNumber) {
|
||||
if (sequenceNumber == null) {
|
||||
throw new IllegalArgumentException("SequenceNumber should not be null: shardId " + shardId);
|
||||
} else if (sequenceNumber.equals(SentinelCheckpoint.LATEST.toString())) {
|
||||
nextIterator = getIterator(ShardIteratorType.LATEST.toString(), null);
|
||||
} else if (sequenceNumber.equals(SentinelCheckpoint.TRIM_HORIZON.toString())) {
|
||||
nextIterator = getIterator(ShardIteratorType.TRIM_HORIZON.toString(), null);
|
||||
} else if (sequenceNumber.equals(SentinelCheckpoint.SHARD_END.toString())) {
|
||||
nextIterator = null;
|
||||
} else {
|
||||
advanceIteratorAfterSequenceNumber(sequenceNumber);
|
||||
}
|
||||
if (nextIterator == null) {
|
||||
isShardEndReached = true;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* @param iteratorType Type of shard iterator to request (e.g. LATEST, TRIM_HORIZON, AFTER_SEQUENCE_NUMBER)
|
||||
* @param sequenceNumber Sequence number to position after (used with AFTER_SEQUENCE_NUMBER; may be null otherwise)
|
||||
*
|
||||
* @return iterator or null if we catch a ResourceNotFound exception
|
||||
*/
|
||||
private String getIterator(String iteratorType, String sequenceNumber) {
|
||||
String iterator = null;
|
||||
try {
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("Calling getIterator for " + shardId + ", iterator type " + iteratorType
|
||||
+ " and sequence number " + sequenceNumber);
|
||||
}
|
||||
iterator = kinesisProxy.getIterator(shardId, iteratorType, sequenceNumber);
|
||||
} catch (ResourceNotFoundException e) {
|
||||
LOG.info("Caught ResourceNotFoundException when getting an iterator for shard " + shardId, e);
|
||||
}
|
||||
return iterator;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return the shardEndReached
|
||||
*/
|
||||
protected boolean isShardEndReached() {
|
||||
return isShardEndReached;
|
||||
}
|
||||
|
||||
/** Note: This method has package level access for testing purposes.
|
||||
* @return nextIterator
|
||||
*/
|
||||
String getNextIterator() {
|
||||
return nextIterator;
|
||||
}
|
||||
|
||||
}
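A hedged sketch of how the fetcher above is driven: initialize from a checkpoint (a sentinel value here), then poll getRecords() until the shard end is reached. The class is package-private, so the snippet assumes it runs inside lib.worker; the proxy and shard info are assumed to be supplied by the caller, and the batch size and sleep interval are illustrative:

// Hypothetical in-package fetch loop.
static void drainShard(IKinesisProxy kinesisProxy, ShardInfo shardInfo) throws InterruptedException {
    KinesisDataFetcher fetcher = new KinesisDataFetcher(kinesisProxy, shardInfo);
    // Start from the oldest available record; a real worker passes the stored checkpoint instead.
    fetcher.initialize(SentinelCheckpoint.TRIM_HORIZON.toString());
    while (!fetcher.isShardEndReached()) {
        List<Record> records = fetcher.getRecords(500);   // up to 500 records per call (illustrative)
        if (records != null && !records.isEmpty()) {
            System.out.println("Fetched " + records.size() + " records from " + shardInfo.getShardId());
        }
        Thread.sleep(1000L);                              // crude idle time between reads (illustrative)
    }
}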
|
||||
|
|
@@ -0,0 +1,64 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
|
||||
|
||||
import com.amazonaws.services.kinesis.metrics.impl.MetricsHelper;
|
||||
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsFactory;
|
||||
|
||||
/**
|
||||
* Decorates an ITask and reports metrics about its timing and success/failure.
|
||||
*/
|
||||
class MetricsCollectingTaskDecorator implements ITask {
|
||||
|
||||
private final ITask other;
|
||||
private IMetricsFactory factory;
|
||||
|
||||
/**
|
||||
* Constructor.
|
||||
*
|
||||
* @param other task to report metrics on
|
||||
* @param factory IMetricsFactory to use
|
||||
*/
|
||||
public MetricsCollectingTaskDecorator(ITask other, IMetricsFactory factory) {
|
||||
this.other = other;
|
||||
this.factory = factory;
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public TaskResult call() {
|
||||
String taskName = other.getClass().getSimpleName();
|
||||
MetricsHelper.startScope(factory, taskName);
|
||||
|
||||
long startTimeMillis = System.currentTimeMillis();
|
||||
TaskResult result = other.call();
|
||||
|
||||
MetricsHelper.addSuccessAndLatency(null, startTimeMillis, result.getException() == null);
|
||||
MetricsHelper.endScope();
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public TaskType getTaskType() {
|
||||
return other.getTaskType();
|
||||
}
|
||||
|
||||
}
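The decorator is meant to wrap any ITask before it runs, so callers get per-task latency and success/failure metrics for free. A minimal hedged sketch, assuming the task and metrics factory already exist and that the helper sits in the same package as the package-private types above:

// Hypothetical in-package helper: run any ITask with timing and success/failure metrics recorded.
static TaskResult runWithMetrics(ITask task, IMetricsFactory metricsFactory) {
    ITask instrumented = new MetricsCollectingTaskDecorator(task, metricsFactory);
    // Opens a metrics scope named after the wrapped task, runs it, records latency/success, closes the scope.
    return instrumented.call();
}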
|
||||
|
|
@@ -0,0 +1,215 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
|
||||
|
||||
import java.math.BigInteger;
|
||||
import java.util.List;
|
||||
import java.util.ListIterator;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
|
||||
import com.amazonaws.services.kinesis.model.ExpiredIteratorException;
|
||||
import com.amazonaws.services.kinesis.model.Record;
|
||||
import com.amazonaws.services.cloudwatch.model.StandardUnit;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.exceptions.KinesisClientLibException;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessor;
|
||||
import com.amazonaws.services.kinesis.metrics.impl.MetricsHelper;
|
||||
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsScope;
|
||||
|
||||
/**
|
||||
* Task for fetching data records and invoking processRecords() on the record processor instance.
|
||||
*/
|
||||
class ProcessTask implements ITask {
|
||||
|
||||
private static final String EXPIRED_ITERATOR_METRIC = "ExpiredIterator";
|
||||
private static final String DATA_BYTES_PROCESSED_METRIC = "DataBytesProcessed";
|
||||
private static final String RECORDS_PROCESSED_METRIC = "RecordsProcessed";
|
||||
private static final Log LOG = LogFactory.getLog(ProcessTask.class);
|
||||
|
||||
private final ShardInfo shardInfo;
|
||||
private final IRecordProcessor recordProcessor;
|
||||
private final RecordProcessorCheckpointer recordProcessorCheckpointer;
|
||||
private final KinesisDataFetcher dataFetcher;
|
||||
private final TaskType taskType = TaskType.PROCESS;
|
||||
private final StreamConfig streamConfig;
|
||||
private final long backoffTimeMillis;
|
||||
|
||||
/**
|
||||
* @param shardInfo contains information about the shard
|
||||
* @param streamConfig Stream configuration
|
||||
* @param recordProcessor Record processor used to process the data records for the shard
|
||||
* @param recordProcessorCheckpointer Passed to the RecordProcessor so it can checkpoint
|
||||
* progress
|
||||
* @param dataFetcher Kinesis data fetcher (used to fetch records from Kinesis)
|
||||
* @param backoffTimeMillis backoff time when catching exceptions
|
||||
*/
|
||||
public ProcessTask(ShardInfo shardInfo,
|
||||
StreamConfig streamConfig,
|
||||
IRecordProcessor recordProcessor,
|
||||
RecordProcessorCheckpointer recordProcessorCheckpointer,
|
||||
KinesisDataFetcher dataFetcher,
|
||||
long backoffTimeMillis) {
|
||||
super();
|
||||
this.shardInfo = shardInfo;
|
||||
this.recordProcessor = recordProcessor;
|
||||
this.recordProcessorCheckpointer = recordProcessorCheckpointer;
|
||||
this.dataFetcher = dataFetcher;
|
||||
this.streamConfig = streamConfig;
|
||||
this.backoffTimeMillis = backoffTimeMillis;
|
||||
}
|
||||
|
||||
/* (non-Javadoc)
|
||||
* @see com.amazonaws.services.kinesis.clientlibrary.lib.worker.ITask#call()
|
||||
*/
|
||||
// CHECKSTYLE:OFF CyclomaticComplexity
|
||||
@Override
|
||||
public TaskResult call() {
|
||||
long startTimeMillis = System.currentTimeMillis();
|
||||
IMetricsScope scope = MetricsHelper.getMetricsScope();
|
||||
scope.addDimension("ShardId", shardInfo.getShardId());
|
||||
scope.addData(RECORDS_PROCESSED_METRIC, 0, StandardUnit.Count);
|
||||
scope.addData(DATA_BYTES_PROCESSED_METRIC, 0, StandardUnit.Bytes);
|
||||
|
||||
Exception exception = null;
|
||||
|
||||
try {
|
||||
if (dataFetcher.isShardEndReached()) {
|
||||
LOG.info("Reached end of shard " + shardInfo.getShardId());
|
||||
boolean shardEndReached = true;
|
||||
return new TaskResult(null, shardEndReached);
|
||||
}
|
||||
List<Record> records = getRecords();
|
||||
|
||||
if (records.isEmpty()) {
|
||||
LOG.debug("Kinesis didn't return any records for shard " + shardInfo.getShardId());
|
||||
|
||||
long sleepTimeMillis =
|
||||
streamConfig.getIdleTimeInMilliseconds() - (System.currentTimeMillis() - startTimeMillis);
|
||||
if (sleepTimeMillis > 0) {
|
||||
sleepTimeMillis = Math.max(sleepTimeMillis, streamConfig.getIdleTimeInMilliseconds());
|
||||
try {
|
||||
LOG.debug("Sleeping for " + sleepTimeMillis + " ms since there were no new records in shard "
|
||||
+ shardInfo.getShardId());
|
||||
Thread.sleep(sleepTimeMillis);
|
||||
} catch (InterruptedException e) {
|
||||
LOG.debug("ShardId " + shardInfo.getShardId() + ": Sleep was interrupted");
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if ((!records.isEmpty()) || streamConfig.shouldCallProcessRecordsEvenForEmptyRecordList()) {
|
||||
|
||||
// If we got records, remember the greatest sequence number so the checkpointer can advance to it.
|
||||
if (!records.isEmpty()) {
|
||||
String maxSequenceNumber = getMaxSequenceNumber(scope, records);
|
||||
recordProcessorCheckpointer.setSequenceNumber(maxSequenceNumber);
|
||||
}
|
||||
try {
|
||||
LOG.debug("Calling application processRecords() with " + records.size() + " records from "
|
||||
+ shardInfo.getShardId());
|
||||
recordProcessor.processRecords(records, recordProcessorCheckpointer);
|
||||
} catch (Exception e) {
|
||||
LOG.error("ShardId " + shardInfo.getShardId()
|
||||
+ ": Application processRecords() threw an exception when processing shard ", e);
|
||||
LOG.error("ShardId " + shardInfo.getShardId() + ": Skipping over the following data records: "
|
||||
+ records);
|
||||
}
|
||||
}
|
||||
} catch (RuntimeException | KinesisClientLibException e) {
|
||||
LOG.error("ShardId " + shardInfo.getShardId() + ": Caught exception: ", e);
|
||||
exception = e;
|
||||
|
||||
// backoff if we encounter an exception.
|
||||
try {
|
||||
Thread.sleep(this.backoffTimeMillis);
|
||||
} catch (InterruptedException ie) {
|
||||
LOG.debug(shardInfo.getShardId() + ": Sleep was interrupted", ie);
|
||||
}
|
||||
}
|
||||
|
||||
return new TaskResult(exception);
|
||||
}
|
||||
// CHECKSTYLE:ON CyclomaticComplexity
|
||||
|
||||
/**
|
||||
* Scans a list of records and returns the greatest sequence number from the records. Also emits metrics about the
|
||||
* records.
|
||||
*
|
||||
* @param scope metrics scope to emit metrics into
|
||||
* @param records list of records to scan
|
||||
* @return greatest sequence number out of all the records.
|
||||
*/
|
||||
private String getMaxSequenceNumber(IMetricsScope scope, List<Record> records) {
|
||||
scope.addData(RECORDS_PROCESSED_METRIC, records.size(), StandardUnit.Count);
|
||||
ListIterator<Record> recordIterator = records.listIterator();
|
||||
BigInteger maxSequenceNumber = BigInteger.ZERO;
|
||||
|
||||
while (recordIterator.hasNext()) {
|
||||
Record record = recordIterator.next();
|
||||
BigInteger sequenceNumber = new BigInteger(record.getSequenceNumber());
|
||||
if (maxSequenceNumber.compareTo(sequenceNumber) < 0) {
|
||||
maxSequenceNumber = sequenceNumber;
|
||||
}
|
||||
|
||||
scope.addData(DATA_BYTES_PROCESSED_METRIC, record.getData().limit(), StandardUnit.Bytes);
|
||||
}
|
||||
|
||||
return maxSequenceNumber.toString();
|
||||
}
|
||||
|
||||
/**
|
||||
* Gets records from Kinesis and retries once in the event of an ExpiredIteratorException.
|
||||
*
|
||||
* @return list of data records from Kinesis
|
||||
* @throws KinesisClientLibException if reading checkpoints fails in the edge case where we haven't passed any
|
||||
* records to the client code yet
|
||||
*/
|
||||
private List<Record> getRecords() throws KinesisClientLibException {
|
||||
int maxRecords = streamConfig.getMaxRecords();
|
||||
try {
|
||||
return dataFetcher.getRecords(maxRecords);
|
||||
} catch (ExpiredIteratorException e) {
|
||||
// If we see an ExpiredIteratorException, try once to restart from the greatest remembered sequence number
|
||||
LOG.info("ShardId " + shardInfo.getShardId()
|
||||
+ ": getRecords threw ExpiredIteratorException - restarting after greatest seqNum "
|
||||
+ "passed to customer", e);
|
||||
MetricsHelper.getMetricsScope().addData(EXPIRED_ITERATOR_METRIC, 1, StandardUnit.Count);
|
||||
|
||||
/*
|
||||
* Advance the iterator to after the greatest processed sequence number (remembered by
|
||||
* recordProcessorCheckpointer).
|
||||
*/
|
||||
dataFetcher.advanceIteratorAfter(recordProcessorCheckpointer.getSequenceNumber());
|
||||
|
||||
// Try a second time - if we fail this time, expose the failure.
|
||||
try {
|
||||
return dataFetcher.getRecords(maxRecords);
|
||||
} catch (ExpiredIteratorException ex) {
|
||||
String msg =
|
||||
"Shard " + shardInfo.getShardId()
|
||||
+ ": getRecords threw ExpiredIteratorException with a fresh iterator.";
|
||||
LOG.error(msg, ex);
|
||||
throw ex;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public TaskType getTaskType() {
|
||||
return taskType;
|
||||
}
|
||||
|
||||
}
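ProcessTask ultimately hands each batch to the application's IRecordProcessor. The sketch below shows one possible processor implementation; the processRecords() signature matches the call made in ProcessTask above, while the initialize() and shutdown() signatures are assumed from the rest of the interface in this commit and the names used here are illustrative:

import java.util.List;

import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessor;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorCheckpointer;
import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownReason;
import com.amazonaws.services.kinesis.model.Record;

/** Illustrative processor sketch, not part of this commit. */
public class LoggingRecordProcessor implements IRecordProcessor {

    private String shardId;

    @Override
    public void initialize(String shardId) {              // assumed signature
        this.shardId = shardId;
    }

    @Override
    public void processRecords(List<Record> records, IRecordProcessorCheckpointer checkpointer) {
        for (Record record : records) {
            // Application-specific work goes here; this sketch only reports payload sizes.
            System.out.println(shardId + ": got " + record.getData().limit() + " bytes");
        }
        try {
            checkpointer.checkpoint();                     // record progress after the batch is handled
        } catch (Exception e) {
            // ProcessTask already logs and backs off on failures; a real processor might retry here.
        }
    }

    @Override
    public void shutdown(IRecordProcessorCheckpointer checkpointer, ShutdownReason reason) {  // assumed signature
        if (reason == ShutdownReason.TERMINATE) {
            try {
                checkpointer.checkpoint();                 // checkpoint at shard end so child shards can start
            } catch (Exception e) {
                // Swallowed for brevity in this sketch.
            }
        }
    }
}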
|
||||
|
|
@@ -0,0 +1,115 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
|
||||
import com.amazonaws.services.kinesis.clientlibrary.exceptions.InvalidStateException;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.exceptions.KinesisClientLibDependencyException;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.exceptions.KinesisClientLibException;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.exceptions.ShutdownException;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.exceptions.ThrottlingException;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.interfaces.ICheckpoint;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorCheckpointer;
|
||||
|
||||
/**
|
||||
* This class is used to enable RecordProcessors to checkpoint their progress.
|
||||
* The Amazon Kinesis Client Library will instantiate an object and provide a reference to the application
|
||||
* RecordProcessor instance. The Amazon Kinesis Client Library will create one instance per shard assignment.
|
||||
*/
|
||||
class RecordProcessorCheckpointer implements IRecordProcessorCheckpointer {
|
||||
|
||||
private static final Log LOG = LogFactory.getLog(RecordProcessorCheckpointer.class);
|
||||
|
||||
private ICheckpoint checkpoint;
|
||||
|
||||
private String sequenceNumber;
|
||||
// Set to the last value set via checkpoint().
|
||||
// Sample use: verify application shutdown() invoked checkpoint() at the end of a shard.
|
||||
private String lastCheckpointValue;
|
||||
|
||||
private ShardInfo shardInfo;
|
||||
|
||||
/**
|
||||
* Only has package level access, since only the Amazon Kinesis Client Library should be creating these.
|
||||
*
|
||||
* @param shardInfo Information about the shard whose progress is being checkpointed
* @param checkpoint Used to checkpoint progress of a RecordProcessor
|
||||
*/
|
||||
RecordProcessorCheckpointer(ShardInfo shardInfo, ICheckpoint checkpoint) {
|
||||
this.shardInfo = shardInfo;
|
||||
this.checkpoint = checkpoint;
|
||||
}
|
||||
|
||||
/* (non-Javadoc)
|
||||
* @see com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorCheckpointer#checkpoint()
|
||||
*/
|
||||
@Override
|
||||
public synchronized void checkpoint()
|
||||
throws KinesisClientLibDependencyException, InvalidStateException, ThrottlingException, ShutdownException {
|
||||
advancePosition();
|
||||
}
|
||||
|
||||
/**
|
||||
* @return the lastCheckpointValue
|
||||
*/
|
||||
String getLastCheckpointValue() {
|
||||
return lastCheckpointValue;
|
||||
}
|
||||
|
||||
/**
|
||||
* Used for testing.
|
||||
*
|
||||
* @return the sequenceNumber
|
||||
*/
|
||||
synchronized String getSequenceNumber() {
|
||||
return sequenceNumber;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param sequenceNumber the sequenceNumber to set
|
||||
*/
|
||||
synchronized void setSequenceNumber(String sequenceNumber) {
|
||||
this.sequenceNumber = sequenceNumber;
|
||||
}
|
||||
|
||||
/**
|
||||
* Internal API - has package level access only for testing purposes.
|
||||
*
|
||||
* @throws KinesisClientLibDependencyException
|
||||
* @throws ThrottlingException
|
||||
* @throws ShutdownException
|
||||
* @throws InvalidStateException
|
||||
*/
|
||||
void advancePosition()
|
||||
throws KinesisClientLibDependencyException, InvalidStateException, ThrottlingException, ShutdownException {
|
||||
try {
|
||||
checkpoint.setCheckpoint(shardInfo.getShardId(), sequenceNumber, shardInfo.getConcurrencyToken());
|
||||
lastCheckpointValue = sequenceNumber;
|
||||
} catch (ThrottlingException | ShutdownException | InvalidStateException | KinesisClientLibDependencyException e) {
|
||||
throw e;
|
||||
} catch (KinesisClientLibException e) {
|
||||
LOG.warn("Caught exception setting checkpoint.", e);
|
||||
throw new KinesisClientLibDependencyException("Caught exception while checkpointing", e);
|
||||
}
|
||||
}
|
||||
|
||||
}
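Since checkpoint() surfaces ThrottlingException when the DynamoDB lease table lacks capacity (see setCheckpoint in KinesisClientLibLeaseCoordinator above), an application may want to retry before giving up. The following hedged sketch is one way to wrap it; the helper name, retry count, and backoff values are illustrative assumptions:

import com.amazonaws.services.kinesis.clientlibrary.exceptions.ThrottlingException;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorCheckpointer;

final class CheckpointRetry {

    /** Retry checkpointing a few times when throttled; other failures propagate to the caller. */
    static void checkpointWithRetries(IRecordProcessorCheckpointer checkpointer, int maxAttempts)
            throws Exception {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                checkpointer.checkpoint();
                return;                                  // success
            } catch (ThrottlingException e) {
                if (attempt == maxAttempts) {
                    throw e;                             // give up after the last attempt
                }
                Thread.sleep(1000L * attempt);           // simple linear backoff (illustrative)
            }
        }
    }

    private CheckpointRetry() {
    }
}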
|
||||
|
|
@@ -0,0 +1,344 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
|
||||
|
||||
import java.util.concurrent.ExecutionException;
|
||||
import java.util.concurrent.ExecutorService;
|
||||
import java.util.concurrent.Future;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
|
||||
import com.amazonaws.services.kinesis.clientlibrary.exceptions.internal.BlockedOnParentShardException;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.interfaces.ICheckpoint;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessor;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownReason;
|
||||
import com.amazonaws.services.kinesis.leases.impl.KinesisClientLease;
|
||||
import com.amazonaws.services.kinesis.leases.interfaces.ILeaseManager;
|
||||
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsFactory;
|
||||
|
||||
/**
|
||||
* Responsible for consuming data records of a (specified) shard.
|
||||
* The instance should be shutdown when we lose the primary responsibility for a shard.
|
||||
* A new instance should be created if the primary responsibility is reassigned back to this process.
|
||||
*/
|
||||
class ShardConsumer {
|
||||
|
||||
/**
|
||||
* Enumerates processing states when working on a shard.
|
||||
*/
|
||||
enum ShardConsumerState {
|
||||
WAITING_ON_PARENT_SHARDS, INITIALIZING, PROCESSING, SHUTTING_DOWN, SHUTDOWN_COMPLETE;
|
||||
}
|
||||
|
||||
private static final Log LOG = LogFactory.getLog(ShardConsumer.class);
|
||||
|
||||
private final StreamConfig streamConfig;
|
||||
private final IRecordProcessor recordProcessor;
|
||||
private final RecordProcessorCheckpointer recordProcessorCheckpointer;
|
||||
private final ExecutorService executorService;
|
||||
private final ShardInfo shardInfo;
|
||||
private final KinesisDataFetcher dataFetcher;
|
||||
private final IMetricsFactory metricsFactory;
|
||||
private final ILeaseManager<KinesisClientLease> leaseManager;
|
||||
private ICheckpoint checkpoint;
|
||||
// Backoff time when polling to check if application has finished processing parent shards
|
||||
private final long parentShardPollIntervalMillis;
|
||||
private final boolean cleanupLeasesOfCompletedShards;
|
||||
private final long taskBackoffTimeMillis;
|
||||
|
||||
private ITask currentTask;
|
||||
private long currentTaskSubmitTime;
|
||||
private Future<TaskResult> future;
|
||||
|
||||
/*
|
||||
* Tracks current state. It is only updated via the consumeStream/shutdown APIs. Therefore we don't do
|
||||
* much coordination/synchronization to handle concurrent reads/updates.
|
||||
*/
|
||||
private ShardConsumerState currentState = ShardConsumerState.WAITING_ON_PARENT_SHARDS;
|
||||
/*
|
||||
* Used to track if we lost the primary responsibility. Once set to true, we will start shutting down.
|
||||
* If we regain primary responsibility before shutdown is complete, Worker should create a new ShardConsumer object.
|
||||
*/
|
||||
private boolean beginShutdown;
|
||||
private ShutdownReason shutdownReason;
|
||||
|
||||
|
||||
/**
|
||||
* @param shardInfo Shard information
|
||||
* @param streamConfig Stream configuration to use
|
||||
* @param checkpoint Checkpoint tracker
|
||||
* @param recordProcessor Record processor used to process the data records for the shard
|
||||
* @param leaseManager Used to create leases for new shards
|
||||
* @param parentShardPollIntervalMillis Wait for this long if parent shards are not done (or we get an exception)
|
||||
* @param executorService ExecutorService used to execute process tasks for this shard
|
||||
* @param metricsFactory IMetricsFactory used to construct IMetricsScopes for this shard
|
||||
* @param backoffTimeMillis backoff interval when we encounter exceptions
|
||||
*/
|
||||
// CHECKSTYLE:IGNORE ParameterNumber FOR NEXT 10 LINES
|
||||
ShardConsumer(ShardInfo shardInfo,
|
||||
StreamConfig streamConfig,
|
||||
ICheckpoint checkpoint,
|
||||
IRecordProcessor recordProcessor,
|
||||
ILeaseManager<KinesisClientLease> leaseManager,
|
||||
long parentShardPollIntervalMillis,
|
||||
boolean cleanupLeasesOfCompletedShards,
|
||||
ExecutorService executorService,
|
||||
IMetricsFactory metricsFactory,
|
||||
long backoffTimeMillis) {
|
||||
this.streamConfig = streamConfig;
|
||||
this.recordProcessor = recordProcessor;
|
||||
this.executorService = executorService;
|
||||
this.shardInfo = shardInfo;
|
||||
this.checkpoint = checkpoint;
|
||||
this.recordProcessorCheckpointer = new RecordProcessorCheckpointer(shardInfo, checkpoint);
|
||||
this.dataFetcher = new KinesisDataFetcher(streamConfig.getStreamProxy(), shardInfo);
|
||||
this.leaseManager = leaseManager;
|
||||
this.metricsFactory = metricsFactory;
|
||||
this.parentShardPollIntervalMillis = parentShardPollIntervalMillis;
|
||||
this.cleanupLeasesOfCompletedShards = cleanupLeasesOfCompletedShards;
|
||||
this.taskBackoffTimeMillis = backoffTimeMillis;
|
||||
}
|
||||
|
||||
/**
|
||||
* No-op if the current task is still pending; otherwise submits the next task for this shard.
* This method should NOT be called if the ShardConsumer is already in the SHUTDOWN_COMPLETE state.
*
* @return true if a new task was submitted, false otherwise
|
||||
*/
|
||||
synchronized boolean consumeShard() {
|
||||
return checkAndSubmitNextTask();
|
||||
}
|
||||
|
||||
// CHECKSTYLE:OFF CyclomaticComplexity
|
||||
private synchronized boolean checkAndSubmitNextTask() {
|
||||
// Task completed successfully (without exceptions)
|
||||
boolean taskCompletedSuccessfully = false;
|
||||
boolean submittedNewTask = false;
|
||||
if ((future == null) || future.isCancelled() || future.isDone()) {
|
||||
if ((future != null) && future.isDone()) {
|
||||
try {
|
||||
TaskResult result = future.get();
|
||||
if (result.getException() == null) {
|
||||
taskCompletedSuccessfully = true;
|
||||
if (result.isShardEndReached()) {
|
||||
markForShutdown(ShutdownReason.TERMINATE);
|
||||
}
|
||||
} else {
|
||||
if (LOG.isDebugEnabled()) {
|
||||
Exception taskException = result.getException();
|
||||
if (taskException instanceof BlockedOnParentShardException) {
|
||||
// No need to log the stack trace for this exception (it is very specific).
|
||||
LOG.debug("Shard " + shardInfo.getShardId()
|
||||
+ " is blocked on completion of parent shard.");
|
||||
} else {
|
||||
LOG.debug("Caught exception running " + currentTask.getTaskType() + " task: ",
|
||||
result.getException());
|
||||
}
|
||||
}
|
||||
}
|
||||
} catch (InterruptedException e) {
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug(currentTask.getTaskType() + " task was interrupted: ", e);
|
||||
}
|
||||
} catch (ExecutionException e) {
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug(currentTask.getTaskType() + " task encountered execution exception: ", e);
|
||||
}
|
||||
}
|
||||
}
|
||||
updateState(taskCompletedSuccessfully);
|
||||
ITask nextTask = getNextTask();
|
||||
if (nextTask != null) {
|
||||
currentTask = nextTask;
|
||||
future = executorService.submit(currentTask);
|
||||
currentTaskSubmitTime = System.currentTimeMillis();
|
||||
submittedNewTask = true;
|
||||
LOG.debug("Submitted new " + currentTask.getTaskType() + " task for shard " + shardInfo.getShardId());
|
||||
} else {
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug(String.format("No new task to submit for shard %s, currentState %s",
|
||||
shardInfo.getShardId(),
|
||||
currentState.toString()));
|
||||
}
|
||||
}
|
||||
} else {
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("Previous " + currentTask.getTaskType() + " task still pending for shard "
|
||||
+ shardInfo.getShardId() + " since " + (System.currentTimeMillis() - currentTaskSubmitTime)
|
||||
+ " ms ago" + ". Not submitting new task.");
|
||||
}
|
||||
}
|
||||
|
||||
return submittedNewTask;
|
||||
}
|
||||
// CHECKSTYLE:ON CyclomaticComplexity
|
||||
|
||||
/**
|
||||
* Shutdown this ShardConsumer (including invoking the RecordProcessor shutdown API).
|
||||
* This is called by Worker when it loses responsibility for a shard.
|
||||
* @return true if shutdown is complete (false if shutdown is still in progress)
|
||||
*/
|
||||
synchronized boolean beginShutdown() {
|
||||
if (currentState != ShardConsumerState.SHUTDOWN_COMPLETE) {
|
||||
markForShutdown(ShutdownReason.ZOMBIE);
|
||||
checkAndSubmitNextTask();
|
||||
}
|
||||
return isShutdown();
|
||||
}
|
||||
|
||||
synchronized void markForShutdown(ShutdownReason reason) {
|
||||
beginShutdown = true;
|
||||
// ShutdownReason.ZOMBIE takes precedence over TERMINATE (we won't be able to save checkpoint at end of shard)
|
||||
if ((shutdownReason == null) || (shutdownReason == ShutdownReason.TERMINATE)) {
|
||||
shutdownReason = reason;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Used (by Worker) to check if this ShardConsumer instance has been shut down
* and whether the RecordProcessor's shutdown() has been invoked, as appropriate.
|
||||
*
|
||||
* @return true if shutdown is complete
|
||||
*/
|
||||
boolean isShutdown() {
|
||||
return currentState == ShardConsumerState.SHUTDOWN_COMPLETE;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return the shutdownReason
|
||||
*/
|
||||
ShutdownReason getShutdownReason() {
|
||||
return shutdownReason;
|
||||
}
|
||||
|
||||
/**
|
||||
* Figure out next task to run based on current state, task, and shutdown context.
|
||||
* @return Return next task to run
|
||||
*/
|
||||
private ITask getNextTask() {
|
||||
ITask nextTask = null;
|
||||
switch (currentState) {
|
||||
case WAITING_ON_PARENT_SHARDS:
|
||||
nextTask = new BlockOnParentShardTask(shardInfo, leaseManager, parentShardPollIntervalMillis);
|
||||
break;
|
||||
case INITIALIZING:
|
||||
nextTask =
|
||||
new InitializeTask(shardInfo,
|
||||
recordProcessor,
|
||||
checkpoint,
|
||||
recordProcessorCheckpointer,
|
||||
dataFetcher,
|
||||
taskBackoffTimeMillis);
|
||||
break;
|
||||
case PROCESSING:
|
||||
nextTask =
|
||||
new ProcessTask(shardInfo,
|
||||
streamConfig,
|
||||
recordProcessor,
|
||||
recordProcessorCheckpointer,
|
||||
dataFetcher,
|
||||
taskBackoffTimeMillis);
|
||||
break;
|
||||
case SHUTTING_DOWN:
|
||||
nextTask =
|
||||
new ShutdownTask(shardInfo,
|
||||
recordProcessor,
|
||||
recordProcessorCheckpointer,
|
||||
shutdownReason,
|
||||
streamConfig.getStreamProxy(),
|
||||
streamConfig.getInitialPositionInStream(),
|
||||
cleanupLeasesOfCompletedShards,
|
||||
leaseManager,
|
||||
taskBackoffTimeMillis);
|
||||
break;
|
||||
case SHUTDOWN_COMPLETE:
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
|
||||
if (nextTask == null) {
|
||||
return null;
|
||||
} else {
|
||||
return new MetricsCollectingTaskDecorator(nextTask, metricsFactory);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Note: This is a private/internal method with package level access solely for testing purposes.
|
||||
* Update state based on information about: task success, current state, and shutdown info.
|
||||
* @param taskCompletedSuccessfully Whether (current) task completed successfully.
|
||||
*/
|
||||
// CHECKSTYLE:OFF CyclomaticComplexity
|
||||
void updateState(boolean taskCompletedSuccessfully) {
|
||||
switch (currentState) {
|
||||
case WAITING_ON_PARENT_SHARDS:
|
||||
if (taskCompletedSuccessfully && TaskType.BLOCK_ON_PARENT_SHARDS.equals(currentTask.getTaskType())) {
|
||||
if (beginShutdown) {
|
||||
currentState = ShardConsumerState.SHUTTING_DOWN;
|
||||
} else {
|
||||
currentState = ShardConsumerState.INITIALIZING;
|
||||
}
|
||||
} else if ((currentTask == null) && beginShutdown) {
|
||||
currentState = ShardConsumerState.SHUTDOWN_COMPLETE;
|
||||
}
|
||||
break;
|
||||
case INITIALIZING:
|
||||
if (taskCompletedSuccessfully && TaskType.INITIALIZE.equals(currentTask.getTaskType())) {
|
||||
if (beginShutdown) {
|
||||
currentState = ShardConsumerState.SHUTTING_DOWN;
|
||||
} else {
|
||||
currentState = ShardConsumerState.PROCESSING;
|
||||
}
|
||||
} else if ((currentTask == null) && beginShutdown) {
|
||||
currentState = ShardConsumerState.SHUTDOWN_COMPLETE;
|
||||
}
|
||||
break;
|
||||
case PROCESSING:
|
||||
if (taskCompletedSuccessfully && TaskType.PROCESS.equals(currentTask.getTaskType())) {
|
||||
if (beginShutdown) {
|
||||
currentState = ShardConsumerState.SHUTTING_DOWN;
|
||||
} else {
|
||||
currentState = ShardConsumerState.PROCESSING;
|
||||
}
|
||||
}
|
||||
break;
|
||||
case SHUTTING_DOWN:
|
||||
if (currentTask == null
|
||||
|| (taskCompletedSuccessfully && TaskType.SHUTDOWN.equals(currentTask.getTaskType()))) {
|
||||
currentState = ShardConsumerState.SHUTDOWN_COMPLETE;
|
||||
}
|
||||
break;
|
||||
case SHUTDOWN_COMPLETE:
|
||||
break;
|
||||
default:
|
||||
LOG.error("Unexpected state: " + currentState);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
// CHECKSTYLE:ON CyclomaticComplexity
|
||||
|
||||
/**
|
||||
* Private/Internal method - has package level access solely for testing purposes.
|
||||
*
|
||||
* @return the currentState
|
||||
*/
|
||||
ShardConsumerState getCurrentState() {
|
||||
return currentState;
|
||||
}
|
||||
|
||||
}
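/*
 * Illustrative sketch (editor's addition, not part of the original commit): one way a Worker-like
 * caller could drive this consumer. The collaborators (shardInfo, streamConfig, checkpoint,
 * recordProcessor, leaseManager, metricsFactory, executorService), the stillHoldingLease flag,
 * and the numeric values are assumptions for illustration only; exception handling is omitted.
 *
 *   ShardConsumer consumer = new ShardConsumer(shardInfo, streamConfig, checkpoint, recordProcessor,
 *           leaseManager, 10000L, true, executorService, metricsFactory, 500L);
 *   while (stillHoldingLease && !consumer.isShutdown()) {
 *       consumer.consumeShard();          // no-op if the previously submitted task is still pending
 *       Thread.sleep(1000L);              // caller-defined pacing between polls
 *   }
 *   while (!consumer.beginShutdown()) {   // returns true once SHUTDOWN_COMPLETE is reached
 *       Thread.sleep(1000L);
 *   }
 */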
|
||||
|
|
@ -0,0 +1,125 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
|
||||
|
||||
import java.util.Collection;
|
||||
import java.util.Collections;
|
||||
import java.util.LinkedList;
|
||||
import java.util.List;
|
||||
|
||||
/**
|
||||
* Used to pass shard related info among different classes and as a key to the map of shard consumers.
|
||||
*/
|
||||
class ShardInfo {
|
||||
|
||||
private final String shardId;
|
||||
private final String concurrencyToken;
|
||||
// Sorted list of parent shardIds.
|
||||
private final List<String> parentShardIds;
|
||||
|
||||
/**
|
||||
* @param shardId Kinesis shardId
|
||||
* @param concurrencyToken Used to differentiate between lost and reclaimed leases
|
||||
* @param parentShardIds Parent shards of the shard identified by Kinesis shardId
|
||||
*/
|
||||
public ShardInfo(String shardId, String concurrencyToken, Collection<String> parentShardIds) {
|
||||
this.shardId = shardId;
|
||||
this.concurrencyToken = concurrencyToken;
|
||||
this.parentShardIds = new LinkedList<String>();
|
||||
if (parentShardIds != null) {
|
||||
this.parentShardIds.addAll(parentShardIds);
|
||||
}
|
||||
Collections.sort(this.parentShardIds);
|
||||
}
|
||||
|
||||
/**
|
||||
* @return the shardId
|
||||
*/
|
||||
protected String getShardId() {
|
||||
return shardId;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return the concurrencyToken
|
||||
*/
|
||||
protected String getConcurrencyToken() {
|
||||
return concurrencyToken;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return the parentShardIds
|
||||
*/
|
||||
protected List<String> getParentShardIds() {
|
||||
return new LinkedList<String>(parentShardIds);
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public int hashCode() {
|
||||
final int prime = 31;
|
||||
int result = 1;
|
||||
result = prime * result + ((concurrencyToken == null) ? 0 : concurrencyToken.hashCode());
|
||||
result = prime * result + ((parentShardIds == null) ? 0 : parentShardIds.hashCode());
|
||||
result = prime * result + ((shardId == null) ? 0 : shardId.hashCode());
|
||||
return result;
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
// CHECKSTYLE:OFF CyclomaticComplexity
|
||||
// CHECKSTYLE:OFF NPathComplexity
|
||||
@Override
|
||||
public boolean equals(Object obj) {
|
||||
if (this == obj) {
|
||||
return true;
|
||||
}
|
||||
if (obj == null) {
|
||||
return false;
|
||||
}
|
||||
if (getClass() != obj.getClass()) {
|
||||
return false;
|
||||
}
|
||||
ShardInfo other = (ShardInfo) obj;
|
||||
if (concurrencyToken == null) {
|
||||
if (other.concurrencyToken != null) {
|
||||
return false;
|
||||
}
|
||||
} else if (!concurrencyToken.equals(other.concurrencyToken)) {
|
||||
return false;
|
||||
}
|
||||
if (parentShardIds == null) {
|
||||
if (other.parentShardIds != null) {
|
||||
return false;
|
||||
}
|
||||
} else if (!parentShardIds.equals(other.parentShardIds)) {
|
||||
return false;
|
||||
}
|
||||
if (shardId == null) {
|
||||
if (other.shardId != null) {
|
||||
return false;
|
||||
}
|
||||
} else if (!shardId.equals(other.shardId)) {
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
// CHECKSTYLE:ON CyclomaticComplexity
|
||||
// CHECKSTYLE:ON NPathComplexity
|
||||
|
||||
|
||||
}
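/*
 * Illustrative sketch (editor's addition): why the concurrencyToken matters when ShardInfo is used
 * as a map key. The shardIds and tokens below are made up for illustration.
 *
 *   ShardInfo a = new ShardInfo("shardId-000000000001", "token-1", Arrays.asList("shardId-000000000000"));
 *   ShardInfo b = new ShardInfo("shardId-000000000001", "token-1", Arrays.asList("shardId-000000000000"));
 *   ShardInfo c = new ShardInfo("shardId-000000000001", "token-2", null);
 *
 *   a.equals(b);   // true  - same shardId, concurrency token, and (sorted) parent shardIds
 *   a.equals(c);   // false - a reclaimed lease carries a new concurrency token, so a Worker keyed on
 *                  //         ShardInfo builds a fresh ShardConsumer rather than reusing the old one
 */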
|
||||
|
|
@ -0,0 +1,92 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
|
||||
import com.amazonaws.services.kinesis.clientlibrary.proxies.IKinesisProxy;
|
||||
import com.amazonaws.services.kinesis.leases.impl.KinesisClientLease;
|
||||
import com.amazonaws.services.kinesis.leases.interfaces.ILeaseManager;
|
||||
|
||||
/**
|
||||
* This task syncs leases/activities with the shards of the stream.
* It will create new leases/activities when it discovers new shards (e.g. setup/resharding).
* It will clean up leases/activities for shards that have been completely processed (if
* cleanupLeasesUponShardCompletion is true).
|
||||
*/
|
||||
class ShardSyncTask implements ITask {
|
||||
|
||||
private static final Log LOG = LogFactory.getLog(ShardSyncTask.class);
|
||||
|
||||
private final IKinesisProxy kinesisProxy;
|
||||
private final ILeaseManager<KinesisClientLease> leaseManager;
|
||||
private InitialPositionInStream initialPosition;
|
||||
private final boolean cleanupLeasesUponShardCompletion;
|
||||
private final long shardSyncTaskIdleTimeMillis;
|
||||
private final TaskType taskType = TaskType.SHARDSYNC;
|
||||
|
||||
/**
|
||||
* @param kinesisProxy Used to fetch information about the stream (e.g. shard list)
|
||||
* @param leaseManager Used to fetch and create leases
|
||||
* @param initialPositionInStream One of LATEST or TRIM_HORIZON. The Amazon Kinesis Client Library will start
* processing records from this point in the stream (when an application starts up for the first time)
* except for shards that already have a checkpoint (and their descendant shards).
* @param cleanupLeasesUponShardCompletion Clean up leases for shards that have been completely processed
* @param shardSyncTaskIdleTimeMillis Time (in milliseconds) to sleep after the sync operation, before the task completes
|
||||
*/
|
||||
ShardSyncTask(IKinesisProxy kinesisProxy,
|
||||
ILeaseManager<KinesisClientLease> leaseManager,
|
||||
InitialPositionInStream initialPositionInStream,
|
||||
boolean cleanupLeasesUponShardCompletion,
|
||||
long shardSyncTaskIdleTimeMillis) {
|
||||
this.kinesisProxy = kinesisProxy;
|
||||
this.leaseManager = leaseManager;
|
||||
this.initialPosition = initialPositionInStream;
|
||||
this.cleanupLeasesUponShardCompletion = cleanupLeasesUponShardCompletion;
|
||||
this.shardSyncTaskIdleTimeMillis = shardSyncTaskIdleTimeMillis;
|
||||
}
|
||||
|
||||
/* (non-Javadoc)
|
||||
* @see com.amazonaws.services.kinesis.clientlibrary.lib.worker.ITask#call()
|
||||
*/
|
||||
@Override
|
||||
public TaskResult call() {
|
||||
Exception exception = null;
|
||||
|
||||
try {
|
||||
ShardSyncer.checkAndCreateLeasesForNewShards(kinesisProxy,
|
||||
leaseManager,
|
||||
initialPosition,
|
||||
cleanupLeasesUponShardCompletion);
|
||||
if (shardSyncTaskIdleTimeMillis > 0) {
|
||||
Thread.sleep(shardSyncTaskIdleTimeMillis);
|
||||
}
|
||||
} catch (Exception e) {
|
||||
LOG.error("Caught exception while sync'ing Kinesis shards and leases", e);
|
||||
exception = e;
|
||||
}
|
||||
|
||||
return new TaskResult(exception);
|
||||
}
|
||||
|
||||
|
||||
/* (non-Javadoc)
|
||||
* @see com.amazonaws.services.kinesis.clientlibrary.lib.worker.ITask#getTaskType()
|
||||
*/
|
||||
@Override
|
||||
public TaskType getTaskType() {
|
||||
return taskType;
|
||||
}
|
||||
|
||||
}
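/*
 * Illustrative sketch (editor's addition): calling the task directly. In the library it is normally
 * wrapped in a MetricsCollectingTaskDecorator and submitted by ShardSyncTaskManager (below); the
 * kinesisProxy and leaseManager instances here are assumed to exist already.
 *
 *   ShardSyncTask task = new ShardSyncTask(kinesisProxy, leaseManager,
 *           InitialPositionInStream.TRIM_HORIZON, true, 0L);
 *   TaskResult result = task.call();                 // blocks while leases are synced with shards
 *   if (result.getException() != null) {
 *       // sync failed; the caller decides whether to log, back off, and retry
 *   }
 */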
|
||||
|
|
@ -0,0 +1,117 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
|
||||
|
||||
import java.util.Set;
|
||||
import java.util.concurrent.ExecutionException;
|
||||
import java.util.concurrent.ExecutorService;
|
||||
import java.util.concurrent.Future;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
|
||||
import com.amazonaws.services.kinesis.clientlibrary.proxies.IKinesisProxy;
|
||||
import com.amazonaws.services.kinesis.leases.impl.KinesisClientLease;
|
||||
import com.amazonaws.services.kinesis.leases.interfaces.ILeaseManager;
|
||||
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsFactory;
|
||||
|
||||
/**
|
||||
* The ShardSyncTaskManager is used to track the task to sync shards with leases (create leases for new
|
||||
* Kinesis shards, remove obsolete leases). We'll have at most one outstanding sync task at any time.
|
||||
* Worker will use this class to kick off a sync task when it finds shards which have been completely processed.
|
||||
*/
|
||||
class ShardSyncTaskManager {
|
||||
|
||||
private static final Log LOG = LogFactory.getLog(ShardSyncTaskManager.class);
|
||||
|
||||
private ITask currentTask;
|
||||
private Future<TaskResult> future;
|
||||
private final IKinesisProxy kinesisProxy;
|
||||
private final ILeaseManager<KinesisClientLease> leaseManager;
|
||||
private final IMetricsFactory metricsFactory;
|
||||
private final ExecutorService executorService;
|
||||
private final InitialPositionInStream initialPositionInStream;
|
||||
private boolean cleanupLeasesUponShardCompletion;
|
||||
private final long shardSyncIdleTimeMillis;
|
||||
|
||||
|
||||
/**
|
||||
* Constructor.
|
||||
*
|
||||
* @param kinesisProxy Proxy used to fetch streamInfo (shards)
|
||||
* @param leaseManager Lease manager (used to list and create leases for shards)
|
||||
* @param initialPositionInStream Initial position in stream
|
||||
* @param cleanupLeasesUponShardCompletion Clean up leases for shards that we've finished processing (don't wait
|
||||
* until they expire)
|
||||
* @param shardSyncIdleTimeMillis Time between tasks to sync leases and Kinesis shards
|
||||
* @param metricsFactory Metrics factory
|
||||
* @param executorService ExecutorService to execute the shard sync tasks
|
||||
*/
|
||||
ShardSyncTaskManager(final IKinesisProxy kinesisProxy,
|
||||
final ILeaseManager<KinesisClientLease> leaseManager,
|
||||
final InitialPositionInStream initialPositionInStream,
|
||||
final boolean cleanupLeasesUponShardCompletion,
|
||||
final long shardSyncIdleTimeMillis,
|
||||
final IMetricsFactory metricsFactory,
|
||||
ExecutorService executorService) {
|
||||
this.kinesisProxy = kinesisProxy;
|
||||
this.leaseManager = leaseManager;
|
||||
this.metricsFactory = metricsFactory;
|
||||
this.cleanupLeasesUponShardCompletion = cleanupLeasesUponShardCompletion;
|
||||
this.shardSyncIdleTimeMillis = shardSyncIdleTimeMillis;
|
||||
this.executorService = executorService;
|
||||
this.initialPositionInStream = initialPositionInStream;
|
||||
}
|
||||
|
||||
synchronized boolean syncShardAndLeaseInfo(Set<String> closedShardIds) {
|
||||
return checkAndSubmitNextTask(closedShardIds);
|
||||
}
|
||||
|
||||
private synchronized boolean checkAndSubmitNextTask(Set<String> closedShardIds) {
|
||||
boolean submittedNewTask = false;
|
||||
if ((future == null) || future.isCancelled() || future.isDone()) {
|
||||
if ((future != null) && future.isDone()) {
|
||||
try {
|
||||
TaskResult result = future.get();
|
||||
if (result.getException() != null) {
|
||||
LOG.error("Caught exception running " + currentTask.getTaskType() + " task: ",
|
||||
result.getException());
|
||||
}
|
||||
} catch (InterruptedException | ExecutionException e) {
|
||||
LOG.warn(currentTask.getTaskType() + " task encountered exception.", e);
|
||||
}
|
||||
}
|
||||
|
||||
currentTask =
|
||||
new MetricsCollectingTaskDecorator(new ShardSyncTask(kinesisProxy,
|
||||
leaseManager,
|
||||
initialPositionInStream,
|
||||
cleanupLeasesUponShardCompletion,
|
||||
shardSyncIdleTimeMillis), metricsFactory);
|
||||
future = executorService.submit(currentTask);
|
||||
submittedNewTask = true;
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("Submitted new " + currentTask.getTaskType() + " task.");
|
||||
}
|
||||
} else {
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("Previous " + currentTask.getTaskType() + " task still pending. Not submitting new task.");
|
||||
}
|
||||
}
|
||||
|
||||
return submittedNewTask;
|
||||
}
|
||||
|
||||
}
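/*
 * Illustrative sketch (editor's addition): how a Worker-like caller might use the manager. The
 * constructor arguments and the closedShardIds set are assumed to be available in the caller;
 * the numeric values are placeholders.
 *
 *   ShardSyncTaskManager shardSyncer = new ShardSyncTaskManager(kinesisProxy, leaseManager,
 *           InitialPositionInStream.LATEST, true, 60000L, metricsFactory, executorService);
 *
 *   // e.g. invoked when consumers report that some shards reached SHARD_END:
 *   boolean submitted = shardSyncer.syncShardAndLeaseInfo(closedShardIds);
 *   if (!submitted) {
 *       // a previous sync task is still pending; try again on the next cycle
 *   }
 */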
|
||||
|
|
@ -0,0 +1,803 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
|
||||
|
||||
import java.io.Serializable;
|
||||
import java.math.BigInteger;
|
||||
import java.util.ArrayList;
|
||||
import java.util.Collection;
|
||||
import java.util.Collections;
|
||||
import java.util.Comparator;
|
||||
import java.util.HashMap;
|
||||
import java.util.HashSet;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
|
||||
import com.amazonaws.services.kinesis.model.Shard;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.exceptions.internal.KinesisClientLibIOException;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.lib.checkpoint.SentinelCheckpoint;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.proxies.IKinesisProxy;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.DependencyException;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.InvalidStateException;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.ProvisionedThroughputException;
|
||||
import com.amazonaws.services.kinesis.leases.impl.KinesisClientLease;
|
||||
import com.amazonaws.services.kinesis.leases.interfaces.ILeaseManager;
|
||||
import com.amazonaws.services.kinesis.metrics.impl.MetricsHelper;
|
||||
|
||||
/**
|
||||
* Helper class to sync leases with shards of the Kinesis stream.
|
||||
* It will create new leases/activities when it discovers new Kinesis shards (bootstrap/resharding).
|
||||
* It deletes leases for shards that have been trimmed from Kinesis, or that we have completely processed
* and whose child shards we have begun processing.
|
||||
*/
|
||||
class ShardSyncer {
|
||||
|
||||
private static final Log LOG = LogFactory.getLog(ShardSyncer.class);
|
||||
|
||||
/**
|
||||
* Note constructor is private: We use static synchronized methods - this is a utility class.
|
||||
*/
|
||||
private ShardSyncer() {
|
||||
}
|
||||
|
||||
static synchronized void bootstrapShardLeases(IKinesisProxy kinesisProxy,
|
||||
ILeaseManager<KinesisClientLease> leaseManager,
|
||||
InitialPositionInStream initialPositionInStream,
|
||||
boolean cleanupLeasesOfCompletedShards)
|
||||
throws DependencyException, InvalidStateException, ProvisionedThroughputException, KinesisClientLibIOException {
|
||||
syncShardLeases(kinesisProxy, leaseManager, initialPositionInStream, cleanupLeasesOfCompletedShards);
|
||||
}
|
||||
|
||||
/**
|
||||
* Check and create leases for any new shards (e.g. following a reshard operation).
|
||||
*
|
||||
* @param kinesisProxy
|
||||
* @param leaseManager
|
||||
* @param initialPositionInStream
|
||||
* @param cleanupLeasesOfCompletedShards Clean up leases for shards that have been completely processed
* (don't wait for them to expire).
|
||||
* @throws DependencyException
|
||||
* @throws InvalidStateException
|
||||
* @throws ProvisionedThroughputException
|
||||
* @throws KinesisClientLibIOException
|
||||
*/
|
||||
static synchronized void checkAndCreateLeasesForNewShards(IKinesisProxy kinesisProxy,
|
||||
ILeaseManager<KinesisClientLease> leaseManager,
|
||||
InitialPositionInStream initialPositionInStream,
|
||||
boolean cleanupLeasesOfCompletedShards)
|
||||
throws DependencyException, InvalidStateException, ProvisionedThroughputException, KinesisClientLibIOException {
|
||||
syncShardLeases(kinesisProxy, leaseManager, initialPositionInStream, cleanupLeasesOfCompletedShards);
|
||||
}
|
||||
|
||||
/**
|
||||
* Sync leases with Kinesis shards (e.g. at startup, or when we reach end of a shard).
|
||||
*
|
||||
* @param kinesisProxy
|
||||
* @param leaseManager
|
||||
* @param initialPosition Position in the stream (LATEST or TRIM_HORIZON) used as the initial checkpoint for
* new leases of shards that are not descendants of already-leased shards.
* @param cleanupLeasesOfCompletedShards Clean up leases for shards that have been completely processed
* (don't wait for them to expire).
|
||||
* @throws DependencyException
|
||||
* @throws InvalidStateException
|
||||
* @throws ProvisionedThroughputException
|
||||
* @throws KinesisClientLibIOException
|
||||
*/
|
||||
// CHECKSTYLE:OFF CyclomaticComplexity
|
||||
private static synchronized void syncShardLeases(IKinesisProxy kinesisProxy,
|
||||
ILeaseManager<KinesisClientLease> leaseManager,
|
||||
InitialPositionInStream initialPosition,
|
||||
boolean cleanupLeasesOfCompletedShards)
|
||||
throws DependencyException, InvalidStateException, ProvisionedThroughputException, KinesisClientLibIOException {
|
||||
List<Shard> shards = getShardList(kinesisProxy);
|
||||
LOG.debug("Num shards: " + shards.size());
|
||||
|
||||
Map<String, Shard> shardIdToShardMap = constructShardIdToShardMap(shards);
|
||||
Map<String, Set<String>> shardIdToChildShardIdsMap = constructShardIdToChildShardIdsMap(shardIdToShardMap);
|
||||
assertAllParentShardsAreClosed(shardIdToChildShardIdsMap, shardIdToShardMap);
|
||||
|
||||
List<KinesisClientLease> currentLeases = leaseManager.listLeases();
|
||||
|
||||
List<KinesisClientLease> newLeasesToCreate = determineNewLeasesToCreate(shards, currentLeases, initialPosition);
|
||||
LOG.debug("Num new leases to create: " + newLeasesToCreate.size());
|
||||
for (KinesisClientLease lease : newLeasesToCreate) {
|
||||
long startTimeMillis = System.currentTimeMillis();
|
||||
boolean success = false;
|
||||
try {
|
||||
leaseManager.createLeaseIfNotExists(lease);
|
||||
success = true;
|
||||
} finally {
|
||||
MetricsHelper.addSuccessAndLatency("CreateLease", startTimeMillis, success);
|
||||
}
|
||||
}
|
||||
|
||||
List<KinesisClientLease> trackedLeases = new ArrayList<>();
|
||||
if (currentLeases != null) {
|
||||
trackedLeases.addAll(currentLeases);
|
||||
}
|
||||
trackedLeases.addAll(newLeasesToCreate);
|
||||
cleanupGarbageLeases(shards, trackedLeases, kinesisProxy, leaseManager);
|
||||
if (cleanupLeasesOfCompletedShards) {
|
||||
cleanupLeasesOfFinishedShards(currentLeases,
|
||||
shardIdToShardMap,
|
||||
shardIdToChildShardIdsMap,
|
||||
trackedLeases,
|
||||
leaseManager);
|
||||
}
|
||||
}
|
||||
// CHECKSTYLE:ON CyclomaticComplexity
|
||||
|
||||
/** Helper method to detect a race condition between fetching the shards via paginated DescribeStream calls
|
||||
* and a reshard operation.
|
||||
* @param shardIdToChildShardIdsMap
|
||||
* @param shardIdToShardMap
|
||||
* @throws KinesisClientLibIOException
|
||||
*/
|
||||
private static void assertAllParentShardsAreClosed(Map<String, Set<String>> shardIdToChildShardIdsMap,
|
||||
Map<String, Shard> shardIdToShardMap) throws KinesisClientLibIOException {
|
||||
for (String parentShardId : shardIdToChildShardIdsMap.keySet()) {
|
||||
Shard parentShard = shardIdToShardMap.get(parentShardId);
|
||||
if ((parentShard == null) || (parentShard.getSequenceNumberRange().getEndingSequenceNumber() == null)) {
|
||||
throw new KinesisClientLibIOException("Parent shardId " + parentShardId + " is not closed. "
|
||||
+ "This can happen due to a race condition between describeStream and a reshard operation.");
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Helper method to create a shardId->KinesisClientLease map.
|
||||
* Note: This has package level access for testing purposes only.
|
||||
* @param trackedLeaseList
|
||||
* @return
|
||||
*/
|
||||
static Map<String, KinesisClientLease> constructShardIdToKCLLeaseMap(List<KinesisClientLease> trackedLeaseList) {
|
||||
Map<String, KinesisClientLease> trackedLeasesMap = new HashMap<>();
|
||||
for (KinesisClientLease lease : trackedLeaseList) {
|
||||
trackedLeasesMap.put(lease.getLeaseKey(), lease);
|
||||
}
|
||||
return trackedLeasesMap;
|
||||
}
|
||||
|
||||
/**
|
||||
* Note: this has package level access for testing purposes.
|
||||
* Useful for asserting that we don't have an incomplete shard list following a reshard operation.
|
||||
* We verify that if the shard is present in the shard list, it is closed and its hash key range
|
||||
* is covered by its child shards.
|
||||
* @param shardIdToShardMap Map of shardId->Shard for all Kinesis shards
* @param shardIdToChildShardIdsMap Map of shardId->childShardIds for all Kinesis shards
* @param shardIdsOfClosedShards Ids of the shards which are expected to be closed
|
||||
* @throws KinesisClientLibIOException
|
||||
*/
|
||||
static synchronized void assertClosedShardsAreCoveredOrAbsent(Map<String, Shard> shardIdToShardMap,
|
||||
Map<String, Set<String>> shardIdToChildShardIdsMap,
|
||||
Set<String> shardIdsOfClosedShards) throws KinesisClientLibIOException {
|
||||
String exceptionMessageSuffix = "This can happen if we constructed the list of shards "
+ "while a reshard operation was in progress.";
|
||||
|
||||
for (String shardId : shardIdsOfClosedShards) {
|
||||
Shard shard = shardIdToShardMap.get(shardId);
|
||||
if (shard == null) {
|
||||
LOG.info("Shard " + shardId + " is not present in Kinesis anymore.");
|
||||
continue;
|
||||
}
|
||||
|
||||
String endingSequenceNumber = shard.getSequenceNumberRange().getEndingSequenceNumber();
|
||||
if (endingSequenceNumber == null) {
|
||||
throw new KinesisClientLibIOException("Shard " + shardId
+ " is not closed. " + exceptionMessageSuffix);
|
||||
}
|
||||
|
||||
Set<String> childShardIds = shardIdToChildShardIdsMap.get(shardId);
|
||||
if (childShardIds == null) {
|
||||
throw new KinesisClientLibIOException("Incomplete shard list: Closed shard " + shardId
|
||||
+ " has no children." + exceptionMessageSuffix);
|
||||
}
|
||||
|
||||
assertHashRangeOfClosedShardIsCovered(shard, shardIdToShardMap, childShardIds);
|
||||
}
|
||||
}
|
||||
|
||||
private static synchronized void assertHashRangeOfClosedShardIsCovered(Shard closedShard,
|
||||
Map<String, Shard> shardIdToShardMap,
|
||||
Set<String> childShardIds) throws KinesisClientLibIOException {
|
||||
|
||||
BigInteger startingHashKeyOfClosedShard = new BigInteger(closedShard.getHashKeyRange().getStartingHashKey());
|
||||
BigInteger endingHashKeyOfClosedShard = new BigInteger(closedShard.getHashKeyRange().getEndingHashKey());
|
||||
BigInteger minStartingHashKeyOfChildren = null;
|
||||
BigInteger maxEndingHashKeyOfChildren = null;
|
||||
|
||||
for (String childShardId : childShardIds) {
|
||||
Shard childShard = shardIdToShardMap.get(childShardId);
|
||||
BigInteger startingHashKey = new BigInteger(childShard.getHashKeyRange().getStartingHashKey());
|
||||
if ((minStartingHashKeyOfChildren == null)
|
||||
|| (startingHashKey.compareTo(minStartingHashKeyOfChildren) < 0)) {
|
||||
minStartingHashKeyOfChildren = startingHashKey;
|
||||
}
|
||||
BigInteger endingHashKey = new BigInteger(childShard.getHashKeyRange().getEndingHashKey());
|
||||
if ((maxEndingHashKeyOfChildren == null)
|
||||
|| (endingHashKey.compareTo(maxEndingHashKeyOfChildren) > 0)) {
|
||||
maxEndingHashKeyOfChildren = endingHashKey;
|
||||
}
|
||||
}
|
||||
|
||||
if ((minStartingHashKeyOfChildren == null) || (maxEndingHashKeyOfChildren == null)
|
||||
|| (minStartingHashKeyOfChildren.compareTo(startingHashKeyOfClosedShard) > 0)
|
||||
|| (maxEndingHashKeyOfChildren.compareTo(endingHashKeyOfClosedShard) < 0)) {
|
||||
throw new KinesisClientLibIOException("Incomplete shard list: hash key range of shard "
|
||||
+ closedShard.getShardId() + " is not covered by its child shards.");
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
/**
|
||||
* Helper method to construct shardId->setOfChildShardIds map.
|
||||
* Note: This has package access for testing purposes only.
|
||||
* @param shardIdToShardMap
|
||||
* @return
|
||||
*/
|
||||
static Map<String, Set<String>> constructShardIdToChildShardIdsMap(
|
||||
Map<String, Shard> shardIdToShardMap) {
|
||||
Map<String, Set<String>> shardIdToChildShardIdsMap = new HashMap<>();
|
||||
for (Map.Entry<String, Shard> entry : shardIdToShardMap.entrySet()) {
|
||||
String shardId = entry.getKey();
|
||||
Shard shard = entry.getValue();
|
||||
String parentShardId = shard.getParentShardId();
|
||||
if ((parentShardId != null) && (shardIdToShardMap.containsKey(parentShardId))) {
|
||||
Set<String> childShardIds = shardIdToChildShardIdsMap.get(parentShardId);
|
||||
if (childShardIds == null) {
|
||||
childShardIds = new HashSet<String>();
|
||||
shardIdToChildShardIdsMap.put(parentShardId, childShardIds);
|
||||
}
|
||||
childShardIds.add(shardId);
|
||||
}
|
||||
|
||||
String adjacentParentShardId = shard.getAdjacentParentShardId();
|
||||
if ((adjacentParentShardId != null) && (shardIdToShardMap.containsKey(adjacentParentShardId))) {
|
||||
Set<String> childShardIds = shardIdToChildShardIdsMap.get(adjacentParentShardId);
|
||||
if (childShardIds == null) {
|
||||
childShardIds = new HashSet<String>();
|
||||
shardIdToChildShardIdsMap.put(adjacentParentShardId, childShardIds);
|
||||
}
|
||||
childShardIds.add(shardId);
|
||||
}
|
||||
}
|
||||
return shardIdToChildShardIdsMap;
|
||||
}
|
||||
|
||||
private static List<Shard> getShardList(IKinesisProxy kinesisProxy) throws KinesisClientLibIOException {
|
||||
List<Shard> shards = kinesisProxy.getShardList();
|
||||
if (shards == null) {
|
||||
throw new KinesisClientLibIOException(
|
||||
"Stream is not in ACTIVE OR UPDATING state - will retry getting the shard list.");
|
||||
}
|
||||
return shards;
|
||||
}
|
||||
|
||||
/**
|
||||
* Determine new leases to create and their initial checkpoint.
|
||||
* Note: Package level access only for testing purposes.
|
||||
*
|
||||
* For each open (no ending sequence number) shard that doesn't already have a lease,
|
||||
* determine if it is a descendant of any shard which is or will be processed (e.g. for which a lease exists):
|
||||
* If so, set checkpoint of the shard to TrimHorizon and also create leases for ancestors if needed.
|
||||
* If not, set checkpoint of the shard to the initial position specified by the client.
|
||||
* To check if we need to create leases for ancestors, we use the following rules:
|
||||
* * If we began (or will begin) processing data for a shard, then we must reach end of that shard before
|
||||
* we begin processing data from any of its descendants.
|
||||
* * A shard does not start processing data until data from all its parents has been processed.
|
||||
* Note, if the initial position is LATEST and a shard has two parents and only one is a descendant - we'll create
|
||||
* leases corresponding to both the parents - the parent shard which is not a descendant will have
|
||||
* its checkpoint set to Latest.
|
||||
*
|
||||
* We assume that if there is an existing lease for a shard, then either:
|
||||
* * we have previously created a lease for its parent (if it was needed), or
|
||||
* * the parent shard has expired.
|
||||
*
|
||||
* For example:
|
||||
* Shard structure (each level depicts a stream segment):
|
||||
* 0 1 2 3 4 5- shards till epoch 102
|
||||
* \ / \ / | |
|
||||
* 6 7 4 5- shards from epoch 103 - 205
|
||||
* \ / | /\
|
||||
* 8 4 9 10 - shards from epoch 206 (open - no ending sequenceNumber)
|
||||
* Current leases: (3, 4, 5)
|
||||
* New leases to create: (2, 6, 7, 8, 9, 10)
|
||||
*
|
||||
* The leases returned are sorted by the starting sequence number - following the same order
|
||||
* when persisting the leases in DynamoDB will ensure that we recover gracefully if we fail
|
||||
* before creating all the leases.
|
||||
*
|
||||
* @param shards List of all shards in Kinesis (we'll create new leases based on this list)
|
||||
* @param currentLeases List of current leases
|
||||
* @param initialPosition One of LATEST or TRIM_HORIZON. We'll start fetching records from that location in the
|
||||
* shard (when an application starts up for the first time - and there are no checkpoints).
|
||||
* @return List of new leases to create sorted by starting sequenceNumber of the corresponding shard
|
||||
*/
|
||||
static List<KinesisClientLease> determineNewLeasesToCreate(List<Shard> shards,
|
||||
List<KinesisClientLease> currentLeases,
|
||||
InitialPositionInStream initialPosition) {
|
||||
Map<String, KinesisClientLease> shardIdToNewLeaseMap = new HashMap<String, KinesisClientLease>();
|
||||
Map<String, Shard> shardIdToShardMapOfAllKinesisShards = constructShardIdToShardMap(shards);
|
||||
|
||||
Set<String> shardIdsOfCurrentLeases = new HashSet<String>();
|
||||
for (KinesisClientLease lease : currentLeases) {
|
||||
shardIdsOfCurrentLeases.add(lease.getLeaseKey());
|
||||
LOG.debug("Existing lease: " + lease);
|
||||
}
|
||||
|
||||
List<Shard> openShards = getOpenShards(shards);
|
||||
Map<String, Boolean> memoizationContext = new HashMap<>();
|
||||
|
||||
// Iterate over the open shards and find those that don't have any lease entries.
|
||||
for (Shard shard : openShards) {
|
||||
String shardId = shard.getShardId();
|
||||
LOG.debug("Evaluating leases for open shard " + shardId + " and its ancestors.");
|
||||
if (shardIdsOfCurrentLeases.contains(shardId)) {
|
||||
LOG.debug("Lease for shardId " + shardId + " already exists. Not creating a lease");
|
||||
} else {
|
||||
LOG.debug("Need to create a lease for shardId " + shardId);
|
||||
KinesisClientLease newLease = newKCLLease(shard);
|
||||
boolean isDescendant =
|
||||
checkIfDescendantAndAddNewLeasesForAncestors(shardId,
|
||||
initialPosition,
|
||||
shardIdsOfCurrentLeases,
|
||||
shardIdToShardMapOfAllKinesisShards,
|
||||
shardIdToNewLeaseMap,
|
||||
memoizationContext);
|
||||
if (isDescendant) {
|
||||
newLease.setCheckpoint(SentinelCheckpoint.TRIM_HORIZON.toString());
|
||||
} else {
|
||||
newLease.setCheckpoint(convertToCheckpoint(initialPosition));
|
||||
}
|
||||
LOG.debug("Set checkpoint of " + newLease.getLeaseKey() + " to " + newLease.getCheckpoint());
|
||||
shardIdToNewLeaseMap.put(shardId, newLease);
|
||||
}
|
||||
}
|
||||
|
||||
List<KinesisClientLease> newLeasesToCreate = new ArrayList<KinesisClientLease>();
|
||||
newLeasesToCreate.addAll(shardIdToNewLeaseMap.values());
|
||||
Comparator<? super KinesisClientLease> startingSequenceNumberComparator =
|
||||
new StartingSequenceNumberAndShardIdBasedComparator(shardIdToShardMapOfAllKinesisShards);
|
||||
Collections.sort(newLeasesToCreate, startingSequenceNumberComparator);
|
||||
return newLeasesToCreate;
|
||||
}
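/*
 * Illustrative sketch (editor's addition), tying the example in the method's javadoc to an actual call.
 * The shards and currentLeases collections are assumed to describe that shard tree, with leases held
 * for shards 3, 4 and 5.
 *
 *   List<KinesisClientLease> newLeases =
 *           determineNewLeasesToCreate(shards, currentLeases, InitialPositionInStream.LATEST);
 *
 *   // newLeases covers shards 2, 6, 7, 8, 9 and 10, sorted by starting sequence number.
 *   // Leases whose shards descend from an already-leased shard (7, 8, 9, 10) are checkpointed at
 *   // TRIM_HORIZON; the remaining ancestors (2, 6) start at the supplied initial position (LATEST).
 */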
|
||||
|
||||
/**
|
||||
* Note: Package level access for testing purposes only.
|
||||
* Check if this shard is a descendant of a shard that is (or will be) processed.
|
||||
* Create leases for the ancestors of this shard as required.
|
||||
* See javadoc of determineNewLeasesToCreate() for rules and example.
|
||||
*
|
||||
* @param shardId The shard to evaluate; its ancestors will be considered for addition into the new lease map
* @param initialPosition Used as the checkpoint for ancestor leases that are not themselves descendants
|
||||
* @param shardIdsOfCurrentLeases
|
||||
* @param shardIdToShardMapOfAllKinesisShards ShardId->Shard map containing all shards obtained via DescribeStream.
|
||||
* @param shardIdToLeaseMapOfNewShards Add lease POJOs corresponding to ancestors to this map.
|
||||
* @param memoizationContext Memoization of shards that have been evaluated as part of the evaluation
|
||||
* @return true if the shard is a descendant of any current shard (lease already exists)
|
||||
*/
|
||||
// CHECKSTYLE:OFF CyclomaticComplexity
|
||||
static boolean checkIfDescendantAndAddNewLeasesForAncestors(String shardId,
|
||||
InitialPositionInStream initialPosition,
|
||||
Set<String> shardIdsOfCurrentLeases,
|
||||
Map<String, Shard> shardIdToShardMapOfAllKinesisShards,
|
||||
Map<String, KinesisClientLease> shardIdToLeaseMapOfNewShards,
|
||||
Map<String, Boolean> memoizationContext) {
|
||||
|
||||
Boolean previousValue = memoizationContext.get(shardId);
|
||||
if (previousValue != null) {
|
||||
return previousValue;
|
||||
}
|
||||
|
||||
boolean isDescendant = false;
|
||||
Shard shard;
|
||||
Set<String> parentShardIds;
|
||||
Set<String> descendantParentShardIds = new HashSet<String>();
|
||||
|
||||
if ((shardId != null) && (shardIdToShardMapOfAllKinesisShards.containsKey(shardId))) {
|
||||
if (shardIdsOfCurrentLeases.contains(shardId)) {
|
||||
// This shard is a descendant of a current shard.
|
||||
isDescendant = true;
|
||||
// We don't need to add leases of its ancestors,
|
||||
// because we'd have done it when creating a lease for this shard.
|
||||
} else {
|
||||
shard = shardIdToShardMapOfAllKinesisShards.get(shardId);
|
||||
parentShardIds = getParentShardIds(shard, shardIdToShardMapOfAllKinesisShards);
|
||||
for (String parentShardId : parentShardIds) {
|
||||
// Check if the parent is a descendant, and include its ancestors.
|
||||
if (checkIfDescendantAndAddNewLeasesForAncestors(parentShardId,
|
||||
initialPosition,
|
||||
shardIdsOfCurrentLeases,
|
||||
shardIdToShardMapOfAllKinesisShards,
|
||||
shardIdToLeaseMapOfNewShards,
|
||||
memoizationContext)) {
|
||||
isDescendant = true;
|
||||
descendantParentShardIds.add(parentShardId);
|
||||
LOG.debug("Parent shard " + parentShardId + " is a descendant.");
|
||||
} else {
|
||||
LOG.debug("Parent shard " + parentShardId + " is NOT a descendant.");
|
||||
}
|
||||
}
|
||||
|
||||
// If this is a descendant, create leases for its parent shards (if they don't exist)
|
||||
if (isDescendant) {
|
||||
for (String parentShardId : parentShardIds) {
|
||||
if (!shardIdsOfCurrentLeases.contains(parentShardId)) {
|
||||
LOG.debug("Need to create a lease for shardId " + parentShardId);
|
||||
KinesisClientLease lease = shardIdToLeaseMapOfNewShards.get(parentShardId);
|
||||
if (lease == null) {
|
||||
lease = newKCLLease(shardIdToShardMapOfAllKinesisShards.get(parentShardId));
|
||||
shardIdToLeaseMapOfNewShards.put(parentShardId, lease);
|
||||
}
|
||||
|
||||
if (descendantParentShardIds.contains(parentShardId)) {
|
||||
lease.setCheckpoint(SentinelCheckpoint.TRIM_HORIZON.toString());
|
||||
} else {
|
||||
lease.setCheckpoint(convertToCheckpoint(initialPosition));
|
||||
}
|
||||
}
|
||||
}
|
||||
} else {
|
||||
// This shard should be included if the customer wants to process all records in the stream.
|
||||
if (initialPosition.equals(InitialPositionInStream.TRIM_HORIZON)) {
|
||||
isDescendant = true;
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
memoizationContext.put(shardId, isDescendant);
|
||||
return isDescendant;
|
||||
}
|
||||
// CHECKSTYLE:ON CyclomaticComplexity
|
||||
|
||||
/**
|
||||
* Helper method to get parent shardIds of the current shard - includes the parent shardIds if:
|
||||
* a/ they are not null
|
||||
* b/ they exist in the current shard map (i.e. haven't expired)
|
||||
*
|
||||
* @param shard Will return parents of this shard
|
||||
* @param shardIdToShardMapOfAllKinesisShards ShardId->Shard map containing all shards obtained via DescribeStream.
|
||||
* @return Set of parentShardIds
|
||||
*/
|
||||
static Set<String> getParentShardIds(Shard shard, Map<String, Shard> shardIdToShardMapOfAllKinesisShards) {
|
||||
Set<String> parentShardIds = new HashSet<String>(2);
|
||||
String parentShardId = shard.getParentShardId();
|
||||
if ((parentShardId != null) && shardIdToShardMapOfAllKinesisShards.containsKey(parentShardId)) {
|
||||
parentShardIds.add(parentShardId);
|
||||
}
|
||||
String adjacentParentShardId = shard.getAdjacentParentShardId();
|
||||
if ((adjacentParentShardId != null) && shardIdToShardMapOfAllKinesisShards.containsKey(adjacentParentShardId)) {
|
||||
parentShardIds.add(adjacentParentShardId);
|
||||
}
|
||||
return parentShardIds;
|
||||
}
|
||||
|
||||
/**
|
||||
* Delete leases corresponding to shards that no longer exist in the stream.
|
||||
* Current scheme: Delete a lease if:
|
||||
* * the corresponding shard is not present in the list of Kinesis shards, AND
|
||||
* * the parentShardIds listed in the lease are also not present in the list of Kinesis shards.
|
||||
* @param shards List of all Kinesis shards (assumed to be a consistent snapshot - when stream is in Active state).
|
||||
* @param trackedLeases List of all leases we are tracking (existing leases plus any newly created leases)
|
||||
* @param kinesisProxy Kinesis proxy (used to get shard list)
|
||||
* @param leaseManager
|
||||
* @throws KinesisClientLibIOException Thrown if we couldn't get a fresh shard list from Kinesis.
|
||||
* @throws ProvisionedThroughputException
|
||||
* @throws InvalidStateException
|
||||
* @throws DependencyException
|
||||
*/
|
||||
private static void cleanupGarbageLeases(List<Shard> shards,
|
||||
List<KinesisClientLease> trackedLeases,
|
||||
IKinesisProxy kinesisProxy,
|
||||
ILeaseManager<KinesisClientLease> leaseManager)
|
||||
throws KinesisClientLibIOException, DependencyException, InvalidStateException, ProvisionedThroughputException {
|
||||
Set<String> kinesisShards = new HashSet<>();
|
||||
for (Shard shard : shards) {
|
||||
kinesisShards.add(shard.getShardId());
|
||||
}
|
||||
|
||||
// Check if there are leases for non-existent shards
|
||||
List<KinesisClientLease> garbageLeases = new ArrayList<>();
|
||||
for (KinesisClientLease lease : trackedLeases) {
|
||||
if (isCandidateForCleanup(lease, kinesisShards)) {
|
||||
garbageLeases.add(lease);
|
||||
}
|
||||
}
|
||||
|
||||
if (!garbageLeases.isEmpty()) {
|
||||
LOG.info("Found " + garbageLeases.size()
|
||||
+ " candidate leases for cleanup. Refreshing list of"
|
||||
+ " Kinesis shards to pick up recent/latest shards");
|
||||
List<Shard> currentShardList = getShardList(kinesisProxy);
|
||||
Set<String> currentKinesisShardIds = new HashSet<>();
|
||||
for (Shard shard : currentShardList) {
|
||||
currentKinesisShardIds.add(shard.getShardId());
|
||||
}
|
||||
|
||||
for (KinesisClientLease lease : garbageLeases) {
|
||||
if (isCandidateForCleanup(lease, currentKinesisShardIds)) {
|
||||
LOG.info("Deleting lease for shard " + lease.getLeaseKey()
|
||||
+ " as it is not present in Kinesis stream.");
|
||||
leaseManager.deleteLease(lease);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
/**
|
||||
* Note: This method has package level access, solely for testing purposes.
|
||||
*
|
||||
* @param lease Candidate shard we are considering for deletion.
|
||||
* @param currentKinesisShardIds
|
||||
* @return true if neither the shard (corresponding to the lease), nor its parents are present in
|
||||
* currentKinesisShardIds
|
||||
* @throws KinesisClientLibIOException Thrown if currentKinesisShardIds contains a parent shard but not the child
|
||||
* shard (we are evaluating for deletion).
|
||||
*/
|
||||
static boolean isCandidateForCleanup(KinesisClientLease lease, Set<String> currentKinesisShardIds)
|
||||
throws KinesisClientLibIOException {
|
||||
boolean isCandidateForCleanup = true;
|
||||
|
||||
if (currentKinesisShardIds.contains(lease.getLeaseKey())) {
|
||||
isCandidateForCleanup = false;
|
||||
} else {
|
||||
LOG.info("Found lease for non-existent shard: " + lease.getLeaseKey() + ". Checking its parent shards");
|
||||
Set<String> parentShardIds = lease.getParentShardIds();
|
||||
for (String parentShardId : parentShardIds) {
|
||||
|
||||
// Throw an exception if the parent shard exists (but the child does not).
|
||||
// This may be a (rare) race condition between fetching the shard list and Kinesis expiring shards.
|
||||
if (currentKinesisShardIds.contains(parentShardId)) {
|
||||
String message =
|
||||
"Parent shard " + parentShardId + " exists but not the child shard "
|
||||
+ lease.getLeaseKey();
|
||||
LOG.info(message);
|
||||
throw new KinesisClientLibIOException(message);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return isCandidateForCleanup;
|
||||
}
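/*
 * Illustrative sketch (editor's addition): the three outcomes of the check above. The lease and
 * currentKinesisShardIds values are assumed to come from the calling cleanup pass.
 *
 *   boolean deletable = isCandidateForCleanup(lease, currentKinesisShardIds);
 *
 *   // - the lease's own shard is still listed by Kinesis       -> false (keep the lease)
 *   // - the shard and all of its parents have been trimmed     -> true  (safe to delete)
 *   // - a parent is still listed but the child shard is absent -> KinesisClientLibIOException
 *   //   (likely a stale shard list; the caller refreshes the shard list and retries)
 */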
|
||||
|
||||
/**
|
||||
* Private helper method.
|
||||
* Clean up leases for shards that meet the following criteria:
|
||||
* a/ the shard has been fully processed (checkpoint is set to SHARD_END)
|
||||
* b/ we've begun processing all the child shards: we have leases for all child shards and their checkpoint is not
|
||||
* TRIM_HORIZON.
|
||||
*
|
||||
* @param currentLeases List of leases we evaluate for clean up
|
||||
* @param shardIdToShardMap Map of shardId->Shard (assumed to include all Kinesis shards)
|
||||
* @param shardIdToChildShardIdsMap Map of shardId->childShardIds (assumed to include all Kinesis shards)
|
||||
* @param trackedLeases List of all leases we are tracking.
|
||||
* @param leaseManager Lease manager (will be used to delete leases)
|
||||
* @throws DependencyException
|
||||
* @throws InvalidStateException
|
||||
* @throws ProvisionedThroughputException
|
||||
* @throws KinesisClientLibIOException
|
||||
*/
|
||||
private static synchronized void cleanupLeasesOfFinishedShards(Collection<KinesisClientLease> currentLeases,
|
||||
Map<String, Shard> shardIdToShardMap,
|
||||
Map<String, Set<String>> shardIdToChildShardIdsMap,
|
||||
List<KinesisClientLease> trackedLeases,
|
||||
ILeaseManager<KinesisClientLease> leaseManager)
|
||||
throws DependencyException, InvalidStateException, ProvisionedThroughputException, KinesisClientLibIOException {
|
||||
Set<String> shardIdsOfClosedShards = new HashSet<>();
|
||||
List<KinesisClientLease> leasesOfClosedShards = new ArrayList<>();
|
||||
for (KinesisClientLease lease : currentLeases) {
|
||||
if (lease.getCheckpoint().equals(SentinelCheckpoint.SHARD_END.toString())) {
|
||||
shardIdsOfClosedShards.add(lease.getLeaseKey());
|
||||
leasesOfClosedShards.add(lease);
|
||||
}
|
||||
}
|
||||
|
||||
if (!leasesOfClosedShards.isEmpty()) {
|
||||
assertClosedShardsAreCoveredOrAbsent(shardIdToShardMap,
|
||||
shardIdToChildShardIdsMap,
|
||||
shardIdsOfClosedShards);
|
||||
Comparator<? super KinesisClientLease> startingSequenceNumberComparator =
|
||||
new StartingSequenceNumberAndShardIdBasedComparator(shardIdToShardMap);
|
||||
Collections.sort(leasesOfClosedShards, startingSequenceNumberComparator);
|
||||
Map<String, KinesisClientLease> trackedLeaseMap = constructShardIdToKCLLeaseMap(trackedLeases);
|
||||
|
||||
for (KinesisClientLease leaseOfClosedShard : leasesOfClosedShards) {
|
||||
String closedShardId = leaseOfClosedShard.getLeaseKey();
|
||||
Set<String> childShardIds = shardIdToChildShardIdsMap.get(closedShardId);
|
||||
if ((closedShardId != null) && (childShardIds != null) && (!childShardIds.isEmpty())) {
|
||||
cleanupLeaseForClosedShard(closedShardId, childShardIds, trackedLeaseMap, leaseManager);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Delete lease for the closed shard. Rules for deletion are:
|
||||
* a/ the checkpoint for the closed shard is SHARD_END,
|
||||
* b/ there are leases for all the childShardIds and their checkpoint is NOT TRIM_HORIZON
|
||||
* Note: This method has package level access solely for testing purposes.
|
||||
*
|
||||
* @param closedShardId Identifies the closed shard
|
||||
* @param childShardIds ShardIds of children of the closed shard
|
||||
* @param trackedLeases shardId->KinesisClientLease map with all leases we are tracking (should not be null)
|
||||
* @param leaseManager
|
||||
* @throws ProvisionedThroughputException
|
||||
* @throws InvalidStateException
|
||||
* @throws DependencyException
|
||||
*/
|
||||
static synchronized void cleanupLeaseForClosedShard(String closedShardId,
|
||||
Set<String> childShardIds,
|
||||
Map<String, KinesisClientLease> trackedLeases,
|
||||
ILeaseManager<KinesisClientLease> leaseManager)
|
||||
throws DependencyException, InvalidStateException, ProvisionedThroughputException {
|
||||
KinesisClientLease leaseForClosedShard = trackedLeases.get(closedShardId);
|
||||
List<KinesisClientLease> childShardLeases = new ArrayList<>();
|
||||
|
||||
for (String childShardId : childShardIds) {
|
||||
KinesisClientLease childLease = trackedLeases.get(childShardId);
|
||||
if (childLease != null) {
|
||||
childShardLeases.add(childLease);
|
||||
}
|
||||
}
|
||||
|
||||
if ((leaseForClosedShard != null)
|
||||
&& (leaseForClosedShard.getCheckpoint().equals(SentinelCheckpoint.SHARD_END.toString()))
|
||||
&& (childShardLeases.size() == childShardIds.size())) {
|
||||
boolean okayToDelete = true;
|
||||
for (KinesisClientLease lease : childShardLeases) {
|
||||
if (lease.getCheckpoint().equals(SentinelCheckpoint.TRIM_HORIZON.toString())) {
|
||||
okayToDelete = false;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (okayToDelete) {
|
||||
LOG.info("Deleting lease for shard " + leaseForClosedShard.getLeaseKey()
|
||||
+ " as it has been completely processed and processing of child shards has begun.");
|
||||
leaseManager.deleteLease(leaseForClosedShard);
|
||||
}
|
||||
}
|
||||
}
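/*
 * Illustrative sketch (editor's addition): when the lease of a closed shard is actually deleted by the
 * method above. Checkpoint values refer to the SentinelCheckpoint constants used in the code.
 *
 *   lease of closed shard has checkpoint SHARD_END
 *   AND a lease exists for every childShardId
 *   AND no child lease is still at TRIM_HORIZON      // every child has begun making progress
 *   => leaseManager.deleteLease(leaseForClosedShard)
 *
 *   If any child lease is missing or still at TRIM_HORIZON, the parent lease is kept because the
 *   children have not yet begun processing past the start of their shards.
 */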
|
||||
|
||||
/**
|
||||
* Helper method to create a new KinesisClientLease POJO for a shard.
|
||||
* Note: Package level access only for testing purposes
|
||||
*
|
||||
* @param shard
|
||||
* @return
|
||||
*/
|
||||
static KinesisClientLease newKCLLease(Shard shard) {
|
||||
KinesisClientLease newLease = new KinesisClientLease();
|
||||
newLease.setLeaseKey(shard.getShardId());
|
||||
List<String> parentShardIds = new ArrayList<String>(2);
|
||||
if (shard.getParentShardId() != null) {
|
||||
parentShardIds.add(shard.getParentShardId());
|
||||
}
|
||||
if (shard.getAdjacentParentShardId() != null) {
|
||||
parentShardIds.add(shard.getAdjacentParentShardId());
|
||||
}
|
||||
newLease.setParentShardIds(parentShardIds);
|
||||
newLease.setOwnerSwitchesSinceCheckpoint(0L);
|
||||
|
||||
return newLease;
|
||||
}
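// Example (illustrative, hypothetical shard ids): for a child shard created by merging
// "shardId-000" and "shardId-001", the new lease gets leaseKey = the child's shardId and
// parentShardIds = ["shardId-000", "shardId-001"]; the checkpoint is assigned separately by the
// lease-creation logic before the lease is persisted.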
|
||||
|
||||
/**
|
||||
* Helper method to construct a shardId->Shard map for the specified list of shards.
|
||||
*
|
||||
* @param shards List of shards
|
||||
* @return ShardId->Shard map
|
||||
*/
|
||||
static Map<String, Shard> constructShardIdToShardMap(List<Shard> shards) {
|
||||
Map<String, Shard> shardIdToShardMap = new HashMap<String, Shard>();
|
||||
for (Shard shard : shards) {
|
||||
shardIdToShardMap.put(shard.getShardId(), shard);
|
||||
}
|
||||
return shardIdToShardMap;
|
||||
}
|
||||
|
||||
/**
|
||||
* Helper method to return all the open shards for a stream.
|
||||
* Note: Package level access only for testing purposes.
|
||||
*
|
||||
* @param allShards All shards retrieved via DescribeStream. We assume this represents a consistent shard list.
|
||||
* @return List of open shards (shards at the tip of the stream) - may include shards that are not yet active.
|
||||
*/
|
||||
static List<Shard> getOpenShards(List<Shard> allShards) {
|
||||
List<Shard> openShards = new ArrayList<Shard>();
|
||||
for (Shard shard : allShards) {
|
||||
String endingSequenceNumber = shard.getSequenceNumberRange().getEndingSequenceNumber();
|
||||
if (endingSequenceNumber == null) {
|
||||
openShards.add(shard);
|
||||
LOG.debug("Found open shard: " + shard.getShardId());
|
||||
}
|
||||
}
|
||||
return openShards;
|
||||
}
|
||||
|
||||
private static String convertToCheckpoint(InitialPositionInStream position) {
|
||||
String checkpoint = null;
|
||||
|
||||
if (position.equals(InitialPositionInStream.TRIM_HORIZON)) {
|
||||
checkpoint = SentinelCheckpoint.TRIM_HORIZON.toString();
|
||||
} else if (position.equals(InitialPositionInStream.LATEST)) {
|
||||
checkpoint = SentinelCheckpoint.LATEST.toString();
|
||||
}
|
||||
|
||||
return checkpoint;
|
||||
}
|
||||
|
||||
/** Helper class to compare leases based on starting sequence number of the corresponding shards.
|
||||
*
|
||||
*/
|
||||
private static class StartingSequenceNumberAndShardIdBasedComparator implements Comparator<KinesisClientLease>,
|
||||
Serializable {
|
||||
|
||||
private static final long serialVersionUID = 1L;
|
||||
|
||||
private final Map<String, Shard> shardIdToShardMap;
|
||||
|
||||
/**
|
||||
* @param shardIdToShardMapOfAllKinesisShards
|
||||
*/
|
||||
public StartingSequenceNumberAndShardIdBasedComparator(Map<String, Shard> shardIdToShardMapOfAllKinesisShards) {
|
||||
shardIdToShardMap = shardIdToShardMapOfAllKinesisShards;
|
||||
}
|
||||
|
||||
/**
|
||||
* Compares two leases based on the starting sequence number of corresponding shards.
|
||||
* If shards are not found in the shardId->shard map supplied, we do a string comparison on the shardIds.
|
||||
* We assume that lease1 and lease2 are:
|
||||
* a/ not null,
|
||||
* b/ shards (if found) have non-null starting sequence numbers
|
||||
*
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public int compare(KinesisClientLease lease1, KinesisClientLease lease2) {
|
||||
int result = 0;
|
||||
String shardId1 = lease1.getLeaseKey();
|
||||
String shardId2 = lease2.getLeaseKey();
|
||||
Shard shard1 = shardIdToShardMap.get(shardId1);
|
||||
Shard shard2 = shardIdToShardMap.get(shardId2);
|
||||
|
||||
// If we found shards for the two leases, use comparison of the starting sequence numbers
|
||||
if ((shard1 != null) && (shard2 != null)) {
|
||||
BigInteger sequenceNumber1 =
|
||||
new BigInteger(shard1.getSequenceNumberRange().getStartingSequenceNumber());
|
||||
BigInteger sequenceNumber2 =
|
||||
new BigInteger(shard2.getSequenceNumberRange().getStartingSequenceNumber());
|
||||
result = sequenceNumber1.compareTo(sequenceNumber2);
|
||||
}
|
||||
|
||||
if (result == 0) {
|
||||
result = shardId1.compareTo(shardId2);
|
||||
}
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
}
|
||||
|
|
@@ -0,0 +1,138 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
|
||||
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessor;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.lib.checkpoint.SentinelCheckpoint;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.proxies.IKinesisProxy;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownReason;
|
||||
import com.amazonaws.services.kinesis.leases.impl.KinesisClientLease;
|
||||
import com.amazonaws.services.kinesis.leases.interfaces.ILeaseManager;
|
||||
|
||||
/**
|
||||
* Task for invoking the RecordProcessor shutdown() callback.
|
||||
*/
|
||||
class ShutdownTask implements ITask {
|
||||
|
||||
private static final Log LOG = LogFactory.getLog(ShutdownTask.class);
|
||||
private final ShardInfo shardInfo;
|
||||
private final IRecordProcessor recordProcessor;
|
||||
private final RecordProcessorCheckpointer recordProcessorCheckpointer;
|
||||
private final ShutdownReason reason;
|
||||
private final IKinesisProxy kinesisProxy;
|
||||
private final ILeaseManager<KinesisClientLease> leaseManager;
|
||||
private final InitialPositionInStream initialPositionInStream;
|
||||
private final boolean cleanupLeasesOfCompletedShards;
|
||||
private final TaskType taskType = TaskType.SHUTDOWN;
|
||||
private final long backoffTimeMillis;
|
||||
|
||||
/**
|
||||
* Constructor.
|
||||
*/
|
||||
// CHECKSTYLE:IGNORE ParameterNumber FOR NEXT 10 LINES
|
||||
ShutdownTask(ShardInfo shardInfo,
|
||||
IRecordProcessor recordProcessor,
|
||||
RecordProcessorCheckpointer recordProcessorCheckpointer,
|
||||
ShutdownReason reason,
|
||||
IKinesisProxy kinesisProxy,
|
||||
InitialPositionInStream initialPositionInStream,
|
||||
boolean cleanupLeasesOfCompletedShards,
|
||||
ILeaseManager<KinesisClientLease> leaseManager,
|
||||
long backoffTimeMillis) {
|
||||
this.shardInfo = shardInfo;
|
||||
this.recordProcessor = recordProcessor;
|
||||
this.recordProcessorCheckpointer = recordProcessorCheckpointer;
|
||||
this.reason = reason;
|
||||
this.kinesisProxy = kinesisProxy;
|
||||
this.initialPositionInStream = initialPositionInStream;
|
||||
this.cleanupLeasesOfCompletedShards = cleanupLeasesOfCompletedShards;
|
||||
this.leaseManager = leaseManager;
|
||||
this.backoffTimeMillis = backoffTimeMillis;
|
||||
}
|
||||
|
||||
/* Invokes RecordProcessor shutdown() API.
|
||||
* (non-Javadoc)
|
||||
* @see com.amazonaws.services.kinesis.clientlibrary.lib.worker.ITask#call()
|
||||
*/
|
||||
@Override
|
||||
public TaskResult call() {
|
||||
Exception exception = null;
|
||||
boolean applicationException = false;
|
||||
|
||||
try {
|
||||
// If we reached end of the shard, set sequence number to SHARD_END.
|
||||
if (reason == ShutdownReason.TERMINATE) {
|
||||
recordProcessorCheckpointer.setSequenceNumber(SentinelCheckpoint.SHARD_END.toString());
|
||||
}
|
||||
|
||||
LOG.debug("Invoking shutdown() for shard " + shardInfo.getShardId() + ", concurrencyToken "
|
||||
+ shardInfo.getConcurrencyToken() + ". Shutdown reason: " + reason);
|
||||
try {
|
||||
recordProcessor.shutdown(recordProcessorCheckpointer, reason);
|
||||
String lastCheckpointValue = recordProcessorCheckpointer.getLastCheckpointValue();
|
||||
if (reason == ShutdownReason.TERMINATE) {
|
||||
if ((lastCheckpointValue == null)
|
||||
|| (!lastCheckpointValue.equals(SentinelCheckpoint.SHARD_END.toString()))) {
|
||||
throw new IllegalArgumentException("Application didn't checkpoint at end of shard "
|
||||
+ shardInfo.getShardId());
|
||||
}
|
||||
}
|
||||
LOG.debug("Record processor completed shutdown() for shard " + shardInfo.getShardId());
|
||||
} catch (Exception e) {
|
||||
applicationException = true;
|
||||
throw e;
|
||||
}
|
||||
|
||||
if (reason == ShutdownReason.TERMINATE) {
|
||||
LOG.debug("Looking for child shards of shard " + shardInfo.getShardId());
|
||||
// create leases for the child shards
|
||||
ShardSyncer.checkAndCreateLeasesForNewShards(kinesisProxy,
|
||||
leaseManager,
|
||||
initialPositionInStream,
|
||||
cleanupLeasesOfCompletedShards);
|
||||
LOG.debug("Finished checking for child shards of shard " + shardInfo.getShardId());
|
||||
}
|
||||
|
||||
return new TaskResult(null);
|
||||
} catch (Exception e) {
|
||||
if (applicationException) {
|
||||
LOG.error("Application exception. ", e);
|
||||
} else {
|
||||
LOG.error("Caught exception: ", e);
|
||||
}
|
||||
exception = e;
|
||||
// backoff if we encounter an exception.
|
||||
try {
|
||||
Thread.sleep(this.backoffTimeMillis);
|
||||
} catch (InterruptedException ie) {
|
||||
LOG.debug("Interrupted sleep", ie);
|
||||
}
|
||||
}
|
||||
|
||||
return new TaskResult(exception);
|
||||
}
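// Illustrative sketch (application-side, hypothetical class): when the shutdown reason is TERMINATE,
// the record processor is expected to checkpoint so that SHARD_END is recorded; otherwise call()
// above throws IllegalArgumentException. Roughly:
//
//     public void shutdown(IRecordProcessorCheckpointer checkpointer, ShutdownReason reason) {
//         if (reason == ShutdownReason.TERMINATE) {
//             try {
//                 checkpointer.checkpoint();   // records the SHARD_END checkpoint for this shard
//             } catch (Exception e) {
//                 // application-specific handling/logging
//             }
//         }
//     }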
|
||||
|
||||
/* (non-Javadoc)
|
||||
* @see com.amazonaws.services.kinesis.clientlibrary.lib.worker.ITask#getTaskType()
|
||||
*/
|
||||
@Override
|
||||
public TaskType getTaskType() {
|
||||
return taskType;
|
||||
}
|
||||
|
||||
}
|
||||
|
|
@@ -0,0 +1,100 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
|
||||
|
||||
import com.amazonaws.services.kinesis.clientlibrary.proxies.IKinesisProxy;
|
||||
|
||||
/**
|
||||
* Used to capture stream configuration and pass it along.
|
||||
*/
|
||||
class StreamConfig {
|
||||
|
||||
private final IKinesisProxy streamProxy;
|
||||
private final int maxRecords;
|
||||
private final long idleTimeInMilliseconds;
|
||||
private final boolean callProcessRecordsEvenForEmptyRecordList;
|
||||
private InitialPositionInStream initialPositionInStream;
|
||||
|
||||
/**
|
||||
* @param proxy Used to fetch records and information about the stream
|
||||
* @param maxRecords Max records to fetch in a call
|
||||
* @param idleTimeInMilliseconds Idle time between get calls to the stream
|
||||
* @param callProcessRecordsEvenForEmptyRecordList Call the RecordProcessor::processRecords() API even if
|
||||
* GetRecords returned an empty record list.
|
||||
*/
|
||||
StreamConfig(IKinesisProxy proxy,
|
||||
int maxRecords,
|
||||
long idleTimeInMilliseconds,
|
||||
boolean callProcessRecordsEvenForEmptyRecordList) {
|
||||
this(proxy, maxRecords, idleTimeInMilliseconds, callProcessRecordsEvenForEmptyRecordList,
|
||||
InitialPositionInStream.LATEST);
|
||||
}
|
||||
|
||||
/**
|
||||
* @param proxy Used to fetch records and information about the stream
|
||||
* @param maxRecords Max records to be fetched in a call
|
||||
* @param idleTimeInMilliseconds Idle time between get calls to the stream
|
||||
* @param callProcessRecordsEvenForEmptyRecordList Call the IRecordProcessor::processRecords() API even if
|
||||
* GetRecords returned an empty record list.
|
||||
* @param initialPositionInStream Initial position in stream
|
||||
*/
|
||||
StreamConfig(IKinesisProxy proxy,
|
||||
int maxRecords,
|
||||
long idleTimeInMilliseconds,
|
||||
boolean callProcessRecordsEvenForEmptyRecordList,
|
||||
InitialPositionInStream initialPositionInStream) {
|
||||
this.streamProxy = proxy;
|
||||
this.maxRecords = maxRecords;
|
||||
this.idleTimeInMilliseconds = idleTimeInMilliseconds;
|
||||
this.callProcessRecordsEvenForEmptyRecordList = callProcessRecordsEvenForEmptyRecordList;
|
||||
this.initialPositionInStream = initialPositionInStream;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return the streamProxy
|
||||
*/
|
||||
IKinesisProxy getStreamProxy() {
|
||||
return streamProxy;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return the maxRecords
|
||||
*/
|
||||
int getMaxRecords() {
|
||||
return maxRecords;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return the idleTimeInMilliseconds
|
||||
*/
|
||||
long getIdleTimeInMilliseconds() {
|
||||
return idleTimeInMilliseconds;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return the callProcessRecordsEvenForEmptyRecordList
|
||||
*/
|
||||
boolean shouldCallProcessRecordsEvenForEmptyRecordList() {
|
||||
return callProcessRecordsEvenForEmptyRecordList;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return the initialPositionInStream
|
||||
*/
|
||||
InitialPositionInStream getInitialPositionInStream() {
|
||||
return initialPositionInStream;
|
||||
}
|
||||
|
||||
}
|
||||
|
|
@@ -0,0 +1,73 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
|
||||
|
||||
/**
|
||||
* Used to capture information from a task that we want to communicate back to the higher layer.
|
||||
* E.g. an exception thrown when executing the task, or whether we reached the end of the shard.
|
||||
*/
|
||||
class TaskResult {
|
||||
|
||||
// Did we reach the end of the shard while processing this task.
|
||||
private boolean shardEndReached;
|
||||
|
||||
// Any exception caught while executing the task.
|
||||
private Exception exception;
|
||||
|
||||
/**
|
||||
* @return the shardEndReached
|
||||
*/
|
||||
protected boolean isShardEndReached() {
|
||||
return shardEndReached;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param shardEndReached the shardEndReached to set
|
||||
*/
|
||||
protected void setShardEndReached(boolean shardEndReached) {
|
||||
this.shardEndReached = shardEndReached;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return the exception
|
||||
*/
|
||||
public Exception getException() {
|
||||
return exception;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param e Any exception encountered when running the process task.
|
||||
*/
|
||||
TaskResult(Exception e) {
|
||||
this(e, false);
|
||||
}
|
||||
|
||||
/**
|
||||
* @param isShardEndReached Whether we reached the end of the shard (no more records will ever be fetched)
|
||||
*/
|
||||
TaskResult(boolean isShardEndReached) {
|
||||
this(null, isShardEndReached);
|
||||
}
|
||||
|
||||
/**
|
||||
* @param e Any exception encountered when executing task.
|
||||
* @param isShardEndReached Whether we reached the end of the shard (no more records will ever be fetched)
|
||||
*/
|
||||
TaskResult(Exception e, boolean isShardEndReached) {
|
||||
this.exception = e;
|
||||
this.shardEndReached = isShardEndReached;
|
||||
}
|
||||
|
||||
}
|
||||
|
|
@@ -0,0 +1,41 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
|
||||
|
||||
/**
|
||||
* Enumerates types of tasks executed as part of processing a shard.
|
||||
*/
|
||||
enum TaskType {
|
||||
/**
|
||||
* Polls and waits until parent shard(s) have been fully processed.
|
||||
*/
|
||||
BLOCK_ON_PARENT_SHARDS,
|
||||
/**
|
||||
* Initialization of RecordProcessor (and Amazon Kinesis Client Library internal state for a shard).
|
||||
*/
|
||||
INITIALIZE,
|
||||
/**
|
||||
* Fetching and processing of records.
|
||||
*/
|
||||
PROCESS,
|
||||
/**
|
||||
* Shutdown of RecordProcessor.
|
||||
*/
|
||||
SHUTDOWN,
|
||||
/**
|
||||
* Sync leases/activities corresponding to Kinesis shards.
|
||||
*/
|
||||
SHARDSYNC;
|
||||
}
|
||||
|
|
@@ -0,0 +1,530 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.lib.worker;
|
||||
|
||||
import java.util.HashSet;
|
||||
import java.util.List;
|
||||
import java.util.Set;
|
||||
import java.util.concurrent.ConcurrentHashMap;
|
||||
import java.util.concurrent.ConcurrentMap;
|
||||
import java.util.concurrent.ExecutorService;
|
||||
import java.util.concurrent.Executors;
|
||||
import java.util.concurrent.TimeUnit;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
|
||||
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClient;
|
||||
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
|
||||
import com.amazonaws.services.kinesis.AmazonKinesisClient;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.interfaces.ICheckpoint;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessor;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorFactory;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.proxies.KinesisProxyFactory;
|
||||
import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownReason;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.LeasingException;
|
||||
import com.amazonaws.services.kinesis.leases.impl.KinesisClientLeaseManager;
|
||||
import com.amazonaws.services.kinesis.metrics.impl.CWMetricsFactory;
|
||||
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsFactory;
|
||||
|
||||
/**
|
||||
* Worker is the high level class that Kinesis applications use to start processing data.
|
||||
* It initializes and oversees different components (e.g. syncing shard and lease information, tracking shard
|
||||
* assignments, and processing data from the shards).
|
||||
*/
|
||||
public class Worker implements Runnable {
|
||||
|
||||
private static final int MAX_INITIALIZATION_ATTEMPTS = 20;
|
||||
private static final Log LOG = LogFactory.getLog(Worker.class);
|
||||
private WorkerLog wlog = new WorkerLog();
|
||||
|
||||
private final String applicationName;
|
||||
private final IRecordProcessorFactory recordProcessorFactory;
|
||||
private final StreamConfig streamConfig;
|
||||
private final InitialPositionInStream initialPosition;
|
||||
private final ICheckpoint checkpointTracker;
|
||||
private final long idleTimeInMilliseconds;
|
||||
// Backoff time when polling to check if application has finished processing parent shards
|
||||
private final long parentShardPollIntervalMillis;
|
||||
private final ExecutorService executorService;
|
||||
private final IMetricsFactory metricsFactory;
|
||||
// Backoff time when running tasks if they encounter exceptions
|
||||
private final long taskBackoffTimeMillis;
|
||||
|
||||
// private final KinesisClientLeaseManager leaseManager;
|
||||
private final KinesisClientLibLeaseCoordinator leaseCoordinator;
|
||||
private final ShardSyncTaskManager controlServer;
|
||||
|
||||
private boolean shutdown;
|
||||
|
||||
// Holds consumers for shards the worker is currently tracking. Key is shard id, value is ShardConsumer.
|
||||
private ConcurrentMap<String, ShardConsumer> shardIdShardConsumerMap =
|
||||
new ConcurrentHashMap<String, ShardConsumer>();
|
||||
private final boolean cleanupLeasesUponShardCompletion;
|
||||
|
||||
/**
|
||||
* Constructor.
|
||||
* @param recordProcessorFactory Used to get record processor instances for processing data from shards
|
||||
* @param config Kinesis Client Library configuration
|
||||
*/
|
||||
public Worker(IRecordProcessorFactory recordProcessorFactory, KinesisClientLibConfiguration config) {
|
||||
this(recordProcessorFactory, config, Executors.newCachedThreadPool());
|
||||
}
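// Illustrative usage sketch (application-side; class names and the assumed
// KinesisClientLibConfiguration(applicationName, streamName, credentialsProvider, workerId)
// constructor form are assumptions, not prescribed by this class):
//
//     IRecordProcessorFactory factory = new MyRecordProcessorFactory();
//     KinesisClientLibConfiguration config = new KinesisClientLibConfiguration(
//             "myKclApp", "myStream", new DefaultAWSCredentialsProviderChain(), "worker-1");
//     Worker worker = new Worker(factory, config);
//     new Thread(worker).start();   // Worker implements Runnable; call worker.shutdown() to stop it.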
|
||||
|
||||
/**
|
||||
* Constructor.
|
||||
* @param recordProcessorFactory Used to get record processor instances for processing data from shards
|
||||
* @param config Kinesis Client Library configuration
|
||||
* @param execService ExecutorService to use for processing records (support for multi-threaded
|
||||
* consumption)
|
||||
*/
|
||||
public Worker(IRecordProcessorFactory recordProcessorFactory,
|
||||
KinesisClientLibConfiguration config, ExecutorService execService) {
|
||||
this(recordProcessorFactory, config,
|
||||
new AmazonKinesisClient(config.getKinesisCredentialsProvider(), config.getKinesisClientConfiguration()),
|
||||
new AmazonDynamoDBClient(config.getDynamoDBCredentialsProvider(), config.getDynamoDBClientConfiguration()),
|
||||
new AmazonCloudWatchClient(config.getCloudWatchCredentialsProvider(),
|
||||
config.getCloudWatchClientConfiguration()),
|
||||
execService);
|
||||
}
|
||||
|
||||
/**
|
||||
* @param recordProcessorFactory Used to get record processor instances for processing data from shards
|
||||
* @param config Kinesis Client Library configuration
|
||||
* @param metricsFactory Metrics factory used to emit metrics
|
||||
*/
|
||||
public Worker(IRecordProcessorFactory recordProcessorFactory,
|
||||
KinesisClientLibConfiguration config,
|
||||
IMetricsFactory metricsFactory) {
|
||||
this(recordProcessorFactory, config, metricsFactory, Executors.newCachedThreadPool());
|
||||
}
|
||||
|
||||
/**
|
||||
* @param recordProcessorFactory Used to get record processor instances for processing data from shards
|
||||
* @param config Kinesis Client Library configuration
|
||||
* @param metricsFactory Metrics factory used to emit metrics
|
||||
* @param execService ExecutorService to use for processing records (support for multi-threaded
|
||||
* consumption)
|
||||
*/
|
||||
public Worker(IRecordProcessorFactory recordProcessorFactory,
|
||||
KinesisClientLibConfiguration config,
|
||||
IMetricsFactory metricsFactory,
|
||||
ExecutorService execService) {
|
||||
this(recordProcessorFactory, config,
|
||||
new AmazonKinesisClient(config.getKinesisCredentialsProvider(), config.getKinesisClientConfiguration()),
|
||||
new AmazonDynamoDBClient(config.getDynamoDBCredentialsProvider(), config.getDynamoDBClientConfiguration()),
|
||||
metricsFactory,
|
||||
execService);
|
||||
}
|
||||
|
||||
/**
|
||||
* @param recordProcessorFactory Used to get record processor instances for processing data from shards
|
||||
* @param config Kinesis Client Library configuration
|
||||
* @param kinesisClient Kinesis Client used for fetching data
|
||||
* @param dynamoDBClient DynamoDB client used for checkpoints and tracking leases
|
||||
* @param cloudWatchClient CloudWatch Client for publishing metrics
|
||||
*/
|
||||
public Worker(IRecordProcessorFactory recordProcessorFactory,
|
||||
KinesisClientLibConfiguration config,
|
||||
AmazonKinesisClient kinesisClient,
|
||||
AmazonDynamoDBClient dynamoDBClient,
|
||||
AmazonCloudWatchClient cloudWatchClient) {
|
||||
this(recordProcessorFactory, config,
|
||||
kinesisClient, dynamoDBClient, cloudWatchClient,
|
||||
Executors.newCachedThreadPool());
|
||||
}
|
||||
|
||||
/**
|
||||
* @param recordProcessorFactory Used to get record processor instances for processing data from shards
|
||||
* @param config Kinesis Client Library configuration
|
||||
* @param kinesisClient Kinesis Client used for fetching data
|
||||
* @param dynamoDBClient DynamoDB client used for checkpoints and tracking leases
|
||||
* @param cloudWatchClient CloudWatch client used for publishing metrics
|
||||
* @param execService ExecutorService to use for processing records (support for multi-threaded
|
||||
* consumption)
|
||||
*/
|
||||
public Worker(IRecordProcessorFactory recordProcessorFactory,
|
||||
KinesisClientLibConfiguration config,
|
||||
AmazonKinesisClient kinesisClient,
|
||||
AmazonDynamoDBClient dynamoDBClient,
|
||||
AmazonCloudWatchClient cloudWatchClient,
|
||||
ExecutorService execService) {
|
||||
this(recordProcessorFactory, config,
|
||||
kinesisClient, dynamoDBClient,
|
||||
new CWMetricsFactory(
|
||||
cloudWatchClient,
|
||||
config.getApplicationName(),
|
||||
config.getMetricsBufferTimeMillis(),
|
||||
config.getMetricsMaxQueueSize()),
|
||||
execService);
|
||||
}
|
||||
|
||||
/**
|
||||
* @param recordProcessorFactory Used to get record processor instances for processing data from shards
|
||||
* @param config Kinesis Client Library configuration
|
||||
* @param kinesisClient Kinesis Client used for fetching data
|
||||
* @param dynamoDBClient DynamoDB client used for checkpoints and tracking leases
|
||||
* @param metricsFactory Metrics factory used to emit metrics
|
||||
* @param execService ExecutorService to use for processing records (support for multi-threaded
|
||||
* consumption)
|
||||
*/
|
||||
public Worker(IRecordProcessorFactory recordProcessorFactory,
|
||||
KinesisClientLibConfiguration config,
|
||||
AmazonKinesisClient kinesisClient,
|
||||
AmazonDynamoDBClient dynamoDBClient,
|
||||
IMetricsFactory metricsFactory,
|
||||
ExecutorService execService) {
|
||||
this(recordProcessorFactory, config,
|
||||
new StreamConfig(
|
||||
new KinesisProxyFactory(config.getKinesisCredentialsProvider(),
|
||||
kinesisClient).getProxy(config.getStreamName()),
|
||||
config.getMaxRecords(),
|
||||
config.getIdleTimeBetweenReadsInMillis(),
|
||||
config.shouldCallProcessRecordsEvenForEmptyRecordList()),
|
||||
new KinesisClientLibLeaseCoordinator(
|
||||
new KinesisClientLeaseManager(config.getApplicationName(), dynamoDBClient),
|
||||
config.getWorkerIdentifier(),
|
||||
config.getFailoverTimeMillis(),
|
||||
config.getEpsilonMillis(),
|
||||
metricsFactory),
|
||||
metricsFactory, execService);
|
||||
// If an endpoint was explicitly specified, use it.
|
||||
if (config.getKinesisEndpoint() != null) {
|
||||
kinesisClient.setEndpoint(config.getKinesisEndpoint());
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* @param recordProcessorFactory Used to get record processor instances for processing data from shards
|
||||
* @param config Kinesis Client Library configuration
|
||||
* @param streamConfig Stream configuration
|
||||
* @param leaseCoordinator Lease coordinator (coordinates currently owned leases and checkpoints)
|
||||
* @param metricsFactory Metrics factory used to emit metrics
|
||||
* @param execService ExecutorService to use for processing records (support for multi-threaded
|
||||
* consumption)
|
||||
*/
|
||||
private Worker(IRecordProcessorFactory recordProcessorFactory,
|
||||
KinesisClientLibConfiguration config,
|
||||
StreamConfig streamConfig,
|
||||
KinesisClientLibLeaseCoordinator leaseCoordinator,
|
||||
IMetricsFactory metricsFactory,
|
||||
ExecutorService execService) {
|
||||
this(config.getApplicationName(), recordProcessorFactory, streamConfig, config.getInitialPositionInStream(),
|
||||
config.getParentShardPollIntervalMillis(), config.getShardSyncIntervalMillis(),
|
||||
config.shouldCleanupLeasesUponShardCompletion(), leaseCoordinator, leaseCoordinator, execService,
|
||||
metricsFactory, config.getTaskBackoffTimeMillis());
|
||||
}
|
||||
|
||||
/**
|
||||
* @param applicationName Name of the Kinesis application
|
||||
* @param recordProcessorFactory Used to get record processor instances for processing data from shards
|
||||
* @param streamConfig Stream configuration
|
||||
* @param initialPositionInStream One of LATEST or TRIM_HORIZON. The KinesisClientLibrary will start fetching data
|
||||
* from this location in the stream when an application starts up for the first time and there are no
|
||||
* checkpoints. If there are checkpoints, we start from the checkpoint position.
|
||||
* @param parentShardPollIntervalMillis Wait for this long between polls to check if parent shards are done
|
||||
* @param shardSyncIdleTimeMillis Time between tasks to sync leases and Kinesis shards
|
||||
* @param cleanupLeasesUponShardCompletion Clean up leases for shards we've finished processing (don't wait
* until they expire in Kinesis)
|
||||
* @param checkpoint Used to get/set checkpoints
|
||||
* @param leaseCoordinator Lease coordinator (coordinates currently owned leases)
|
||||
* @param execService ExecutorService to use for processing records (support for multi-threaded
|
||||
* consumption)
|
||||
* @param metricsFactory Metrics factory used to emit metrics
|
||||
* @param taskBackoffTimeMillis Backoff period when tasks encounter an exception
|
||||
*/
|
||||
// NOTE: This has package level access solely for testing
|
||||
// CHECKSTYLE:IGNORE ParameterNumber FOR NEXT 10 LINES
|
||||
Worker(String applicationName,
|
||||
IRecordProcessorFactory recordProcessorFactory,
|
||||
StreamConfig streamConfig,
|
||||
InitialPositionInStream initialPositionInStream,
|
||||
long parentShardPollIntervalMillis,
|
||||
long shardSyncIdleTimeMillis,
|
||||
boolean cleanupLeasesUponShardCompletion,
|
||||
ICheckpoint checkpoint,
|
||||
KinesisClientLibLeaseCoordinator leaseCoordinator,
|
||||
ExecutorService execService,
|
||||
IMetricsFactory metricsFactory,
|
||||
long taskBackoffTimeMillis) {
|
||||
this.applicationName = applicationName;
|
||||
this.recordProcessorFactory = recordProcessorFactory;
|
||||
this.streamConfig = streamConfig;
|
||||
this.initialPosition = initialPositionInStream;
|
||||
this.parentShardPollIntervalMillis = parentShardPollIntervalMillis;
|
||||
this.cleanupLeasesUponShardCompletion = cleanupLeasesUponShardCompletion;
|
||||
this.checkpointTracker = checkpoint;
|
||||
this.idleTimeInMilliseconds = streamConfig.getIdleTimeInMilliseconds();
|
||||
this.executorService = execService;
|
||||
this.leaseCoordinator = leaseCoordinator;
|
||||
this.metricsFactory = metricsFactory;
|
||||
this.controlServer =
|
||||
new ShardSyncTaskManager(streamConfig.getStreamProxy(),
|
||||
leaseCoordinator.getLeaseManager(),
|
||||
initialPositionInStream,
|
||||
cleanupLeasesUponShardCompletion,
|
||||
shardSyncIdleTimeMillis,
|
||||
metricsFactory,
|
||||
executorService);
|
||||
this.taskBackoffTimeMillis = taskBackoffTimeMillis;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return the applicationName
|
||||
*/
|
||||
public String getApplicationName() {
|
||||
return applicationName;
|
||||
}
|
||||
|
||||
/**
|
||||
* Start consuming data from the stream, and pass it to the application record processors.
|
||||
*/
|
||||
public void run() {
|
||||
try {
|
||||
initialize();
|
||||
LOG.info("Initialization complete. Starting worker loop.");
|
||||
} catch (RuntimeException e1) {
|
||||
LOG.error("Unable to initialize after " + MAX_INITIALIZATION_ATTEMPTS + " attempts. Shutting down.", e1);
|
||||
shutdown();
|
||||
}
|
||||
|
||||
while (!shutdown) {
|
||||
try {
|
||||
boolean foundCompletedShard = false;
|
||||
Set<String> assignedShardIds = new HashSet<String>();
|
||||
for (ShardInfo shardInfo : getShardInfoForAssignments()) {
|
||||
ShardConsumer shardConsumer = createOrGetShardConsumer(shardInfo, recordProcessorFactory);
|
||||
if (shardConsumer.isShutdown()
|
||||
&& shardConsumer.getShutdownReason().equals(ShutdownReason.TERMINATE)) {
|
||||
foundCompletedShard = true;
|
||||
} else {
|
||||
shardConsumer.consumeShard();
|
||||
}
|
||||
assignedShardIds.add(shardInfo.getShardId());
|
||||
}
|
||||
|
||||
if (foundCompletedShard) {
|
||||
controlServer.syncShardAndLeaseInfo(null);
|
||||
}
|
||||
|
||||
// clean up shard consumers for unassigned shards
|
||||
cleanupShardConsumers(assignedShardIds);
|
||||
|
||||
wlog.info("Sleeping ...");
|
||||
Thread.sleep(idleTimeInMilliseconds);
|
||||
} catch (Exception e) {
|
||||
LOG.error(String.format("Worker.run caught exception, sleeping for %s milliseconds!",
|
||||
String.valueOf(idleTimeInMilliseconds)),
|
||||
e);
|
||||
try {
|
||||
Thread.sleep(idleTimeInMilliseconds);
|
||||
} catch (InterruptedException ex) {
|
||||
LOG.info("Worker: sleep interrupted after catching exception ", ex);
|
||||
}
|
||||
}
|
||||
wlog.resetInfoLogging();
|
||||
}
|
||||
|
||||
LOG.info("Stopping LeaseCoordinator.");
|
||||
leaseCoordinator.stop();
|
||||
}
|
||||
|
||||
private void initialize() {
|
||||
boolean isDone = false;
|
||||
Exception lastException = null;
|
||||
|
||||
for (int i = 0; (!isDone) && (i < MAX_INITIALIZATION_ATTEMPTS); i++) {
|
||||
try {
|
||||
LOG.info("Initialization attempt " + (i + 1));
|
||||
LOG.info("Initializing LeaseCoordinator");
|
||||
leaseCoordinator.initialize();
|
||||
|
||||
LOG.info("Syncing Kinesis shard info");
|
||||
ShardSyncTask shardSyncTask =
|
||||
new ShardSyncTask(streamConfig.getStreamProxy(),
|
||||
leaseCoordinator.getLeaseManager(),
|
||||
initialPosition,
|
||||
cleanupLeasesUponShardCompletion,
|
||||
0L);
|
||||
TaskResult result = new MetricsCollectingTaskDecorator(shardSyncTask, metricsFactory).call();
|
||||
|
||||
if (result.getException() == null) {
|
||||
if (!leaseCoordinator.isRunning()) {
|
||||
LOG.info("Starting LeaseCoordinator");
|
||||
leaseCoordinator.start();
|
||||
} else {
|
||||
LOG.info("LeaseCoordinator is already running. No need to start it.");
|
||||
}
|
||||
isDone = true;
|
||||
} else {
|
||||
lastException = result.getException();
|
||||
}
|
||||
} catch (LeasingException e) {
|
||||
LOG.error("Caught exception when initializing LeaseCoordinator", e);
|
||||
lastException = e;
|
||||
} catch (Exception e) {
|
||||
lastException = e;
|
||||
}
|
||||
|
||||
try {
|
||||
Thread.sleep(parentShardPollIntervalMillis);
|
||||
} catch (InterruptedException e) {
|
||||
LOG.debug("Sleep interrupted while initializing worker.");
|
||||
}
|
||||
}
|
||||
|
||||
if (!isDone) {
|
||||
throw new RuntimeException(lastException);
|
||||
}
|
||||
}
|
||||
|
||||
private void cleanupShardConsumers(Set<String> assignedShardIds) {
|
||||
for (String shardId : shardIdShardConsumerMap.keySet()) {
|
||||
if (!assignedShardIds.contains(shardId)) {
|
||||
// Shut down the consumer since we are no longer responsible for the shard.
|
||||
boolean isShutdown = shardIdShardConsumerMap.get(shardId).beginShutdown();
|
||||
if (isShutdown) {
|
||||
shardIdShardConsumerMap.remove(shardId);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private List<ShardInfo> getShardInfoForAssignments() {
|
||||
List<ShardInfo> assignedStreamShards = leaseCoordinator.getCurrentAssignments();
|
||||
|
||||
if ((assignedStreamShards != null) && (!assignedStreamShards.isEmpty())) {
|
||||
if (wlog.isInfoEnabled()) {
|
||||
StringBuilder builder = new StringBuilder();
|
||||
boolean firstItem = true;
|
||||
for (ShardInfo shardInfo : assignedStreamShards) {
|
||||
if (!firstItem) {
|
||||
builder.append(", ");
|
||||
}
|
||||
builder.append(shardInfo.getShardId());
|
||||
firstItem = false;
|
||||
}
|
||||
wlog.info("Current stream shard assignments: " + builder.toString());
|
||||
}
|
||||
} else {
|
||||
wlog.info("No activities assigned");
|
||||
}
|
||||
|
||||
return assignedStreamShards;
|
||||
}
|
||||
|
||||
/**
|
||||
* Sets the shutdown flag so this worker will stop before beginning its next loop iteration.
|
||||
*/
|
||||
public void shutdown() {
|
||||
this.shutdown = true;
|
||||
}
|
||||
|
||||
/**
|
||||
* NOTE: This method is internal/private to the Worker class. It has package access solely for
|
||||
* testing.
|
||||
*
|
||||
* @param shardInfo Kinesis shard info
|
||||
* @param factory RecordProcessor factory
|
||||
* @return ShardConsumer for the shard
|
||||
*/
|
||||
ShardConsumer createOrGetShardConsumer(ShardInfo shardInfo, IRecordProcessorFactory factory) {
|
||||
synchronized (shardIdShardConsumerMap) {
|
||||
String shardId = shardInfo.getShardId();
|
||||
ShardConsumer consumer = shardIdShardConsumerMap.get(shardId);
|
||||
// Instantiate a new consumer if we don't have one, or the one we had was from an earlier
|
||||
// lease instance (and was shutdown). Don't need to create another one if the shard has been
|
||||
// completely processed (shutdown reason terminate).
|
||||
if ((consumer == null)
|
||||
|| (consumer.isShutdown() && consumer.getShutdownReason().equals(ShutdownReason.ZOMBIE))) {
|
||||
IRecordProcessor recordProcessor = factory.createProcessor();
|
||||
|
||||
consumer =
|
||||
new ShardConsumer(shardInfo,
|
||||
streamConfig,
|
||||
checkpointTracker,
|
||||
recordProcessor,
|
||||
leaseCoordinator.getLeaseManager(),
|
||||
parentShardPollIntervalMillis,
|
||||
cleanupLeasesUponShardCompletion,
|
||||
executorService,
|
||||
metricsFactory,
|
||||
taskBackoffTimeMillis);
|
||||
shardIdShardConsumerMap.put(shardId, consumer);
|
||||
wlog.infoForce("Created new shardConsumer for shardId: " + shardId + ", concurrencyToken: "
|
||||
+ shardInfo.getConcurrencyToken());
|
||||
}
|
||||
return consumer;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Logger wrapper used to suppress excessive INFO logging.
* To keep logs manageable, Worker emits INFO-level output for a single pass through the main loop
* once per minute.
* At DEBUG level it emits that output on every pass.
|
||||
*/
|
||||
private static class WorkerLog {
|
||||
|
||||
private long reportIntervalMillis = TimeUnit.MINUTES.toMillis(1);
|
||||
private long nextReportTime = System.currentTimeMillis() + reportIntervalMillis;
|
||||
private boolean infoReporting;
|
||||
|
||||
private WorkerLog() {
|
||||
|
||||
}
|
||||
|
||||
@SuppressWarnings("unused")
|
||||
public void debug(Object message, Throwable t) {
|
||||
LOG.debug(message, t);
|
||||
}
|
||||
|
||||
public void info(Object message) {
|
||||
if (this.isInfoEnabled()) {
|
||||
LOG.info(message);
|
||||
}
|
||||
}
|
||||
|
||||
public void infoForce(Object message) {
|
||||
LOG.info(message);
|
||||
}
|
||||
|
||||
@SuppressWarnings("unused")
|
||||
public void warn(Object message) {
|
||||
LOG.warn(message);
|
||||
}
|
||||
|
||||
@SuppressWarnings("unused")
|
||||
public void error(Object message, Throwable t) {
|
||||
LOG.error(message, t);
|
||||
}
|
||||
|
||||
private boolean isInfoEnabled() {
|
||||
return infoReporting;
|
||||
}
|
||||
|
||||
private void resetInfoLogging() {
|
||||
if (infoReporting) {
|
||||
// We just logged at INFO level for a pass through worker loop
|
||||
if (LOG.isInfoEnabled()) {
|
||||
infoReporting = false;
|
||||
nextReportTime = System.currentTimeMillis() + reportIntervalMillis;
|
||||
} // else is DEBUG or TRACE so leave reporting true
|
||||
} else if (nextReportTime <= System.currentTimeMillis()) {
|
||||
infoReporting = true;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
@@ -0,0 +1,100 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.proxies;
|
||||
|
||||
import java.nio.ByteBuffer;
|
||||
import java.util.List;
|
||||
import java.util.Set;
|
||||
|
||||
import com.amazonaws.services.kinesis.model.DescribeStreamResult;
|
||||
import com.amazonaws.services.kinesis.model.ExpiredIteratorException;
|
||||
import com.amazonaws.services.kinesis.model.GetRecordsResult;
|
||||
import com.amazonaws.services.kinesis.model.InvalidArgumentException;
|
||||
import com.amazonaws.services.kinesis.model.PutRecordResult;
|
||||
import com.amazonaws.services.kinesis.model.ResourceNotFoundException;
|
||||
import com.amazonaws.services.kinesis.model.Shard;
|
||||
|
||||
/**
|
||||
* Kinesis proxy interface. Operates on a single stream (set up at initialization).
|
||||
*/
|
||||
public interface IKinesisProxy {
|
||||
|
||||
/**
|
||||
* Get records from stream.
|
||||
*
|
||||
* @param shardIterator Fetch data records using this shard iterator
|
||||
* @param maxRecords Fetch at most this many records
|
||||
* @return List of data records from Kinesis.
|
||||
* @throws InvalidArgumentException Invalid input parameters
|
||||
* @throws ResourceNotFoundException The Kinesis stream or shard was not found
|
||||
* @throws ExpiredIteratorException The iterator has expired
|
||||
*/
|
||||
GetRecordsResult get(String shardIterator, int maxRecords)
|
||||
throws ResourceNotFoundException, InvalidArgumentException, ExpiredIteratorException;
|
||||
|
||||
/**
|
||||
* Fetch information about stream. Useful for fetching the list of shards in a stream.
|
||||
*
|
||||
* @param startShardId exclusive start shardId - used when paginating the list of shards.
|
||||
* @return DescribeStreamResult object containing a description of the stream.
|
||||
* @throws ResourceNotFoundException The Kinesis stream was not found
|
||||
*/
|
||||
DescribeStreamResult getStreamInfo(String startShardId) throws ResourceNotFoundException;
|
||||
|
||||
/**
|
||||
* Fetch the shardIds of all shards in the stream.
|
||||
*
|
||||
* @return Set of all shardIds
|
||||
* @throws ResourceNotFoundException If the specified Kinesis stream was not found
|
||||
*/
|
||||
Set<String> getAllShardIds() throws ResourceNotFoundException;
|
||||
|
||||
/**
|
||||
* Fetch all the shards defined for the stream (e.g. obtained via calls to the DescribeStream API).
|
||||
* This can be used to discover new shards and consume data from them.
|
||||
*
|
||||
* @return List of all shards in the Kinesis stream.
|
||||
* @throws ResourceNotFoundException The Kinesis stream was not found.
|
||||
*/
|
||||
List<Shard> getShardList() throws ResourceNotFoundException;
|
||||
|
||||
/**
|
||||
* Fetch a shard iterator from the specified position in the shard.
|
||||
*
|
||||
* @param shardId Shard id
|
||||
* @param iteratorEnum one of: TRIM_HORIZON, LATEST, AT_SEQUENCE_NUMBER, AFTER_SEQUENCE_NUMBER
|
||||
* @param sequenceNumber the sequence number - must be null unless iteratorEnum is AT_SEQUENCE_NUMBER or
|
||||
* AFTER_SEQUENCE_NUMBER
|
||||
* @return shard iterator which can be used to read data from Kinesis.
|
||||
* @throws ResourceNotFoundException The Kinesis stream or shard was not found
|
||||
* @throws InvalidArgumentException Invalid input parameters
|
||||
*/
|
||||
String getIterator(String shardId, String iteratorEnum, String sequenceNumber)
|
||||
throws ResourceNotFoundException, InvalidArgumentException;
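// Illustrative usage sketch (assumed calling pattern; "proxy" is some IKinesisProxy instance):
//
//     String iterator = proxy.getIterator(shardId, "TRIM_HORIZON", null);
//     GetRecordsResult result = proxy.get(iterator, 1000);
//     // result.getRecords() holds the fetched data records;
//     // result.getNextShardIterator() is passed to the next get() call to continue reading.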
|
||||
|
||||
/**
|
||||
* @param sequenceNumberForOrdering (optional) used for record ordering
|
||||
* @param explicitHashKey Optionally supplied transformation of the partition key
|
||||
* @param partitionKey for this record
|
||||
* @param data payload
|
||||
* @return PutRecordResult (contains the Kinesis sequence number of the record).
|
||||
* @throws ResourceNotFoundException The Kinesis stream was not found.
|
||||
* @throws InvalidArgumentException Invalid input parameters.
|
||||
*/
|
||||
PutRecordResult put(String sequenceNumberForOrdering,
|
||||
String explicitHashKey,
|
||||
String partitionKey,
|
||||
ByteBuffer data) throws ResourceNotFoundException, InvalidArgumentException;
|
||||
}
|
||||
|
|
@@ -0,0 +1,30 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.proxies;
|
||||
|
||||
/**
|
||||
* Interface for a KinesisProxyFactory.
|
||||
*
|
||||
*/
|
||||
public interface IKinesisProxyFactory {
|
||||
|
||||
/**
|
||||
* Return an IKinesisProxy object for the specified stream.
|
||||
* @param streamName Stream from which data is consumed.
|
||||
* @return IKinesisProxy object.
|
||||
*/
|
||||
IKinesisProxy getProxy(String streamName);
|
||||
|
||||
}
|
||||
|
|
@@ -0,0 +1,261 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.proxies;
|
||||
|
||||
import java.nio.ByteBuffer;
|
||||
import java.util.ArrayList;
|
||||
import java.util.HashSet;
|
||||
import java.util.List;
|
||||
import java.util.Set;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
|
||||
import com.amazonaws.auth.AWSCredentialsProvider;
|
||||
import com.amazonaws.services.kinesis.AmazonKinesisClient;
|
||||
import com.amazonaws.services.kinesis.model.DescribeStreamRequest;
|
||||
import com.amazonaws.services.kinesis.model.DescribeStreamResult;
|
||||
import com.amazonaws.services.kinesis.model.ExpiredIteratorException;
|
||||
import com.amazonaws.services.kinesis.model.GetRecordsRequest;
|
||||
import com.amazonaws.services.kinesis.model.GetRecordsResult;
|
||||
import com.amazonaws.services.kinesis.model.GetShardIteratorRequest;
|
||||
import com.amazonaws.services.kinesis.model.GetShardIteratorResult;
|
||||
import com.amazonaws.services.kinesis.model.InvalidArgumentException;
|
||||
import com.amazonaws.services.kinesis.model.LimitExceededException;
|
||||
import com.amazonaws.services.kinesis.model.PutRecordRequest;
|
||||
import com.amazonaws.services.kinesis.model.PutRecordResult;
|
||||
import com.amazonaws.services.kinesis.model.ResourceNotFoundException;
|
||||
import com.amazonaws.services.kinesis.model.Shard;
|
||||
import com.amazonaws.services.kinesis.model.StreamStatus;
|
||||
|
||||
/**
|
||||
* Kinesis proxy - used to make calls to Amazon Kinesis (e.g. fetch data records and list of shards).
|
||||
*/
|
||||
public class KinesisProxy implements IKinesisProxy {
|
||||
|
||||
private static final Log LOG = LogFactory.getLog(KinesisProxy.class);
|
||||
|
||||
private static String defaultServiceName = "kinesis";
|
||||
private static String defaultRegionId = "us-east-1";
|
||||
|
||||
private AmazonKinesisClient client;
|
||||
private AWSCredentialsProvider credentialsProvider;
|
||||
|
||||
private final String streamName;
|
||||
|
||||
private static final long DEFAULT_DESCRIBE_STREAM_BACKOFF_MILLIS = 1000L;
|
||||
private static final int DEFAULT_DESCRIBE_STREAM_RETRY_TIMES = 50;
|
||||
private final long describeStreamBackoffTimeInMillis;
|
||||
private final int maxDescribeStreamRetryAttempts;
|
||||
|
||||
/**
|
||||
* Public constructor.
|
||||
*
|
||||
* @param streamName Data records will be fetched from this stream
|
||||
* @param credentialProvider Provides credentials for signing Kinesis requests
|
||||
* @param endpoint Kinesis endpoint
|
||||
*/
|
||||
|
||||
public KinesisProxy(final String streamName, AWSCredentialsProvider credentialProvider, String endpoint) {
|
||||
this(streamName, credentialProvider, endpoint, defaultServiceName, defaultRegionId,
|
||||
DEFAULT_DESCRIBE_STREAM_BACKOFF_MILLIS, DEFAULT_DESCRIBE_STREAM_RETRY_TIMES);
|
||||
}
|
||||
|
||||
/**
|
||||
* Public constructor.
|
||||
*
|
||||
* @param streamName Data records will be fetched from this stream
|
||||
* @param credentialProvider Provides credentials for signing Kinesis requests
|
||||
* @param endpoint Kinesis endpoint
|
||||
* @param serviceName service name
|
||||
* @param regionId region id
|
||||
* @param describeStreamBackoffTimeInMillis Backoff time for DescribeStream calls in milliseconds
|
||||
* @param maxDescribeStreamRetryAttempts Number of retry attempts for DescribeStream calls
|
||||
*/
|
||||
public KinesisProxy(final String streamName,
|
||||
AWSCredentialsProvider credentialProvider,
|
||||
String endpoint,
|
||||
String serviceName,
|
||||
String regionId,
|
||||
long describeStreamBackoffTimeInMillis,
|
||||
int maxDescribeStreamRetryAttempts) {
|
||||
this(streamName, credentialProvider, new AmazonKinesisClient(credentialProvider),
|
||||
describeStreamBackoffTimeInMillis, maxDescribeStreamRetryAttempts);
|
||||
client.setEndpoint(endpoint, serviceName, regionId);
|
||||
|
||||
LOG.debug("KinesisProxy has created a kinesisClient");
|
||||
}
|
||||
|
||||
/**
|
||||
* Public constructor.
|
||||
*
|
||||
* @param streamName Data records will be fetched from this stream
|
||||
* @param credentialProvider Provides credentials for signing Kinesis requests
|
||||
* @param kinesisClient Kinesis client (used to fetch data from Kinesis)
|
||||
* @param describeStreamBackoffTimeInMillis Backoff time for DescribeStream calls in milliseconds
|
||||
* @param maxDescribeStreamRetryAttempts Number of retry attempts for DescribeStream calls
|
||||
*/
|
||||
public KinesisProxy(final String streamName,
|
||||
AWSCredentialsProvider credentialProvider,
|
||||
AmazonKinesisClient kinesisClient,
|
||||
long describeStreamBackoffTimeInMillis,
|
||||
int maxDescribeStreamRetryAttempts) {
|
||||
this.streamName = streamName;
|
||||
this.credentialsProvider = credentialProvider;
|
||||
this.describeStreamBackoffTimeInMillis = describeStreamBackoffTimeInMillis;
|
||||
this.maxDescribeStreamRetryAttempts = maxDescribeStreamRetryAttempts;
|
||||
this.client = kinesisClient;
|
||||
|
||||
LOG.debug("KinesisProxy( " + streamName + ")");
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public GetRecordsResult get(String shardIterator, int maxRecords)
|
||||
throws ResourceNotFoundException, InvalidArgumentException, ExpiredIteratorException {
|
||||
|
||||
final GetRecordsRequest getRecordsRequest = new GetRecordsRequest();
|
||||
getRecordsRequest.setRequestCredentials(credentialsProvider.getCredentials());
|
||||
getRecordsRequest.setShardIterator(shardIterator);
|
||||
getRecordsRequest.setLimit(maxRecords);
|
||||
final GetRecordsResult response = client.getRecords(getRecordsRequest);
|
||||
return response;
|
||||
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public DescribeStreamResult getStreamInfo(String startShardId)
|
||||
throws ResourceNotFoundException, LimitExceededException {
|
||||
final DescribeStreamRequest describeStreamRequest = new DescribeStreamRequest();
|
||||
describeStreamRequest.setRequestCredentials(credentialsProvider.getCredentials());
|
||||
describeStreamRequest.setStreamName(streamName);
|
||||
describeStreamRequest.setExclusiveStartShardId(startShardId);
|
||||
DescribeStreamResult response = null;
|
||||
int remainingRetryTimes = this.maxDescribeStreamRetryAttempts;
|
||||
// Call DescribeStream, with backoff and retries (if we get LimitExceededException).
|
||||
while ((remainingRetryTimes >= 0) && (response == null)) {
|
||||
try {
|
||||
response = client.describeStream(describeStreamRequest);
|
||||
} catch (LimitExceededException le) {
|
||||
LOG.info("Got LimitExceededException when describing stream " + streamName + ". Backing off for "
|
||||
+ this.describeStreamBackoffTimeInMillis + " millis.");
|
||||
try {
|
||||
Thread.sleep(this.describeStreamBackoffTimeInMillis);
|
||||
} catch (InterruptedException ie) {
|
||||
LOG.debug("Stream " + streamName + " : Sleep was interrupted ", ie);
|
||||
}
|
||||
}
|
||||
remainingRetryTimes--;
|
||||
}
|
||||
|
||||
if (response == null) {
// Every DescribeStream attempt was throttled; fail explicitly rather than risk a NullPointerException below.
throw new LimitExceededException("Unable to describe stream " + streamName + " after "
+ maxDescribeStreamRetryAttempts + " retry attempts.");
}

if (StreamStatus.ACTIVE.toString().equals(response.getStreamDescription().getStreamStatus())
|| StreamStatus.UPDATING.toString().equals(response.getStreamDescription().getStreamStatus())) {
|
||||
return response;
|
||||
} else {
|
||||
LOG.info("Stream is in status " + response.getStreamDescription().getStreamStatus()
|
||||
+ ", KinesisProxy.DescribeStream returning null (wait until stream is Active or Updating).");
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public List<Shard> getShardList() {
|
||||
List<Shard> result = new ArrayList<Shard>();
|
||||
|
||||
DescribeStreamResult response = null;
|
||||
String lastShardId = null;
|
||||
|
||||
do {
|
||||
response = getStreamInfo(lastShardId);
|
||||
|
||||
if (response == null) {
|
||||
/*
|
||||
* If getStreamInfo ever returns null, we should bail and return null. This indicates the stream is not
|
||||
* in ACTIVE or UPDATING state and we may not have accurate/consistent information about the stream.
|
||||
*/
|
||||
return null;
|
||||
} else {
|
||||
List<Shard> shards = response.getStreamDescription().getShards();
|
||||
result.addAll(shards);
|
||||
lastShardId = shards.get(shards.size() - 1).getShardId();
|
||||
}
|
||||
} while (response.getStreamDescription().isHasMoreShards());
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public Set<String> getAllShardIds() throws ResourceNotFoundException {
|
||||
List<Shard> shards = getShardList();
|
||||
if (shards == null) {
|
||||
return null;
|
||||
} else {
|
||||
Set<String> shardIds = new HashSet<String>();
|
||||
|
||||
for (Shard shard : shards) {
|
||||
shardIds.add(shard.getShardId());
|
||||
}
|
||||
|
||||
return shardIds;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public String getIterator(String shardId, String iteratorType, String sequenceNumber) {
|
||||
final GetShardIteratorRequest getShardIteratorRequest = new GetShardIteratorRequest();
|
||||
getShardIteratorRequest.setRequestCredentials(credentialsProvider.getCredentials());
|
||||
getShardIteratorRequest.setStreamName(streamName);
|
||||
getShardIteratorRequest.setShardId(shardId);
|
||||
getShardIteratorRequest.setShardIteratorType(iteratorType);
|
||||
getShardIteratorRequest.setStartingSequenceNumber(sequenceNumber);
|
||||
final GetShardIteratorResult response = client.getShardIterator(getShardIteratorRequest);
|
||||
return response.getShardIterator();
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public PutRecordResult put(String exclusiveMinimumSequenceNumber,
|
||||
String explicitHashKey,
|
||||
String partitionKey,
|
||||
ByteBuffer data) throws ResourceNotFoundException, InvalidArgumentException {
|
||||
final PutRecordRequest putRecordRequest = new PutRecordRequest();
|
||||
putRecordRequest.setRequestCredentials(credentialsProvider.getCredentials());
|
||||
putRecordRequest.setStreamName(streamName);
|
||||
putRecordRequest.setSequenceNumberForOrdering(exclusiveMinimumSequenceNumber);
|
||||
putRecordRequest.setExplicitHashKey(explicitHashKey);
|
||||
putRecordRequest.setPartitionKey(partitionKey);
|
||||
putRecordRequest.setData(data);
|
||||
|
||||
final PutRecordResult response = client.putRecord(putRecordRequest);
|
||||
return response;
|
||||
}
|
||||
|
||||
}
|
||||
|
|
@@ -0,0 +1,125 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.proxies;
|
||||
|
||||
import com.amazonaws.ClientConfiguration;
|
||||
import com.amazonaws.auth.AWSCredentialsProvider;
|
||||
import com.amazonaws.services.kinesis.AmazonKinesisClient;
|
||||
|
||||
/**
|
||||
* Factory used for instantiating KinesisProxy objects (to fetch data from Kinesis).
|
||||
*/
|
||||
public class KinesisProxyFactory implements IKinesisProxyFactory {
|
||||
|
||||
private final AWSCredentialsProvider credentialProvider;
|
||||
private static String defaultServiceName = "kinesis";
|
||||
private static String defaultRegionId = "us-east-1";
|
||||
private static final long DEFAULT_DESCRIBE_STREAM_BACKOFF_MILLIS = 1000L;
|
||||
private static final int DEFAULT_DESCRIBE_STREAM_RETRY_TIMES = 50;
|
||||
private final AmazonKinesisClient kinesisClient;
|
||||
private final long describeStreamBackoffTimeInMillis;
|
||||
private final int maxDescribeStreamRetryAttempts;
|
||||
|
||||
/**
|
||||
* Constructor for creating a KinesisProxy factory, using the specified credentials provider and endpoint.
|
||||
*
|
||||
* @param credentialProvider credentials provider used to sign requests
|
||||
* @param endpoint Amazon Kinesis endpoint to use
|
||||
*/
|
||||
public KinesisProxyFactory(AWSCredentialsProvider credentialProvider, String endpoint) {
|
||||
this(credentialProvider, new ClientConfiguration(), endpoint, defaultServiceName, defaultRegionId,
|
||||
DEFAULT_DESCRIBE_STREAM_BACKOFF_MILLIS, DEFAULT_DESCRIBE_STREAM_RETRY_TIMES);
|
||||
}
|
||||
|
||||
/**
|
||||
* Constructor for a KinesisProxy factory that uses the given client configuration when interacting with Kinesis.
|
||||
*
|
||||
* @param credentialProvider credentials provider used to sign requests
|
||||
* @param clientConfig Client Configuration used when instantiating an AmazonKinesisClient
|
||||
* @param endpoint Amazon Kinesis endpoint to use
|
||||
*/
|
||||
public KinesisProxyFactory(AWSCredentialsProvider credentialProvider,
|
||||
ClientConfiguration clientConfig,
|
||||
String endpoint) {
|
||||
this(credentialProvider, clientConfig, endpoint, defaultServiceName, defaultRegionId,
|
||||
DEFAULT_DESCRIBE_STREAM_BACKOFF_MILLIS, DEFAULT_DESCRIBE_STREAM_RETRY_TIMES);
|
||||
this.kinesisClient.setConfiguration(clientConfig);
|
||||
}
|
||||
|
||||
/**
|
||||
* This constructor may be used to specify the AmazonKinesisClient to use.
|
||||
*
|
||||
* @param credentialProvider credentials provider used to sign requests
|
||||
* @param client AmazonKinesisClient used to fetch data from Kinesis
|
||||
*/
|
||||
public KinesisProxyFactory(AWSCredentialsProvider credentialProvider, AmazonKinesisClient client) {
|
||||
this(credentialProvider, client, DEFAULT_DESCRIBE_STREAM_BACKOFF_MILLIS, DEFAULT_DESCRIBE_STREAM_RETRY_TIMES);
|
||||
}
|
||||
|
||||
/**
|
||||
* Used internally and for development/testing.
|
||||
*
|
||||
* @param credentialProvider credentials provider used to sign requests
|
||||
* @param clientConfig Client Configuration used when instantiating an AmazonKinesisClient
|
||||
* @param endpoint Amazon Kinesis endpoint to use
|
||||
* @param serviceName service name
|
||||
* @param regionId region id
|
||||
* @param describeStreamBackoffTimeInMillis backoff time for describing stream in millis
|
||||
* @param maxDescribeStreamRetryAttempts Number of retry attempts for DescribeStream calls
|
||||
*/
|
||||
KinesisProxyFactory(AWSCredentialsProvider credentialProvider,
|
||||
ClientConfiguration clientConfig,
|
||||
String endpoint,
|
||||
String serviceName,
|
||||
String regionId,
|
||||
long describeStreamBackoffTimeInMillis,
|
||||
int maxDescribeStreamRetryAttempts) {
|
||||
this(credentialProvider, new AmazonKinesisClient(credentialProvider, clientConfig),
|
||||
describeStreamBackoffTimeInMillis, maxDescribeStreamRetryAttempts);
|
||||
this.kinesisClient.setEndpoint(endpoint, serviceName, regionId);
|
||||
}
|
||||
|
||||
/**
|
||||
* Used internally in the class (and for development/testing).
|
||||
*
|
||||
* @param credentialProvider credentials provider used to sign requests
|
||||
* @param client AmazonKinesisClient used to fetch data from Kinesis
|
||||
* @param describeStreamBackoffTimeInMillis backoff time for describing stream in millis
|
||||
* @param maxDescribeStreamRetryAttempts Number of retry attempts for DescribeStream calls
|
||||
*/
|
||||
KinesisProxyFactory(AWSCredentialsProvider credentialProvider,
|
||||
AmazonKinesisClient client,
|
||||
long describeStreamBackoffTimeInMillis,
|
||||
int maxDescribeStreamRetryAttempts) {
|
||||
super();
|
||||
this.kinesisClient = client;
|
||||
this.credentialProvider = credentialProvider;
|
||||
this.describeStreamBackoffTimeInMillis = describeStreamBackoffTimeInMillis;
|
||||
this.maxDescribeStreamRetryAttempts = maxDescribeStreamRetryAttempts;
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public IKinesisProxy getProxy(String streamName) {
|
||||
return new KinesisProxy(streamName,
|
||||
credentialProvider,
|
||||
kinesisClient,
|
||||
describeStreamBackoffTimeInMillis,
|
||||
maxDescribeStreamRetryAttempts);
|
||||
|
||||
}
|
||||
}
|
||||
|
|
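A minimal usage sketch for this factory. The endpoint URL, stream name, and the use of the SDK's DefaultAWSCredentialsProviderChain are illustrative assumptions, not values taken from this commit:

import java.util.List;

import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.kinesis.clientlibrary.proxies.IKinesisProxy;
import com.amazonaws.services.kinesis.clientlibrary.proxies.KinesisProxyFactory;
import com.amazonaws.services.kinesis.model.Shard;

public class ProxyFactoryExample {
    public static void main(String[] args) {
        AWSCredentialsProvider credentials = new DefaultAWSCredentialsProviderChain();
        // Two-argument constructor from this file: default backoff and retry settings.
        KinesisProxyFactory factory =
                new KinesisProxyFactory(credentials, "https://kinesis.us-east-1.amazonaws.com");
        IKinesisProxy proxy = factory.getProxy("myStream");
        List<Shard> shards = proxy.getShardList(); // null if the stream is not ACTIVE or UPDATING
        System.out.println(shards == null ? "stream not ready" : shards.size() + " shards");
    }
}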
@@ -0,0 +1,163 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.proxies;
|
||||
|
||||
import java.nio.ByteBuffer;
|
||||
import java.util.List;
|
||||
import java.util.Set;
|
||||
|
||||
import com.amazonaws.services.kinesis.model.DescribeStreamResult;
|
||||
import com.amazonaws.services.kinesis.model.ExpiredIteratorException;
|
||||
import com.amazonaws.services.kinesis.model.GetRecordsResult;
|
||||
import com.amazonaws.services.kinesis.model.InvalidArgumentException;
|
||||
import com.amazonaws.services.kinesis.model.PutRecordResult;
|
||||
import com.amazonaws.services.kinesis.model.ResourceNotFoundException;
|
||||
import com.amazonaws.services.kinesis.model.Shard;
|
||||
import com.amazonaws.services.kinesis.metrics.impl.MetricsHelper;
|
||||
|
||||
/**
|
||||
* IKinesisProxy implementation that wraps another implementation and collects metrics.
|
||||
*/
|
||||
public class MetricsCollectingKinesisProxyDecorator implements IKinesisProxy {
|
||||
|
||||
private static final String SEP = ".";
|
||||
|
||||
private final String getIteratorMetric;
|
||||
private final String getRecordsMetric;
|
||||
private final String getStreamInfoMetric;
|
||||
private final String getShardListMetric;
|
||||
private final String putRecordMetric;
|
||||
private final String getRecordsShardId;
|
||||
|
||||
private IKinesisProxy other;
|
||||
|
||||
/**
|
||||
* Constructor.
|
||||
*
|
||||
* @param prefix prefix for generated metrics
|
||||
* @param other Kinesis proxy to decorate
|
||||
* @param shardId shardId will be included in the metrics.
|
||||
*/
|
||||
public MetricsCollectingKinesisProxyDecorator(String prefix, IKinesisProxy other, String shardId) {
|
||||
this.other = other;
|
||||
getRecordsShardId = shardId;
|
||||
getIteratorMetric = prefix + SEP + "getIterator";
|
||||
getRecordsMetric = prefix + SEP + "getRecords";
|
||||
getStreamInfoMetric = prefix + SEP + "getStreamInfo";
|
||||
getShardListMetric = prefix + SEP + "getShardList";
|
||||
putRecordMetric = prefix + SEP + "putRecord";
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public GetRecordsResult get(String shardIterator, int maxRecords)
|
||||
throws ResourceNotFoundException, InvalidArgumentException, ExpiredIteratorException {
|
||||
long startTime = System.currentTimeMillis();
|
||||
boolean success = false;
|
||||
try {
|
||||
GetRecordsResult response = other.get(shardIterator, maxRecords);
|
||||
success = true;
|
||||
return response;
|
||||
} finally {
|
||||
MetricsHelper.addSuccessAndLatencyPerShard(getRecordsShardId, getRecordsMetric, startTime, success);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public DescribeStreamResult getStreamInfo(String startingShardId) throws ResourceNotFoundException {
|
||||
long startTime = System.currentTimeMillis();
|
||||
boolean success = false;
|
||||
try {
|
||||
DescribeStreamResult response = other.getStreamInfo(startingShardId);
|
||||
success = true;
|
||||
return response;
|
||||
} finally {
|
||||
MetricsHelper.addSuccessAndLatency(getStreamInfoMetric, startTime, success);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public Set<String> getAllShardIds() throws ResourceNotFoundException {
|
||||
long startTime = System.currentTimeMillis();
|
||||
boolean success = false;
|
||||
try {
|
||||
Set<String> response = other.getAllShardIds();
|
||||
success = true;
|
||||
return response;
|
||||
} finally {
|
||||
MetricsHelper.addSuccessAndLatency(getStreamInfoMetric, startTime, success);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public String getIterator(String shardId, String iteratorEnum, String sequenceNumber)
|
||||
throws ResourceNotFoundException, InvalidArgumentException {
|
||||
long startTime = System.currentTimeMillis();
|
||||
boolean success = false;
|
||||
try {
|
||||
String response = other.getIterator(shardId, iteratorEnum, sequenceNumber);
|
||||
success = true;
|
||||
return response;
|
||||
} finally {
|
||||
MetricsHelper.addSuccessAndLatency(getIteratorMetric, startTime, success);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public List<Shard> getShardList() throws ResourceNotFoundException {
|
||||
long startTime = System.currentTimeMillis();
|
||||
boolean success = false;
|
||||
try {
|
||||
List<Shard> response = other.getShardList();
|
||||
success = true;
|
||||
return response;
|
||||
} finally {
|
||||
MetricsHelper.addSuccessAndLatency(getShardListMetric, startTime, success);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public PutRecordResult put(String sequenceNumberForOrdering,
|
||||
String explicitHashKey,
|
||||
String partitionKey,
|
||||
ByteBuffer data) throws ResourceNotFoundException, InvalidArgumentException {
|
||||
long startTime = System.currentTimeMillis();
|
||||
boolean success = false;
|
||||
try {
|
||||
PutRecordResult response = other.put(sequenceNumberForOrdering, explicitHashKey, partitionKey, data);
|
||||
success = true;
|
||||
return response;
|
||||
} finally {
|
||||
MetricsHelper.addSuccessAndLatency(putRecordMetric, startTime, success);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
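A small sketch of how this decorator is meant to be applied: wrap an existing IKinesisProxy so every call reports success and latency through MetricsHelper. The metric prefix and shard id handling are placeholders.

import com.amazonaws.services.kinesis.clientlibrary.proxies.IKinesisProxy;
import com.amazonaws.services.kinesis.clientlibrary.proxies.MetricsCollectingKinesisProxyDecorator;

public class MeteredProxyExample {
    // Wraps an existing proxy; the "KinesisDataFetcher" prefix is an illustrative value.
    static IKinesisProxy withMetrics(IKinesisProxy rawProxy, String shardId) {
        return new MetricsCollectingKinesisProxyDecorator("KinesisDataFetcher", rawProxy, shardId);
    }
}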
@@ -0,0 +1,40 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.clientlibrary.types;
|
||||
|
||||
/**
|
||||
* Reason the RecordProcessor is being shutdown.
|
||||
* Used to distinguish between a fail-over and a termination (shard is closed and all records have been delivered).
|
||||
* In case of a fail over, applications should NOT checkpoint as part of shutdown,
|
||||
* since another record processor may have already started processing records for that shard.
|
||||
* In case of termination (resharding use case), applications SHOULD checkpoint their progress to indicate
|
||||
* that they have successfully processed all the records (processing of child shards can then begin).
|
||||
*/
|
||||
public enum ShutdownReason {
|
||||
/**
|
||||
* Processing will be moved to a different record processor (fail over, load balancing use cases).
|
||||
* Applications SHOULD NOT checkpoint their progress (as another record processor may have already started
|
||||
* processing data).
|
||||
*/
|
||||
ZOMBIE,
|
||||
|
||||
/**
|
||||
* Terminate processing for this RecordProcessor (resharding use case).
|
||||
* Indicates that the shard is closed and all records from the shard have been delivered to the application.
|
||||
* Applications SHOULD checkpoint their progress to indicate that they have successfully processed all records
|
||||
* from this shard and processing of child shards can be started.
|
||||
*/
|
||||
TERMINATE
|
||||
}
|
||||
|
|
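A minimal sketch of the checkpointing rule described in the javadoc above. The Checkpointer interface is hypothetical and stands in for whatever checkpoint mechanism the application uses; only ShutdownReason comes from this file.

import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownReason;

public class ShutdownExample {
    // Hypothetical stand-in for the application's checkpoint mechanism.
    interface Checkpointer {
        void checkpoint();
    }

    // Checkpoint only when the shard is closed and fully consumed (TERMINATE);
    // skip it on fail-over (ZOMBIE) because another worker may already own the shard.
    static void onShutdown(ShutdownReason reason, Checkpointer checkpointer) {
        if (reason == ShutdownReason.TERMINATE) {
            checkpointer.checkpoint();
        }
    }
}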
@@ -0,0 +1,34 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.leases.exceptions;
|
||||
|
||||
/**
|
||||
* Indicates that a lease operation has failed because a dependency of the leasing system has failed. This will happen
|
||||
* if DynamoDB throws an InternalServerException or a generic AmazonClientException (the specific subclasses of
|
||||
* AmazonClientException are all handled more gracefully).
|
||||
*/
|
||||
public class DependencyException extends LeasingException {
|
||||
|
||||
private static final long serialVersionUID = 1L;
|
||||
|
||||
public DependencyException(Throwable e) {
|
||||
super(e);
|
||||
}
|
||||
|
||||
public DependencyException(String message, Throwable e) {
|
||||
super(message, e);
|
||||
}
|
||||
|
||||
}
|
||||
|
|
@@ -0,0 +1,37 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.leases.exceptions;
|
||||
|
||||
/**
|
||||
* Indicates that a lease operation has failed because DynamoDB is in an invalid state. The most common example is failing
|
||||
* to create the DynamoDB table before doing any lease operations.
|
||||
*/
|
||||
public class InvalidStateException extends LeasingException {
|
||||
|
||||
private static final long serialVersionUID = 1L;
|
||||
|
||||
public InvalidStateException(Throwable e) {
|
||||
super(e);
|
||||
}
|
||||
|
||||
public InvalidStateException(String message, Throwable e) {
|
||||
super(message, e);
|
||||
}
|
||||
|
||||
public InvalidStateException(String message) {
|
||||
super(message);
|
||||
}
|
||||
|
||||
}
|
||||
|
|
@@ -0,0 +1,36 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.leases.exceptions;
|
||||
|
||||
/**
|
||||
* Top-level exception type for all exceptions thrown by the leasing code.
|
||||
*/
|
||||
public class LeasingException extends Exception {
|
||||
|
||||
public LeasingException(Throwable e) {
|
||||
super(e);
|
||||
}
|
||||
|
||||
public LeasingException(String message, Throwable e) {
|
||||
super(message, e);
|
||||
}
|
||||
|
||||
public LeasingException(String message) {
|
||||
super(message);
|
||||
}
|
||||
|
||||
private static final long serialVersionUID = 1L;
|
||||
|
||||
}
|
||||
|
|
@@ -0,0 +1,32 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.leases.exceptions;
|
||||
|
||||
/**
|
||||
* Indicates that a lease operation has failed due to lack of provisioned throughput for a DynamoDB table.
|
||||
*/
|
||||
public class ProvisionedThroughputException extends LeasingException {
|
||||
|
||||
private static final long serialVersionUID = 1L;
|
||||
|
||||
public ProvisionedThroughputException(Throwable e) {
|
||||
super(e);
|
||||
}
|
||||
|
||||
public ProvisionedThroughputException(String message, Throwable e) {
|
||||
super(message, e);
|
||||
}
|
||||
|
||||
}
|
||||
|
|
@@ -0,0 +1,168 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.leases.impl;
|
||||
|
||||
import java.util.Collection;
|
||||
import java.util.HashSet;
|
||||
import java.util.Set;
|
||||
|
||||
/**
|
||||
* A Lease subclass containing KinesisClientLibrary related fields for checkpoints.
|
||||
*/
|
||||
public class KinesisClientLease extends Lease {
|
||||
|
||||
private String checkpoint;
|
||||
private Long ownerSwitchesSinceCheckpoint = 0L;
|
||||
private Set<String> parentShardIds = new HashSet<String>();
|
||||
|
||||
public KinesisClientLease() {
|
||||
|
||||
}
|
||||
|
||||
public KinesisClientLease(KinesisClientLease other) {
|
||||
super(other);
|
||||
this.checkpoint = other.getCheckpoint();
|
||||
this.ownerSwitchesSinceCheckpoint = other.getOwnerSwitchesSinceCheckpoint();
|
||||
this.parentShardIds.addAll(other.getParentShardIds());
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public <T extends Lease> void update(T other) {
|
||||
super.update(other);
|
||||
if (!(other instanceof KinesisClientLease)) {
|
||||
throw new IllegalArgumentException("Must pass KinesisClientLease object to KinesisClientLease.update(Lease)");
|
||||
}
|
||||
KinesisClientLease casted = (KinesisClientLease) other;
|
||||
|
||||
// Do not update ownerSwitchesSinceCheckpoint here - that field is maintained by the leasing library.
|
||||
setCheckpoint(casted.checkpoint);
|
||||
setParentShardIds(casted.parentShardIds);
|
||||
}
|
||||
|
||||
/**
|
||||
* @return the most recent application-supplied checkpoint value. During fail-over, the new worker will pick up after
|
||||
* the old worker's last checkpoint.
|
||||
*/
|
||||
public String getCheckpoint() {
|
||||
return checkpoint;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return count of distinct lease holders between checkpoints.
|
||||
*/
|
||||
public Long getOwnerSwitchesSinceCheckpoint() {
|
||||
return ownerSwitchesSinceCheckpoint;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return shardIds that parent this lease. Used for resharding.
|
||||
*/
|
||||
public Set<String> getParentShardIds() {
|
||||
return new HashSet<String>(parentShardIds);
|
||||
}
|
||||
|
||||
/**
|
||||
* Sets checkpoint.
|
||||
*
|
||||
* @param checkpoint may not be null
|
||||
*/
|
||||
public void setCheckpoint(String checkpoint) {
|
||||
verifyNotNull(checkpoint, "Checkpoint should not be null");
|
||||
|
||||
this.checkpoint = checkpoint;
|
||||
}
|
||||
|
||||
/**
|
||||
* Sets ownerSwitchesSinceCheckpoint.
|
||||
*
|
||||
* @param ownerSwitchesSinceCheckpoint may not be null
|
||||
*/
|
||||
public void setOwnerSwitchesSinceCheckpoint(Long ownerSwitchesSinceCheckpoint) {
|
||||
verifyNotNull(ownerSwitchesSinceCheckpoint, "ownerSwitchesSinceCheckpoint should not be null");
|
||||
|
||||
this.ownerSwitchesSinceCheckpoint = ownerSwitchesSinceCheckpoint;
|
||||
}
|
||||
|
||||
/**
|
||||
* Sets parentShardIds.
|
||||
*
|
||||
* @param parentShardIds may not be null
|
||||
*/
|
||||
public void setParentShardIds(Collection<String> parentShardIds) {
|
||||
verifyNotNull(parentShardIds, "parentShardIds should not be null");
|
||||
|
||||
this.parentShardIds.clear();
|
||||
this.parentShardIds.addAll(parentShardIds);
|
||||
}
|
||||
|
||||
private void verifyNotNull(Object object, String message) {
|
||||
if (object == null) {
|
||||
throw new IllegalArgumentException(message);
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public int hashCode() {
|
||||
final int prime = 31;
|
||||
int result = super.hashCode();
|
||||
result = prime * result + ((checkpoint == null) ? 0 : checkpoint.hashCode());
|
||||
result =
|
||||
prime * result + ((ownerSwitchesSinceCheckpoint == null) ? 0 : ownerSwitchesSinceCheckpoint.hashCode());
|
||||
result = prime * result + ((parentShardIds == null) ? 0 : parentShardIds.hashCode());
|
||||
return result;
|
||||
}
|
||||
|
||||
@Override
|
||||
public boolean equals(Object obj) {
|
||||
if (this == obj)
|
||||
return true;
|
||||
if (!super.equals(obj))
|
||||
return false;
|
||||
if (getClass() != obj.getClass())
|
||||
return false;
|
||||
KinesisClientLease other = (KinesisClientLease) obj;
|
||||
if (checkpoint == null) {
|
||||
if (other.checkpoint != null)
|
||||
return false;
|
||||
} else if (!checkpoint.equals(other.checkpoint))
|
||||
return false;
|
||||
if (ownerSwitchesSinceCheckpoint == null) {
|
||||
if (other.ownerSwitchesSinceCheckpoint != null)
|
||||
return false;
|
||||
} else if (!ownerSwitchesSinceCheckpoint.equals(other.ownerSwitchesSinceCheckpoint))
|
||||
return false;
|
||||
if (parentShardIds == null) {
|
||||
if (other.parentShardIds != null)
|
||||
return false;
|
||||
} else if (!parentShardIds.equals(other.parentShardIds))
|
||||
return false;
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* Returns a deep copy of this object. Type-unsafe - there aren't good mechanisms for copy-constructing generics.
|
||||
*
|
||||
* @return A deep copy of this object.
|
||||
*/
|
||||
@Override
|
||||
@SuppressWarnings("unchecked")
|
||||
public <T extends Lease> T copy() {
|
||||
return (T) new KinesisClientLease(this);
|
||||
}
|
||||
|
||||
}
|
||||
|
|
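A short sketch of populating a KinesisClientLease via the setters above. The lease key, checkpoint value, and parent shard id are placeholders.

import java.util.Arrays;

import com.amazonaws.services.kinesis.leases.impl.KinesisClientLease;

public class LeaseSetupExample {
    static KinesisClientLease newLeaseForShard(String shardId) {
        KinesisClientLease lease = new KinesisClientLease();
        lease.setLeaseKey(shardId);                                      // one lease per shard
        lease.setCheckpoint("TRIM_HORIZON");                             // placeholder checkpoint value
        lease.setParentShardIds(Arrays.asList("shardId-000000000000"));  // placeholder parent shard
        return lease;
    }
}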
@@ -0,0 +1,86 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.leases.impl;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
|
||||
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.DependencyException;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.InvalidStateException;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.ProvisionedThroughputException;
|
||||
import com.amazonaws.services.kinesis.leases.interfaces.IKinesisClientLeaseManager;
|
||||
|
||||
/**
|
||||
* An implementation of LeaseManager for the KinesisClientLibrary - takeLease updates the ownerSwitchesSinceCheckpoint field.
|
||||
*/
|
||||
public class KinesisClientLeaseManager extends LeaseManager<KinesisClientLease> implements IKinesisClientLeaseManager {
|
||||
|
||||
@SuppressWarnings("unused")
|
||||
private static final Log LOG = LogFactory.getLog(KinesisClientLeaseManager.class);
|
||||
|
||||
/**
|
||||
* Constructor.
|
||||
*
|
||||
* @param table Leases table
|
||||
* @param dynamoDBClient DynamoDB client to use
|
||||
*/
|
||||
public KinesisClientLeaseManager(String table, AmazonDynamoDB dynamoDBClient) {
|
||||
this(table, dynamoDBClient, false);
|
||||
}
|
||||
|
||||
/**
|
||||
* Constructor for integration tests - see comment on superclass for documentation on setting the consistentReads
|
||||
* flag.
|
||||
*
|
||||
* @param table leases table
|
||||
* @param dynamoDBClient DynamoDB client to use
|
||||
* @param consistentReads true if we want consistent reads for testing purposes.
|
||||
*/
|
||||
public KinesisClientLeaseManager(String table, AmazonDynamoDB dynamoDBClient, boolean consistentReads) {
|
||||
super(table, dynamoDBClient, new KinesisClientLeaseSerializer(), consistentReads);
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public boolean takeLease(KinesisClientLease lease, String newOwner)
|
||||
throws DependencyException, InvalidStateException, ProvisionedThroughputException {
|
||||
String oldOwner = lease.getLeaseOwner();
|
||||
|
||||
boolean result = super.takeLease(lease, newOwner);
|
||||
|
||||
if (oldOwner != null && !oldOwner.equals(newOwner)) {
|
||||
lease.setOwnerSwitchesSinceCheckpoint(lease.getOwnerSwitchesSinceCheckpoint() + 1);
|
||||
}
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public String getCheckpoint(String shardId)
|
||||
throws ProvisionedThroughputException, InvalidStateException, DependencyException {
|
||||
String checkpoint = null;
|
||||
KinesisClientLease lease = getLease(shardId);
|
||||
if (lease != null) {
|
||||
checkpoint = lease.getCheckpoint();
|
||||
}
|
||||
return checkpoint;
|
||||
}
|
||||
}
|
||||
|
|
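A sketch of wiring the lease manager to DynamoDB. The table name is a placeholder, and the no-argument AmazonDynamoDBClient constructor (default credentials) is an assumption about the AWS SDK rather than something defined in this commit.

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.kinesis.leases.impl.KinesisClientLeaseManager;

public class LeaseManagerExample {
    static KinesisClientLeaseManager forTable(String tableName) {
        AmazonDynamoDB dynamo = new AmazonDynamoDBClient(); // assumed: default credentials chain
        // Two-argument constructor above: eventually consistent reads (production setting).
        return new KinesisClientLeaseManager(tableName, dynamo);
    }
}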
@@ -0,0 +1,139 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.leases.impl;
|
||||
|
||||
import java.util.Collection;
|
||||
import java.util.Map;
|
||||
|
||||
import com.amazonaws.services.dynamodbv2.model.AttributeAction;
|
||||
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
|
||||
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
|
||||
import com.amazonaws.services.dynamodbv2.model.AttributeValueUpdate;
|
||||
import com.amazonaws.services.dynamodbv2.model.ExpectedAttributeValue;
|
||||
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
|
||||
import com.amazonaws.services.kinesis.leases.interfaces.ILeaseSerializer;
|
||||
import com.amazonaws.services.kinesis.leases.util.DynamoUtils;
|
||||
|
||||
/**
|
||||
* An implementation of ILeaseSerializer for KinesisClientLease objects.
|
||||
*/
|
||||
public class KinesisClientLeaseSerializer implements ILeaseSerializer<KinesisClientLease> {
|
||||
|
||||
private static final String OWNER_SWITCHES_KEY = "ownerSwitchesSinceCheckpoint";
|
||||
private static final String CHECKPOINT_KEY = "checkpoint";
|
||||
public final String PARENT_SHARD_ID_KEY = "parentShardId";
|
||||
|
||||
private final LeaseSerializer baseSerializer = new LeaseSerializer(KinesisClientLease.class);
|
||||
|
||||
@Override
|
||||
public Map<String, AttributeValue> toDynamoRecord(KinesisClientLease lease) {
|
||||
Map<String, AttributeValue> result = baseSerializer.toDynamoRecord(lease);
|
||||
|
||||
result.put(OWNER_SWITCHES_KEY, DynamoUtils.createAttributeValue(lease.getOwnerSwitchesSinceCheckpoint()));
|
||||
result.put(CHECKPOINT_KEY, DynamoUtils.createAttributeValue(lease.getCheckpoint()));
|
||||
if (lease.getParentShardIds() != null && !lease.getParentShardIds().isEmpty()) {
|
||||
result.put(PARENT_SHARD_ID_KEY, DynamoUtils.createAttributeValue(lease.getParentShardIds()));
|
||||
}
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
@Override
|
||||
public KinesisClientLease fromDynamoRecord(Map<String, AttributeValue> dynamoRecord) {
|
||||
KinesisClientLease result = (KinesisClientLease) baseSerializer.fromDynamoRecord(dynamoRecord);
|
||||
|
||||
result.setOwnerSwitchesSinceCheckpoint(DynamoUtils.safeGetLong(dynamoRecord, OWNER_SWITCHES_KEY));
|
||||
result.setCheckpoint(DynamoUtils.safeGetString(dynamoRecord, CHECKPOINT_KEY));
|
||||
result.setParentShardIds(DynamoUtils.safeGetSS(dynamoRecord, PARENT_SHARD_ID_KEY));
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, AttributeValue> getDynamoHashKey(KinesisClientLease lease) {
|
||||
return baseSerializer.getDynamoHashKey(lease);
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, AttributeValue> getDynamoHashKey(String shardId) {
|
||||
return baseSerializer.getDynamoHashKey(shardId);
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, ExpectedAttributeValue> getDynamoLeaseCounterExpectation(KinesisClientLease lease) {
|
||||
return baseSerializer.getDynamoLeaseCounterExpectation(lease);
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, ExpectedAttributeValue> getDynamoLeaseOwnerExpectation(KinesisClientLease lease) {
|
||||
return baseSerializer.getDynamoLeaseOwnerExpectation(lease);
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, ExpectedAttributeValue> getDynamoNonexistantExpectation() {
|
||||
return baseSerializer.getDynamoNonexistantExpectation();
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, AttributeValueUpdate> getDynamoLeaseCounterUpdate(KinesisClientLease lease) {
|
||||
return baseSerializer.getDynamoLeaseCounterUpdate(lease);
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, AttributeValueUpdate> getDynamoTakeLeaseUpdate(KinesisClientLease lease, String newOwner) {
|
||||
Map<String, AttributeValueUpdate> result = baseSerializer.getDynamoTakeLeaseUpdate(lease, newOwner);
|
||||
|
||||
Long ownerSwitchesSinceCheckpoint = lease.getOwnerSwitchesSinceCheckpoint();
|
||||
String oldOwner = lease.getLeaseOwner();
|
||||
if (oldOwner != null && !oldOwner.equals(newOwner)) {
|
||||
ownerSwitchesSinceCheckpoint++;
|
||||
}
|
||||
|
||||
result.put(OWNER_SWITCHES_KEY,
|
||||
new AttributeValueUpdate(DynamoUtils.createAttributeValue(ownerSwitchesSinceCheckpoint),
|
||||
AttributeAction.PUT));
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, AttributeValueUpdate> getDynamoEvictLeaseUpdate(KinesisClientLease lease) {
|
||||
return baseSerializer.getDynamoEvictLeaseUpdate(lease);
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, AttributeValueUpdate> getDynamoUpdateLeaseUpdate(KinesisClientLease lease) {
|
||||
Map<String, AttributeValueUpdate> result = baseSerializer.getDynamoUpdateLeaseUpdate(lease);
|
||||
|
||||
result.put(CHECKPOINT_KEY, new AttributeValueUpdate(DynamoUtils.createAttributeValue(lease.getCheckpoint()),
|
||||
AttributeAction.PUT));
|
||||
result.put(OWNER_SWITCHES_KEY,
|
||||
new AttributeValueUpdate(DynamoUtils.createAttributeValue(lease.getOwnerSwitchesSinceCheckpoint()),
|
||||
AttributeAction.PUT));
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Collection<KeySchemaElement> getKeySchema() {
|
||||
return baseSerializer.getKeySchema();
|
||||
}
|
||||
|
||||
@Override
|
||||
public Collection<AttributeDefinition> getAttributeDefinitions() {
|
||||
return baseSerializer.getAttributeDefinitions();
|
||||
}
|
||||
|
||||
}
|
||||
|
|
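A round-trip sketch using only the serializer methods above, showing how a lease becomes a DynamoDB attribute map and back. It assumes the lease already has a checkpoint set.

import java.util.Map;

import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.kinesis.leases.impl.KinesisClientLease;
import com.amazonaws.services.kinesis.leases.impl.KinesisClientLeaseSerializer;

public class SerializerRoundTripExample {
    static KinesisClientLease roundTrip(KinesisClientLease lease) {
        KinesisClientLeaseSerializer serializer = new KinesisClientLeaseSerializer();
        // Record contains the base lease fields plus checkpoint, ownerSwitchesSinceCheckpoint,
        // and (when present) parentShardId.
        Map<String, AttributeValue> record = serializer.toDynamoRecord(lease);
        return serializer.fromDynamoRecord(record);
    }
}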
@@ -0,0 +1,250 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.leases.impl;
|
||||
|
||||
import java.util.UUID;
|
||||
|
||||
import com.amazonaws.util.json.JSONObject;
|
||||
|
||||
/**
|
||||
* This class contains data pertaining to a Lease. Distributed systems may use leases to partition work across a
|
||||
* fleet of workers. Each unit of work (identified by a leaseKey) has a corresponding Lease. Every worker will contend
|
||||
* for all leases - only one worker will successfully take each one. The worker should hold the lease until it is ready to stop
|
||||
* processing the corresponding unit of work, or until it fails. When the worker stops holding the lease, another worker will
|
||||
* take and hold the lease.
|
||||
*/
|
||||
public class Lease {
|
||||
/*
|
||||
* See javadoc for System.nanoTime - summary:
|
||||
*
|
||||
* Sometimes System.nanoTime's return values will wrap due to overflow. When they do, the difference between two
|
||||
* values will be very large. We will consider leases to be expired if they are more than a year old.
|
||||
*
|
||||
* 365 days per year * 24 hours per day * 60 minutes per hour * 60 seconds per minute * 1000000000
|
||||
* nanoseconds/second
|
||||
*/
|
||||
private static final long MAX_ABS_AGE_NANOS = 365 * 24 * 60 * 60 * 1000000000L;
|
||||
|
||||
private String leaseKey;
|
||||
private String leaseOwner;
|
||||
private Long leaseCounter = 0L;
|
||||
|
||||
/*
|
||||
* This field is used to prevent updates to leases that we have lost and re-acquired. It is deliberately not
|
||||
* persisted in DynamoDB and excluded from hashCode and equals.
|
||||
*/
|
||||
private UUID concurrencyToken;
|
||||
|
||||
/*
|
||||
* This field is used by LeaseRenewer and LeaseTaker to track the last time a lease counter was incremented. It is
|
||||
* deliberately not persisted in DynamoDB and excluded from hashCode and equals.
|
||||
*/
|
||||
private Long lastCounterIncrementNanos;
|
||||
|
||||
/**
|
||||
* Constructor.
|
||||
*/
|
||||
public Lease() {
|
||||
}
|
||||
|
||||
/**
|
||||
* Copy constructor, used by copy().
|
||||
*
|
||||
* @param lease lease to copy
|
||||
*/
|
||||
protected Lease(Lease lease) {
|
||||
this.leaseKey = lease.getLeaseKey();
|
||||
this.leaseOwner = lease.getLeaseOwner();
|
||||
this.leaseCounter = lease.getLeaseCounter();
|
||||
this.concurrencyToken = lease.getConcurrencyToken();
|
||||
this.lastCounterIncrementNanos = lease.getLastCounterIncrementNanos();
|
||||
}
|
||||
|
||||
/**
|
||||
* Updates this Lease's mutable, application-specific fields based on the passed-in lease object. Does not update
|
||||
* fields that are internal to the leasing library (leaseKey, leaseOwner, leaseCounter).
|
||||
*
|
||||
* @param other lease whose application-specific fields should be copied into this lease
|
||||
*/
|
||||
public <T extends Lease> void update(T other) {
|
||||
// The default implementation (no application-specific fields) has nothing to do.
|
||||
}
|
||||
|
||||
/**
|
||||
* @return leaseKey - identifies the unit of work associated with this lease.
|
||||
*/
|
||||
public String getLeaseKey() {
|
||||
return leaseKey;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return leaseCounter is incremented periodically by the holder of the lease. Used for optimistic locking.
|
||||
*/
|
||||
public Long getLeaseCounter() {
|
||||
return leaseCounter;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return current owner of the lease, may be null.
|
||||
*/
|
||||
public String getLeaseOwner() {
|
||||
return leaseOwner;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return concurrency token
|
||||
*/
|
||||
public UUID getConcurrencyToken() {
|
||||
return concurrencyToken;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return time of the last counter increment, in nanoseconds (as measured by System.nanoTime, not epoch time)
|
||||
*/
|
||||
public Long getLastCounterIncrementNanos() {
|
||||
return lastCounterIncrementNanos;
|
||||
}
|
||||
|
||||
/**
|
||||
* @param leaseDurationNanos duration of lease in nanoseconds
|
||||
* @param asOfNanos time in nanoseconds to check expiration as-of
|
||||
* @return true if lease is expired as-of given time, false otherwise
|
||||
*/
|
||||
public boolean isExpired(long leaseDurationNanos, long asOfNanos) {
|
||||
if (lastCounterIncrementNanos == null) {
|
||||
return true;
|
||||
}
|
||||
|
||||
long age = asOfNanos - lastCounterIncrementNanos;
|
||||
// see comment on MAX_ABS_AGE_NANOS
|
||||
if (Math.abs(age) > MAX_ABS_AGE_NANOS) {
|
||||
return true;
|
||||
} else {
|
||||
return age > leaseDurationNanos;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Sets lastCounterIncrementNanos
|
||||
*
|
||||
* @param lastCounterIncrementNanos time of the last renewal, in nanoseconds (as measured by System.nanoTime)
|
||||
*/
|
||||
public void setLastCounterIncrementNanos(Long lastCounterIncrementNanos) {
|
||||
this.lastCounterIncrementNanos = lastCounterIncrementNanos;
|
||||
}
|
||||
|
||||
/**
|
||||
* Sets concurrencyToken.
|
||||
*
|
||||
* @param concurrencyToken may not be null
|
||||
*/
|
||||
public void setConcurrencyToken(UUID concurrencyToken) {
|
||||
verifyNotNull(concurrencyToken, "concurrencyToken cannot be null");
|
||||
this.concurrencyToken = concurrencyToken;
|
||||
}
|
||||
|
||||
/**
|
||||
* Sets leaseKey. LeaseKey is immutable once set.
|
||||
*
|
||||
* @param leaseKey may not be null.
|
||||
*/
|
||||
public void setLeaseKey(String leaseKey) {
|
||||
if (this.leaseKey != null) {
|
||||
throw new IllegalArgumentException("LeaseKey is immutable once set");
|
||||
}
|
||||
verifyNotNull(leaseKey, "LeaseKey cannot be set to null");
|
||||
|
||||
this.leaseKey = leaseKey;
|
||||
}
|
||||
|
||||
/**
|
||||
* Sets leaseCounter.
|
||||
*
|
||||
* @param leaseCounter may not be null
|
||||
*/
|
||||
public void setLeaseCounter(Long leaseCounter) {
|
||||
verifyNotNull(leaseCounter, "leaseCounter must not be null");
|
||||
|
||||
this.leaseCounter = leaseCounter;
|
||||
}
|
||||
|
||||
/**
|
||||
* Sets leaseOwner.
|
||||
*
|
||||
* @param leaseOwner may be null.
|
||||
*/
|
||||
public void setLeaseOwner(String leaseOwner) {
|
||||
this.leaseOwner = leaseOwner;
|
||||
}
|
||||
|
||||
@Override
|
||||
public int hashCode() {
|
||||
final int prime = 31;
|
||||
int result = 1;
|
||||
result = prime * result + ((leaseCounter == null) ? 0 : leaseCounter.hashCode());
|
||||
result = prime * result + ((leaseOwner == null) ? 0 : leaseOwner.hashCode());
|
||||
result = prime * result + ((leaseKey == null) ? 0 : leaseKey.hashCode());
|
||||
return result;
|
||||
}
|
||||
|
||||
@Override
|
||||
public boolean equals(Object obj) {
|
||||
if (this == obj)
|
||||
return true;
|
||||
if (obj == null)
|
||||
return false;
|
||||
if (getClass() != obj.getClass())
|
||||
return false;
|
||||
Lease other = (Lease) obj;
|
||||
if (leaseCounter == null) {
|
||||
if (other.leaseCounter != null)
|
||||
return false;
|
||||
} else if (!leaseCounter.equals(other.leaseCounter))
|
||||
return false;
|
||||
if (leaseOwner == null) {
|
||||
if (other.leaseOwner != null)
|
||||
return false;
|
||||
} else if (!leaseOwner.equals(other.leaseOwner))
|
||||
return false;
|
||||
if (leaseKey == null) {
|
||||
if (other.leaseKey != null)
|
||||
return false;
|
||||
} else if (!leaseKey.equals(other.leaseKey))
|
||||
return false;
|
||||
return true;
|
||||
}
|
||||
|
||||
@Override
|
||||
public String toString() {
|
||||
return new JSONObject(this).toString();
|
||||
}
|
||||
|
||||
/**
|
||||
* Returns a deep copy of this object. Type-unsafe - there aren't good mechanisms for copy-constructing generics.
|
||||
*
|
||||
* @return A deep copy of this object.
|
||||
*/
|
||||
@SuppressWarnings("unchecked")
|
||||
public <T extends Lease> T copy() {
|
||||
return (T) new Lease(this);
|
||||
}
|
||||
|
||||
private void verifyNotNull(Object object, String message) {
|
||||
if (object == null) {
|
||||
throw new IllegalArgumentException(message);
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
|
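A small sketch exercising the expiry rule above: a lease is expired when its age exceeds leaseDurationNanos, with the one-year guard protecting against System.nanoTime wrap. The lease key and the 10-second duration are illustrative values.

import java.util.concurrent.TimeUnit;

import com.amazonaws.services.kinesis.leases.impl.Lease;

public class LeaseExpiryExample {
    public static void main(String[] args) {
        Lease lease = new Lease();
        lease.setLeaseKey("shardId-000000000000");              // placeholder key
        lease.setLastCounterIncrementNanos(System.nanoTime());  // just renewed

        long leaseDurationNanos = TimeUnit.MILLISECONDS.toNanos(10000); // illustrative 10s lease
        // Freshly renewed, so not expired yet.
        System.out.println(lease.isExpired(leaseDurationNanos, System.nanoTime()));
        // Checked as-of a point 11 seconds later, the same lease reports expiry.
        System.out.println(lease.isExpired(leaseDurationNanos,
                System.nanoTime() + TimeUnit.SECONDS.toNanos(11)));
    }
}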
@@ -0,0 +1,272 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.leases.impl;
|
||||
|
||||
import java.util.Collection;
|
||||
import java.util.Map;
|
||||
import java.util.UUID;
|
||||
import java.util.concurrent.Executors;
|
||||
import java.util.concurrent.ScheduledExecutorService;
|
||||
import java.util.concurrent.TimeUnit;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.DependencyException;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.InvalidStateException;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.LeasingException;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.ProvisionedThroughputException;
|
||||
import com.amazonaws.services.kinesis.leases.interfaces.ILeaseManager;
|
||||
import com.amazonaws.services.kinesis.leases.interfaces.ILeaseRenewer;
|
||||
import com.amazonaws.services.kinesis.leases.interfaces.ILeaseTaker;
|
||||
import com.amazonaws.services.kinesis.metrics.impl.LogMetricsFactory;
|
||||
import com.amazonaws.services.kinesis.metrics.impl.MetricsHelper;
|
||||
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsFactory;
|
||||
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsScope;
|
||||
|
||||
/**
|
||||
* LeaseCoordinator abstracts away LeaseTaker and LeaseRenewer from the application code that's using leasing. It owns
|
||||
* the scheduling of the two previously mentioned components as well as informing LeaseRenewer when LeaseTaker takes new
|
||||
* leases.
|
||||
*
|
||||
*/
|
||||
public class LeaseCoordinator<T extends Lease> {
|
||||
|
||||
/*
|
||||
* Name of the dimension used when setting worker identifier on IMetricsScopes. Exposed so that users of this class
|
||||
* can easily create InterceptingMetricsFactories that rename this dimension to suit the destination metrics system.
|
||||
*/
|
||||
public static final String WORKER_IDENTIFIER_METRIC = "WorkerIdentifier";
|
||||
|
||||
private static final Log LOG = LogFactory.getLog(LeaseCoordinator.class);
|
||||
|
||||
// Time to wait for in-flight Runnables to finish when calling .stop().
|
||||
private static final long STOP_WAIT_TIME_MILLIS = 2000L;
|
||||
|
||||
private final ILeaseRenewer<T> leaseRenewer;
|
||||
private final ILeaseTaker<T> leaseTaker;
|
||||
private final long renewerIntervalMillis;
|
||||
private final long takerIntervalMillis;
|
||||
|
||||
protected final IMetricsFactory metricsFactory;
|
||||
|
||||
private ScheduledExecutorService threadpool;
|
||||
private boolean running = false;
|
||||
|
||||
/**
|
||||
* Constructor.
|
||||
*
|
||||
* @param leaseManager LeaseManager instance to use
|
||||
* @param workerIdentifier Identifies the worker (e.g. useful to track lease ownership)
|
||||
* @param leaseDurationMillis Duration of a lease
|
||||
* @param epsilonMillis Allow for some variance when calculating lease expirations
|
||||
*/
|
||||
public LeaseCoordinator(ILeaseManager<T> leaseManager,
|
||||
String workerIdentifier,
|
||||
long leaseDurationMillis,
|
||||
long epsilonMillis) {
|
||||
this(leaseManager, workerIdentifier, leaseDurationMillis, epsilonMillis, new LogMetricsFactory());
|
||||
}
|
||||
|
||||
/**
|
||||
* Constructor.
|
||||
*
|
||||
* @param leaseManager LeaseManager instance to use
|
||||
* @param workerIdentifier Identifies the worker (e.g. useful to track lease ownership)
|
||||
* @param leaseDurationMillis Duration of a lease
|
||||
* @param epsilonMillis Allow for some variance when calculating lease expirations
|
||||
* @param metricsFactory Used to publish metrics about lease operations
|
||||
*/
|
||||
public LeaseCoordinator(ILeaseManager<T> leaseManager,
|
||||
String workerIdentifier,
|
||||
long leaseDurationMillis,
|
||||
long epsilonMillis,
|
||||
IMetricsFactory metricsFactory) {
|
||||
this.leaseTaker = new LeaseTaker<T>(leaseManager, workerIdentifier, leaseDurationMillis);
|
||||
this.leaseRenewer = new LeaseRenewer<T>(leaseManager, workerIdentifier, leaseDurationMillis);
|
||||
this.renewerIntervalMillis = leaseDurationMillis / 3 - epsilonMillis;
|
||||
this.takerIntervalMillis = (leaseDurationMillis + epsilonMillis) * 2;
|
||||
this.metricsFactory = metricsFactory;
|
||||
|
||||
LOG.info(String.format("With failover time %dms and epsilon %dms, LeaseCoordinator will renew leases every %dms and take leases every %dms",
|
||||
leaseDurationMillis,
|
||||
epsilonMillis,
|
||||
renewerIntervalMillis,
|
||||
takerIntervalMillis));
|
||||
}
|
||||
|
||||
private class TakerRunnable implements Runnable {
|
||||
|
||||
@Override
|
||||
public void run() {
|
||||
try {
|
||||
runTaker();
|
||||
} catch (LeasingException e) {
|
||||
LOG.error("LeasingException encountered in lease taking thread", e);
|
||||
} catch (Throwable t) {
|
||||
LOG.error("Throwable encountered in lease taking thread", t);
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
private class RenewerRunnable implements Runnable {
|
||||
|
||||
@Override
|
||||
public void run() {
|
||||
try {
|
||||
runRenewer();
|
||||
} catch (LeasingException e) {
|
||||
LOG.error("LeasingException encountered in lease renewing thread", e);
|
||||
} catch (Throwable t) {
|
||||
LOG.error("Throwable encountered in lease renewing thread", t);
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
/**
|
||||
* Start background LeaseRenewer and LeaseTaker threads.
|
||||
* @throws ProvisionedThroughputException If we can't talk to DynamoDB due to insufficient capacity.
|
||||
* @throws InvalidStateException If the lease table doesn't exist
|
||||
* @throws DependencyException If we encounter an exception talking to DynamoDB
|
||||
*/
|
||||
public void start() throws DependencyException, InvalidStateException, ProvisionedThroughputException {
|
||||
leaseRenewer.initialize();
|
||||
|
||||
// 2 because we know we'll have at most 2 concurrent tasks at a time.
|
||||
threadpool = Executors.newScheduledThreadPool(2);
|
||||
|
||||
// Taker runs with fixed DELAY because we want it to run slower in the event of performance degradation.
|
||||
threadpool.scheduleWithFixedDelay(new TakerRunnable(), 0L, takerIntervalMillis, TimeUnit.MILLISECONDS);
|
||||
// Renewer runs at fixed INTERVAL because we want it to run at the same rate in the event of degradation.
|
||||
threadpool.scheduleAtFixedRate(new RenewerRunnable(), 0L, renewerIntervalMillis, TimeUnit.MILLISECONDS);
|
||||
running = true;
|
||||
}
|
||||
|
||||
/**
|
||||
* Runs a single iteration of the lease taker - used by integration tests.
|
||||
*
|
||||
* @throws InvalidStateException
|
||||
* @throws DependencyException
|
||||
*/
|
||||
protected void runTaker() throws DependencyException, InvalidStateException {
|
||||
IMetricsScope scope = MetricsHelper.startScope(metricsFactory, "TakeLeases");
|
||||
long startTime = System.currentTimeMillis();
|
||||
boolean success = false;
|
||||
|
||||
try {
|
||||
Map<String, T> takenLeases = leaseTaker.takeLeases();
|
||||
|
||||
leaseRenewer.addLeasesToRenew(takenLeases.values());
|
||||
success = true;
|
||||
} finally {
|
||||
scope.addDimension(WORKER_IDENTIFIER_METRIC, getWorkerIdentifier());
|
||||
MetricsHelper.addSuccessAndLatency(startTime, success);
|
||||
MetricsHelper.endScope();
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Runs a single iteration of the lease renewer - used by integration tests.
|
||||
*
|
||||
* @throws InvalidStateException
|
||||
* @throws DependencyException
|
||||
*/
|
||||
protected void runRenewer() throws DependencyException, InvalidStateException {
|
||||
IMetricsScope scope = MetricsHelper.startScope(metricsFactory, "RenewAllLeases");
|
||||
long startTime = System.currentTimeMillis();
|
||||
boolean success = false;
|
||||
|
||||
try {
|
||||
leaseRenewer.renewLeases();
|
||||
success = true;
|
||||
} finally {
|
||||
scope.addDimension(WORKER_IDENTIFIER_METRIC, getWorkerIdentifier());
|
||||
MetricsHelper.addSuccessAndLatency(startTime, success);
|
||||
MetricsHelper.endScope();
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* @return currently held leases
|
||||
*/
|
||||
public Collection<T> getAssignments() {
|
||||
return leaseRenewer.getCurrentlyHeldLeases().values();
|
||||
}
|
||||
|
||||
/**
|
||||
* @param leaseKey lease key to fetch currently held lease for
|
||||
*
|
||||
* @return deep copy of currently held Lease for given key, or null if we don't hold the lease for that key
|
||||
*/
|
||||
public T getCurrentlyHeldLease(String leaseKey) {
|
||||
return leaseRenewer.getCurrentlyHeldLease(leaseKey);
|
||||
}
|
||||
|
||||
/**
|
||||
* @return workerIdentifier
|
||||
*/
|
||||
public String getWorkerIdentifier() {
|
||||
return leaseTaker.getWorkerIdentifier();
|
||||
}
|
||||
|
||||
/**
|
||||
* Stops background threads.
|
||||
*/
|
||||
public void stop() {
|
||||
threadpool.shutdown();
|
||||
try {
|
||||
if (threadpool.awaitTermination(STOP_WAIT_TIME_MILLIS, TimeUnit.MILLISECONDS)) {
|
||||
LOG.info(String.format("Worker %s has successfully stopped lease-tracking threads", leaseTaker.getWorkerIdentifier()));
|
||||
} else {
|
||||
threadpool.shutdownNow();
|
||||
LOG.info(String.format("Worker %s stopped lease-tracking threads %dms after stop",
|
||||
leaseTaker.getWorkerIdentifier(),
|
||||
STOP_WAIT_TIME_MILLIS));
|
||||
}
|
||||
} catch (InterruptedException e) {
|
||||
LOG.debug("Encountered InterruptedException when awaiting threadpool termination");
|
||||
}
|
||||
|
||||
leaseRenewer.clearCurrentlyHeldLeases();
|
||||
running = false;
|
||||
}
|
||||
|
||||
/**
|
||||
* @return true if this LeaseCoordinator is running
|
||||
*/
|
||||
public boolean isRunning() {
|
||||
return running;
|
||||
}
|
||||
|
||||
/**
|
||||
* Updates application-specific lease values in DynamoDB.
|
||||
*
|
||||
* @param lease lease object containing updated values
|
||||
* @param concurrencyToken obtained by calling Lease.getConcurrencyToken for a currently held lease
|
||||
*
|
||||
* @return true if update succeeded, false otherwise
|
||||
*
|
||||
* @throws InvalidStateException if lease table does not exist
|
||||
* @throws ProvisionedThroughputException if DynamoDB update fails due to lack of capacity
|
||||
* @throws DependencyException if DynamoDB update fails in an unexpected way
|
||||
*/
|
||||
public boolean updateLease(T lease, UUID concurrencyToken)
|
||||
throws DependencyException, InvalidStateException, ProvisionedThroughputException {
|
||||
return leaseRenewer.updateLease(lease, concurrencyToken);
|
||||
}
|
||||
|
||||
}
|
||||
|
|
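For concreteness, the interval formulas in the constructor above work out as follows for one illustrative configuration (a 10-second failover time and a 25 ms epsilon); these numbers are examples, not defaults shipped in this commit.

public class CoordinatorIntervalExample {
    public static void main(String[] args) {
        long leaseDurationMillis = 10000L; // illustrative failover time
        long epsilonMillis = 25L;          // illustrative epsilon
        long renewerIntervalMillis = leaseDurationMillis / 3 - epsilonMillis;  // 3308 ms
        long takerIntervalMillis = (leaseDurationMillis + epsilonMillis) * 2;  // 20050 ms
        System.out.println("renew every " + renewerIntervalMillis
                + " ms, take every " + takerIntervalMillis + " ms");
    }
}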
@@ -0,0 +1,567 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.leases.impl;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
|
||||
import com.amazonaws.AmazonClientException;
|
||||
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
|
||||
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
|
||||
import com.amazonaws.services.dynamodbv2.model.AttributeValueUpdate;
|
||||
import com.amazonaws.services.dynamodbv2.model.ConditionalCheckFailedException;
|
||||
import com.amazonaws.services.dynamodbv2.model.CreateTableRequest;
|
||||
import com.amazonaws.services.dynamodbv2.model.DeleteItemRequest;
|
||||
import com.amazonaws.services.dynamodbv2.model.DescribeTableRequest;
|
||||
import com.amazonaws.services.dynamodbv2.model.DescribeTableResult;
|
||||
import com.amazonaws.services.dynamodbv2.model.GetItemRequest;
|
||||
import com.amazonaws.services.dynamodbv2.model.GetItemResult;
|
||||
import com.amazonaws.services.dynamodbv2.model.LimitExceededException;
|
||||
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
|
||||
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException;
|
||||
import com.amazonaws.services.dynamodbv2.model.PutItemRequest;
|
||||
import com.amazonaws.services.dynamodbv2.model.ResourceInUseException;
|
||||
import com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException;
|
||||
import com.amazonaws.services.dynamodbv2.model.ScanRequest;
|
||||
import com.amazonaws.services.dynamodbv2.model.ScanResult;
|
||||
import com.amazonaws.services.dynamodbv2.model.TableStatus;
|
||||
import com.amazonaws.services.dynamodbv2.model.UpdateItemRequest;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.DependencyException;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.InvalidStateException;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.ProvisionedThroughputException;
|
||||
import com.amazonaws.services.kinesis.leases.interfaces.ILeaseManager;
|
||||
import com.amazonaws.services.kinesis.leases.interfaces.ILeaseSerializer;
|
||||
|
||||
/**
|
||||
* An implementation of ILeaseManager that uses DynamoDB.
|
||||
*/
|
||||
public class LeaseManager<T extends Lease> implements ILeaseManager<T> {
|
||||
|
||||
private static final Log LOG = LogFactory.getLog(LeaseManager.class);
|
||||
|
||||
protected String table;
|
||||
protected AmazonDynamoDB dynamoDBClient;
|
||||
protected ILeaseSerializer<T> serializer;
|
||||
protected boolean consistentReads;
|
||||
|
||||
/**
|
||||
* Constructor.
|
||||
*
|
||||
* @param table leases table
|
||||
* @param dynamoDBClient DynamoDB client to use
|
||||
* @param serializer LeaseSerializer to use to convert to/from DynamoDB objects.
|
||||
*/
|
||||
public LeaseManager(String table, AmazonDynamoDB dynamoDBClient, ILeaseSerializer<T> serializer) {
|
||||
this(table, dynamoDBClient, serializer, false);
|
||||
}
|
||||
|
||||
/**
|
||||
* Constructor for test cases - allows control of consistent reads. Consistent reads should only be used for testing
|
||||
* - our code is meant to be resilient to inconsistent reads. Using consistent reads during testing speeds up
|
||||
* execution of simple tests (you don't have to wait out the consistency window). Test cases that want to experience
|
||||
* eventual consistency should not set consistentReads=true.
|
||||
*
|
||||
* @param table leases table
|
||||
* @param dynamoDBClient DynamoDB client to use
|
||||
* @param serializer lease serializer to use
|
||||
* @param consistentReads true if we want consistent reads for testing purposes.
|
||||
*/
|
||||
public LeaseManager(String table, AmazonDynamoDB dynamoDBClient, ILeaseSerializer<T> serializer, boolean consistentReads) {
|
||||
verifyNotNull(table, "Table name cannot be null");
|
||||
verifyNotNull(dynamoDBClient, "dynamoDBClient cannot be null");
|
||||
verifyNotNull(serializer, "ILeaseSerializer cannot be null");
|
||||
|
||||
this.table = table;
|
||||
this.dynamoDBClient = dynamoDBClient;
|
||||
this.consistentReads = consistentReads;
|
||||
this.serializer = serializer;
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public boolean createLeaseTableIfNotExists(Long readCapacity, Long writeCapacity)
|
||||
throws ProvisionedThroughputException, DependencyException {
|
||||
verifyNotNull(readCapacity, "readCapacity cannot be null");
|
||||
verifyNotNull(writeCapacity, "writeCapacity cannot be null");
|
||||
|
||||
boolean tableDidNotExist = true;
|
||||
CreateTableRequest request = new CreateTableRequest();
|
||||
request.setTableName(table);
|
||||
request.setKeySchema(serializer.getKeySchema());
|
||||
request.setAttributeDefinitions(serializer.getAttributeDefinitions());
|
||||
|
||||
ProvisionedThroughput throughput = new ProvisionedThroughput();
|
||||
throughput.setReadCapacityUnits(readCapacity);
|
||||
throughput.setWriteCapacityUnits(writeCapacity);
|
||||
request.setProvisionedThroughput(throughput);
|
||||
|
||||
try {
|
||||
dynamoDBClient.createTable(request);
|
||||
} catch (ResourceInUseException e) {
|
||||
tableDidNotExist = false;
|
||||
LOG.info("Table " + table + " already exists.");
|
||||
} catch (LimitExceededException e) {
|
||||
throw new ProvisionedThroughputException("Capacity exceeded when creating table " + table, e);
|
||||
} catch (AmazonClientException e) {
|
||||
throw new DependencyException(e);
|
||||
}
|
||||
return tableDidNotExist;
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public boolean leaseTableExists() throws DependencyException {
|
||||
DescribeTableRequest request = new DescribeTableRequest();
|
||||
|
||||
request.setTableName(table);
|
||||
|
||||
DescribeTableResult result;
|
||||
try {
|
||||
result = dynamoDBClient.describeTable(request);
|
||||
} catch (ResourceNotFoundException e) {
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug(String.format("Got ResourceNotFoundException for table %s in leaseTableExists, returning false.",
|
||||
table));
|
||||
}
|
||||
|
||||
return false;
|
||||
} catch (AmazonClientException e) {
|
||||
throw new DependencyException(e);
|
||||
}
|
||||
|
||||
String tableStatus = result.getTable().getTableStatus();
|
||||
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("Lease table exists and is in status " + tableStatus);
|
||||
}
|
||||
|
||||
return TableStatus.ACTIVE.name().equals(tableStatus);
|
||||
}
|
||||
|
||||
@Override
|
||||
public boolean waitUntilLeaseTableExists(long secondsBetweenPolls, long timeoutSeconds) throws DependencyException {
|
||||
long sleepTimeRemaining = timeoutSeconds * 1000;
|
||||
|
||||
while (!leaseTableExists()) {
|
||||
if (sleepTimeRemaining <= 0) {
|
||||
return false;
|
||||
}
|
||||
|
||||
long timeToSleepMillis = Math.min(1000 * secondsBetweenPolls, sleepTimeRemaining);
|
||||
|
||||
sleepTimeRemaining -= sleep(timeToSleepMillis);
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
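/*
 * Illustrative bootstrap sketch (assumed table name, capacities and polling values; exception handling
 * omitted; application code, not part of this class):
 *
 *   LeaseManager<Lease> manager = new LeaseManager<Lease>("leaseTable", dynamoDBClient, new LeaseSerializer());
 *   manager.createLeaseTableIfNotExists(10L, 10L);
 *   manager.waitUntilLeaseTableExists(10, 300);
 */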
|
||||
/**
|
||||
* Exposed for testing purposes.
|
||||
*
|
||||
* @param timeToSleepMillis time to sleep in milliseconds
|
||||
*
|
||||
* @return actual time slept in millis
|
||||
*/
|
||||
long sleep(long timeToSleepMillis) {
|
||||
long startTime = System.currentTimeMillis();
|
||||
|
||||
try {
|
||||
Thread.sleep(timeToSleepMillis);
|
||||
} catch (InterruptedException e) {
|
||||
LOG.debug("Interrupted while sleeping");
|
||||
}
|
||||
|
||||
return System.currentTimeMillis() - startTime;
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public List<T> listLeases() throws DependencyException, InvalidStateException, ProvisionedThroughputException {
|
||||
return list(null);
|
||||
}
|
||||
|
||||
/**
|
||||
* List with the given page size. Package access for integration testing.
|
||||
*
|
||||
* @param limit number of items to consider at a time - used by integration tests to force paging.
|
||||
* @return list of leases
|
||||
* @throws InvalidStateException if table does not exist
|
||||
* @throws DependencyException if the DynamoDB scan fails in an unexpected way
|
||||
* @throws ProvisionedThroughputException if the DynamoDB scan fails due to exceeded capacity
|
||||
*/
|
||||
List<T> list(Integer limit) throws DependencyException, InvalidStateException, ProvisionedThroughputException {
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("Listing leases from table " + table);
|
||||
}
|
||||
|
||||
ScanRequest scanRequest = new ScanRequest();
|
||||
scanRequest.setTableName(table);
|
||||
if (limit != null) {
|
||||
scanRequest.setLimit(limit);
|
||||
}
|
||||
|
||||
try {
|
||||
ScanResult scanResult = dynamoDBClient.scan(scanRequest);
|
||||
List<T> result = new ArrayList<T>();
|
||||
|
||||
while (scanResult != null) {
|
||||
for (Map<String, AttributeValue> item : scanResult.getItems()) {
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("Got item " + item.toString() + " from DynamoDB.");
|
||||
}
|
||||
|
||||
result.add(serializer.fromDynamoRecord(item));
|
||||
}
|
||||
|
||||
Map<String, AttributeValue> lastEvaluatedKey = scanResult.getLastEvaluatedKey();
|
||||
if (lastEvaluatedKey == null) {
|
||||
// Signify that we're done.
|
||||
scanResult = null;
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("lastEvaluatedKey was null - scan finished.");
|
||||
}
|
||||
} else {
|
||||
// Make another request, picking up where we left off.
|
||||
scanRequest.setExclusiveStartKey(lastEvaluatedKey);
|
||||
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("lastEvaluatedKey was " + lastEvaluatedKey + ", continuing scan.");
|
||||
}
|
||||
|
||||
scanResult = dynamoDBClient.scan(scanRequest);
|
||||
}
|
||||
}
|
||||
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("Listed " + result.size() + " leases from table " + table);
|
||||
}
|
||||
|
||||
return result;
|
||||
} catch (ResourceNotFoundException e) {
|
||||
throw new InvalidStateException("Cannot scan lease table " + table + " because it does not exist.", e);
|
||||
} catch (ProvisionedThroughputExceededException e) {
|
||||
throw new ProvisionedThroughputException(e);
|
||||
} catch (AmazonClientException e) {
|
||||
throw new DependencyException(e);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public boolean createLeaseIfNotExists(T lease)
|
||||
throws DependencyException, InvalidStateException, ProvisionedThroughputException {
|
||||
verifyNotNull(lease, "lease cannot be null");
|
||||
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("Creating lease " + lease);
|
||||
}
|
||||
|
||||
PutItemRequest request = new PutItemRequest();
|
||||
request.setTableName(table);
|
||||
request.setItem(serializer.toDynamoRecord(lease));
|
||||
request.setExpected(serializer.getDynamoNonexistantExpectation());
|
||||
|
||||
try {
|
||||
dynamoDBClient.putItem(request);
|
||||
} catch (ConditionalCheckFailedException e) {
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("Did not create lease " + lease + " because it already existed");
|
||||
}
|
||||
|
||||
return false;
|
||||
} catch (AmazonClientException e) {
|
||||
throw convertAndRethrowExceptions("create", lease.getLeaseKey(), e);
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public T getLease(String leaseKey)
|
||||
throws DependencyException, InvalidStateException, ProvisionedThroughputException {
|
||||
verifyNotNull(leaseKey, "leaseKey cannot be null");
|
||||
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("Getting lease with key " + leaseKey);
|
||||
}
|
||||
|
||||
GetItemRequest request = new GetItemRequest();
|
||||
request.setTableName(table);
|
||||
request.setKey(serializer.getDynamoHashKey(leaseKey));
|
||||
request.setConsistentRead(consistentReads);
|
||||
|
||||
try {
|
||||
GetItemResult result = dynamoDBClient.getItem(request);
|
||||
|
||||
Map<String, AttributeValue> dynamoRecord = result.getItem();
|
||||
if (dynamoRecord == null) {
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("No lease found with key " + leaseKey + ", returning null.");
|
||||
}
|
||||
|
||||
return null;
|
||||
} else {
|
||||
T lease = serializer.fromDynamoRecord(dynamoRecord);
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("Got lease " + lease);
|
||||
}
|
||||
|
||||
return lease;
|
||||
}
|
||||
} catch (AmazonClientException e) {
|
||||
throw convertAndRethrowExceptions("get", leaseKey, e);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public boolean renewLease(T lease)
|
||||
throws DependencyException, InvalidStateException, ProvisionedThroughputException {
|
||||
verifyNotNull(lease, "lease cannot be null");
|
||||
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("Renewing lease with key " + lease.getLeaseKey());
|
||||
}
|
||||
|
||||
UpdateItemRequest request = new UpdateItemRequest();
|
||||
request.setTableName(table);
|
||||
request.setKey(serializer.getDynamoHashKey(lease));
|
||||
request.setExpected(serializer.getDynamoLeaseCounterExpectation(lease));
|
||||
request.setAttributeUpdates(serializer.getDynamoLeaseCounterUpdate(lease));
|
||||
|
||||
try {
|
||||
dynamoDBClient.updateItem(request);
|
||||
} catch (ConditionalCheckFailedException e) {
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("Lease renewal failed for lease with key " + lease.getLeaseKey()
|
||||
+ " because the lease counter was not " + lease.getLeaseCounter());
|
||||
}
|
||||
|
||||
return false;
|
||||
} catch (AmazonClientException e) {
|
||||
throw convertAndRethrowExceptions("renew", lease.getLeaseKey(), e);
|
||||
}
|
||||
|
||||
lease.setLeaseCounter(lease.getLeaseCounter() + 1);
|
||||
return true;
|
||||
}
|
||||
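// Note: renewLease above and takeLease/evictLease/updateLease below share the same optimistic-concurrency
// pattern: a conditional update keyed on the expected leaseCounter (or leaseOwner for eviction), with
// ConditionalCheckFailedException translated into a false return value rather than an exception.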
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public boolean takeLease(T lease, String owner)
|
||||
throws DependencyException, InvalidStateException, ProvisionedThroughputException {
|
||||
verifyNotNull(lease, "lease cannot be null");
|
||||
verifyNotNull(owner, "owner cannot be null");
|
||||
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug(String.format("Taking lease with shardId %s from %s to %s",
|
||||
lease.getLeaseKey(),
|
||||
lease.getLeaseOwner() == null ? "nobody" : lease.getLeaseOwner(),
|
||||
owner));
|
||||
}
|
||||
|
||||
UpdateItemRequest request = new UpdateItemRequest();
|
||||
request.setTableName(table);
|
||||
request.setKey(serializer.getDynamoHashKey(lease));
|
||||
request.setExpected(serializer.getDynamoLeaseCounterExpectation(lease));
|
||||
|
||||
Map<String, AttributeValueUpdate> updates = serializer.getDynamoLeaseCounterUpdate(lease);
|
||||
updates.putAll(serializer.getDynamoTakeLeaseUpdate(lease, owner));
|
||||
request.setAttributeUpdates(updates);
|
||||
|
||||
try {
|
||||
dynamoDBClient.updateItem(request);
|
||||
} catch (ConditionalCheckFailedException e) {
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("Lease renewal failed for lease with key " + lease.getLeaseKey()
|
||||
+ " because the lease counter was not " + lease.getLeaseCounter());
|
||||
}
|
||||
|
||||
return false;
|
||||
} catch (AmazonClientException e) {
|
||||
throw convertAndRethrowExceptions("take", lease.getLeaseKey(), e);
|
||||
}
|
||||
|
||||
lease.setLeaseCounter(lease.getLeaseCounter() + 1);
|
||||
lease.setLeaseOwner(owner);
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public boolean evictLease(T lease)
|
||||
throws DependencyException, InvalidStateException, ProvisionedThroughputException {
|
||||
verifyNotNull(lease, "lease cannot be null");
|
||||
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug(String.format("Voiding lease with shardId %s owned by %s",
|
||||
lease.getLeaseKey(),
|
||||
lease.getLeaseOwner()));
|
||||
}
|
||||
|
||||
UpdateItemRequest request = new UpdateItemRequest();
|
||||
request.setTableName(table);
|
||||
request.setKey(serializer.getDynamoHashKey(lease));
|
||||
request.setExpected(serializer.getDynamoLeaseOwnerExpectation(lease));
|
||||
|
||||
Map<String, AttributeValueUpdate> updates = serializer.getDynamoLeaseCounterUpdate(lease);
|
||||
updates.putAll(serializer.getDynamoEvictLeaseUpdate(lease));
|
||||
request.setAttributeUpdates(updates);
|
||||
|
||||
try {
|
||||
dynamoDBClient.updateItem(request);
|
||||
} catch (ConditionalCheckFailedException e) {
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("Lease eviction failed for lease with key " + lease.getLeaseKey()
|
||||
+ " because the lease owner was not " + lease.getLeaseOwner());
|
||||
}
|
||||
|
||||
return false;
|
||||
} catch (AmazonClientException e) {
|
||||
throw convertAndRethrowExceptions("evict", lease.getLeaseKey(), e);
|
||||
}
|
||||
|
||||
lease.setLeaseOwner(null);
|
||||
lease.setLeaseCounter(lease.getLeaseCounter() + 1);
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
public void deleteAll() throws DependencyException, InvalidStateException, ProvisionedThroughputException {
|
||||
List<T> allLeases = listLeases();
|
||||
|
||||
LOG.warn("Deleting " + allLeases.size() + " items from table " + table);
|
||||
|
||||
for (T lease : allLeases) {
|
||||
DeleteItemRequest deleteRequest = new DeleteItemRequest();
|
||||
deleteRequest.setTableName(table);
|
||||
deleteRequest.setKey(serializer.getDynamoHashKey(lease));
|
||||
|
||||
dynamoDBClient.deleteItem(deleteRequest);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public void deleteLease(T lease) throws DependencyException, InvalidStateException, ProvisionedThroughputException {
|
||||
verifyNotNull(lease, "lease cannot be null");
|
||||
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug(String.format("Deleting lease with shardId %s", lease.getLeaseKey()));
|
||||
}
|
||||
|
||||
DeleteItemRequest deleteRequest = new DeleteItemRequest();
|
||||
deleteRequest.setTableName(table);
|
||||
deleteRequest.setKey(serializer.getDynamoHashKey(lease));
|
||||
|
||||
try {
|
||||
dynamoDBClient.deleteItem(deleteRequest);
|
||||
} catch (AmazonClientException e) {
|
||||
throw convertAndRethrowExceptions("delete", lease.getLeaseKey(), e);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public boolean updateLease(T lease)
|
||||
throws DependencyException, InvalidStateException, ProvisionedThroughputException {
|
||||
verifyNotNull(lease, "lease cannot be null");
|
||||
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug(String.format("Updating lease %s", lease));
|
||||
}
|
||||
|
||||
UpdateItemRequest request = new UpdateItemRequest();
|
||||
request.setTableName(table);
|
||||
request.setKey(serializer.getDynamoHashKey(lease));
|
||||
request.setExpected(serializer.getDynamoLeaseCounterExpectation(lease));
|
||||
|
||||
Map<String, AttributeValueUpdate> updates = serializer.getDynamoLeaseCounterUpdate(lease);
|
||||
updates.putAll(serializer.getDynamoUpdateLeaseUpdate(lease));
|
||||
request.setAttributeUpdates(updates);
|
||||
|
||||
try {
|
||||
dynamoDBClient.updateItem(request);
|
||||
} catch (ConditionalCheckFailedException e) {
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("Lease update failed for lease with key " + lease.getLeaseKey()
|
||||
+ " because the lease counter was not " + lease.getLeaseCounter());
|
||||
}
|
||||
|
||||
return false;
|
||||
} catch (AmazonClientException e) {
|
||||
throw convertAndRethrowExceptions("update", lease.getLeaseKey(), e);
|
||||
}
|
||||
|
||||
lease.setLeaseCounter(lease.getLeaseCounter() + 1);
|
||||
return true;
|
||||
}
|
||||
|
||||
/*
|
||||
* This method contains boilerplate exception handling - it throws or returns something to be thrown. The
|
||||
* inconsistency there exists to satisfy the compiler when this method is used at the end of non-void methods.
|
||||
*/
|
||||
protected DependencyException convertAndRethrowExceptions(String operation, String leaseKey, AmazonClientException e)
|
||||
throws ProvisionedThroughputException, InvalidStateException {
|
||||
if (e instanceof ProvisionedThroughputExceededException) {
|
||||
throw new ProvisionedThroughputException(e);
|
||||
} else if (e instanceof ResourceNotFoundException) {
|
||||
//@formatter:off
|
||||
throw new InvalidStateException(String.format("Cannot %s lease with key %s because table %s does not exist.",
|
||||
operation,
|
||||
leaseKey,
|
||||
table),
|
||||
e);
|
||||
//@formatter:on
|
||||
} else {
|
||||
return new DependencyException(e);
|
||||
}
|
||||
}
|
||||
|
||||
private void verifyNotNull(Object object, String message) {
|
||||
if (object == null) {
|
||||
throw new IllegalArgumentException(message);
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
|
@ -0,0 +1,325 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.leases.impl;
|
||||
|
||||
import java.util.Collection;
|
||||
import java.util.HashMap;
|
||||
import java.util.LinkedList;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.UUID;
|
||||
import java.util.concurrent.ConcurrentNavigableMap;
|
||||
import java.util.concurrent.ConcurrentSkipListMap;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
|
||||
import com.amazonaws.services.cloudwatch.model.StandardUnit;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.DependencyException;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.InvalidStateException;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.ProvisionedThroughputException;
|
||||
import com.amazonaws.services.kinesis.leases.interfaces.ILeaseManager;
|
||||
import com.amazonaws.services.kinesis.leases.interfaces.ILeaseRenewer;
|
||||
import com.amazonaws.services.kinesis.metrics.impl.MetricsHelper;
|
||||
|
||||
/**
|
||||
* An implementation of ILeaseRenewer that uses DynamoDB via LeaseManager.
|
||||
*/
|
||||
public class LeaseRenewer<T extends Lease> implements ILeaseRenewer<T> {
|
||||
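/*
 * Typical life cycle, as suggested by this class's API (a sketch, not a prescribed contract): initialize()
 * reloads leases this worker already owns in DynamoDB, addLeasesToRenew(...) registers leases obtained from
 * an ILeaseTaker, and a background task calls renewLeases() more frequently than the lease duration so that
 * held leases do not expire.
 */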
|
||||
private static final Log LOG = LogFactory.getLog(LeaseRenewer.class);
|
||||
private static final int RENEWAL_RETRIES = 2;
|
||||
|
||||
private final ILeaseManager<T> leaseManager;
|
||||
private final ConcurrentNavigableMap<String, T> ownedLeases = new ConcurrentSkipListMap<String, T>();
|
||||
private final String workerIdentifier;
|
||||
private final long leaseDurationNanos;
|
||||
|
||||
/**
|
||||
* Constructor.
|
||||
*
|
||||
* @param leaseManager LeaseManager to use
|
||||
* @param workerIdentifier identifier of this worker
|
||||
* @param leaseDurationMillis duration of a lease in milliseconds
|
||||
*/
|
||||
public LeaseRenewer(ILeaseManager<T> leaseManager, String workerIdentifier, long leaseDurationMillis) {
|
||||
this.leaseManager = leaseManager;
|
||||
this.workerIdentifier = workerIdentifier;
|
||||
this.leaseDurationNanos = leaseDurationMillis * 1000000L;
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public void renewLeases() throws DependencyException, InvalidStateException {
|
||||
if (LOG.isDebugEnabled()) {
|
||||
// Due to the eventually consistent nature of ConcurrentNavigableMap iterators, this log entry may become
|
||||
// inaccurate during iteration.
|
||||
LOG.debug(String.format("Worker %s holding %d leases: %s",
|
||||
workerIdentifier,
|
||||
ownedLeases.size(),
|
||||
ownedLeases));
|
||||
}
|
||||
|
||||
/*
|
||||
* We iterate in descending order here so that the synchronized(lease) inside renewLease doesn't "lead" calls
|
||||
* to getCurrentlyHeldLeases. They'll still cross paths, but they won't interleave their executions.
|
||||
*/
|
||||
int lostLeases = 0;
|
||||
for (T lease : ownedLeases.descendingMap().values()) {
|
||||
if (!renewLease(lease)) {
|
||||
lostLeases++;
|
||||
}
|
||||
}
|
||||
|
||||
MetricsHelper.getMetricsScope().addData("LostLeases", lostLeases, StandardUnit.Count);
|
||||
MetricsHelper.getMetricsScope().addData("CurrentLeases", ownedLeases.size(), StandardUnit.Count);
|
||||
}
|
||||
|
||||
private boolean renewLease(T lease) throws DependencyException, InvalidStateException {
|
||||
String leaseKey = lease.getLeaseKey();
|
||||
|
||||
boolean success = false;
|
||||
boolean renewedLease = false;
|
||||
long startTime = System.currentTimeMillis();
|
||||
try {
|
||||
for (int i = 1; i <= RENEWAL_RETRIES; i++) {
|
||||
try {
|
||||
synchronized (lease) {
|
||||
renewedLease = leaseManager.renewLease(lease);
|
||||
if (renewedLease) {
|
||||
lease.setLastCounterIncrementNanos(System.nanoTime());
|
||||
}
|
||||
}
|
||||
|
||||
if (renewedLease) {
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug(String.format("Worker %s successfully renewed lease with key %s",
|
||||
workerIdentifier,
|
||||
leaseKey));
|
||||
}
|
||||
} else {
|
||||
LOG.info(String.format("Worker %s lost lease with key %s", workerIdentifier, leaseKey));
|
||||
ownedLeases.remove(leaseKey);
|
||||
}
|
||||
|
||||
success = true;
|
||||
break;
|
||||
} catch (ProvisionedThroughputException e) {
|
||||
LOG.info(String.format("Worker %s could not renew lease with key %s on try %d out of %d due to capacity",
|
||||
workerIdentifier,
|
||||
leaseKey,
|
||||
i,
|
||||
RENEWAL_RETRIES));
|
||||
}
|
||||
}
|
||||
} finally {
|
||||
MetricsHelper.addSuccessAndLatency("RenewLease", startTime, success);
|
||||
}
|
||||
|
||||
return renewedLease;
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public Map<String, T> getCurrentlyHeldLeases() {
|
||||
Map<String, T> result = new HashMap<String, T>();
|
||||
long now = System.nanoTime();
|
||||
|
||||
for (String leaseKey : ownedLeases.keySet()) {
|
||||
T copy = getCopyOfHeldLease(leaseKey, now);
|
||||
if (copy != null) {
|
||||
result.put(copy.getLeaseKey(), copy);
|
||||
}
|
||||
}
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public T getCurrentlyHeldLease(String leaseKey) {
|
||||
return getCopyOfHeldLease(leaseKey, System.nanoTime());
|
||||
}
|
||||
|
||||
/**
|
||||
* Internal method to return a lease with a specific lease key only if we currently hold it.
|
||||
*
|
||||
* @param leaseKey key of lease to return
|
||||
* @param now current timestamp for old-ness checking
|
||||
* @return non-authoritative copy of the held lease, or null if we don't currently hold it
|
||||
*/
|
||||
private T getCopyOfHeldLease(String leaseKey, long now) {
|
||||
T authoritativeLease = ownedLeases.get(leaseKey);
|
||||
if (authoritativeLease == null) {
|
||||
return null;
|
||||
} else {
|
||||
T copy = null;
|
||||
synchronized (authoritativeLease) {
|
||||
copy = authoritativeLease.copy();
|
||||
}
|
||||
|
||||
if (copy.isExpired(leaseDurationNanos, now)) {
|
||||
LOG.info(String.format("getCurrentlyHeldLease not returning lease with key %s because it is expired",
|
||||
copy.getLeaseKey()));
|
||||
return null;
|
||||
} else {
|
||||
return copy;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public boolean updateLease(T lease, UUID concurrencyToken)
|
||||
throws DependencyException, InvalidStateException, ProvisionedThroughputException {
|
||||
verifyNotNull(lease, "lease cannot be null");
|
||||
verifyNotNull(lease.getLeaseKey(), "leaseKey cannot be null");
|
||||
verifyNotNull(concurrencyToken, "concurrencyToken cannot be null");
|
||||
|
||||
String leaseKey = lease.getLeaseKey();
|
||||
T authoritativeLease = ownedLeases.get(leaseKey);
|
||||
|
||||
if (authoritativeLease == null) {
|
||||
LOG.info(String.format("Worker %s could not update lease with key %s because it does not hold it",
|
||||
workerIdentifier,
|
||||
leaseKey));
|
||||
return false;
|
||||
}
|
||||
|
||||
/*
|
||||
* If the passed-in concurrency token doesn't match the concurrency token of the authoritative lease, it means
|
||||
* the lease was lost and regained between when the caller acquired his concurrency token and when the caller
|
||||
* called update.
|
||||
*/
|
||||
if (!authoritativeLease.getConcurrencyToken().equals(concurrencyToken)) {
|
||||
LOG.info(String.format("Worker %s refusing to update lease with key %s because"
|
||||
+ " concurrency tokens don't match", workerIdentifier, leaseKey));
|
||||
return false;
|
||||
}
|
||||
|
||||
long startTime = System.currentTimeMillis();
|
||||
boolean success = false;
|
||||
try {
|
||||
synchronized (authoritativeLease) {
|
||||
authoritativeLease.update(lease);
|
||||
boolean updatedLease = leaseManager.updateLease(authoritativeLease);
|
||||
if (updatedLease) {
|
||||
// Updates increment the counter
|
||||
authoritativeLease.setLastCounterIncrementNanos(System.nanoTime());
|
||||
} else {
|
||||
/*
|
||||
* If updateLease returns false, it means someone took the lease from us. Remove the lease
|
||||
* from our set of owned leases pro-actively rather than waiting for a run of renewLeases().
|
||||
*/
|
||||
LOG.info(String.format("Worker %s lost lease with key %s - discovered during update",
|
||||
workerIdentifier,
|
||||
leaseKey));
|
||||
|
||||
/*
|
||||
* Remove only if the value currently in the map is the same as the authoritative lease. We're
|
||||
* guarding against a pause after the concurrency token check above. It plays out like so:
|
||||
*
|
||||
* 1) Concurrency token check passes
|
||||
* 2) Pause. Lose lease, re-acquire lease. This requires at least one lease counter update.
|
||||
* 3) Unpause. leaseManager.updateLease fails conditional write due to counter updates, returns
|
||||
* false.
|
||||
* 4) ownedLeases.remove(key, value) doesn't do anything because authoritativeLease does not
|
||||
* .equals() the re-acquired version in the map on the basis of lease counter. This is what we want.
|
||||
* If we just used ownedLease.remove(key), we would have pro-actively removed a lease incorrectly.
|
||||
*
|
||||
* Note that there is a subtlety here - Lease.equals() deliberately does not check the concurrency
|
||||
* token, but it does check the lease counter, so this scheme works.
|
||||
*/
|
||||
ownedLeases.remove(leaseKey, authoritativeLease);
|
||||
}
|
||||
|
||||
success = true;
|
||||
return updatedLease;
|
||||
}
|
||||
} finally {
|
||||
MetricsHelper.addSuccessAndLatency("UpdateLease", startTime, success);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public void addLeasesToRenew(Collection<T> newLeases) {
|
||||
verifyNotNull(newLeases, "newLeases cannot be null");
|
||||
|
||||
for (T lease : newLeases) {
|
||||
if (lease.getLastCounterIncrementNanos() == null) {
|
||||
LOG.info(String.format("addLeasesToRenew ignoring lease with key %s because it does not have lastRenewalNanos set",
|
||||
lease.getLeaseKey()));
|
||||
continue;
|
||||
}
|
||||
|
||||
T authoritativeLease = lease.copy();
|
||||
|
||||
/*
|
||||
* Assign a concurrency token when we add this to the set of currently owned leases. This ensures that
|
||||
* every time we acquire a lease, it gets a new concurrency token.
|
||||
*/
|
||||
authoritativeLease.setConcurrencyToken(UUID.randomUUID());
|
||||
ownedLeases.put(authoritativeLease.getLeaseKey(), authoritativeLease);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public void clearCurrentlyHeldLeases() {
|
||||
ownedLeases.clear();
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public void initialize() throws DependencyException, InvalidStateException, ProvisionedThroughputException {
|
||||
Collection<T> leases = leaseManager.listLeases();
|
||||
List<T> myLeases = new LinkedList<T>();
|
||||
|
||||
for (T lease : leases) {
|
||||
if (workerIdentifier.equals(lease.getLeaseOwner())) {
|
||||
LOG.info(String.format(" Worker %s found lease %s", workerIdentifier, lease));
|
||||
if (renewLease(lease)) {
|
||||
myLeases.add(lease);
|
||||
}
|
||||
} else {
|
||||
LOG.debug(String.format("Worker %s ignoring lease %s ", workerIdentifier, lease));
|
||||
}
|
||||
}
|
||||
|
||||
addLeasesToRenew(myLeases);
|
||||
}
|
||||
|
||||
private void verifyNotNull(Object object, String message) {
|
||||
if (object == null) {
|
||||
throw new IllegalArgumentException(message);
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
|
@ -0,0 +1,196 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.leases.impl;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.Collection;
|
||||
import java.util.HashMap;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
|
||||
import com.amazonaws.services.dynamodbv2.model.AttributeAction;
|
||||
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
|
||||
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
|
||||
import com.amazonaws.services.dynamodbv2.model.AttributeValueUpdate;
|
||||
import com.amazonaws.services.dynamodbv2.model.ExpectedAttributeValue;
|
||||
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
|
||||
import com.amazonaws.services.dynamodbv2.model.KeyType;
|
||||
import com.amazonaws.services.dynamodbv2.model.ScalarAttributeType;
|
||||
import com.amazonaws.services.kinesis.leases.interfaces.ILeaseSerializer;
|
||||
import com.amazonaws.services.kinesis.leases.util.DynamoUtils;
|
||||
|
||||
/**
|
||||
* An implementation of ILeaseSerializer for basic Lease objects. Can also instantiate subclasses of Lease so that
|
||||
* LeaseSerializer can be decorated by other classes if you need to add fields to leases.
|
||||
*/
|
||||
public class LeaseSerializer implements ILeaseSerializer<Lease> {
|
||||
|
||||
public final String LEASE_KEY_KEY = "leaseKey";
|
||||
public final String LEASE_OWNER_KEY = "leaseOwner";
|
||||
public final String LEASE_COUNTER_KEY = "leaseCounter";
|
||||
public final Class<? extends Lease> clazz;
|
||||
|
||||
public LeaseSerializer() {
|
||||
this.clazz = Lease.class;
|
||||
}
|
||||
|
||||
public LeaseSerializer(Class<? extends Lease> clazz) {
|
||||
this.clazz = clazz;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, AttributeValue> toDynamoRecord(Lease lease) {
|
||||
Map<String, AttributeValue> result = new HashMap<String, AttributeValue>();
|
||||
|
||||
result.put(LEASE_KEY_KEY, DynamoUtils.createAttributeValue(lease.getLeaseKey()));
|
||||
result.put(LEASE_COUNTER_KEY, DynamoUtils.createAttributeValue(lease.getLeaseCounter()));
|
||||
|
||||
if (lease.getLeaseOwner() != null) {
|
||||
result.put(LEASE_OWNER_KEY, DynamoUtils.createAttributeValue(lease.getLeaseOwner()));
|
||||
}
|
||||
|
||||
return result;
|
||||
}
|
||||
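// For reference, a serialized lease record looks roughly like this (values are illustrative; the counter
// is assumed to be stored as a DynamoDB number by DynamoUtils):
//   { "leaseKey": {"S": "shardId-000000000000"}, "leaseCounter": {"N": "7"}, "leaseOwner": {"S": "worker-1"} }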
|
||||
@Override
|
||||
public Lease fromDynamoRecord(Map<String, AttributeValue> dynamoRecord) {
|
||||
Lease result;
|
||||
try {
|
||||
result = clazz.newInstance();
|
||||
} catch (InstantiationException e) {
|
||||
throw new RuntimeException(e);
|
||||
} catch (IllegalAccessException e) {
|
||||
throw new RuntimeException(e);
|
||||
}
|
||||
|
||||
result.setLeaseKey(DynamoUtils.safeGetString(dynamoRecord, LEASE_KEY_KEY));
|
||||
result.setLeaseOwner(DynamoUtils.safeGetString(dynamoRecord, LEASE_OWNER_KEY));
|
||||
result.setLeaseCounter(DynamoUtils.safeGetLong(dynamoRecord, LEASE_COUNTER_KEY));
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, AttributeValue> getDynamoHashKey(String leaseKey) {
|
||||
Map<String, AttributeValue> result = new HashMap<String, AttributeValue>();
|
||||
|
||||
result.put(LEASE_KEY_KEY, DynamoUtils.createAttributeValue(leaseKey));
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, AttributeValue> getDynamoHashKey(Lease lease) {
|
||||
return getDynamoHashKey(lease.getLeaseKey());
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, ExpectedAttributeValue> getDynamoLeaseCounterExpectation(Lease lease) {
|
||||
return getDynamoLeaseCounterExpectation(lease.getLeaseCounter());
|
||||
}
|
||||
|
||||
public Map<String, ExpectedAttributeValue> getDynamoLeaseCounterExpectation(Long leaseCounter) {
|
||||
Map<String, ExpectedAttributeValue> result = new HashMap<String, ExpectedAttributeValue>();
|
||||
|
||||
ExpectedAttributeValue eav = new ExpectedAttributeValue(DynamoUtils.createAttributeValue(leaseCounter));
|
||||
result.put(LEASE_COUNTER_KEY, eav);
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, ExpectedAttributeValue> getDynamoLeaseOwnerExpectation(Lease lease) {
|
||||
Map<String, ExpectedAttributeValue> result = new HashMap<String, ExpectedAttributeValue>();
|
||||
|
||||
ExpectedAttributeValue eav = null;
|
||||
|
||||
if (lease.getLeaseOwner() == null) {
|
||||
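// ExpectedAttributeValue(false) asserts that the attribute must not exist on the item.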
eav = new ExpectedAttributeValue(false);
|
||||
} else {
|
||||
eav = new ExpectedAttributeValue(DynamoUtils.createAttributeValue(lease.getLeaseOwner()));
|
||||
}
|
||||
|
||||
result.put(LEASE_OWNER_KEY, eav);
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, ExpectedAttributeValue> getDynamoNonexistantExpectation() {
|
||||
Map<String, ExpectedAttributeValue> result = new HashMap<String, ExpectedAttributeValue>();
|
||||
|
||||
ExpectedAttributeValue expectedAV = new ExpectedAttributeValue(false);
|
||||
result.put(LEASE_KEY_KEY, expectedAV);
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, AttributeValueUpdate> getDynamoLeaseCounterUpdate(Lease lease) {
|
||||
return getDynamoLeaseCounterUpdate(lease.getLeaseCounter());
|
||||
}
|
||||
|
||||
public Map<String, AttributeValueUpdate> getDynamoLeaseCounterUpdate(Long leaseCounter) {
|
||||
Map<String, AttributeValueUpdate> result = new HashMap<String, AttributeValueUpdate>();
|
||||
|
||||
AttributeValueUpdate avu =
|
||||
new AttributeValueUpdate(DynamoUtils.createAttributeValue(leaseCounter + 1), AttributeAction.PUT);
|
||||
result.put(LEASE_COUNTER_KEY, avu);
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, AttributeValueUpdate> getDynamoTakeLeaseUpdate(Lease lease, String owner) {
|
||||
Map<String, AttributeValueUpdate> result = new HashMap<String, AttributeValueUpdate>();
|
||||
|
||||
result.put(LEASE_OWNER_KEY, new AttributeValueUpdate(DynamoUtils.createAttributeValue(owner),
|
||||
AttributeAction.PUT));
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, AttributeValueUpdate> getDynamoEvictLeaseUpdate(Lease lease) {
|
||||
Map<String, AttributeValueUpdate> result = new HashMap<String, AttributeValueUpdate>();
|
||||
|
||||
result.put(LEASE_OWNER_KEY, new AttributeValueUpdate(null, AttributeAction.DELETE));
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Map<String, AttributeValueUpdate> getDynamoUpdateLeaseUpdate(Lease lease) {
|
||||
// There is no application-specific data in Lease - return an empty map; the counter update is applied separately via getDynamoLeaseCounterUpdate.
|
||||
return new HashMap<String, AttributeValueUpdate>();
|
||||
}
|
||||
|
||||
@Override
|
||||
public Collection<KeySchemaElement> getKeySchema() {
|
||||
List<KeySchemaElement> keySchema = new ArrayList<KeySchemaElement>();
|
||||
keySchema.add(new KeySchemaElement().withAttributeName(LEASE_KEY_KEY).withKeyType(KeyType.HASH));
|
||||
|
||||
return keySchema;
|
||||
}
|
||||
|
||||
@Override
|
||||
public Collection<AttributeDefinition> getAttributeDefinitions() {
|
||||
List<AttributeDefinition> definitions = new ArrayList<AttributeDefinition>();
|
||||
definitions.add(new AttributeDefinition().withAttributeName(LEASE_KEY_KEY)
|
||||
.withAttributeType(ScalarAttributeType.S));
|
||||
|
||||
return definitions;
|
||||
}
|
||||
}
|
||||
|
|
@ -0,0 +1,455 @@
|
|||
/*
|
||||
* Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
|
||||
*
|
||||
* Licensed under the Amazon Software License (the "License").
|
||||
* You may not use this file except in compliance with the License.
|
||||
* A copy of the License is located at
|
||||
*
|
||||
* http://aws.amazon.com/asl/
|
||||
*
|
||||
* or in the "license" file accompanying this file. This file is distributed
|
||||
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
* express or implied. See the License for the specific language governing
|
||||
* permissions and limitations under the License.
|
||||
*/
|
||||
package com.amazonaws.services.kinesis.leases.impl;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.Collection;
|
||||
import java.util.Collections;
|
||||
import java.util.HashMap;
|
||||
import java.util.HashSet;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Map.Entry;
|
||||
import java.util.Random;
|
||||
import java.util.Set;
|
||||
import java.util.concurrent.Callable;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
|
||||
import com.amazonaws.services.cloudwatch.model.StandardUnit;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.DependencyException;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.InvalidStateException;
|
||||
import com.amazonaws.services.kinesis.leases.exceptions.ProvisionedThroughputException;
|
||||
import com.amazonaws.services.kinesis.leases.interfaces.ILeaseManager;
|
||||
import com.amazonaws.services.kinesis.leases.interfaces.ILeaseTaker;
|
||||
import com.amazonaws.services.kinesis.metrics.impl.MetricsHelper;
|
||||
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsScope;
|
||||
|
||||
/**
|
||||
* An implementation of ILeaseTaker that uses DynamoDB via LeaseManager.
|
||||
*/
|
||||
public class LeaseTaker<T extends Lease> implements ILeaseTaker<T> {
|
||||
|
||||
private static final Log LOG = LogFactory.getLog(LeaseTaker.class);
|
||||
|
||||
private static final int TAKE_RETRIES = 3;
|
||||
private static final int SCAN_RETRIES = 1;
|
||||
|
||||
// See note on takeLeases(Callable) for why we have this callable.
|
||||
private static final Callable<Long> SYSTEM_CLOCK_CALLABLE = new Callable<Long>() {
|
||||
|
||||
@Override
|
||||
public Long call() {
|
||||
return System.nanoTime();
|
||||
}
|
||||
};
|
||||
|
||||
private final ILeaseManager<T> leaseManager;
|
||||
private final String workerIdentifier;
|
||||
private final Map<String, T> allLeases = new HashMap<String, T>();
|
||||
private final long leaseDurationNanos;
|
||||
|
||||
private Random random = new Random();
|
||||
private long lastScanTimeNanos = 0L;
|
||||
|
||||
public LeaseTaker(ILeaseManager<T> leaseManager, String workerIdentifier, long leaseDurationMillis) {
|
||||
this.leaseManager = leaseManager;
|
||||
this.workerIdentifier = workerIdentifier;
|
||||
this.leaseDurationNanos = leaseDurationMillis * 1000000;
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public Map<String, T> takeLeases() throws DependencyException, InvalidStateException {
|
||||
return takeLeases(SYSTEM_CLOCK_CALLABLE);
|
||||
}
|
||||
|
||||
/**
|
||||
* Internal implementation of takeLeases. Takes a callable that can provide the time to enable test cases without
|
||||
* Thread.sleep. Takes a callable instead of a raw time value because the time needs to be computed as-of
|
||||
* immediately after the scan.
|
||||
*
|
||||
* @param timeProvider Callable that will supply the time
|
||||
*
|
||||
* @return map of lease key to taken lease
|
||||
*
|
||||
* @throws DependencyException
|
||||
* @throws InvalidStateException
|
||||
*/
|
||||
synchronized Map<String, T> takeLeases(Callable<Long> timeProvider)
|
||||
throws DependencyException, InvalidStateException {
|
||||
// Key is leaseKey
|
||||
Map<String, T> takenLeases = new HashMap<String, T>();
|
||||
|
||||
long startTime = System.currentTimeMillis();
|
||||
boolean success = false;
|
||||
|
||||
ProvisionedThroughputException lastException = null;
|
||||
|
||||
try {
|
||||
for (int i = 1; i <= SCAN_RETRIES; i++) {
|
||||
try {
|
||||
updateAllLeases(timeProvider);
|
||||
success = true;
|
||||
} catch (ProvisionedThroughputException e) {
|
||||
LOG.info(String.format("Worker %s could not find expired leases on try %d out of %d",
|
||||
workerIdentifier,
|
||||
i,
|
||||
SCAN_RETRIES));
|
||||
lastException = e;
|
||||
}
|
||||
}
|
||||
} finally {
|
||||
MetricsHelper.addSuccessAndLatency("ListLeases", startTime, success);
|
||||
}
|
||||
|
||||
if (lastException != null) {
|
||||
LOG.error("Worker " + workerIdentifier
|
||||
+ " could not scan leases table, aborting takeLeases. Exception caught by last retry:",
|
||||
lastException);
|
||||
return takenLeases;
|
||||
}
|
||||
|
||||
List<T> expiredLeases = getExpiredLeases();
|
||||
|
||||
Set<T> leasesToTake = computeLeasesToTake(expiredLeases);
|
||||
Set<String> untakenLeaseKeys = new HashSet<String>();
|
||||
|
||||
for (T lease : leasesToTake) {
|
||||
String leaseKey = lease.getLeaseKey();
|
||||
|
||||
startTime = System.currentTimeMillis();
|
||||
success = false;
|
||||
try {
|
||||
for (int i = 1; i <= TAKE_RETRIES; i++) {
|
||||
try {
|
||||
if (leaseManager.takeLease(lease, workerIdentifier)) {
|
||||
lease.setLastCounterIncrementNanos(System.nanoTime());
|
||||
takenLeases.put(leaseKey, lease);
|
||||
} else {
|
||||
untakenLeaseKeys.add(leaseKey);
|
||||
}
|
||||
|
||||
success = true;
|
||||
break;
|
||||
} catch (ProvisionedThroughputException e) {
|
||||
LOG.info(String.format("Could not take lease with key %s for worker %s on try %d out of %d due to capacity",
|
||||
leaseKey,
|
||||
workerIdentifier,
|
||||
i,
|
||||
TAKE_RETRIES));
|
||||
}
|
||||
}
|
||||
} finally {
|
||||
MetricsHelper.addSuccessAndLatency("TakeLease", startTime, success);
|
||||
}
|
||||
}
|
||||
|
||||
if (takenLeases.size() > 0) {
|
||||
LOG.info(String.format("Worker %s successfully took %d leases: %s",
|
||||
workerIdentifier,
|
||||
takenLeases.size(),
|
||||
stringJoin(takenLeases.keySet(), ", ")));
|
||||
}
|
||||
|
||||
if (untakenLeaseKeys.size() > 0) {
|
||||
LOG.info(String.format("Worker %s failed to take %d leases: %s",
|
||||
workerIdentifier,
|
||||
untakenLeaseKeys.size(),
|
||||
stringJoin(untakenLeaseKeys, ", ")));
|
||||
}
|
||||
|
||||
MetricsHelper.getMetricsScope().addData("TakenLeases", takenLeases.size(), StandardUnit.Count);
|
||||
|
||||
return takenLeases;
|
||||
}
|
||||
|
||||
/** Package access for testing purposes.
|
||||
*
|
||||
* @param strings strings to join
|
||||
* @param delimiter delimiter to place between consecutive strings
|
||||
* @return Joined string.
|
||||
*/
|
||||
static String stringJoin(Collection<String> strings, String delimiter) {
|
||||
StringBuilder builder = new StringBuilder();
|
||||
boolean needDelimiter = false;
|
||||
for (String string : strings) {
|
||||
if (needDelimiter) {
|
||||
builder.append(delimiter);
|
||||
}
|
||||
builder.append(string);
|
||||
needDelimiter = true;
|
||||
}
|
||||
|
||||
return builder.toString();
|
||||
}
|
||||
|
||||
/**
|
||||
* Scan all leases and update lastCounterIncrementNanos. Add newly discovered leases to, and remove vanished leases from, the in-memory map.
|
||||
*
|
||||
* @param timeProvider callable that supplies the current time
|
||||
|
||||
*
|
||||
* @throws ProvisionedThroughputException if listLeases fails due to lack of provisioned throughput
|
||||
* @throws InvalidStateException if the lease table does not exist
|
||||
* @throws DependencyException if listLeases fails in an unexpected way
|
||||
*/
|
||||
private void updateAllLeases(Callable<Long> timeProvider)
|
||||
throws DependencyException, InvalidStateException, ProvisionedThroughputException {
|
||||
List<T> freshList = leaseManager.listLeases();
|
||||
try {
|
||||
lastScanTimeNanos = timeProvider.call();
|
||||
} catch (Exception e) {
|
||||
throw new DependencyException("Exception caught from timeProvider", e);
|
||||
}
|
||||
|
||||
// This set will hold the lease keys not updated by the previous listLeases call.
|
||||
Set<String> notUpdated = new HashSet<String>(allLeases.keySet());
|
||||
|
||||
// Iterate over all leases, finding ones to try to acquire that haven't changed since the last iteration
|
||||
for (T lease : freshList) {
|
||||
String leaseKey = lease.getLeaseKey();
|
||||
|
||||
T oldLease = allLeases.get(leaseKey);
|
||||
allLeases.put(leaseKey, lease);
|
||||
notUpdated.remove(leaseKey);
|
||||
|
||||
if (oldLease != null) {
|
||||
// If we've seen this lease before...
|
||||
if (oldLease.getLeaseCounter().equals(lease.getLeaseCounter())) {
|
||||
// ...and the counter hasn't changed, propagate lastCounterIncrementNanos from the old lease
|
||||
lease.setLastCounterIncrementNanos(oldLease.getLastCounterIncrementNanos());
|
||||
} else {
|
||||
// ...and the counter has changed, set lastCounterIncrementNanos to the time of the scan.
|
||||
lease.setLastCounterIncrementNanos(lastScanTimeNanos);
|
||||
}
|
||||
} else {
|
||||
if (lease.getLeaseOwner() == null) {
|
||||
// if this new lease is unowned, it's never been renewed.
|
||||
lease.setLastCounterIncrementNanos(0L);
|
||||
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("Treating new lease with key " + leaseKey
|
||||
+ " as never renewed because it is new and unowned.");
|
||||
}
|
||||
} else {
|
||||
// if this new lease is owned, treat it as renewed as of the scan
|
||||
lease.setLastCounterIncrementNanos(lastScanTimeNanos);
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug("Treating new lease with key " + leaseKey
|
||||
+ " as recently renewed because it is new and owned.");
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Remove dead leases from allLeases
|
||||
for (String key : notUpdated) {
|
||||
allLeases.remove(key);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* @return list of leases that were expired as of our last scan.
|
||||
*/
|
||||
private List<T> getExpiredLeases() {
|
||||
List<T> expiredLeases = new ArrayList<T>();
|
||||
|
||||
for (T lease : allLeases.values()) {
|
||||
if (lease.isExpired(leaseDurationNanos, lastScanTimeNanos)) {
|
||||
expiredLeases.add(lease);
|
||||
}
|
||||
}
|
||||
|
||||
return expiredLeases;
|
||||
}
|
||||
|
||||
/**
|
||||
* Compute the set of leases this worker should try to take, based on the state of the system.
|
||||
*
|
||||
|
||||
* @param expiredLeases list of leases we determined to be expired
|
||||
* @return set of leases to take.
|
||||
*/
|
||||
private Set<T> computeLeasesToTake(List<T> expiredLeases) {
|
||||
Map<String, Integer> leaseCounts = computeLeaseCounts(expiredLeases);
|
||||
Set<T> leasesToTake = new HashSet<T>();
|
||||
|
||||
int numLeases = allLeases.size();
|
||||
int numWorkers = leaseCounts.size();
|
||||
|
||||
if (numLeases == 0) {
|
||||
// If there are no leases, I shouldn't try to take any.
|
||||
return leasesToTake;
|
||||
}
|
||||
|
||||
int target;
|
||||
if (numWorkers >= numLeases) {
|
||||
// If we have n leases and n or more workers, each worker can have up to 1 lease, including myself.
|
||||
target = 1;
|
||||
} else {
|
||||
/*
|
||||
* numWorkers must be < numLeases.
|
||||
*
|
||||
* Our target for each worker is numLeases / numWorkers (+1 if numWorkers doesn't evenly divide numLeases)
|
||||
*/
|
||||
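// Worked example (illustrative): 10 leases across 3 workers gives a target of 10 / 3 + 1 = 4 leases per worker.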
target = numLeases / numWorkers + (numLeases % numWorkers == 0 ? 0 : 1);
|
||||
}
|
||||
|
||||
int myCount = leaseCounts.get(workerIdentifier);
|
||||
int numLeasesToReachTarget = target - myCount;
|
||||
|
||||
if (numLeasesToReachTarget <= 0) {
|
||||
// If we don't need anything, return the empty set.
|
||||
return leasesToTake;
|
||||
}
|
||||
|
||||
// Shuffle expiredLeases so workers don't all try to contend for the same leases.
|
||||
Collections.shuffle(expiredLeases);
|
||||
|
||||
int originalExpiredLeasesSize = expiredLeases.size();
|
||||
if (expiredLeases.size() > 0) {
|
||||
// If we have expired leases, get up to <needed> leases from expiredLeases
|
||||
for (; numLeasesToReachTarget > 0 && expiredLeases.size() > 0; numLeasesToReachTarget--) {
|
||||
leasesToTake.add(expiredLeases.remove(0));
|
||||
}
|
||||
} else {
|
||||
// If there are no expired leases and we need a lease, consider stealing one
|
||||
T leaseToSteal = chooseLeaseToSteal(leaseCounts, numLeasesToReachTarget, target);
|
||||
if (leaseToSteal != null) {
|
||||
LOG.info(String.format("Worker %s needed %d leases but none were expired, so it will steal lease %s from %s",
|
||||
workerIdentifier,
|
||||
numLeasesToReachTarget,
|
||||
leaseToSteal.getLeaseKey(),
|
||||
leaseToSteal.getLeaseOwner()));
|
||||
leasesToTake.add(leaseToSteal);
|
||||
}
|
||||
}
|
||||
|
||||
if (!leasesToTake.isEmpty()) {
|
||||
LOG.info(String.format("Worker %s saw %d total leases, %d available leases, %d "
|
||||
+ "workers. Target is %d leases, I have %d leases, I will take %d leases",
|
||||
workerIdentifier,
|
||||
numLeases,
|
||||
originalExpiredLeasesSize,
|
||||
numWorkers,
|
||||
target,
|
||||
myCount,
|
||||
leasesToTake.size()));
|
||||
}
|
||||
|
||||
IMetricsScope metrics = MetricsHelper.getMetricsScope();
|
||||
metrics.addData("TotalLeases", numLeases, StandardUnit.Count);
|
||||
metrics.addData("ExpiredLeases", originalExpiredLeasesSize, StandardUnit.Count);
|
||||
metrics.addData("NumWorkers", numWorkers, StandardUnit.Count);
|
||||
metrics.addData("NeededLeases", numLeasesToReachTarget, StandardUnit.Count);
|
||||
metrics.addData("LeasesToTake", leasesToTake.size(), StandardUnit.Count);
|
||||
|
||||
return leasesToTake;
|
||||
}
|
||||
|
||||
/**
|
||||
* Choose a lease to steal by randomly selecting one from the most loaded worker. Stealing rules:
|
||||
*
|
||||
* Steal one lease from the most loaded worker if
|
||||
* a) he has > target leases and I need >= 1 leases
|
||||
* b) he has == target leases and I need > 1 leases
|
||||
*
|
||||
* @param leaseCounts map of workerIdentifier to lease count
|
||||
* @param needed number of leases this worker needs to reach its target
* @param target target # of leases per worker
|
||||
* @return Lease to steal, or null if we should not steal
|
||||
*/
|
||||
private T chooseLeaseToSteal(Map<String, Integer> leaseCounts, int needed, int target) {
|
||||
Entry<String, Integer> mostLoadedWorker = null;
|
||||
// Find the most loaded worker
|
||||
for (Entry<String, Integer> worker : leaseCounts.entrySet()) {
|
||||
if (mostLoadedWorker == null || mostLoadedWorker.getValue() < worker.getValue()) {
|
||||
mostLoadedWorker = worker;
|
||||
}
|
||||
}
|
||||
|
||||
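// Illustrative reading of the check below: with target = 4, a worker holding 5 or more leases can always be
// stolen from, while a worker holding exactly 4 is only stolen from when this worker needs more than one lease.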
if (mostLoadedWorker.getValue() < target + (needed > 1 ? 0 : 1)) {
|
||||
if (LOG.isDebugEnabled()) {
|
||||
LOG.debug(String.format("Worker %s not stealing from most loaded worker %s. He has %d,"
|
||||
+ " target is %d, and I need %d",
|
||||
workerIdentifier,
|
||||
mostLoadedWorker.getKey(),
|
||||
mostLoadedWorker.getValue(),
|
||||
target,
|
||||
needed));
|
||||
}
|
||||
|
||||
return null;
|
||||
}
|
||||
|
||||
String mostLoadedWorkerIdentifier = mostLoadedWorker.getKey();
|
||||
List<T> candidates = new ArrayList<T>();
|
||||
// Collect leases belonging to that worker
|
||||
for (T lease : allLeases.values()) {
|
||||
if (mostLoadedWorkerIdentifier.equals(lease.getLeaseOwner())) {
|
||||
candidates.add(lease);
|
||||
}
|
||||
}
|
||||
|
||||
// Return a random one
|
||||
int randomIndex = random.nextInt(candidates.size());
|
||||
return candidates.get(randomIndex);
|
||||
}
|
||||
|
||||
/**
|
||||
* Count leases by worker. Always includes this worker, but otherwise only includes workers that are currently holding
|
||||
* leases.
|
||||
*
|
||||
* @param expiredLeases list of leases that are currently expired
|
||||
* @return map of workerIdentifier to lease count
|
||||
*/
|
||||
private Map<String, Integer> computeLeaseCounts(List<T> expiredLeases) {
|
||||
Map<String, Integer> leaseCounts = new HashMap<String, Integer>();
|
||||
|
||||
// Compute the number of leases per worker by looking through allLeases and ignoring leases that have expired.
|
||||
for (T lease : allLeases.values()) {
|
||||
if (!expiredLeases.contains(lease)) {
|
||||
String leaseOwner = lease.getLeaseOwner();
|
||||
Integer oldCount = leaseCounts.get(leaseOwner);
|
||||
if (oldCount == null) {
|
||||
leaseCounts.put(leaseOwner, 1);
|
||||
} else {
|
||||
leaseCounts.put(leaseOwner, oldCount + 1);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// If I have no leases, I wasn't represented in leaseCounts. Let's fix that.
|
||||
Integer myCount = leaseCounts.get(workerIdentifier);
|
||||
if (myCount == null) {
|
||||
myCount = 0;
|
||||
leaseCounts.put(workerIdentifier, myCount);
|
||||
}
|
||||
|
||||
return leaseCounts;
|
||||
}
|
||||
|
||||
/**
|
||||
* {@inheritDoc}
|
||||
*/
|
||||
@Override
|
||||
public String getWorkerIdentifier() {
|
||||
return workerIdentifier;
|
||||
}
|
||||
}
|
||||
|
|
@@ -0,0 +1,41 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.leases.interfaces;

import com.amazonaws.services.kinesis.leases.exceptions.DependencyException;
import com.amazonaws.services.kinesis.leases.exceptions.InvalidStateException;
import com.amazonaws.services.kinesis.leases.exceptions.ProvisionedThroughputException;
import com.amazonaws.services.kinesis.leases.impl.KinesisClientLease;

/**
 * A decoration of ILeaseManager that adds methods to get/update checkpoints.
 */
public interface IKinesisClientLeaseManager extends ILeaseManager<KinesisClientLease> {

    /**
     * Gets the current checkpoint of the shard. This is useful in the resharding use case
     * where we will wait for the parent shard to complete before starting on the records from a child shard.
     *
     * @param shardId Checkpoint of this shard will be returned
     * @return Checkpoint of this shard, or null if the shard record doesn't exist.
     *
     * @throws ProvisionedThroughputException if DynamoDB update fails due to lack of capacity
     * @throws InvalidStateException if lease table does not exist
     * @throws DependencyException if DynamoDB update fails in an unexpected way
     */
    public abstract String getCheckpoint(String shardId)
        throws ProvisionedThroughputException, InvalidStateException, DependencyException;

}
@@ -0,0 +1,183 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.leases.interfaces;

import java.util.List;

import com.amazonaws.services.kinesis.leases.exceptions.DependencyException;
import com.amazonaws.services.kinesis.leases.exceptions.InvalidStateException;
import com.amazonaws.services.kinesis.leases.exceptions.ProvisionedThroughputException;
import com.amazonaws.services.kinesis.leases.impl.Lease;

/**
 * Supports basic CRUD operations for Leases.
 *
 * @param <T> Lease subclass, possibly Lease itself.
 */
public interface ILeaseManager<T extends Lease> {

    /**
     * Creates the table that will store leases. Succeeds if table already exists.
     *
     * @param readCapacity
     * @param writeCapacity
     *
     * @return true if we created a new table (table didn't exist before)
     *
     * @throws ProvisionedThroughputException if we cannot create the lease table due to per-AWS-account capacity
     *         restrictions.
     * @throws DependencyException if DynamoDB createTable fails in an unexpected way
     */
    public boolean createLeaseTableIfNotExists(Long readCapacity, Long writeCapacity)
        throws ProvisionedThroughputException, DependencyException;

    /**
     * @return true if the lease table already exists.
     *
     * @throws DependencyException if DynamoDB describeTable fails in an unexpected way
     */
    public boolean leaseTableExists() throws DependencyException;

    /**
     * Blocks until the lease table exists by polling leaseTableExists.
     *
     * @param secondsBetweenPolls time to wait between polls in seconds
     * @param timeoutSeconds total time to wait in seconds
     *
     * @return true if table exists, false if timeout was reached
     *
     * @throws DependencyException if DynamoDB describeTable fails in an unexpected way
     */
    public boolean waitUntilLeaseTableExists(long secondsBetweenPolls, long timeoutSeconds) throws DependencyException;

    /**
     * List all objects in table synchronously.
     *
     * @throws DependencyException if DynamoDB scan fails in an unexpected way
     * @throws InvalidStateException if lease table does not exist
     * @throws ProvisionedThroughputException if DynamoDB scan fails due to lack of capacity
     *
     * @return list of leases
     */
    public List<T> listLeases() throws DependencyException, InvalidStateException, ProvisionedThroughputException;

    /**
     * Create a new lease. Conditional on a lease not already existing with this shardId.
     *
     * @param lease the lease to create
     *
     * @return true if lease was created, false if lease already exists
     *
     * @throws DependencyException if DynamoDB put fails in an unexpected way
     * @throws InvalidStateException if lease table does not exist
     * @throws ProvisionedThroughputException if DynamoDB put fails due to lack of capacity
     */
    public boolean createLeaseIfNotExists(T lease)
        throws DependencyException, InvalidStateException, ProvisionedThroughputException;

    /**
     * @param shardId Get the lease for this shardId
     *
     * @throws InvalidStateException if lease table does not exist
     * @throws ProvisionedThroughputException if DynamoDB get fails due to lack of capacity
     * @throws DependencyException if DynamoDB get fails in an unexpected way
     *
     * @return lease for the specified shardId, or null if one doesn't exist
     */
    public T getLease(String shardId) throws DependencyException, InvalidStateException, ProvisionedThroughputException;

    /**
     * Renew a lease by incrementing the lease counter. Conditional on the leaseCounter in DynamoDB matching the leaseCounter
     * of the input. Mutates the leaseCounter of the passed-in lease object after updating the record in DynamoDB.
     *
     * @param lease the lease to renew
     *
     * @return true if renewal succeeded, false otherwise
     *
     * @throws InvalidStateException if lease table does not exist
     * @throws ProvisionedThroughputException if DynamoDB update fails due to lack of capacity
     * @throws DependencyException if DynamoDB update fails in an unexpected way
     */
    public boolean renewLease(T lease)
        throws DependencyException, InvalidStateException, ProvisionedThroughputException;

    /**
     * Take a lease for the given owner by incrementing its leaseCounter and setting its owner field. Conditional on
     * the leaseCounter in DynamoDB matching the leaseCounter of the input. Mutates the leaseCounter and owner of the
     * passed-in lease object after updating DynamoDB.
     *
     * @param lease the lease to take
     * @param owner the new owner
     *
     * @return true if lease was successfully taken, false otherwise
     *
     * @throws InvalidStateException if lease table does not exist
     * @throws ProvisionedThroughputException if DynamoDB update fails due to lack of capacity
     * @throws DependencyException if DynamoDB update fails in an unexpected way
     */
    public boolean takeLease(T lease, String owner)
        throws DependencyException, InvalidStateException, ProvisionedThroughputException;

    /**
     * Evict the current owner of lease by setting owner to null. Conditional on the owner in DynamoDB matching the owner of
     * the input. Mutates the lease counter and owner of the passed-in lease object after updating the record in DynamoDB.
     *
     * @param lease the lease to void
     *
     * @return true if eviction succeeded, false otherwise
     *
     * @throws InvalidStateException if lease table does not exist
     * @throws ProvisionedThroughputException if DynamoDB update fails due to lack of capacity
     * @throws DependencyException if DynamoDB update fails in an unexpected way
     */
    public boolean evictLease(T lease)
        throws DependencyException, InvalidStateException, ProvisionedThroughputException;

    /**
     * Delete the given lease from DynamoDB. Does nothing when passed a lease that does not exist in DynamoDB.
     *
     * @param lease the lease to delete
     *
     * @throws InvalidStateException if lease table does not exist
     * @throws ProvisionedThroughputException if DynamoDB delete fails due to lack of capacity
     * @throws DependencyException if DynamoDB delete fails in an unexpected way
     */
    public void deleteLease(T lease) throws DependencyException, InvalidStateException, ProvisionedThroughputException;

    /**
     * Delete all leases from DynamoDB. Useful for tools/utils and testing.
     *
     * @throws InvalidStateException if lease table does not exist
     * @throws ProvisionedThroughputException if DynamoDB scan or delete fail due to lack of capacity
     * @throws DependencyException if DynamoDB scan or delete fail in an unexpected way
     */
    public void deleteAll() throws DependencyException, InvalidStateException, ProvisionedThroughputException;

    /**
     * Update application-specific fields of the given lease in DynamoDB. Does not update fields managed by the leasing
     * library such as leaseCounter, leaseOwner, or leaseKey. Conditional on the leaseCounter in DynamoDB matching the
     * leaseCounter of the input. Increments the lease counter in DynamoDB so that updates can be contingent on other
     * updates. Mutates the lease counter of the passed-in lease object.
     *
     * @return true if update succeeded, false otherwise
     *
     * @throws InvalidStateException if lease table does not exist
     * @throws ProvisionedThroughputException if DynamoDB update fails due to lack of capacity
     * @throws DependencyException if DynamoDB update fails in an unexpected way
     */
    public boolean updateLease(T lease)
        throws DependencyException, InvalidStateException, ProvisionedThroughputException;

}
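A minimal usage sketch of the table-bootstrap flow described by ILeaseManager above. The class name LeaseTableBootstrap and the capacity/timeout values are illustrative assumptions, not part of this commit; only the interface and exception types come from the library.

    import java.util.List;
    import com.amazonaws.services.kinesis.leases.exceptions.DependencyException;
    import com.amazonaws.services.kinesis.leases.exceptions.InvalidStateException;
    import com.amazonaws.services.kinesis.leases.exceptions.ProvisionedThroughputException;
    import com.amazonaws.services.kinesis.leases.impl.KinesisClientLease;
    import com.amazonaws.services.kinesis.leases.interfaces.ILeaseManager;

    public class LeaseTableBootstrap {
        // Creates the lease table if needed, waits for it to exist, then lists current leases.
        public static List<KinesisClientLease> bootstrap(ILeaseManager<KinesisClientLease> manager)
            throws DependencyException, InvalidStateException, ProvisionedThroughputException {
            if (manager.createLeaseTableIfNotExists(10L, 10L)) {
                // Poll every 10 seconds, give up after 10 minutes (example values).
                manager.waitUntilLeaseTableExists(10, 600);
            }
            return manager.listLeases();
        }
    }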
@@ -0,0 +1,93 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.leases.interfaces;

import java.util.Collection;
import java.util.Map;
import java.util.UUID;

import com.amazonaws.services.kinesis.leases.exceptions.DependencyException;
import com.amazonaws.services.kinesis.leases.exceptions.InvalidStateException;
import com.amazonaws.services.kinesis.leases.exceptions.ProvisionedThroughputException;
import com.amazonaws.services.kinesis.leases.impl.Lease;

/**
 * ILeaseRenewer objects are used by LeaseCoordinator to renew leases held by the LeaseCoordinator. Each
 * LeaseCoordinator instance corresponds to one worker, and uses exactly one ILeaseRenewer to manage lease renewal for
 * that worker.
 */
public interface ILeaseRenewer<T extends Lease> {

    /**
     * Bootstrap initial set of leases from the LeaseManager (e.g. upon process restart, pick up leases we own)
     * @throws DependencyException on unexpected DynamoDB failures
     * @throws InvalidStateException if lease table doesn't exist
     * @throws ProvisionedThroughputException if DynamoDB reads fail due to insufficient capacity
     */
    public void initialize() throws DependencyException, InvalidStateException, ProvisionedThroughputException;

    /**
     * Attempt to renew all currently held leases.
     *
     * @throws DependencyException on unexpected DynamoDB failures
     * @throws InvalidStateException if lease table does not exist
     */
    public void renewLeases() throws DependencyException, InvalidStateException;

    /**
     * @return currently held leases. Key is shardId, value is corresponding Lease object. A lease is currently held if
     *         we successfully renewed it on the last run of renewLeases(). Lease objects returned are deep copies -
     *         their lease counters will not tick.
     */
    public Map<String, T> getCurrentlyHeldLeases();

    /**
     * @param leaseKey key of the lease to retrieve
     *
     * @return a deep copy of a currently held lease, or null if we don't hold the lease
     */
    public T getCurrentlyHeldLease(String leaseKey);

    /**
     * Adds leases to this LeaseRenewer's set of currently held leases. Leases must have lastRenewalNanos set to the
     * last time the lease counter was incremented before being passed to this method.
     *
     * @param newLeases new leases.
     */
    public void addLeasesToRenew(Collection<T> newLeases);

    /**
     * Clears this LeaseRenewer's set of currently held leases.
     */
    public void clearCurrentlyHeldLeases();

    /**
     * Update application-specific fields in a currently held lease. Cannot be used to update internal fields such as
     * leaseCounter, leaseOwner, etc. Fails if we do not hold the lease, or if the concurrency token does not match
     * the concurrency token on the internal authoritative copy of the lease (ie, if we lost and re-acquired the lease).
     *
     * @param lease lease object containing updated data
     * @param concurrencyToken obtained by calling Lease.getConcurrencyToken for a currently held lease
     *
     * @return true if update succeeds, false otherwise
     *
     * @throws InvalidStateException if lease table does not exist
     * @throws ProvisionedThroughputException if DynamoDB update fails due to lack of capacity
     * @throws DependencyException if DynamoDB update fails in an unexpected way
     */
    boolean updateLease(T lease, UUID concurrencyToken)
        throws DependencyException, InvalidStateException, ProvisionedThroughputException;

}
@@ -0,0 +1,116 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.leases.interfaces;

import java.util.Collection;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.AttributeValueUpdate;
import com.amazonaws.services.dynamodbv2.model.ExpectedAttributeValue;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
import com.amazonaws.services.kinesis.leases.impl.Lease;

/**
 * Utility class that manages the mapping of Lease objects/operations to records in DynamoDB.
 *
 * @param <T> Lease subclass, possibly Lease itself
 */
public interface ILeaseSerializer<T extends Lease> {

    /**
     * Construct a DynamoDB record out of a Lease object
     *
     * @param lease lease object to serialize
     * @return an attribute value map representing the lease object
     */
    public Map<String, AttributeValue> toDynamoRecord(T lease);

    /**
     * Construct a Lease object out of a DynamoDB record.
     *
     * @param dynamoRecord attribute value map from DynamoDB
     * @return a deserialized lease object representing the attribute value map
     */
    public T fromDynamoRecord(Map<String, AttributeValue> dynamoRecord);

    /**
     * @param lease
     * @return the attribute value map representing a Lease's hash key given a Lease object.
     */
    public Map<String, AttributeValue> getDynamoHashKey(T lease);

    /**
     * Special getDynamoHashKey implementation used by ILeaseManager.getLease().
     *
     * @param leaseKey
     * @return the attribute value map representing a Lease's hash key given a string.
     */
    public Map<String, AttributeValue> getDynamoHashKey(String leaseKey);

    /**
     * @param lease
     * @return the attribute value map asserting that a lease counter is what we expect.
     */
    public Map<String, ExpectedAttributeValue> getDynamoLeaseCounterExpectation(T lease);

    /**
     * @param lease
     * @return the attribute value map asserting that the lease owner is what we expect.
     */
    public Map<String, ExpectedAttributeValue> getDynamoLeaseOwnerExpectation(T lease);

    /**
     * @return the attribute value map asserting that a lease does not exist.
     */
    public Map<String, ExpectedAttributeValue> getDynamoNonexistantExpectation();

    /**
     * @param lease
     * @return the attribute value map that increments a lease counter
     */
    public Map<String, AttributeValueUpdate> getDynamoLeaseCounterUpdate(T lease);

    /**
     * @param lease
     * @param newOwner
     * @return the attribute value map that takes a lease for a new owner
     */
    public Map<String, AttributeValueUpdate> getDynamoTakeLeaseUpdate(T lease, String newOwner);

    /**
     * @param lease
     * @return the attribute value map that voids a lease
     */
    public Map<String, AttributeValueUpdate> getDynamoEvictLeaseUpdate(T lease);

    /**
     * @param lease
     * @return the attribute value map that updates application-specific data for a lease and increments the lease
     *         counter
     */
    public Map<String, AttributeValueUpdate> getDynamoUpdateLeaseUpdate(T lease);

    /**
     * @return the key schema for creating a DynamoDB table to store leases
     */
    public Collection<KeySchemaElement> getKeySchema();

    /**
     * @return attribute definitions for creating a DynamoDB table to store leases
     */
    public Collection<AttributeDefinition> getAttributeDefinitions();
}
@@ -0,0 +1,49 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.leases.interfaces;

import java.util.Map;

import com.amazonaws.services.kinesis.leases.exceptions.DependencyException;
import com.amazonaws.services.kinesis.leases.exceptions.InvalidStateException;
import com.amazonaws.services.kinesis.leases.impl.Lease;

/**
 * ILeaseTaker is used by LeaseCoordinator to take new leases, or leases that other workers fail to renew. Each
 * LeaseCoordinator instance corresponds to one worker and uses exactly one ILeaseTaker to take leases for that worker.
 */
public interface ILeaseTaker<T extends Lease> {

    /**
     * Compute the set of leases available to be taken and attempt to take them. Lease taking rules are:
     *
     * 1) If a lease's counter hasn't changed in long enough, try to take it.
     * 2) If we see a lease we've never seen before, take it only if owner == null. If it's owned, odds are the owner is
     *    holding it. We can't tell until we see it more than once.
     * 3) For load balancing purposes, you may violate rules 1 and 2 for EXACTLY ONE lease per call of takeLeases().
     *
     * @return map of shardId to Lease object for leases we just successfully took.
     *
     * @throws DependencyException on unexpected DynamoDB failures
     * @throws InvalidStateException if lease table does not exist
     */
    public abstract Map<String, T> takeLeases() throws DependencyException, InvalidStateException;

    /**
     * @return workerIdentifier for this LeaseTaker
     */
    public abstract String getWorkerIdentifier();

}
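A small sketch of how a caller might drive one taker pass against this interface. The class name TakerLoopSketch is a hypothetical example, not library code; it only uses the ILeaseTaker methods declared above.

    import java.util.Map;
    import com.amazonaws.services.kinesis.leases.exceptions.DependencyException;
    import com.amazonaws.services.kinesis.leases.exceptions.InvalidStateException;
    import com.amazonaws.services.kinesis.leases.impl.Lease;
    import com.amazonaws.services.kinesis.leases.interfaces.ILeaseTaker;

    public class TakerLoopSketch {
        // Runs a single takeLeases() pass and reports how many leases were acquired.
        public static <T extends Lease> int takeOnce(ILeaseTaker<T> taker)
            throws DependencyException, InvalidStateException {
            Map<String, T> taken = taker.takeLeases();
            System.out.println(taker.getWorkerIdentifier() + " took " + taken.size() + " leases");
            return taken.size();
        }
    }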
@@ -0,0 +1,81 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.leases.util;

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.model.AttributeValue;

/**
 * Static utility functions used by our LeaseSerializers.
 */
public class DynamoUtils {

    public static AttributeValue createAttributeValue(Collection<String> collectionValue) {
        if (collectionValue == null || collectionValue.isEmpty()) {
            throw new IllegalArgumentException("Collection attributeValues cannot be null or empty.");
        }

        return new AttributeValue().withSS(collectionValue);
    }

    public static AttributeValue createAttributeValue(String stringValue) {
        if (stringValue == null || stringValue.isEmpty()) {
            throw new IllegalArgumentException("String attributeValues cannot be null or empty.");
        }

        return new AttributeValue().withS(stringValue);
    }

    public static AttributeValue createAttributeValue(Long longValue) {
        if (longValue == null) {
            throw new IllegalArgumentException("Number AttributeValues cannot be null.");
        }

        return new AttributeValue().withN(longValue.toString());
    }

    public static Long safeGetLong(Map<String, AttributeValue> dynamoRecord, String key) {
        AttributeValue av = dynamoRecord.get(key);
        if (av == null) {
            return null;
        } else {
            return new Long(av.getN());
        }
    }

    public static String safeGetString(Map<String, AttributeValue> dynamoRecord, String key) {
        AttributeValue av = dynamoRecord.get(key);
        if (av == null) {
            return null;
        } else {
            return av.getS();
        }
    }

    public static List<String> safeGetSS(Map<String, AttributeValue> dynamoRecord, String key) {
        AttributeValue av = dynamoRecord.get(key);

        if (av == null) {
            return new ArrayList<String>();
        } else {
            return av.getSS();
        }
    }

}
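A minimal round-trip sketch of the helpers above. The class name DynamoUtilsSketch and the attribute names are illustrative; the safe getters return null (or an empty list for string sets) when a key is absent, which is the behavior shown in the comments.

    import java.util.HashMap;
    import java.util.Map;
    import com.amazonaws.services.dynamodbv2.model.AttributeValue;
    import com.amazonaws.services.kinesis.leases.util.DynamoUtils;

    public class DynamoUtilsSketch {
        public static void main(String[] args) {
            Map<String, AttributeValue> record = new HashMap<String, AttributeValue>();
            record.put("leaseOwner", DynamoUtils.createAttributeValue("worker-1"));
            record.put("leaseCounter", DynamoUtils.createAttributeValue(Long.valueOf(7)));

            System.out.println(DynamoUtils.safeGetString(record, "leaseOwner"));  // worker-1
            System.out.println(DynamoUtils.safeGetLong(record, "leaseCounter"));  // 7
            System.out.println(DynamoUtils.safeGetLong(record, "missing"));       // null
        }
    }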
@@ -0,0 +1,29 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.metrics.impl;

/**
 * This is a MetricsScope with a KeyType of String. It implements getKey by using the metric name itself as the key.
 */
public class AccumulateByNameMetricsScope extends AccumulatingMetricsScope<String> {

    @Override
    protected String getKey(String name) {
        return name;
    }

}
@@ -0,0 +1,95 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.metrics.impl;

import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.cloudwatch.model.MetricDatum;
import com.amazonaws.services.cloudwatch.model.StandardUnit;
import com.amazonaws.services.cloudwatch.model.StatisticSet;

/**
 * An IMetricsScope that accumulates data from multiple calls to addData with
 * the same name parameter. It tracks min, max, sample count, and sum for each
 * named metric.
 *
 * @param <KeyType> can be a class or object defined by the user that stores information about a MetricDatum needed
 *        by the user.
 *
 *        The following is an example of what a KeyType class might look like:
 *        class SampleKeyType {
 *            private long timeKeyCreated;
 *            private MetricDatum datum;
 *            public SampleKeyType(long timeKeyCreated, MetricDatum datum){
 *                this.timeKeyCreated = timeKeyCreated;
 *                this.datum = datum;
 *            }
 *        }
 */
public abstract class AccumulatingMetricsScope<KeyType> extends EndingMetricsScope {

    protected Map<KeyType, MetricDatum> data = new HashMap<KeyType, MetricDatum>();

    @Override
    public void addData(String name, double value, StandardUnit unit) {
        addData(getKey(name), name, value, unit);
    }

    /**
     * @param name
     *            key name for a metric
     * @return the name of the key
     */
    protected abstract KeyType getKey(String name);

    /**
     * Adds data points to an IMetricsScope. Multiple calls to IMetricsScopes that have the
     * same key will have their data accumulated.
     *
     * @param key
     *            data point key
     * @param name
     *            data point name
     * @param value
     *            data point value
     * @param unit
     *            data point unit
     */
    public void addData(KeyType key, String name, double value, StandardUnit unit) {
        super.addData(name, value, unit);

        MetricDatum datum = data.get(key);
        if (datum == null) {
            data.put(key,
                    new MetricDatum().withMetricName(name)
                            .withUnit(unit)
                            .withStatisticValues(new StatisticSet().withMaximum(value)
                                    .withMinimum(value)
                                    .withSampleCount(1.0)
                                    .withSum(value)));
        } else {
            if (!datum.getUnit().equals(unit.name())) {
                throw new IllegalArgumentException("Cannot add to existing metric with different unit");
            }

            StatisticSet statistics = datum.getStatisticValues();
            statistics.setMaximum(Math.max(value, statistics.getMaximum()));
            statistics.setMinimum(Math.min(value, statistics.getMinimum()));
            statistics.setSampleCount(statistics.getSampleCount() + 1);
            statistics.setSum(statistics.getSum() + value);
        }
    }
}
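A short sketch of the accumulation behavior described above, assuming the AccumulateByNameMetricsScope from this commit (which keys datums by metric name). The subclass AccumulationSketch exists only to read the protected data map and is not part of the library.

    import com.amazonaws.services.cloudwatch.model.StandardUnit;
    import com.amazonaws.services.cloudwatch.model.StatisticSet;
    import com.amazonaws.services.kinesis.metrics.impl.AccumulateByNameMetricsScope;

    public class AccumulationSketch extends AccumulateByNameMetricsScope {
        public static void main(String[] args) {
            AccumulationSketch scope = new AccumulationSketch();
            // Two data points with the same name fold into one StatisticSet.
            scope.addData("RecordsProcessed", 10, StandardUnit.Count);
            scope.addData("RecordsProcessed", 30, StandardUnit.Count);

            StatisticSet stats = scope.data.get("RecordsProcessed").getStatisticValues();
            // Expected output: min=10.0 max=30.0 count=2.0 sum=40.0
            System.out.println("min=" + stats.getMinimum() + " max=" + stats.getMaximum()
                    + " count=" + stats.getSampleCount() + " sum=" + stats.getSum());
        }
    }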
@@ -0,0 +1,59 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.metrics.impl;

import java.util.List;
import java.util.Objects;

import com.amazonaws.services.cloudwatch.model.Dimension;
import com.amazonaws.services.cloudwatch.model.MetricDatum;

/*
 * A representation of the key of a MetricDatum. This class is useful for checking whether two MetricDatums share
 * the same key (metric name plus dimensions). It is used in MetricAccumulatingQueue, where we aggregate metrics
 * across multiple MetricsScopes.
 */
public class CWMetricKey {

    private List<Dimension> dimensions;
    private String metricName;

    /**
     * @param datum data point
     */
    public CWMetricKey(MetricDatum datum) {
        this.dimensions = datum.getDimensions();
        this.metricName = datum.getMetricName();
    }

    @Override
    public int hashCode() {
        return Objects.hash(dimensions, metricName);
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;
        if (obj == null)
            return false;
        if (getClass() != obj.getClass())
            return false;
        CWMetricKey other = (CWMetricKey) obj;
        return Objects.equals(other.dimensions, dimensions) && Objects.equals(other.metricName, metricName);
    }

}
@@ -0,0 +1,100 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.metrics.impl;

import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClient;
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsFactory;
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsScope;

/**
 * An IMetricsFactory that creates IMetricsScopes that output themselves via CloudWatch. Batches IMetricsScopes together
 * to reduce API calls.
 */
public class CWMetricsFactory implements IMetricsFactory {

    /*
     * If the CWPublisherRunnable accumulates more than FLUSH_SIZE distinct metrics, it will call CloudWatch
     * immediately instead of waiting for the next scheduled call.
     */
    private static final int FLUSH_SIZE = 200;
    private final CWPublisherRunnable<CWMetricKey> runnable;
    private final Thread publicationThread;

    /**
     * Constructor.
     *
     * @param credentialsProvider client credentials for CloudWatch
     * @param namespace the namespace under which the metrics will appear in the CloudWatch console
     * @param bufferTimeMillis time to buffer metrics before publishing to CloudWatch
     * @param maxQueueSize maximum number of metrics that we can have in a queue
     */
    public CWMetricsFactory(AWSCredentialsProvider credentialsProvider,
            String namespace,
            long bufferTimeMillis,
            int maxQueueSize) {
        this(new AmazonCloudWatchClient(credentialsProvider), namespace, bufferTimeMillis, maxQueueSize);
    }

    /**
     * Constructor.
     *
     * @param credentialsProvider client credentials for CloudWatch
     * @param clientConfig Configuration to use with the AmazonCloudWatchClient
     * @param namespace the namespace under which the metrics will appear in the CloudWatch console
     * @param bufferTimeMillis time to buffer metrics before publishing to CloudWatch
     * @param maxQueueSize maximum number of metrics that we can have in a queue
     */
    public CWMetricsFactory(AWSCredentialsProvider credentialsProvider,
            ClientConfiguration clientConfig,
            String namespace,
            long bufferTimeMillis,
            int maxQueueSize) {
        this(new AmazonCloudWatchClient(credentialsProvider, clientConfig), namespace, bufferTimeMillis, maxQueueSize);
    }

    /**
     * Constructor.
     *
     * @param cloudWatchClient Client used to make CloudWatch requests
     * @param namespace the namespace under which the metrics will appear in the CloudWatch console
     * @param bufferTimeMillis time to buffer metrics before publishing to CloudWatch
     * @param maxQueueSize maximum number of metrics that we can have in a queue
     */
    public CWMetricsFactory(AmazonCloudWatchClient cloudWatchClient,
            String namespace,
            long bufferTimeMillis,
            int maxQueueSize) {
        DefaultCWMetricsPublisher metricPublisher = new DefaultCWMetricsPublisher(cloudWatchClient, namespace);

        this.runnable =
                new CWPublisherRunnable<CWMetricKey>(metricPublisher, bufferTimeMillis, maxQueueSize, FLUSH_SIZE);

        this.publicationThread = new Thread(runnable);
        publicationThread.setName("cw-metrics-publisher");
        publicationThread.start();
    }

    @Override
    public IMetricsScope createMetrics() {
        return new CWMetricsScope(runnable);
    }

    public void shutdown() {
        runnable.shutdown();
    }

}
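A usage sketch for the factory above. The class name CWMetricsSketch, the namespace "MyKinesisApp", and the buffer/queue sizes are example values, not defaults from the library; the credentials provider is the standard AWS SDK default chain.

    import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
    import com.amazonaws.services.cloudwatch.model.StandardUnit;
    import com.amazonaws.services.kinesis.metrics.impl.CWMetricsFactory;
    import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsScope;

    public class CWMetricsSketch {
        public static void main(String[] args) {
            // Buffer for up to 10 seconds, queue at most 10000 datums.
            CWMetricsFactory factory = new CWMetricsFactory(
                    new DefaultAWSCredentialsProviderChain(), "MyKinesisApp", 10000L, 10000);

            IMetricsScope scope = factory.createMetrics();
            scope.addDimension("Operation", "ProcessRecords");
            scope.addData("RecordsProcessed", 42, StandardUnit.Count);
            scope.end();          // hands the accumulated datums to the background publisher thread

            factory.shutdown();   // asks the publisher to flush whatever is still queued
        }
    }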
@@ -0,0 +1,69 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.metrics.impl;

import java.util.ArrayList;
import java.util.List;

import com.amazonaws.services.cloudwatch.model.MetricDatum;
import com.amazonaws.services.cloudwatch.model.StandardUnit;
import com.amazonaws.services.kinesis.metrics.impl.AccumulateByNameMetricsScope;
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsScope;

public class CWMetricsScope extends AccumulateByNameMetricsScope implements IMetricsScope {

    private CWPublisherRunnable<CWMetricKey> publisher;

    /**
     * Each CWMetricsScope takes a publisher which contains the logic of when to publish metrics.
     *
     * @param publisher publishing logic
     */
    public CWMetricsScope(CWPublisherRunnable<CWMetricKey> publisher) {
        this.publisher = publisher;
    }

    @Override
    public void addData(String name, double value, StandardUnit unit) {
        super.addData(name, value, unit);
    }

    @Override
    public void addDimension(String name, String value) {
        super.addDimension(name, value);
    }

    /*
     * Once we call this method, all MetricDatums added to the scope will be enqueued to the publisher runnable.
     * We enqueue MetricDatumWithKey because the publisher will aggregate similar metrics (i.e. MetricDatum with the
     * same metricName) in the background thread. Hence aggregation using MetricDatumWithKey will be especially useful
     * when aggregating across multiple MetricScopes.
     */
    @Override
    public void end() {
        super.end();

        List<MetricDatumWithKey<CWMetricKey>> dataWithKeys = new ArrayList<MetricDatumWithKey<CWMetricKey>>();

        for (MetricDatum datum : data.values()) {
            datum.setDimensions(getDimensions());
            dataWithKeys.add(new MetricDatumWithKey<CWMetricKey>(new CWMetricKey(datum), datum));
        }

        publisher.enqueue(dataWithKeys);
    }

}
@@ -0,0 +1,187 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.metrics.impl;

import java.util.Collection;
import java.util.List;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

/**
 * A CWPublisherRunnable contains the logic of when to publish metrics.
 *
 * @param <KeyType>
 */
public class CWPublisherRunnable<KeyType> implements Runnable {

    private static final Log LOG = LogFactory.getLog(CWPublisherRunnable.class);

    private final ICWMetricsPublisher<KeyType> metricsPublisher;
    private final MetricAccumulatingQueue<KeyType> queue;
    private final long bufferTimeMillis;

    /*
     * Number of metrics that will cause us to flush.
     */
    private int flushSize;
    private boolean shuttingDown = false;
    private boolean shutdown = false;
    private long lastFlushTime = Long.MAX_VALUE;

    /**
     * Constructor.
     *
     * @param metricsPublisher publishes metrics
     * @param bufferTimeMillis time between publishing metrics
     * @param maxQueueSize max size of metrics to publish
     * @param batchSize size of batch that can be published at a time
     */
    public CWPublisherRunnable(ICWMetricsPublisher<KeyType> metricsPublisher,
            long bufferTimeMillis,
            int maxQueueSize,
            int batchSize) {
        if (LOG.isDebugEnabled()) {
            LOG.debug(String.format("Constructing CWPublisherRunnable with maxBufferTimeMillis %d maxQueueSize %d batchSize %d",
                    bufferTimeMillis,
                    maxQueueSize,
                    batchSize));
        }

        this.metricsPublisher = metricsPublisher;
        this.bufferTimeMillis = bufferTimeMillis;
        this.queue = new MetricAccumulatingQueue<KeyType>(maxQueueSize);
        this.flushSize = batchSize;
    }

    @Override
    public void run() {
        while (!shutdown) {
            try {
                runOnce();
            } catch (Throwable t) {
                LOG.error("Encountered throwable in CWPublisherRunnable", t);
            }
        }

        LOG.info("CWPublication thread finished.");
    }

    /**
     * Exposed for testing purposes.
     */
    public void runOnce() {
        List<MetricDatumWithKey<KeyType>> dataToPublish = null;
        synchronized (queue) {
            /*
             * We should send if:
             *
             * it's been maxBufferTimeMillis since our last send
             * or if the queue contains > batchSize elements
             * or if we're shutting down
             */
            long timeSinceFlush = Math.max(0, getTime() - lastFlushTime);
            if (timeSinceFlush >= bufferTimeMillis || queue.size() >= flushSize || shuttingDown) {
                dataToPublish = queue.drain(flushSize);
                if (LOG.isDebugEnabled()) {
                    LOG.debug(String.format("Drained %d datums from queue", dataToPublish.size()));
                }

                if (shuttingDown) {
                    if (LOG.isDebugEnabled()) {
                        LOG.debug(String.format("Shutting down with %d datums left on the queue", queue.size()));
                    }

                    // If we're shutting down, we successfully shut down only when the queue is empty.
                    shutdown = queue.isEmpty();
                }
            } else {
                long waitTime = bufferTimeMillis - timeSinceFlush;
                if (LOG.isDebugEnabled()) {
                    LOG.debug(String.format("Waiting up to %dms for %d more datums to appear.", waitTime, flushSize
                            - queue.size()));
                }

                try {
                    // Wait for enqueues for up to maxBufferTimeMillis.
                    queue.wait(waitTime);
                } catch (InterruptedException e) {
                    // Ignore: the loop re-checks state on the next pass.
                }
            }
        }

        if (dataToPublish != null) {
            try {
                metricsPublisher.publishMetrics(dataToPublish);
            } catch (Throwable t) {
                LOG.error("Caught exception thrown by metrics Publisher in CWPublisherRunnable", t);
            }
            lastFlushTime = getTime();
        }
    }

    /**
     * Overridable for testing purposes.
     */
    long getTime() {
        return System.currentTimeMillis();
    }

    public void shutdown() {
        LOG.info("Shutting down CWPublication thread.");
        synchronized (queue) {
            shuttingDown = true;
            queue.notify();
        }
    }

    public boolean isShutdown() {
        return shutdown;
    }

    /**
     * Enqueues metric data for publication.
     *
     * @param data collection of MetricDatum to enqueue
     */
    public void enqueue(Collection<MetricDatumWithKey<KeyType>> data) {
        synchronized (queue) {
            if (shuttingDown) {
                LOG.warn(String.format("Dropping metrics %s because CWPublisherRunnable is shutting down.", data));
                return;
            }

            if (LOG.isDebugEnabled()) {
                LOG.debug(String.format("Enqueueing %d datums for publication", data.size()));
            }

            for (MetricDatumWithKey<KeyType> datumWithKey : data) {
                if (!queue.offer(datumWithKey.key, datumWithKey.datum)) {
                    LOG.warn("Metrics queue full - dropping metric " + datumWithKey.datum);
                }
            }

            // If this is the first enqueue, start buffering from now.
            if (lastFlushTime == Long.MAX_VALUE) {
                lastFlushTime = getTime();
            }

            queue.notify();
        }
    }

}
@@ -0,0 +1,71 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.metrics.impl;

import java.util.ArrayList;
import java.util.List;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import com.amazonaws.AmazonClientException;
import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.model.MetricDatum;
import com.amazonaws.services.cloudwatch.model.PutMetricDataRequest;

/**
 * Default implementation for publishing metrics to CloudWatch.
 */
public class DefaultCWMetricsPublisher implements ICWMetricsPublisher<CWMetricKey> {

    private static final Log LOG = LogFactory.getLog(CWPublisherRunnable.class);

    // CloudWatch API has a limit of 20 MetricDatums per request
    private static final int BATCH_SIZE = 20;

    private final String namespace;
    private final AmazonCloudWatch cloudWatchClient;

    public DefaultCWMetricsPublisher(AmazonCloudWatch cloudWatchClient, String namespace) {
        this.cloudWatchClient = cloudWatchClient;
        this.namespace = namespace;
    }

    @Override
    public void publishMetrics(List<MetricDatumWithKey<CWMetricKey>> dataToPublish) {
        for (int startIndex = 0; startIndex < dataToPublish.size(); startIndex += BATCH_SIZE) {
            int endIndex = Math.min(dataToPublish.size(), startIndex + BATCH_SIZE);

            PutMetricDataRequest request = new PutMetricDataRequest();
            request.setNamespace(namespace);

            List<MetricDatum> metricData = new ArrayList<MetricDatum>();
            for (int i = startIndex; i < endIndex; i++) {
                metricData.add(dataToPublish.get(i).datum);
            }

            request.setMetricData(metricData);

            try {
                cloudWatchClient.putMetricData(request);

                LOG.info(String.format("Successfully published %d datums.", endIndex - startIndex));
            } catch (AmazonClientException e) {
                LOG.warn(String.format("Could not publish %d datums to CloudWatch", endIndex - startIndex), e);
            }
        }
    }
}
@@ -0,0 +1,53 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.metrics.impl;

import java.util.HashSet;
import java.util.Set;

import com.amazonaws.services.cloudwatch.model.Dimension;
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsScope;

/**
 * DimensionTrackingMetricsScope provides the dimension handling for metrics scopes.
 * Dimensions allow users to view their metrics filtered by the parameters they specify.
 *
 * The following examples show how to add dimensions in order to view all metrics
 * pertaining to a particular stream or to a specific date:
 *
 * myScope.addDimension("StreamName", "myStreamName");
 * myScope.addDimension("Date", "Dec012013");
 */
public abstract class DimensionTrackingMetricsScope implements IMetricsScope {

    private Set<Dimension> dimensions = new HashSet<Dimension>();

    @Override
    public void addDimension(String name, String value) {
        dimensions.add(new Dimension().withName(name).withValue(value));
    }

    /**
     * @return a set of dimensions for an IMetricsScope
     */
    protected Set<Dimension> getDimensions() {
        return dimensions;
    }

}
@@ -0,0 +1,45 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.metrics.impl;

import com.amazonaws.services.cloudwatch.model.StandardUnit;

public abstract class EndingMetricsScope extends DimensionTrackingMetricsScope {

    private boolean ended = false;

    @Override
    public void addData(String name, double value, StandardUnit unit) {
        if (ended) {
            throw new IllegalArgumentException("Cannot call addData after calling IMetricsScope.end()");
        }
    }

    @Override
    public void addDimension(String name, String value) {
        super.addDimension(name, value);
        if (ended) {
            throw new IllegalArgumentException("Cannot call addDimension after calling IMetricsScope.end()");
        }
    }

    @Override
    public void end() {
        if (ended) {
            throw new IllegalArgumentException("Cannot call IMetricsScope.end() more than once on the same instance");
        }
        ended = true;
    }
}
@@ -0,0 +1,36 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.metrics.impl;

import java.util.List;

/**
 * An ICWMetricsPublisher is a publisher that contains the logic to publish metrics.
 *
 * @param <KeyType> is a class that stores information about a MetricDatum. This is useful when wanting
 *        to compare MetricDatums or aggregate similar MetricDatums.
 */
public interface ICWMetricsPublisher<KeyType> {

    /**
     * Given a list of MetricDatumWithKey, this method extracts the MetricDatum from each
     * MetricDatumWithKey and publishes those datums.
     *
     * @param dataToPublish a list containing all the MetricDatums to publish
     */
    public void publishMetrics(List<MetricDatumWithKey<KeyType>> dataToPublish);
}
@@ -0,0 +1,77 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.metrics.impl;

import com.amazonaws.services.cloudwatch.model.StandardUnit;
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsFactory;
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsScope;

public abstract class InterceptingMetricsFactory implements IMetricsFactory {

    private final IMetricsFactory other;

    public InterceptingMetricsFactory(IMetricsFactory other) {
        this.other = other;
    }

    @Override
    public IMetricsScope createMetrics() {
        IMetricsScope otherScope = other.createMetrics();
        interceptCreateMetrics(otherScope);
        return new InterceptingMetricsScope(otherScope);
    }

    protected void interceptCreateMetrics(IMetricsScope scope) {
        // Default implementation does nothing;
    }

    protected void interceptAddData(String name, double value, StandardUnit unit, IMetricsScope scope) {
        scope.addData(name, value, unit);
    }

    protected void interceptAddDimension(String name, String value, IMetricsScope scope) {
        scope.addDimension(name, value);
    }

    protected void interceptEnd(IMetricsScope scope) {
        scope.end();
    }

    private class InterceptingMetricsScope implements IMetricsScope {

        private IMetricsScope other;

        public InterceptingMetricsScope(IMetricsScope other) {
            this.other = other;
        }

        @Override
        public void addData(String name, double value, StandardUnit unit) {
            interceptAddData(name, value, unit, other);
        }

        @Override
        public void addDimension(String name, String value) {
            interceptAddDimension(name, value, other);
        }

        @Override
        public void end() {
            interceptEnd(other);
        }

    }

}
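A sketch of the extension point above: a subclass that decorates another factory so every scope it creates carries a worker-identifier dimension. The class WorkerTaggingMetricsFactory and the dimension name are illustrative assumptions, not part of this commit.

    import com.amazonaws.services.kinesis.metrics.impl.InterceptingMetricsFactory;
    import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsFactory;
    import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsScope;

    public class WorkerTaggingMetricsFactory extends InterceptingMetricsFactory {

        private final String workerId;

        public WorkerTaggingMetricsFactory(IMetricsFactory delegate, String workerId) {
            super(delegate);
            this.workerId = workerId;
        }

        @Override
        protected void interceptCreateMetrics(IMetricsScope scope) {
            // Tag every new scope before it is handed back to the caller.
            scope.addDimension("WorkerIdentifier", workerId);
        }
    }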
@@ -0,0 +1,29 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.metrics.impl;

import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsFactory;

/**
 * An IMetricsFactory that creates IMetricsScopes that output themselves via log4j.
 */
public class LogMetricsFactory implements IMetricsFactory {

    @Override
    public LogMetricsScope createMetrics() {
        return new LogMetricsScope();
    }

}
@@ -0,0 +1,58 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.metrics.impl;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import com.amazonaws.services.cloudwatch.model.Dimension;
import com.amazonaws.services.cloudwatch.model.MetricDatum;
import com.amazonaws.services.cloudwatch.model.StatisticSet;

/**
 * An AccumulatingMetricsScope that outputs via log4j.
 */
public class LogMetricsScope extends AccumulateByNameMetricsScope {

    private static final Log LOG = LogFactory.getLog(LogMetricsScope.class);

    @Override
    public void end() {
        StringBuilder output = new StringBuilder();
        output.append("Metrics:\n");

        output.append("Dimensions: ");
        boolean needsComma = false;
        for (Dimension dimension : getDimensions()) {
            output.append(String.format("%s[%s: %s]", needsComma ? ", " : "", dimension.getName(), dimension.getValue()));
            needsComma = true;
        }
        output.append("\n");

        for (MetricDatum datum : data.values()) {
            StatisticSet statistics = datum.getStatisticValues();
            output.append(String.format("Name=%25s\tMin=%.2f\tMax=%.2f\tCount=%.2f\tSum=%.2f\tAvg=%.2f\tUnit=%s\n",
                    datum.getMetricName(),
                    statistics.getMinimum(),
                    statistics.getMaximum(),
                    statistics.getSampleCount(),
                    statistics.getSum(),
                    statistics.getSum() / statistics.getSampleCount(),
                    datum.getUnit()));
        }

        LOG.info(output.toString());
    }
}
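A short usage sketch of the two classes above: create a scope from the factory, record data, and end() writes the dimensions plus one aggregated statistics line per metric name through commons-logging. The metric name and values are invented for illustration.

    IMetricsScope scope = new LogMetricsFactory().createMetrics();
    scope.addDimension("Operation", "ProcessRecords");           // applies to every datum in this scope
    scope.addData("RecordsProcessed", 25, StandardUnit.Count);   // first sample for this name
    scope.addData("RecordsProcessed", 17, StandardUnit.Count);   // accumulated into the same datum by name
    scope.end();                                                 // logs "Metrics:", the dimensions, and the statistics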
@@ -0,0 +1,115 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.metrics.impl;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import com.amazonaws.services.cloudwatch.model.MetricDatum;
import com.amazonaws.services.cloudwatch.model.StatisticSet;

/**
 * Helper class for accumulating MetricDatums with the same name and dimensions.
 *
 * @param <KeyType> a user-defined class or object that stores whatever information about a MetricDatum the
 *        user needs.
 *
 *        The following is an example of what a KeyType class might look like:
 *        class SampleKeyType {
 *            private long timeKeyCreated;
 *            private MetricDatum datum;
 *            public SampleKeyType(long timeKeyCreated, MetricDatum datum) {
 *                this.timeKeyCreated = timeKeyCreated;
 *                this.datum = datum;
 *            }
 *        }
 */
public class MetricAccumulatingQueue<KeyType> {

    // Queue is for first-in, first-out behavior.
    private BlockingQueue<MetricDatumWithKey<KeyType>> queue;
    // Map is for constant-time lookup by key.
    private Map<KeyType, MetricDatum> map;

    public MetricAccumulatingQueue(int maxQueueSize) {
        queue = new LinkedBlockingQueue<MetricDatumWithKey<KeyType>>(maxQueueSize);
        map = new HashMap<KeyType, MetricDatum>();
    }

    /**
     * @param maxItems maximum number of items to remove from the queue.
     * @return a list of MetricDatums that are no longer contained within the queue or map.
     */
    public synchronized List<MetricDatumWithKey<KeyType>> drain(int maxItems) {
        List<MetricDatumWithKey<KeyType>> drainedItems = new ArrayList<MetricDatumWithKey<KeyType>>(maxItems);

        queue.drainTo(drainedItems, maxItems);

        for (MetricDatumWithKey<KeyType> datumWithKey : drainedItems) {
            map.remove(datumWithKey.key);
        }

        return drainedItems;
    }

    public synchronized boolean isEmpty() {
        return queue.isEmpty();
    }

    public synchronized int size() {
        return queue.size();
    }

    /**
     * We use both a queue and a map here: the queue keeps the metrics in FIFO order, while the map provides
     * constant-time lookup of the MetricDatum to accumulate into.
     *
     * @param key metric key to be inserted into the queue
     * @param datum metric to be inserted into the queue
     * @return true if the datum was inserted into the queue or accumulated into an existing datum
     */
    public synchronized boolean offer(KeyType key, MetricDatum datum) {
        MetricDatum old = map.get(key);
        if (old == null) {
            boolean offered = queue.offer(new MetricDatumWithKey<KeyType>(key, datum));
            if (offered) {
                map.put(key, datum);
            }

            return offered;
        } else {
            accumulate(old, datum);
            return true;
        }
    }

    private void accumulate(MetricDatum oldDatum, MetricDatum newDatum) {
        if (!oldDatum.getUnit().equals(newDatum.getUnit())) {
            throw new IllegalArgumentException("Unit mismatch for datum named " + oldDatum.getMetricName());
        }

        StatisticSet oldStats = oldDatum.getStatisticValues();
        StatisticSet newStats = newDatum.getStatisticValues();

        oldStats.setSampleCount(oldStats.getSampleCount() + newStats.getSampleCount());
        oldStats.setMaximum(Math.max(oldStats.getMaximum(), newStats.getMaximum()));
        oldStats.setMinimum(Math.min(oldStats.getMinimum(), newStats.getMinimum()));
        oldStats.setSum(oldStats.getSum() + newStats.getSum());
    }
}
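To make the offer/accumulate/drain flow concrete, here is a small illustrative snippet; the String key and the metric values are invented for the example, and note that datums offered under the same key must share a unit or accumulate() throws.

    MetricAccumulatingQueue<String> queue = new MetricAccumulatingQueue<String>(1000);

    MetricDatum first = new MetricDatum().withMetricName("Success").withUnit(StandardUnit.Count)
            .withStatisticValues(new StatisticSet().withMinimum(1.0).withMaximum(1.0).withSampleCount(1.0).withSum(1.0));
    MetricDatum second = new MetricDatum().withMetricName("Success").withUnit(StandardUnit.Count)
            .withStatisticValues(new StatisticSet().withMinimum(0.0).withMaximum(0.0).withSampleCount(1.0).withSum(0.0));

    queue.offer("Success", first);    // new key: the datum is enqueued and indexed in the map
    queue.offer("Success", second);   // same key: second's statistics are folded into first
    List<MetricDatumWithKey<String>> batch = queue.drain(10);   // FIFO order, at most 10 items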
@@ -0,0 +1,68 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.metrics.impl;

import java.util.Objects;

import com.amazonaws.services.cloudwatch.model.MetricDatum;

/**
 * This class stores a MetricDatum along with a KeyType that holds additional information about that
 * particular MetricDatum.
 *
 * @param <KeyType> a class that stores information about a MetricDatum. This is useful for comparing
 *        MetricDatums, aggregating similar MetricDatums, or storing information about a datum that may be
 *        relevant to the user (e.g. MetricName, CustomerId, TimeStamp).
 *
 *        Example:
 *
 *        Let SampleMetricKey be a KeyType that takes in the time at which the datum was created.
 *
 *        MetricDatumWithKey<SampleMetricKey> sampleDatumWithKey =
 *                new MetricDatumWithKey<SampleMetricKey>(new SampleMetricKey(System.currentTimeMillis()), datum);
 */
public class MetricDatumWithKey<KeyType> {
    public KeyType key;
    public MetricDatum datum;

    /**
     * @param key an object that stores relevant information about a MetricDatum (e.g. MetricName, accountId,
     *        TimeStamp)
     * @param datum data point
     */
    public MetricDatumWithKey(KeyType key, MetricDatum datum) {
        this.key = key;
        this.datum = datum;
    }

    @Override
    public int hashCode() {
        return Objects.hash(key, datum);
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (obj == null) {
            return false;
        }
        if (getClass() != obj.getClass()) {
            return false;
        }
        MetricDatumWithKey<?> other = (MetricDatumWithKey<?>) obj;
        return Objects.equals(other.key, key) && Objects.equals(other.datum, datum);
    }
}
@@ -0,0 +1,116 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.metrics.impl;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import com.amazonaws.services.cloudwatch.model.StandardUnit;
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsFactory;
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsScope;

/**
 * MetricsHelper assists with common metrics operations, most notably the storage of IMetricsScope objects in a
 * ThreadLocal so we don't have to pass one throughout the whole call stack.
 */
public class MetricsHelper {

    private static final Log LOG = LogFactory.getLog(MetricsHelper.class);
    private static final NullMetricsScope NULL_METRICS_SCOPE = new NullMetricsScope();

    private static final ThreadLocal<IMetricsScope> currentScope = new ThreadLocal<IMetricsScope>();
    private static final ThreadLocal<Integer> referenceCount = new ThreadLocal<Integer>();

    /*
     * Constants used to publish metrics.
     */
    public static final String OPERATION_DIMENSION_NAME = "Operation";
    public static final String TIME = "Time";
    public static final String SUCCESS = "Success";
    private static final String SEP = ".";

    public static IMetricsScope startScope(IMetricsFactory factory) {
        return startScope(factory, null);
    }

    public static IMetricsScope startScope(IMetricsFactory factory, String operation) {
        IMetricsScope result = currentScope.get();
        if (result == null) {
            result = factory.createMetrics();
            if (operation != null) {
                result.addDimension(OPERATION_DIMENSION_NAME, operation);
            }
            currentScope.set(result);
            referenceCount.set(1);
        } else {
            referenceCount.set(referenceCount.get() + 1);
        }

        return result;
    }

    public static IMetricsScope getMetricsScope() {
        IMetricsScope result = currentScope.get();
        if (result == null) {
            LOG.warn(String.format("No metrics scope set in thread %s, getMetricsScope returning NullMetricsScope.",
                    Thread.currentThread().getName()));

            return NULL_METRICS_SCOPE;
        } else {
            return result;
        }
    }

    public static void addSuccessAndLatency(long startTimeMillis, boolean success) {
        addSuccessAndLatency(null, startTimeMillis, success);
    }

    public static void addSuccessAndLatency(String prefix, long startTimeMillis, boolean success) {
        addSuccessAndLatencyPerShard(null, prefix, startTimeMillis, success);
    }

    public static void addSuccessAndLatencyPerShard(
            String shardId,
            String prefix,
            long startTimeMillis,
            boolean success) {
        IMetricsScope scope = getMetricsScope();

        String realPrefix = prefix == null ? "" : prefix + SEP;

        if (shardId != null) {
            scope.addDimension("ShardId", shardId);
        }

        scope.addData(realPrefix + MetricsHelper.SUCCESS, success ? 1 : 0, StandardUnit.Count);
        scope.addData(realPrefix + MetricsHelper.TIME,
                System.currentTimeMillis() - startTimeMillis,
                StandardUnit.Milliseconds);
    }

    public static void endScope() {
        IMetricsScope scope = getMetricsScope();
        if (scope != null) {
            int refCount = referenceCount.get() - 1;
            // Store the decremented count back in the ThreadLocal; decrementing only a local copy would leave
            // the stored count unchanged, so the scope would never end for nested callers.
            referenceCount.set(refCount);

            if (refCount == 0) {
                scope.end();
                currentScope.remove();
            }
        }
    }
}
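The intended call pattern, sketched below: the outermost caller on a thread starts and ends the scope, nested startScope calls on the same thread only bump the reference count, and helpers such as addSuccessAndLatency write into whatever scope is current. The factory choice and operation name are arbitrary picks for the example.

    IMetricsFactory factory = new LogMetricsFactory();
    long startTime = System.currentTimeMillis();
    MetricsHelper.startScope(factory, "ProcessTask");
    boolean success = false;
    try {
        // ... do the work; code deeper in the call stack can use MetricsHelper.getMetricsScope() ...
        success = true;
    } finally {
        MetricsHelper.addSuccessAndLatency(startTime, success);
        MetricsHelper.endScope();   // flushes the scope once the reference count drops to zero
    }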
@@ -0,0 +1,29 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.metrics.impl;

import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsFactory;
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsScope;

public class NullMetricsFactory implements IMetricsFactory {

    private static final NullMetricsScope SCOPE = new NullMetricsScope();

    @Override
    public IMetricsScope createMetrics() {
        return SCOPE;
    }
}
@@ -0,0 +1,37 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.metrics.impl;

import com.amazonaws.services.cloudwatch.model.StandardUnit;
import com.amazonaws.services.kinesis.metrics.interfaces.IMetricsScope;

public class NullMetricsScope implements IMetricsScope {

    @Override
    public void addData(String name, double value, StandardUnit unit) {
    }

    @Override
    public void addDimension(String name, String value) {
    }

    @Override
    public void end() {
    }
}
@@ -0,0 +1,25 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.metrics.interfaces;

/**
 * Factory for IMetricsScope objects.
 */
public interface IMetricsFactory {

    /**
     * @return a new IMetricsScope object of the type constructed by this factory.
     */
    public IMetricsScope createMetrics();
}
@@ -0,0 +1,47 @@
/*
 * Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 * http://aws.amazon.com/asl/
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.metrics.interfaces;

import com.amazonaws.services.cloudwatch.model.StandardUnit;

/**
 * An IMetricsScope represents a set of metric data that share a set of dimensions. IMetricsScopes know how to output
 * themselves (perhaps to disk, perhaps over service calls, etc.).
 */
public interface IMetricsScope {

    /**
     * Adds a data point to this IMetricsScope. Multiple calls against the same IMetricsScope with the same name
     * parameter will result in accumulation.
     *
     * @param name data point name
     * @param value data point value
     * @param unit unit of data point
     */
    public void addData(String name, double value, StandardUnit unit);

    /**
     * Adds a dimension that applies to all metrics in this IMetricsScope.
     *
     * @param name dimension name
     * @param value dimension value
     */
    public void addDimension(String name, String value);

    /**
     * Flushes the data from this IMetricsScope and causes future calls to addData and addDimension to fail.
     */
    public void end();
}