/*
 * Copyright 2010-2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 *  http://aws.amazon.com/apache2.0
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
package com.amazonaws.services.kinesis.model;

import java.io.Serializable;

/**
 * Represents the output for <code>PutRecords</code>.
 */
public class PutRecordsRequestEntry implements Serializable, Cloneable {

    
    /**
     * The data blob to put into the record, which is base64-encoded when the
     * blob is serialized. The maximum size of the data blob (the payload
     * before base64-encoding) is 50 kilobytes (KB).
     * <p>
     * <b>Constraints:</b><br/>
     * <b>Length: </b>0 - 51200
     */
    private java.nio.ByteBuffer data;

    
    /**
     * The hash value used to determine explicitly the shard that the data
     * record is assigned to by overriding the partition key hash.
     * <p>
     * <b>Constraints:</b><br/>
     * <b>Pattern: </b>0|([1-9]\d{0,38})
     */
    private String explicitHashKey;

    
    /**
     * Determines which shard in the stream the data record is assigned to.
     * Partition keys are Unicode strings with a maximum length limit of 256
     * bytes. Amazon Kinesis uses the partition key as input to a hash function
     * that maps the partition key and associated data to a specific shard.
     * Specifically, an MD5 hash function is used to map partition keys to
     * 128-bit integer values and to map associated data records to shards. As
     * a result of this hashing mechanism, all data records with the same
     * partition key map to the same shard within the stream.
     * <p>
     * <b>Constraints:</b><br/>
     * <b>Length: </b>1 - 256
     */
    private String partitionKey;

    
    /**
     * The data blob to put into the record, which is base64-encoded when the
     * blob is serialized. The maximum size of the data blob (the payload
     * before base64-encoding) is 50 kilobytes (KB).
     * <p>
     * <b>Constraints:</b><br/>
     * <b>Length: </b>0 - 51200
     *
     * @return The data blob to put into the record, which is base64-encoded
     *         when the blob is serialized. The maximum size of the data blob
     *         (the payload before base64-encoding) is 50 kilobytes (KB).
     */
    public java.nio.ByteBuffer getData() {
        return data;
    }
    
    
    /**
     * The data blob to put into the record, which is base64-encoded when the
     * blob is serialized. The maximum size of the data blob (the payload
     * before base64-encoding) is 50 kilobytes (KB).
     * <p>
     * <b>Constraints:</b><br/>
     * <b>Length: </b>0 - 51200
     *
     * @param data The data blob to put into the record, which is
     *        base64-encoded when the blob is serialized. The maximum size of
     *        the data blob (the payload before base64-encoding) is 50
     *        kilobytes (KB).
     */
    public void setData(java.nio.ByteBuffer data) {
        this.data = data;
    }
    
    
    /**
     * The data blob to put into the record, which is base64-encoded when the
     * blob is serialized. The maximum size of the data blob (the payload
     * before base64-encoding) is 50 kilobytes (KB).
     * <p>
     * Returns a reference to this object so that method calls can be chained
     * together.
     * <p>
     * <b>Constraints:</b><br/>
     * <b>Length: </b>0 - 51200
     *
     * @param data The data blob to put into the record, which is
     *        base64-encoded when the blob is serialized. The maximum size of
     *        the data blob (the payload before base64-encoding) is 50
     *        kilobytes (KB).
     *
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public PutRecordsRequestEntry withData(java.nio.ByteBuffer data) {
        this.data = data;
        return this;
    }

    
    /**
     * The hash value used to determine explicitly the shard that the data
     * record is assigned to by overriding the partition key hash.
     * <p>
     * <b>Constraints:</b><br/>
     * <b>Pattern: </b>0|([1-9]\d{0,38})
     *
     * @return The hash value used to determine explicitly the shard that the
     *         data record is assigned to by overriding the partition key hash.
     */
    public String getExplicitHashKey() {
        return explicitHashKey;
    }
    
    
    /**
     * The hash value used to determine explicitly the shard that the data
     * record is assigned to by overriding the partition key hash.
     * <p>
     * <b>Constraints:</b><br/>
     * <b>Pattern: </b>0|([1-9]\d{0,38})
     *
     * @param explicitHashKey The hash value used to determine explicitly the
     *        shard that the data record is assigned to by overriding the
     *        partition key hash.
     */
    public void setExplicitHashKey(String explicitHashKey) {
        this.explicitHashKey = explicitHashKey;
    }
    
    
    /**
     * The hash value used to determine explicitly the shard that the data
     * record is assigned to by overriding the partition key hash.
     * <p>
     * Returns a reference to this object so that method calls can be chained
     * together.
     * <p>
     * <b>Constraints:</b><br/>
     * <b>Pattern: </b>0|([1-9]\d{0,38})
     *
     * @param explicitHashKey The hash value used to determine explicitly the
     *        shard that the data record is assigned to by overriding the
     *        partition key hash.
     *
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public PutRecordsRequestEntry withExplicitHashKey(String explicitHashKey) {
        this.explicitHashKey = explicitHashKey;
        return this;
    }

    
    /**
     * Determines which shard in the stream the data record is assigned to.
     * Partition keys are Unicode strings with a maximum length limit of 256
     * bytes. Amazon Kinesis uses the partition key as input to a hash function
     * that maps the partition key and associated data to a specific shard.
     * Specifically, an MD5 hash function is used to map partition keys to
     * 128-bit integer values and to map associated data records to shards. As
     * a result of this hashing mechanism, all data records with the same
     * partition key map to the same shard within the stream.
     * <p>
     * <b>Constraints:</b><br/>
     * <b>Length: </b>1 - 256
     *
     * @return The partition key that determines which shard in the stream the
     *         data record is assigned to.
     */
    public String getPartitionKey() {
        return partitionKey;
    }
    
    
    /**
     * Determines which shard in the stream the data record is assigned to.
     * Partition keys are Unicode strings with a maximum length limit of 256
     * bytes. Amazon Kinesis uses the partition key as input to a hash function
     * that maps the partition key and associated data to a specific shard.
     * Specifically, an MD5 hash function is used to map partition keys to
     * 128-bit integer values and to map associated data records to shards. As
     * a result of this hashing mechanism, all data records with the same
     * partition key map to the same shard within the stream.
     * <p>
     * <b>Constraints:</b><br/>
     * <b>Length: </b>1 - 256
     *
     * @param partitionKey The partition key that determines which shard in the
     *        stream the data record is assigned to.
     */
    public void setPartitionKey(String partitionKey) {
        this.partitionKey = partitionKey;
    }
    
    
    /**
     * Determines which shard in the stream the data record is assigned to.
     * Partition keys are Unicode strings with a maximum length limit of 256
     * bytes. Amazon Kinesis uses the partition key as input to a hash function
     * that maps the partition key and associated data to a specific shard.
     * Specifically, an MD5 hash function is used to map partition keys to
     * 128-bit integer values and to map associated data records to shards. As
     * a result of this hashing mechanism, all data records with the same
     * partition key map to the same shard within the stream.
     * <p>
     * Returns a reference to this object so that method calls can be chained
     * together.
     * <p>
     * <b>Constraints:</b><br/>
     * <b>Length: </b>1 - 256
     *
     * @param partitionKey The partition key that determines which shard in the
     *        stream the data record is assigned to.
     *
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public PutRecordsRequestEntry withPartitionKey(String partitionKey) {
        this.partitionKey = partitionKey;
        return this;
    }

    
    /**
     * Returns a string representation of this object; useful for testing and
     * debugging.
     *
     * @return A string representation of this object.
     *
     * @see java.lang.Object#toString()
     */
    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder();
        sb.append("{");
        if (getData() != null) sb.append("Data: " + getData() + ",");
        if (getExplicitHashKey() != null) sb.append("ExplicitHashKey: " + getExplicitHashKey() + ",");
        if (getPartitionKey() != null) sb.append("PartitionKey: " + getPartitionKey());
        sb.append("}");
        return sb.toString();
    }
    
    @Override
    public int hashCode() {
        final int prime = 31;
        int hashCode = 1;
        
        hashCode = prime * hashCode + ((getData() == null) ? 0 : getData().hashCode()); 
        hashCode = prime * hashCode + ((getExplicitHashKey() == null) ? 0 : getExplicitHashKey().hashCode()); 
        hashCode = prime * hashCode + ((getPartitionKey() == null) ? 0 : getPartitionKey().hashCode()); 
        return hashCode;
    }
    
    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (obj == null) return false;
        if (obj instanceof PutRecordsRequestEntry == false) return false;
        PutRecordsRequestEntry other = (PutRecordsRequestEntry) obj;

        if (other.getData() == null ^ this.getData() == null) return false;
        if (other.getData() != null && other.getData().equals(this.getData()) == false) return false;
        if (other.getExplicitHashKey() == null ^ this.getExplicitHashKey() == null) return false;
        if (other.getExplicitHashKey() != null && other.getExplicitHashKey().equals(this.getExplicitHashKey()) == false) return false;
        if (other.getPartitionKey() == null ^ this.getPartitionKey() == null) return false;
        if (other.getPartitionKey() != null && other.getPartitionKey().equals(this.getPartitionKey()) == false) return false;
        return true;
    }
    
    @Override
    public PutRecordsRequestEntry clone() {
        try {
            return (PutRecordsRequestEntry) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new IllegalStateException(
                    "Got a CloneNotSupportedException from Object.clone() "
                    + "even though we're Cloneable!",
                    e);
        }
    }
}
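The partition-key-to-shard mapping described in the Javadoc above (MD5 over the key, yielding a 128-bit integer) can be sketched with only the JDK, independent of the SDK. This is an illustrative sketch, not SDK code: the class name `PartitionKeyHashDemo`, the helper `hashKeyFor`, and the key `"user-42"` are assumptions for demonstration; Kinesis performs this mapping server-side.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class PartitionKeyHashDemo {

    // Maps a partition key to a 128-bit unsigned integer the way the Javadoc
    // describes: an MD5 digest of the key's bytes, read as a positive integer.
    static BigInteger hashKeyFor(String partitionKey) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest(partitionKey.getBytes(StandardCharsets.UTF_8));
            // Signum 1 forces a non-negative value in [0, 2^128 - 1].
            return new BigInteger(1, digest);
        } catch (NoSuchAlgorithmException e) {
            // MD5 is a mandatory JCA algorithm, so this should never happen.
            throw new IllegalStateException("MD5 is a required JCA algorithm", e);
        }
    }

    public static void main(String[] args) {
        BigInteger h1 = hashKeyFor("user-42");
        BigInteger h2 = hashKeyFor("user-42");
        // The same partition key always yields the same hash, hence the same shard.
        System.out.println(h1.equals(h2));
        // A 128-bit value has at most 39 decimal digits, so its decimal form
        // also satisfies the ExplicitHashKey pattern 0|([1-9]\d{0,38}).
        System.out.println(h1.toString().matches("0|([1-9]\\d{0,38})"));
    }
}
```

Note that `setExplicitHashKey` accepts exactly such a decimal string, which is why its constraint pattern matches the decimal range of a 128-bit MD5 value.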
    