/*
 * Copyright 2010-2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 *  http://aws.amazon.com/apache2.0
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */
 package com.amazonaws.services.kinesis.model;
 
 
Container for the parameters to the PutRecord operation.

Puts (writes) a single data record from a producer into an Amazon Kinesis stream. Call PutRecord to send data from the producer into the Amazon Kinesis stream for real-time ingestion and subsequent processing, one record at a time. Each shard can support writes of up to 1,000 records per second, up to a maximum total data write rate of 1 MB per second.

You must specify the name of the stream that captures, stores, and transports the data; a partition key; and the data blob itself.

The data blob can be any type of data; for example, a segment from a log file, geographic/location data, website clickstream data, and so on.

The partition key is used by Amazon Kinesis to distribute data across shards. Amazon Kinesis segregates the data records that belong to a data stream into multiple shards, using the partition key associated with each data record to determine which shard a given data record belongs to.

Partition keys are Unicode strings, with a maximum length limit of 256 bytes. An MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards using the hash key ranges of the shards. You can override hashing the partition key to determine the shard by explicitly specifying a hash value using the ExplicitHashKey parameter. For more information, see Partition Key in the Amazon Kinesis Developer Guide.
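The hashing scheme described above can be sketched with the Java standard library alone. This is an illustrative reconstruction (the class and method names here are made up for the example), not SDK code; Amazon Kinesis performs the actual key-to-shard mapping server-side.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class PartitionKeyHashDemo {
    // Maps a partition key to the 128-bit integer that Kinesis compares
    // against each shard's hash key range. MD5 always yields 16 bytes,
    // so the result is a non-negative value in [0, 2^128).
    static BigInteger hashKey(String partitionKey) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(partitionKey.getBytes(StandardCharsets.UTF_8));
        return new BigInteger(1, digest); // treat the digest as unsigned
    }

    public static void main(String[] args) throws Exception {
        // Records sharing a partition key hash to the same value and
        // therefore land on the same shard.
        System.out.println(hashKey("user-42"));
        System.out.println(hashKey("user-42").equals(hashKey("user-42"))); // true
    }
}
```

A stream with N shards splits the range [0, 2^128) into N contiguous hash key ranges; the record is routed to the shard whose range contains this value, unless ExplicitHashKey overrides it.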

PutRecord returns the shard ID of where the data record was placed and the sequence number that was assigned to the data record.

Sequence numbers generally increase over time. To guarantee strictly increasing ordering, use the SequenceNumberForOrdering parameter. For more information, see Sequence Number in the Amazon Kinesis Developer Guide.

If a PutRecord request cannot be processed because of insufficient provisioned throughput on the shard involved in the request, PutRecord throws ProvisionedThroughputExceededException.

Data records are accessible for only 24 hours from the time that they are added to an Amazon Kinesis stream.
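Assuming the AWS SDK for Java 1.x (which this class belongs to) is on the classpath, a typical PutRecord call, including the SequenceNumberForOrdering chaining described above, might look like the following sketch. The stream name, partition key, and payloads are placeholders; client configuration and credentials are elided, so this is not runnable as-is against a live stream.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import com.amazonaws.services.kinesis.AmazonKinesisClient;
import com.amazonaws.services.kinesis.model.PutRecordRequest;
import com.amazonaws.services.kinesis.model.PutRecordResult;

public class PutRecordSketch {
    public static void main(String[] args) {
        AmazonKinesisClient kinesis = new AmazonKinesisClient(); // default credential chain

        PutRecordResult first = kinesis.putRecord(new PutRecordRequest()
                .withStreamName("myStream")    // placeholder stream name
                .withPartitionKey("device-7")  // placeholder partition key
                .withData(ByteBuffer.wrap("event-1".getBytes(StandardCharsets.UTF_8))));

        // For strictly increasing ordering from this client on this partition
        // key, chain each put to the sequence number the previous put returned.
        PutRecordResult second = kinesis.putRecord(new PutRecordRequest()
                .withStreamName("myStream")
                .withPartitionKey("device-7")
                .withSequenceNumberForOrdering(first.getSequenceNumber())
                .withData(ByteBuffer.wrap("event-2".getBytes(StandardCharsets.UTF_8))));

        System.out.println(second.getShardId() + " / " + second.getSequenceNumber());
    }
}
```

The response carries the shard ID and assigned sequence number described above, so the producer can log where each record landed or chain the next put.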

 
 public class PutRecordRequest extends AmazonWebServiceRequest implements SerializableCloneable {

    
The name of the stream to put the data record into.

Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+

 
     private String streamName;

    
The data blob to put into the record, which is base64-encoded when the blob is serialized. The maximum size of the data blob (the payload before base64-encoding) is 50 kilobytes (KB).

Constraints:
Length: 0 - 51200

    private java.nio.ByteBuffer data;

    
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 bytes. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key will map to the same shard within the stream.

Constraints:
Length: 1 - 256

    private String partitionKey;

    
The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash.

Constraints:
Pattern: 0|([1-9]\d{0,38})

    private String explicitHashKey;

    
Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the PutRecordResult when putting record n-1). If this parameter is not set, records will be coarsely ordered based on arrival time.

Constraints:
Pattern: 0|([1-9]\d{0,128})

    private String sequenceNumberForOrdering;

    
The name of the stream to put the data record into.

Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+

Returns:
The name of the stream to put the data record into.
    public String getStreamName() {
        return streamName;
    }
    
    
The name of the stream to put the data record into.

Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+

Parameters:
streamName The name of the stream to put the data record into.
    public void setStreamName(String streamName) {
        this.streamName = streamName;
    }
    
    
The name of the stream to put the data record into.

Returns a reference to this object so that method calls can be chained together.

Constraints:
Length: 1 - 128
Pattern: [a-zA-Z0-9_.-]+

Parameters:
streamName The name of the stream to put the data record into.
Returns:
A reference to this updated object so that method calls can be chained together.
    public PutRecordRequest withStreamName(String streamName) {
        this.streamName = streamName;
        return this;
    }

    
The data blob to put into the record, which is base64-encoded when the blob is serialized. The maximum size of the data blob (the payload before base64-encoding) is 50 kilobytes (KB).

Constraints:
Length: 0 - 51200

Returns:
The data blob to put into the record, which is base64-encoded when the blob is serialized. The maximum size of the data blob (the payload before base64-encoding) is 50 kilobytes (KB).
    public java.nio.ByteBuffer getData() {
        return data;
    }
    
    
The data blob to put into the record, which is base64-encoded when the blob is serialized. The maximum size of the data blob (the payload before base64-encoding) is 50 kilobytes (KB).

Constraints:
Length: 0 - 51200

Parameters:
data The data blob to put into the record, which is base64-encoded when the blob is serialized. The maximum size of the data blob (the payload before base64-encoding) is 50 kilobytes (KB).
    public void setData(java.nio.ByteBuffer data) {
        this.data = data;
    }
    
    
The data blob to put into the record, which is base64-encoded when the blob is serialized. The maximum size of the data blob (the payload before base64-encoding) is 50 kilobytes (KB).

Returns a reference to this object so that method calls can be chained together.

Constraints:
Length: 0 - 51200

Parameters:
data The data blob to put into the record, which is base64-encoded when the blob is serialized. The maximum size of the data blob (the payload before base64-encoding) is 50 kilobytes (KB).
Returns:
A reference to this updated object so that method calls can be chained together.
    public PutRecordRequest withData(java.nio.ByteBuffer data) {
        this.data = data;
        return this;
    }

    
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 bytes. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key will map to the same shard within the stream.

Constraints:
Length: 1 - 256

Returns:
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 bytes. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key will map to the same shard within the stream.
    public String getPartitionKey() {
        return partitionKey;
    }
    
    
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 bytes. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key will map to the same shard within the stream.

Constraints:
Length: 1 - 256

Parameters:
partitionKey Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 bytes. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key will map to the same shard within the stream.
    public void setPartitionKey(String partitionKey) {
        this.partitionKey = partitionKey;
    }
    
    
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 bytes. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key will map to the same shard within the stream.

Returns a reference to this object so that method calls can be chained together.

Constraints:
Length: 1 - 256

Parameters:
partitionKey Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 bytes. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key will map to the same shard within the stream.
Returns:
A reference to this updated object so that method calls can be chained together.
    public PutRecordRequest withPartitionKey(String partitionKey) {
        this.partitionKey = partitionKey;
        return this;
    }

    
The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash.

Constraints:
Pattern: 0|([1-9]\d{0,38})

Returns:
The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash.
    public String getExplicitHashKey() {
        return explicitHashKey;
    }
    
    
The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash.

Constraints:
Pattern: 0|([1-9]\d{0,38})

Parameters:
explicitHashKey The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash.
    public void setExplicitHashKey(String explicitHashKey) {
        this.explicitHashKey = explicitHashKey;
    }
    
    
The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash.

Returns a reference to this object so that method calls can be chained together.

Constraints:
Pattern: 0|([1-9]\d{0,38})

Parameters:
explicitHashKey The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash.
Returns:
A reference to this updated object so that method calls can be chained together.
    public PutRecordRequest withExplicitHashKey(String explicitHashKey) {
        this.explicitHashKey = explicitHashKey;
        return this;
    }

    
Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the PutRecordResult when putting record n-1). If this parameter is not set, records will be coarsely ordered based on arrival time.

Constraints:
Pattern: 0|([1-9]\d{0,128})

Returns:
Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the PutRecordResult when putting record n-1). If this parameter is not set, records will be coarsely ordered based on arrival time.
    public String getSequenceNumberForOrdering() {
        return sequenceNumberForOrdering;
    }
    
    
Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the PutRecordResult when putting record n-1). If this parameter is not set, records will be coarsely ordered based on arrival time.

Constraints:
Pattern: 0|([1-9]\d{0,128})

Parameters:
sequenceNumberForOrdering Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the PutRecordResult when putting record n-1). If this parameter is not set, records will be coarsely ordered based on arrival time.
    public void setSequenceNumberForOrdering(String sequenceNumberForOrdering) {
        this.sequenceNumberForOrdering = sequenceNumberForOrdering;
    }
    
    
Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the PutRecordResult when putting record n-1). If this parameter is not set, records will be coarsely ordered based on arrival time.

Returns a reference to this object so that method calls can be chained together.

Constraints:
Pattern: 0|([1-9]\d{0,128})

Parameters:
sequenceNumberForOrdering Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the PutRecordResult when putting record n-1). If this parameter is not set, records will be coarsely ordered based on arrival time.
Returns:
A reference to this updated object so that method calls can be chained together.
    public PutRecordRequest withSequenceNumberForOrdering(String sequenceNumberForOrdering) {
        this.sequenceNumberForOrdering = sequenceNumberForOrdering;
        return this;
    }

    
Returns a string representation of this object; useful for testing and debugging.

Returns:
A string representation of this object.
See also:
java.lang.Object.toString()
    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder();
        sb.append("{");
        if (getStreamName() != null) sb.append("StreamName: " + getStreamName() + ",");
        if (getData() != null) sb.append("Data: " + getData() + ",");
        if (getPartitionKey() != null) sb.append("PartitionKey: " + getPartitionKey() + ",");
        if (getExplicitHashKey() != null) sb.append("ExplicitHashKey: " + getExplicitHashKey() + ",");
        if (getSequenceNumberForOrdering() != null) sb.append("SequenceNumberForOrdering: " + getSequenceNumberForOrdering());
        sb.append("}");
        return sb.toString();
    }
    
    @Override
    public int hashCode() {
        final int prime = 31;
        int hashCode = 1;
        
        hashCode = prime * hashCode + ((getStreamName() == null) ? 0 : getStreamName().hashCode()); 
        hashCode = prime * hashCode + ((getData() == null) ? 0 : getData().hashCode()); 
        hashCode = prime * hashCode + ((getPartitionKey() == null) ? 0 : getPartitionKey().hashCode()); 
        hashCode = prime * hashCode + ((getExplicitHashKey() == null) ? 0 : getExplicitHashKey().hashCode()); 
        hashCode = prime * hashCode + ((getSequenceNumberForOrdering() == null) ? 0 : getSequenceNumberForOrdering().hashCode()); 
        return hashCode;
    }
    
    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (obj == null) return false;
        if (obj instanceof PutRecordRequest == false) return false;
        PutRecordRequest other = (PutRecordRequest) obj;
        
        if (other.getStreamName() == null ^ this.getStreamName() == null) return false;
        if (other.getStreamName() != null && other.getStreamName().equals(this.getStreamName()) == false) return false;
        if (other.getData() == null ^ this.getData() == null) return false;
        if (other.getData() != null && other.getData().equals(this.getData()) == false) return false;
        if (other.getPartitionKey() == null ^ this.getPartitionKey() == null) return false;
        if (other.getPartitionKey() != null && other.getPartitionKey().equals(this.getPartitionKey()) == false) return false;
        if (other.getExplicitHashKey() == null ^ this.getExplicitHashKey() == null) return false;
        if (other.getExplicitHashKey() != null && other.getExplicitHashKey().equals(this.getExplicitHashKey()) == false) return false;
        if (other.getSequenceNumberForOrdering() == null ^ this.getSequenceNumberForOrdering() == null) return false;
        if (other.getSequenceNumberForOrdering() != null && other.getSequenceNumberForOrdering().equals(this.getSequenceNumberForOrdering()) == false) return false;
        return true;
        return true;
    }
    
    @Override
    public PutRecordRequest clone() {
        return (PutRecordRequest) super.clone();
    }
}
    