   /*
    * Copyright 2010-2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    *
    * Licensed under the Apache License, Version 2.0 (the "License").
    * You may not use this file except in compliance with the License.
    * A copy of the License is located at
    *
    *  http://aws.amazon.com/apache2.0
    *
   * or in the "license" file accompanying this file. This file is distributed
   * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
   * express or implied. See the License for the specific language governing
   * permissions and limitations under the License.
   */
  package com.amazonaws.services.s3.transfer;
  
  import static com.amazonaws.services.s3.internal.ServiceUtils.APPEND_MODE;
  import static com.amazonaws.services.s3.internal.ServiceUtils.OVERWRITE_MODE;
  
  import java.io.File;
  import java.util.Date;
  import java.util.List;
  import java.util.Stack;
  
  
High level utility for managing transfers to Amazon S3.

TransferManager provides a simple API for uploading content to Amazon S3, and makes extensive use of Amazon S3 multipart uploads to achieve enhanced throughput, performance and reliability.

When possible, TransferManager attempts to use multiple threads to upload multiple parts of a single upload at once. When dealing with large content sizes and high bandwidth, this can significantly increase throughput.

TransferManager is responsible for managing resources such as connections and threads; share a single instance of TransferManager whenever possible. TransferManager, like all the client classes in the AWS SDK for Java, is thread safe. Call TransferManager.shutdownNow() to release the resources once the transfer is complete.

Using TransferManager to upload data to Amazon S3 is easy:

 DefaultAWSCredentialsProviderChain credentialProviderChain = new DefaultAWSCredentialsProviderChain();
 TransferManager tx = new TransferManager(
   credentialProviderChain.getCredentials());
 Upload myUpload = tx.upload(myBucket, myFile.getName(), myFile);

 // You can poll your transfer's status to check its progress
 if (myUpload.isDone() == false) {
  System.out.println("Transfer: " + myUpload.getDescription());
  System.out.println("  - State: " + myUpload.getState());
  System.out.println("  - Progress: "
    + myUpload.getProgress().getBytesTransferred());
 }

 // Transfers also allow you to set a ProgressListener to receive
 // asynchronous notifications about your transfer's progress.
 myUpload.addProgressListener(myProgressListener);

 // Or you can block the current thread and wait for your transfer to
 // complete. If the transfer fails, this method will throw an
 // AmazonClientException or AmazonServiceException detailing the reason.
 myUpload.waitForCompletion();

 // After the upload is complete, call shutdownNow to release the resources.
 tx.shutdownNow();
 

Transfers can be paused and resumed at a later time. A transfer can also survive a JVM crash, provided that the information required to resume the transfer is given as input to the resume operation. For more information on pause and resume,
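Surviving a pause or a JVM crash comes down to persisting the transfer's state and feeding it back to the resume operation. As a rough, hypothetical sketch of the kind of information involved (the SDK captures this in a PersistableUpload/PersistableDownload object; the field names below are assumptions chosen for illustration, not the SDK's actual fields):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Properties;

// Illustrative only: writes the minimal state a resume would need to a
// properties file, and reads it back. Not the SDK's serialization format.
public class ResumeInfo {
    public static void save(File dest, String bucket, String key,
                            String multipartUploadId) throws IOException {
        Properties p = new Properties();
        p.setProperty("bucket", bucket);
        p.setProperty("key", key);
        p.setProperty("multipartUploadId", multipartUploadId);
        try (OutputStream out = new FileOutputStream(dest)) {
            p.store(out, "transfer resume state");
        }
    }

    public static Properties load(File src) throws IOException {
        Properties p = new Properties();
        try (InputStream in = new FileInputStream(src)) {
            p.load(in);
        }
        return p;
    }
}
```

Because the state is on disk rather than only in memory, a fresh JVM can reconstruct the transfer from it.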

 
 public class TransferManager {

    
The low level client we use to make the actual calls to Amazon S3.
 
     private final AmazonS3 s3;

    
Configuration for how TransferManager processes requests.
 
     private TransferManagerConfiguration configuration;
    
The thread pool in which transfers are uploaded or downloaded.
 
     private final ExecutorService threadPool;

    
Thread used for periodically checking transfers and updating their state.
 
 
     private static final Log log = LogFactory.getLog(TransferManager.class);
 
     private final boolean shutDownThreadPools;

    
Constructs a new TransferManager and Amazon S3 client using the credentials from DefaultAWSCredentialsProviderChain

TransferManager and client objects may pool connections and threads. Reuse TransferManager and client objects and share them throughout applications.

TransferManager and all AWS client objects are thread safe.

 
     public TransferManager(){
         this(new AmazonS3Client(new DefaultAWSCredentialsProviderChain()));
     }

    
Constructs a new TransferManager and Amazon S3 client using the specified AWS security credentials provider.

TransferManager and client objects may pool connections and threads. Reuse TransferManager and client objects and share them throughout applications.

TransferManager and all AWS client objects are thread safe.

Parameters:
credentialsProvider The AWS security credentials provider to use when making authenticated requests.
 
     public TransferManager(AWSCredentialsProvider credentialsProvider) {
         this(new AmazonS3Client(credentialsProvider));
     }

    
Constructs a new TransferManager and Amazon S3 client using the specified AWS security credentials.

TransferManager and client objects may pool connections and threads. Reuse TransferManager and client objects and share them throughout applications.

TransferManager and all AWS client objects are thread safe.

Parameters:
credentials The AWS security credentials to use when making authenticated requests.
 
     public TransferManager(AWSCredentials credentials) {
         this(new AmazonS3Client(credentials));
     }

    
Constructs a new TransferManager, specifying the client to use when making requests to Amazon S3.

TransferManager and client objects may pool connections and threads. Reuse TransferManager and client objects and share them throughout applications.

TransferManager and all AWS client objects are thread safe.

Parameters:
s3 The client to use when making requests to Amazon S3.
 
     public TransferManager(AmazonS3 s3) {
         this(s3, TransferManagerUtils.createDefaultExecutorService());
     }

    
Constructs a new TransferManager specifying the client and thread pool to use when making requests to Amazon S3.

TransferManager and client objects may pool connections and threads. Reuse TransferManager and client objects and share them throughout applications.

TransferManager and all AWS client objects are thread safe.

By default, the thread pool will shutdown when the transfer manager instance is garbage collected.

Parameters:
s3 The client to use when making requests to Amazon S3.
threadPool The thread pool in which to execute requests.
See also:
TransferManager.TransferManager(com.amazonaws.services.s3.AmazonS3,java.util.concurrent.ExecutorService,boolean)
 
     public TransferManager(AmazonS3 s3, ExecutorService threadPool) {
         this(s3, threadPool, true);
     }
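For illustration, a caller might build a fixed-size pool with named threads and hand it to this constructor. The pool size and thread-name prefix here are assumptions for the example, not necessarily what TransferManagerUtils.createDefaultExecutorService() uses:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

// Sketch of a custom thread pool a caller could pass to
// TransferManager(AmazonS3, ExecutorService). Size is illustrative.
public class TransferPool {
    public static ExecutorService create(final int threads) {
        ThreadFactory factory = new ThreadFactory() {
            private int count = 1;
            @Override
            public synchronized Thread newThread(Runnable r) {
                Thread t = new Thread(r);
                // Named threads make transfer activity easy to spot in dumps.
                t.setName("s3-transfer-worker-" + count++);
                return t;
            }
        };
        return Executors.newFixedThreadPool(threads, factory);
    }
}
```

A pool supplied this way is owned by the caller; with the two-argument constructor above it is still shut down when the TransferManager is garbage collected.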

    
Constructs a new TransferManager specifying the client and thread pool to use when making requests to Amazon S3.

TransferManager and client objects may pool connections and threads. Reuse TransferManager and client objects and share them throughout applications.

TransferManager and all AWS client objects are thread safe.

Parameters:
s3 The client to use when making requests to Amazon S3.
threadPool The thread pool in which to execute requests.
shutDownThreadPools If set to true, the thread pool will be shutdown when transfer manager instance is garbage collected.
 
     public TransferManager(AmazonS3 s3, ExecutorService threadPool, boolean shutDownThreadPools) {
         this.s3 = s3;
         this.threadPool = threadPool;
         this.configuration = new TransferManagerConfiguration();
         this.shutDownThreadPools = shutDownThreadPools;
     }


    
Sets the configuration which specifies how this TransferManager processes requests.

Parameters:
configuration The new configuration specifying how this TransferManager processes requests.
 
     public void setConfiguration(TransferManagerConfiguration configuration) {
         this.configuration = configuration;
     }

    
Returns the configuration which specifies how this TransferManager processes requests.

Returns:
The configuration settings for this TransferManager.
 
     public TransferManagerConfiguration getConfiguration() {
         return this.configuration;
     }

    
Returns the underlying Amazon S3 client used to make requests to Amazon S3.

Returns:
The underlying Amazon S3 client used to make requests to Amazon S3.
 
     public AmazonS3 getAmazonS3Client() {
         return this.s3;
     }

    

Schedules a new transfer to upload data to Amazon S3. This method is non-blocking and returns immediately (i.e. before the upload has finished).

When uploading data from a stream, callers must supply the size of the data in the stream through the content length field in the ObjectMetadata parameter. If no content length is specified for the input stream, then TransferManager will attempt to buffer all the stream contents in memory and upload the data as a traditional, single part upload. Because the entire stream contents must be buffered in memory, this can be very expensive, and should be avoided whenever possible.

Use the returned Upload object to query the progress of the transfer, add listeners for progress events, and wait for the upload to complete.

If resources are available, the upload will begin immediately. Otherwise, the upload is scheduled and started as soon as resources become available.

If you are uploading AWS KMS-encrypted objects, you need to specify the correct region of the bucket on your client and configure AWS Signature Version 4 for added security. For more information on how to do this, see http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version

Parameters:
bucketName The name of the bucket to upload the new object to.
key The key in the specified bucket by which to store the new object.
input The input stream containing the data to upload to Amazon S3.
objectMetadata Additional information about the object being uploaded, including the size of the data, content type, additional custom user metadata, etc.
Returns:
A new Upload object to use to check the state of the upload, listen for progress notifications, and otherwise manage the upload.
Throws:
com.amazonaws.AmazonClientException If any errors are encountered in the client while making the request or handling the response.
com.amazonaws.AmazonServiceException If any errors occurred in Amazon S3 while processing the request.
 
     public Upload upload(final String bucketName, final String key, final InputStream input, ObjectMetadata objectMetadata)
         throws AmazonServiceException, AmazonClientException {
         return upload(new PutObjectRequest(bucketName, key, input, objectMetadata));
     }
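To see why a missing content length is so expensive: without it, the only way to learn the size of an arbitrary stream is to drain the whole thing into memory first. A minimal sketch of that fallback behavior (illustrative, not the SDK's actual buffering code):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamBuffering {
    // Drains the stream fully into memory so its length is known.
    // Cost is proportional to the entire payload size, which is why
    // supplying the content length up front is strongly preferred.
    public static byte[] bufferFully(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int n;
        while ((n = in.read(chunk)) != -1) {
            out.write(chunk, 0, n);
        }
        return out.toByteArray();
    }
}
```

Setting ObjectMetadata.setContentLength(...) before calling upload avoids this buffering entirely.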

    
Schedules a new transfer to upload data to Amazon S3. This method is non-blocking and returns immediately (i.e. before the upload has finished).

The returned Upload object allows you to query the progress of the transfer, add listeners for progress events, and wait for the upload to complete.

If resources are available, the upload will begin immediately, otherwise it will be scheduled and started as soon as resources become available.

If you are uploading AWS KMS-encrypted objects, you need to specify the correct region of the bucket on your client and configure AWS Signature Version 4 for added security. For more information on how to do this, see http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version

Parameters:
bucketName The name of the bucket to upload the new object to.
key The key in the specified bucket by which to store the new object.
file The file to upload.
Returns:
A new Upload object which can be used to check state of the upload, listen for progress notifications, and otherwise manage the upload.
Throws:
com.amazonaws.AmazonClientException If any errors are encountered in the client while making the request or handling the response.
com.amazonaws.AmazonServiceException If any errors occurred in Amazon S3 while processing the request.
 
     public Upload upload(final String bucketName, final String key, final File file)
         throws AmazonServiceException, AmazonClientException {
         return upload(new PutObjectRequest(bucketName, key, file));
     }

    

Schedules a new transfer to upload data to Amazon S3. This method is non-blocking and returns immediately (i.e. before the upload has finished).

Use the returned Upload object to query the progress of the transfer, add listeners for progress events, and wait for the upload to complete.

If resources are available, the upload will begin immediately. Otherwise, the upload is scheduled and started as soon as resources become available.

If you are uploading AWS KMS-encrypted objects, you need to specify the correct region of the bucket on your client and configure AWS Signature Version 4 for added security. For more information on how to do this, see http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version

Parameters:
putObjectRequest The request containing all the parameters for the upload.
Returns:
A new Upload object to use to check the state of the upload, listen for progress notifications, and otherwise manage the upload.
Throws:
com.amazonaws.AmazonClientException If any errors are encountered in the client while making the request or handling the response.
com.amazonaws.AmazonServiceException If any errors occurred in Amazon S3 while processing the request.
 
     public Upload upload(final PutObjectRequest putObjectRequest)
             throws AmazonServiceException, AmazonClientException {
         return doUpload(putObjectRequest, null, null, null);
     }

    

Schedules a new transfer to upload data to Amazon S3. This method is non-blocking and returns immediately (i.e. before the upload has finished).

Use the returned Upload object to query the progress of the transfer, add listeners for progress events, and wait for the upload to complete.

If resources are available, the upload will begin immediately. Otherwise, the upload is scheduled and started as soon as resources become available.

If you are uploading AWS KMS-encrypted objects, you need to specify the correct region of the bucket on your client and configure AWS Signature Version 4 for added security. For more information on how to do this, see http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version

Parameters:
putObjectRequest The request containing all the parameters for the upload.
progressListener An optional callback listener to receive the progress of the upload.
Returns:
A new Upload object to use to check the state of the upload, listen for progress notifications, and otherwise manage the upload.
Throws:
com.amazonaws.AmazonClientException If any errors are encountered in the client while making the request or handling the response.
com.amazonaws.AmazonServiceException If any errors occurred in Amazon S3 while processing the request.
 
     public Upload upload(final PutObjectRequest putObjectRequest,
             final S3ProgressListener progressListener)
             throws AmazonServiceException, AmazonClientException {
         return doUpload(putObjectRequest, null, progressListener, null);
     }

    

Schedules a new transfer to upload data to Amazon S3. This method is non-blocking and returns immediately (i.e. before the upload has finished).

Use the returned Upload object to query the progress of the transfer, add listeners for progress events, and wait for the upload to complete.

If resources are available, the upload will begin immediately. Otherwise, the upload is scheduled and started as soon as resources become available.

If you are uploading AWS KMS-encrypted objects, you need to specify the correct region of the bucket on your client and configure AWS Signature Version 4 for added security. For more information on how to do this, see http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version

Parameters:
putObjectRequest The request containing all the parameters for the upload.
stateListener The transfer state change listener to monitor the upload.
progressListener An optional callback listener to receive the progress of the upload.
Returns:
A new Upload object to use to check the state of the upload, listen for progress notifications, and otherwise manage the upload.
Throws:
com.amazonaws.AmazonClientException If any errors are encountered in the client while making the request or handling the response.
com.amazonaws.AmazonServiceException If any errors occurred in Amazon S3 while processing the request.
 
     private Upload doUpload(final PutObjectRequest putObjectRequest,
             final TransferStateChangeListener stateListener,
             final S3ProgressListener progressListener,
             final PersistableUpload persistableUpload) throws AmazonServiceException,
             AmazonClientException {
 
         appendSingleObjectUserAgent(putObjectRequest);
 
         String multipartUploadId = persistableUpload != null ? persistableUpload
                 .getMultipartUploadId() : null;
 
         if (putObjectRequest.getMetadata() == null)
             putObjectRequest.setMetadata(new ObjectMetadata());
         ObjectMetadata metadata = putObjectRequest.getMetadata();
 
         File file = TransferManagerUtils.getRequestFile(putObjectRequest);
 
         if ( file != null ) {
             // Always set the content length, even if it's already set
             metadata.setContentLength(file.length());
 
             // Only set the content type if it hasn't already been set
             if ( metadata.getContentType() == null ) {
                 metadata.setContentType(Mimetypes.getInstance().getMimetype(file));
             }
         } else {
             if (multipartUploadId != null) {
                 throw new IllegalArgumentException(
                         "Unable to resume the upload. No file specified.");
             }
         }
 
         String description = "Uploading to " + putObjectRequest.getBucketName()
                 + "/" + putObjectRequest.getKey();
         TransferProgress transferProgress = new TransferProgress();
         transferProgress.setTotalBytesToTransfer(TransferManagerUtils
                 .getContentLength(putObjectRequest));
 
         S3ProgressListenerChain listenerChain = new S3ProgressListenerChain(
                 new TransferProgressUpdatingListener(transferProgress),
                 putObjectRequest.getGeneralProgressListener(), progressListener);
 
         putObjectRequest.setGeneralProgressListener(listenerChain);
 
         UploadImpl upload = new UploadImpl(description, transferProgress,
                 listenerChain, stateListener);
        
Since we use the same thread pool for uploading individual parts and completing multipart uploads, the tasks that complete a multipart upload may be added to the end of the queue when multiple parallel uploads are submitted. This may delay processing of the complete-multipart-upload request.
 
         UploadCallable uploadCallable = new UploadCallable(this,
                 upload, putObjectRequest, listenerChain, multipartUploadId,
                 transferProgress);
         UploadMonitor watcher = UploadMonitor.create(this, upload,
                 uploadCallable, putObjectRequest, listenerChain);
         upload.setMonitor(watcher);
 
         return upload;
     }
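The queueing concern noted in doUpload can be reproduced with a plain executor: when part tasks and the completion task share one queue, the completion task necessarily waits behind every part already submitted. This sketch uses a single-threaded pool to make the ordering deterministic:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SharedPoolDemo {
    // Returns the order in which tasks ran when part tasks and the
    // "complete" task share one executor queue.
    public static List<String> runOrder(int parts) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        final List<String> order = new ArrayList<String>();
        for (int i = 0; i < parts; i++) {
            final int part = i;
            pool.submit(() -> { synchronized (order) { order.add("part-" + part); } });
        }
        // Submitted last, so it runs only after every queued part task.
        pool.submit(() -> { synchronized (order) { order.add("complete"); } });
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return order;
    }
}
```

With many parallel multipart uploads in flight, the same effect means a complete-multipart-upload task can sit behind unrelated part tasks, which is the delay the comment describes.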

    
Schedules a new transfer to download data from Amazon S3 and save it to the specified file. This method is non-blocking and returns immediately (i.e. before the data has been fully downloaded).

Use the returned Download object to query the progress of the transfer, add listeners for progress events, and wait for the download to complete.

If you are downloading AWS KMS-encrypted objects, you need to specify the correct region of the bucket on your client and configure AWS Signature Version 4 for added security. For more information on how to do this, see http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version

Parameters:
bucket The name of the bucket containing the object to download.
key The key under which the object to download is stored.
file The file to download the object's data to.
Returns:
A new Download object to use to check the state of the download, listen for progress notifications, and otherwise manage the download.
Throws:
com.amazonaws.AmazonClientException If any errors are encountered in the client while making the request or handling the response.
com.amazonaws.AmazonServiceException If any errors occurred in Amazon S3 while processing the request.
 
     public Download download(String bucket, String key, File file) {
         return download(new GetObjectRequest(bucket, key), file);
     }

    
Schedules a new transfer to download data from Amazon S3 and save it to the specified file. This method is non-blocking and returns immediately (i.e. before the data has been fully downloaded).

Use the returned Download object to query the progress of the transfer, add listeners for progress events, and wait for the download to complete.

If you are downloading AWS KMS-encrypted objects, you need to specify the correct region of the bucket on your client and configure AWS Signature Version 4 for added security. For more information on how to do this, see http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version

Parameters:
getObjectRequest The request containing all the parameters for the download.
file The file to download the object data to.
Returns:
A new Download object to use to check the state of the download, listen for progress notifications, and otherwise manage the download.
Throws:
com.amazonaws.AmazonClientException If any errors are encountered in the client while making the request or handling the response.
com.amazonaws.AmazonServiceException If any errors occurred in Amazon S3 while processing the request.
 
     public Download download(final GetObjectRequest getObjectRequest, final File file) {
         return doDownload(getObjectRequest, file, null, null, OVERWRITE_MODE);
     }

    
Schedules a new transfer to download data from Amazon S3 and save it to the specified file. This method is non-blocking and returns immediately (i.e. before the data has been fully downloaded).

Use the returned Download object to query the progress of the transfer, add listeners for progress events, and wait for the download to complete.

If you are downloading AWS KMS-encrypted objects, you need to specify the correct region of the bucket on your client and configure AWS Signature Version 4 for added security. For more information on how to do this, see http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version

Parameters:
getObjectRequest The request containing all the parameters for the download.
file The file to download the object data to.
progressListener An optional callback listener to get the progress of the download.
Returns:
A new Download object to use to check the state of the download, listen for progress notifications, and otherwise manage the download.
Throws:
com.amazonaws.AmazonClientException If any errors are encountered in the client while making the request or handling the response.
com.amazonaws.AmazonServiceException If any errors occurred in Amazon S3 while processing the request.
 
     public Download download(final GetObjectRequest getObjectRequest,
             final File file, final S3ProgressListener progressListener) {
         return doDownload(getObjectRequest, file, null, progressListener,
                 OVERWRITE_MODE);
     }

    
Same as public interface, but adds a state listener so that callers can be notified of state changes to the download.

 
     private Download doDownload(final GetObjectRequest getObjectRequest,
             final File file, final TransferStateChangeListener stateListener,
             final S3ProgressListener s3progressListener,
             final boolean resumeExistingDownload)
     {
         appendSingleObjectUserAgent(getObjectRequest);
         String description = "Downloading from " + getObjectRequest.getBucketName() + "/" + getObjectRequest.getKey();
 
         TransferProgress transferProgress = new TransferProgress();
         // S3 progress listener to capture the persistable transfer when available
         S3ProgressListenerChain listenerChain = new S3ProgressListenerChain(
             // The listener for updating transfer progress
             new TransferProgressUpdatingListener(transferProgress),
             getObjectRequest.getGeneralProgressListener(),
             s3progressListener);           // Listeners included in the original request
         // The listener chain used by the low-level GetObject request.
         // This listener chain ignores any COMPLETE event, so that we could
         // delay firing the signal until the high-level download fully finishes.
         getObjectRequest.setGeneralProgressListener(
             new ProgressListenerChain(
                 new TransferCompletionFilter(), listenerChain));
 
         long startingByte = 0;
         long lastByte;
 
         long[] range = getObjectRequest.getRange();
         if (range != null
                 && range.length == 2) {
             startingByte = range[0];
             lastByte = range[1];
         } else {
             GetObjectMetadataRequest getObjectMetadataRequest = new GetObjectMetadataRequest(
                     getObjectRequest.getBucketName(), getObjectRequest.getKey());
             if (getObjectRequest.getSSECustomerKey() != null)
                 getObjectMetadataRequest.setSSECustomerKey(getObjectRequest.getSSECustomerKey());
             if (getObjectRequest.getVersionId() != null)
                 getObjectMetadataRequest.setVersionId(getObjectRequest.getVersionId());
             final ObjectMetadata objectMetadata = s3.getObjectMetadata(getObjectMetadataRequest);
 
             lastByte = objectMetadata.getContentLength() - 1;
         }
         final long origStartingByte = startingByte;
         // We still pass the unfiltered listener chain into DownloadImpl
         final DownloadImpl download = new DownloadImpl(description,
                 transferProgress, listenerChain, null, stateListener,
                 getObjectRequest, file);
 
         long totalBytesToDownload = lastByte - startingByte + 1;
         transferProgress.setTotalBytesToTransfer(totalBytesToDownload);
 
         long fileLength = -1;
         if (resumeExistingDownload) {
             if (!FileLocks.lock(file)) {
                 throw new FileLockException("Fail to lock " + file
                         + " for resume download");
             }
             try {
                 if (file.exists()) {
                     fileLength = file.length();
                     startingByte = startingByte + fileLength;
                     getObjectRequest.setRange(startingByte, lastByte);
                     transferProgress.updateProgress(Math.min(fileLength,
                             totalBytesToDownload));
                     totalBytesToDownload = lastByte - startingByte + 1;
                     if (log.isDebugEnabled()) {
                         log.debug("Resume download: totalBytesToDownload=" + totalBytesToDownload
                             + ", origStartingByte=" + origStartingByte
                             + ", startingByte=" + startingByte + ", lastByte="
                             + lastByte + ", numberOfBytesRead=" + fileLength
                             + ", file: " + file);
                     }
                 }
             } finally {
                 FileLocks.unlock(file);
             }
         }
 
         if (totalBytesToDownload < 0) {
             throw new IllegalArgumentException(
                     "Unable to determine the range for download operation.");
         }
 
         final CountDownLatch latch = new CountDownLatch(1);
         Future<?> future = threadPool.submit(
             new DownloadCallable(latch,
                 getObjectRequest, resumeExistingDownload, download, file,
                 origStartingByte, fileLength));
         download.setMonitor(new DownloadMonitor(download, future));
         latch.countDown();
         return download;
     }
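The resume bookkeeping in doDownload is simple range arithmetic: given the original range [startingByte, lastByte] and the number of bytes already on disk, the new starting byte and remaining byte count follow directly. A small, illustrative isolation of that calculation (not the SDK's code):

```java
public class ResumeRange {
    // Remaining bytes to download after 'existingFileLength' bytes of
    // the range [startingByte, lastByte] are already on disk.
    public static long remainingBytes(long startingByte, long lastByte,
                                      long existingFileLength) {
        long resumedStart = startingByte + existingFileLength;
        return lastByte - resumedStart + 1;
    }
}
```

A negative result means the file on disk is already longer than the requested range, which is why the method above rejects a negative totalBytesToDownload with an IllegalArgumentException.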

    
Downloads all objects in the virtual directory designated by the keyPrefix given to the destination directory given. All virtual subdirectories will be downloaded recursively.

If you are downloading AWS KMS-encrypted objects, you need to specify the correct region of the bucket on your client and configure AWS Signature Version 4 for added security. For more information on how to do this, see http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version

Parameters:
bucketName The bucket containing the virtual directory
keyPrefix The key prefix for the virtual directory, or null for the entire bucket. All subdirectories will be downloaded recursively.
destinationDirectory The directory to place downloaded files. Subdirectories will be created as necessary.
 
     public MultipleFileDownload downloadDirectory(String bucketName, String keyPrefix, File destinationDirectory) {
         if ( keyPrefix == null )
             keyPrefix = "";
         List<S3ObjectSummary> objectSummaries = new LinkedList<S3ObjectSummary>();
         Stack<String> commonPrefixes = new Stack<String>();
         commonPrefixes.add(keyPrefix);
         long totalSize = 0;
         // Recurse all virtual subdirectories to get a list of object summaries.
         // This is a depth-first search.
         do {
             String prefix = commonPrefixes.pop();
             ObjectListing listObjectsResponse = null;
 
             do {
                 if ( listObjectsResponse == null ) {
                     ListObjectsRequest listObjectsRequest = new ListObjectsRequest().withBucketName(bucketName)
                             .withDelimiter("/").withPrefix(prefix);
                     listObjectsResponse = s3.listObjects(listObjectsRequest);
                 } else {
                     listObjectsResponse = s3.listNextBatchOfObjects(listObjectsResponse);
                 }
 
                 for ( S3ObjectSummary s : listObjectsResponse.getObjectSummaries() ) {
                     // Skip any files that are also virtual directories, since
                     // we can't save both a directory and a file of the same
                     // name.
                     if ( !s.getKey().equals(prefix)
                             && !listObjectsResponse.getCommonPrefixes().contains(s.getKey() + "/") ) {
                         objectSummaries.add(s);
                         totalSize += s.getSize();
                     } else {
                         log.debug("Skipping download for object " + s.getKey()
                                 + " since it is also a virtual directory");
                     }
                 }
 
                 commonPrefixes.addAll(listObjectsResponse.getCommonPrefixes());
             } while ( listObjectsResponse.isTruncated() );
         } while ( !commonPrefixes.isEmpty() );
 
         /* This is the hook for adding additional progress listeners */
         ProgressListenerChain additionalListeners = new ProgressListenerChain();
 
         TransferProgress transferProgress = new TransferProgress();
         transferProgress.setTotalBytesToTransfer(totalSize);
         /*
          * Bind additional progress listeners to this
          * MultipleFileTransferProgressUpdatingListener to receive
          * ByteTransferred events from each single-file download implementation.
          */
         ProgressListener listener = new MultipleFileTransferProgressUpdatingListener(
                 transferProgress, additionalListeners);
 
         List<DownloadImpl> downloads = new ArrayList<DownloadImpl>();
 
         String description = "Downloading from " + bucketName + "/" + keyPrefix;
         final MultipleFileDownloadImpl multipleFileDownload = new MultipleFileDownloadImpl(description, transferProgress,
                 additionalListeners, keyPrefix, bucketName, downloads);
         multipleFileDownload.setMonitor(new MultipleFileTransferMonitor(multipleFileDownload, downloads));
 
         final CountDownLatch latch = new CountDownLatch(1);
         MultipleFileTransferStateChangeListener transferListener =
                new MultipleFileTransferStateChangeListener(latch, multipleFileDownload);
 
         for ( S3ObjectSummary summary : objectSummaries ) {
             // TODO: non-standard delimiters
            File f = new File(destinationDirectory, summary.getKey());
             File parentFile = f.getParentFile();
             if ( !parentFile.exists() && !parentFile.mkdirs() ) {
                 throw new RuntimeException("Couldn't create parent directories for " + f.getAbsolutePath());
             }
 
             // All the single-file downloads share the same
             // MultipleFileTransferProgressUpdatingListener and
             // MultipleFileTransferStateChangeListener
            downloads.add((DownloadImpl) doDownload(
                             new GetObjectRequest(summary.getBucketName(),
                                     summary.getKey())
                                     .<GetObjectRequest>withGeneralProgressListener(
                                             listener),
                             f,
                            transferListener, null, false));
         }
 
         if ( downloads.isEmpty() ) {
            multipleFileDownload.setState(TransferState.Completed);
             return multipleFileDownload;
         }
 
         // Notify all state changes waiting for the downloads to all be queued
         // to wake up and continue.
         latch.countDown();
         return multipleFileDownload;
     }
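The method above queues every single-file download before calling `latch.countDown()`, so the shared state-change listener never fires against a partially built transfer list. A minimal, self-contained sketch of that gating pattern (class and method names are illustrative, not SDK API):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the "queue everything, then release the latch" pattern used by
// the multi-file transfers: worker threads block on the latch until the
// coordinator has finished registering all sub-transfers.
public class LatchGate {
    public static boolean runGated() {
        final CountDownLatch allQueued = new CountDownLatch(1);
        final AtomicBoolean sawRelease = new AtomicBoolean(false);
        Thread worker = new Thread(new Runnable() {
            public void run() {
                try {
                    allQueued.await();     // block until every transfer is queued
                    sawRelease.set(true);  // then react to state changes safely
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        worker.start();
        // coordinator: register all sub-transfers here, then open the gate
        allQueued.countDown();
        try {
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sawRelease.get();
    }
}
```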

    
Uploads all files in the directory given to the bucket named, optionally recursing for all subdirectories.

S3 will overwrite any existing objects that happen to have the same key, just as when uploading individual files, so use with caution.

If you are uploading AWS KMS-encrypted objects, you need to specify the correct region of the bucket on your client and configure AWS Signature Version 4 for added security. For more information on how to do this, see http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version

Parameters:
bucketName The name of the bucket to upload objects to.
virtualDirectoryKeyPrefix The key prefix of the virtual directory to upload to. Use the null or empty string to upload files to the root of the bucket.
directory The directory to upload.
includeSubdirectories Whether to include subdirectories in the upload. If true, files found in subdirectories will be included with an appropriate concatenation to the key prefix.
 
     public MultipleFileUpload uploadDirectory(String bucketName, String virtualDirectoryKeyPrefix, File directory, boolean includeSubdirectories) {
         return uploadDirectory(bucketName, virtualDirectoryKeyPrefix, directory, includeSubdirectories, null);
     }

    
Uploads all files in the directory given to the bucket named, optionally recursing for all subdirectories.

S3 will overwrite any existing objects that happen to have the same key, just as when uploading individual files, so use with caution.

If you are uploading AWS KMS-encrypted objects, you need to specify the correct region of the bucket on your client and configure AWS Signature Version 4 for added security. For more information on how to do this, see http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version

Parameters:
bucketName The name of the bucket to upload objects to.
virtualDirectoryKeyPrefix The key prefix of the virtual directory to upload to. Use the null or empty string to upload files to the root of the bucket.
directory The directory to upload.
includeSubdirectories Whether to include subdirectories in the upload. If true, files found in subdirectories will be included with an appropriate concatenation to the key prefix.
metadataProvider A callback of type ObjectMetadataProvider which is used to provide metadata for each file being uploaded.
    public MultipleFileUpload uploadDirectory(String bucketName, String virtualDirectoryKeyPrefix, File directory, boolean includeSubdirectories, ObjectMetadataProvider metadataProvider) {
        if ( directory == null || !directory.exists() || !directory.isDirectory() ) {
            throw new IllegalArgumentException("Must provide a directory to upload");
        }
        List<File> files = new LinkedList<File>();
        listFiles(directory, files, includeSubdirectories);
        return uploadFileList(bucketName, virtualDirectoryKeyPrefix, directory, files, metadataProvider);
    }

    
Uploads all specified files to the bucket named, constructing relative keys depending on the commonParentDirectory given.

S3 will overwrite any existing objects that happen to have the same key, just as when uploading individual files, so use with caution.

If you are uploading AWS KMS-encrypted objects, you need to specify the correct region of the bucket on your client and configure AWS Signature Version 4 for added security. For more information on how to do this, see http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version

Parameters:
bucketName The name of the bucket to upload objects to.
virtualDirectoryKeyPrefix The key prefix of the virtual directory to upload to. Use the null or empty string to upload files to the root of the bucket.
directory The common parent directory of files to upload. The keys of the files in the list of files are constructed relative to this directory and the virtualDirectoryKeyPrefix.
files A list of files to upload. The keys of the files are calculated relative to the common parent directory and the virtualDirectoryKeyPrefix.
    public MultipleFileUpload uploadFileList(String bucketName, String virtualDirectoryKeyPrefix, File directory, List<File> files) {
        return uploadFileList(bucketName, virtualDirectoryKeyPrefix, directory, files, null);
    }
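The `virtualDirectoryKeyPrefix` semantics described in the parameter docs (null or empty means the bucket root; otherwise exactly one trailing delimiter) can be sketched in isolation. The helper class and method names below are illustrative, not SDK API:

```java
// Sketch of the virtualDirectoryKeyPrefix normalization performed inside
// uploadFileList: null/empty means the bucket root, and any non-empty
// prefix is given exactly one trailing "/" delimiter.
public class PrefixNormalizer {
    public static String normalize(String prefix) {
        if (prefix == null || prefix.length() == 0) {
            return "";
        }
        return prefix.endsWith("/") ? prefix : prefix + "/";
    }
}
```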

    
Uploads all specified files to the bucket named, constructing relative keys depending on the commonParentDirectory given.

S3 will overwrite any existing objects that happen to have the same key, just as when uploading individual files, so use with caution.

If you are uploading AWS KMS-encrypted objects, you need to specify the correct region of the bucket on your client and configure AWS Signature Version 4 for added security. For more information on how to do this, see http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version

Parameters:
bucketName The name of the bucket to upload objects to.
virtualDirectoryKeyPrefix The key prefix of the virtual directory to upload to. Use the null or empty string to upload files to the root of the bucket.
directory The common parent directory of files to upload. The keys of the files in the list of files are constructed relative to this directory and the virtualDirectoryKeyPrefix.
files A list of files to upload. The keys of the files are calculated relative to the common parent directory and the virtualDirectoryKeyPrefix.
metadataProvider A callback of type ObjectMetadataProvider which is used to provide metadata for each file being uploaded.
    public MultipleFileUpload uploadFileList(String bucketName, String virtualDirectoryKeyPrefix, File directory, List<File> files, ObjectMetadataProvider metadataProvider) {
        if ( directory == null || !directory.exists() || !directory.isDirectory() ) {
            throw new IllegalArgumentException("Must provide a common base directory for uploaded files");
        }
        if (virtualDirectoryKeyPrefix == null || virtualDirectoryKeyPrefix.length() == 0) {
            virtualDirectoryKeyPrefix = "";
        } else if ( !virtualDirectoryKeyPrefix.endsWith("/") ) {
            virtualDirectoryKeyPrefix = virtualDirectoryKeyPrefix + "/";
        }
        /* This is the hook for adding additional progress listeners */
        ProgressListenerChain additionalListeners = new ProgressListenerChain();
        TransferProgress progress = new TransferProgress();
        /*
         * Bind additional progress listeners to this
         * MultipleFileTransferProgressUpdatingListener to receive
         * ByteTransferred events from each single-file upload implementation.
         */
        MultipleFileTransferProgressUpdatingListener listener =
                new MultipleFileTransferProgressUpdatingListener(
                        progress, additionalListeners);
        List<UploadImpl> uploads = new LinkedList<UploadImpl>();
        MultipleFileUploadImpl multipleFileUpload = new MultipleFileUploadImpl("Uploading etc", progress, additionalListeners, virtualDirectoryKeyPrefix, bucketName, uploads);
        multipleFileUpload.setMonitor(new MultipleFileTransferMonitor(multipleFileUpload, uploads));
        final CountDownLatch latch = new CountDownLatch(1);
        MultipleFileTransferStateChangeListener transferListener =
            new MultipleFileTransferStateChangeListener(latch, multipleFileUpload);
        if (files == null || files.isEmpty()) {
            multipleFileUpload.setState(TransferState.Completed);
        } else {
            /*
             * If the absolute path for the common/base directory does NOT end
             * in a separator (which is the case for anything but root
             * directories), then we know there's still a separator between the
             * base directory and the rest of the file's path, so we increment
             * the starting position by one.
             */
            int startingPosition = directory.getAbsolutePath().length();
            if (!(directory.getAbsolutePath().endsWith(File.separator)))
                startingPosition++;
            long totalSize = 0;
            for (File f : files) {
                // Check, if file, since only files can be uploaded.
                if (f.isFile()) {
                    totalSize += f.length();
                    String key = f.getAbsolutePath()
                            .substring(startingPosition)
                            .replaceAll("\\\\", "/");
                    ObjectMetadata metadata = new ObjectMetadata();
                    // Invoke the callback if it's present.
                    // The callback allows the user to customize the metadata
                    // for each file being uploaded.
                    if (metadataProvider != null) {
                        metadataProvider.provideObjectMetadata(f, metadata);
                    }
                    // All the single-file uploads share the same
                    // MultipleFileTransferProgressUpdatingListener and
                    // MultipleFileTransferStateChangeListener
                    uploads.add((UploadImpl) doUpload(
                            new PutObjectRequest(bucketName,
                                    virtualDirectoryKeyPrefix + key, f)
                                    .withMetadata(metadata)
                                    .<PutObjectRequest>withGeneralProgressListener(
                                            listener), transferListener, null, null));
                }
            }
            progress.setTotalBytesToTransfer(totalSize);
        }
        // Notify all state changes waiting for the uploads to all be queued
        // to wake up and continue
        latch.countDown();
        return multipleFileUpload;
    }
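The key derivation inside `uploadFileList` strips the common base directory from each file's absolute path (skipping the separator unless the base path is a root directory) and normalizes Windows backslashes to the S3 delimiter. A self-contained sketch of that logic; the helper name `relativeKey` is illustrative, not SDK API:

```java
// Sketch of the object-key derivation in uploadFileList: strip the common
// base directory from the file's absolute path, then normalize Windows
// backslashes to "/" before prepending the (already normalized) key prefix.
public class KeyDerivation {
    public static String relativeKey(String baseDir, String absolutePath, String keyPrefix) {
        int start = baseDir.length();
        // unless the base path already ends in a separator (as a root
        // directory does), skip the separator between base and remainder
        if (!baseDir.endsWith("/") && !baseDir.endsWith("\\")) {
            start++;
        }
        String key = absolutePath.substring(start).replaceAll("\\\\", "/");
        return keyPrefix + key;
    }
}
```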

    
Lists files in the directory given and adds them to the result list passed in, optionally adding subdirectories recursively.
    private void listFiles(File dir, List<File> results, boolean includeSubDirectories) {
        File[] found = dir.listFiles();
        if ( found != null ) {
            for ( File f : found ) {
                if (f.isDirectory()) {
                    if (includeSubDirectories) {
                        listFiles(f, results, includeSubDirectories);
                    }
                } else {
                    results.add(f);
                }
            }
        }
    }
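As the method above shows, directories are descended into (when recursion is enabled) but never added to the results themselves, so only regular files are uploaded. A standalone, pure-JDK equivalent for experimentation (class name is illustrative):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Standalone equivalent of listFiles above: recurse into directories when
// asked, but collect only regular files, since only files can be uploaded.
public class FileWalker {
    public static List<File> collect(File dir, boolean recurse) {
        List<File> results = new ArrayList<File>();
        walk(dir, results, recurse);
        return results;
    }

    private static void walk(File dir, List<File> results, boolean recurse) {
        File[] found = dir.listFiles();
        if (found == null) return;   // not a directory, or an I/O error
        for (File f : found) {
            if (f.isDirectory()) {
                if (recurse) walk(f, results, recurse);
            } else {
                results.add(f);
            }
        }
    }
}
```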

    

Aborts any multipart uploads that were initiated before the specified date.

This method is useful for cleaning up any interrupted multipart uploads. TransferManager attempts to abort any failed uploads, but in some cases this may not be possible, such as if network connectivity is completely lost.

Parameters:
bucketName The name of the bucket containing the multipart uploads to abort.
date The date indicating which multipart uploads should be aborted.
    public void abortMultipartUploads(String bucketName, Date date)
            throws AmazonServiceException, AmazonClientException {
        MultipartUploadListing uploadListing = s3.listMultipartUploads(appendSingleObjectUserAgent(
                new ListMultipartUploadsRequest(bucketName)));
        do {
            for (MultipartUpload upload : uploadListing.getMultipartUploads()) {
                if (upload.getInitiated().compareTo(date) < 0) {
                    s3.abortMultipartUpload(appendSingleObjectUserAgent(new AbortMultipartUploadRequest(
                            bucketName, upload.getKey(), upload.getUploadId())));
                }
            }
            ListMultipartUploadsRequest request = new ListMultipartUploadsRequest(bucketName)
                .withUploadIdMarker(uploadListing.getNextUploadIdMarker())
                .withKeyMarker(uploadListing.getNextKeyMarker());
            uploadListing = s3.listMultipartUploads(appendSingleObjectUserAgent(request));
        } while (uploadListing.isTruncated());
    }
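The abort condition above keeps any upload initiated at or after the cutoff and aborts only those with `getInitiated().compareTo(date) < 0`, i.e. strictly before it. That filter can be sketched in isolation (class name is illustrative, not SDK API):

```java
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

// Sketch of the cutoff test in abortMultipartUploads: an upload is stale
// (and gets aborted) only if it was initiated strictly before the cutoff;
// uploads initiated exactly at the cutoff are kept.
public class StaleUploadFilter {
    public static List<Date> staleBefore(List<Date> initiatedDates, Date cutoff) {
        List<Date> stale = new ArrayList<Date>();
        for (Date d : initiatedDates) {
            if (d.compareTo(cutoff) < 0) {
                stale.add(d);
            }
        }
        return stale;
    }
}
```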

    
Forcefully shuts down this TransferManager instance - currently executing transfers will not be allowed to finish. It also by default shuts down the underlying Amazon S3 client.

    public void shutdownNow() {
        shutdownNow(true);
    }

    
Forcefully shuts down this TransferManager instance - currently executing transfers will not be allowed to finish. Callers should use this method when they either:
  • have already verified that their transfers have completed by checking each transfer's state
  • need to exit quickly and don't mind stopping transfers before they complete.

Callers should also remember that uploaded parts from an interrupted upload may not always be automatically cleaned up, but callers can use abortMultipartUploads(java.lang.String,java.util.Date) to clean up any upload parts.

Parameters:
shutDownS3Client Whether to shut down the underlying Amazon S3 client.
    public void shutdownNow(boolean shutDownS3Client) {
        if (shutDownThreadPools) {
            threadPool.shutdownNow();
            timedThreadPool.shutdownNow();
        }
        if (shutDownS3Client) {
            if (s3 instanceof AmazonS3Client) {
                ((AmazonS3Client)s3).shutdown();
            }
            }
        }
    }

    
Shutdown without interrupting the threads involved, so that, for example, any upload in progress can complete without throwing com.amazonaws.AbortedException.
    private void shutdownThreadPools() {
        if (shutDownThreadPools) {
            threadPool.shutdown();
            timedThreadPool.shutdown();
        }
    }
    public static <X extends AmazonWebServiceRequest> X appendSingleObjectUserAgent(X request) {
        request.getRequestClientOptions().appendUserAgent(USER_AGENT);
        return request;
    }
    public static <X extends AmazonWebServiceRequest> X appendMultipartUserAgent(X request) {
        request.getRequestClientOptions().appendUserAgent(USER_AGENT_MULTIPART);
        return request;
    }
    private static final String USER_AGENT = TransferManager.class.getName() + "/" + VersionInfoUtils.getVersion();
    private static final String USER_AGENT_MULTIPART = TransferManager.class.getName() + "_multipart/" + VersionInfoUtils.getVersion();
    private static final String DEFAULT_DELIMITER = "/";

    
Threads from the timedThreadPool should not keep the process alive once no other threads are running, so the pool uses a daemon thread factory.
    private static final ThreadFactory daemonThreadFactory = new ThreadFactory() {
        final AtomicInteger threadCount = new AtomicInteger( 0 );
        public Thread newThread(Runnable r) {
            int threadNumber = threadCount.incrementAndGet();
            Thread thread = new Thread(r);
            thread.setDaemon(true);
            thread.setName("S3TransferManagerTimedThread-" + threadNumber);
            return thread;
        }
    };
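Daemon threads never prevent JVM exit, which is exactly why the timed pool uses this factory. A self-contained factory of the same shape (names are illustrative, not SDK API) whose properties can be checked directly:

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Same shape as daemonThreadFactory above: every thread is marked as a
// daemon (so an idle timed pool cannot keep the JVM alive) and given a
// sequentially numbered, recognizable name for diagnostics.
public class DaemonFactoryDemo {
    public static final ThreadFactory FACTORY = new ThreadFactory() {
        final AtomicInteger threadCount = new AtomicInteger(0);
        public Thread newThread(Runnable r) {
            Thread thread = new Thread(r);
            thread.setDaemon(true);
            thread.setName("DemoTimedThread-" + threadCount.incrementAndGet());
            return thread;
        }
    };
}
```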

    

Schedules a new transfer to copy data from one Amazon S3 location to another Amazon S3 location. This method is non-blocking and returns immediately (i.e. before the copy has finished).

TransferManager doesn't support copying encrypted objects whose encryption materials are stored in an instruction file.

Use the returned Copy object to check if the copy is complete.

If resources are available, the copy request will begin immediately. Otherwise, the copy is scheduled and started as soon as resources become available.

Parameters:
sourceBucketName The name of the bucket from where the object is to be copied.
sourceKey The name of the Amazon S3 object.
destinationBucketName The name of the bucket to where the Amazon S3 object has to be copied.
destinationKey The name of the object in the destination bucket.
Returns:
A new Copy object to use to check the state of the copy request being processed.
Throws:
com.amazonaws.AmazonClientException If any errors are encountered in the client while making the request or handling the response.
com.amazonaws.AmazonServiceException If any errors occurred in Amazon S3 while processing the request.
    public Copy copy(String sourceBucketName, String sourceKey,
            String destinationBucketName, String destinationKey)
            throws AmazonServiceException, AmazonClientException {
        return copy(new CopyObjectRequest(sourceBucketName, sourceKey,
                destinationBucketName, destinationKey));
    }

    

Schedules a new transfer to copy data from one Amazon S3 location to another Amazon S3 location. This method is non-blocking and returns immediately (i.e. before the copy has finished).

TransferManager doesn't support copying encrypted objects whose encryption materials are stored in an instruction file.

Use the returned Copy object to check if the copy is complete.

If resources are available, the copy request will begin immediately. Otherwise, the copy is scheduled and started as soon as resources become available.

Parameters:
copyObjectRequest The request containing all the parameters for the copy.
Returns:
A new Copy object to use to check the state of the copy request being processed.
Throws:
com.amazonaws.AmazonClientException If any errors are encountered in the client while making the request or handling the response.
com.amazonaws.AmazonServiceException If any errors occurred in Amazon S3 while processing the request.
    public Copy copy(final CopyObjectRequest copyObjectRequest){
        return copy(copyObjectRequest, null);
    }

    

Schedules a new transfer to copy data from one Amazon S3 location to another Amazon S3 location. This method is non-blocking and returns immediately (i.e. before the copy has finished).

TransferManager doesn't support copying encrypted objects whose encryption materials are stored in an instruction file.

Use the returned Copy object to check if the copy is complete.

If resources are available, the copy request will begin immediately. Otherwise, the copy is scheduled and started as soon as resources become available.

Parameters:
copyObjectRequest The request containing all the parameters for the copy.
stateChangeListener The transfer state change listener to monitor the copy request
Returns:
A new Copy object to use to check the state of the copy request being processed.
Throws:
com.amazonaws.AmazonClientException If any errors are encountered in the client while making the request or handling the response.
com.amazonaws.AmazonServiceException If any errors occurred in Amazon S3 while processing the request.
    public Copy copy(final CopyObjectRequest copyObjectRequest,
            final TransferStateChangeListener stateChangeListener)
            throws AmazonServiceException, AmazonClientException {
        appendSingleObjectUserAgent(copyObjectRequest);
        assertParameterNotNull(copyObjectRequest.getSourceBucketName(),
                "The source bucket name must be specified when a copy request is initiated.");
        assertParameterNotNull(copyObjectRequest.getSourceKey(),
                "The source object key must be specified when a copy request is initiated.");
        assertParameterNotNull(
                copyObjectRequest.getDestinationBucketName(),
                "The destination bucket name must be specified when a copy request is initiated.");
        assertParameterNotNull(
                copyObjectRequest.getDestinationKey(),
                "The destination object key must be specified when a copy request is initiated.");
        String description = "Copying object from "
                + copyObjectRequest.getSourceBucketName() + "/"
                + copyObjectRequest.getSourceKey() + " to "
                + copyObjectRequest.getDestinationBucketName() + "/"
                + copyObjectRequest.getDestinationKey();
        GetObjectMetadataRequest getObjectMetadataRequest =
                new GetObjectMetadataRequest(
                        copyObjectRequest.getSourceBucketName(),
                        copyObjectRequest.getSourceKey())
                        .withSSECustomerKey(copyObjectRequest.getSourceSSECustomerKey());
        ObjectMetadata metadata = s3.getObjectMetadata(getObjectMetadataRequest);
        TransferProgress transferProgress = new TransferProgress();
        transferProgress.setTotalBytesToTransfer(metadata.getContentLength());
        ProgressListenerChain listenerChain = new ProgressListenerChain(
                new TransferProgressUpdatingListener(transferProgress));
        CopyImpl copy = new CopyImpl(description, transferProgress,
                listenerChain, stateChangeListener);
        CopyCallable copyCallable = new CopyCallable(this, copy,
                copyObjectRequest, metadata, listenerChain);
        CopyMonitor watcher = CopyMonitor.create(this, copy, threadPool,
                copyCallable, copyObjectRequest, listenerChain);
        watcher.setTimedThreadPool(timedThreadPool);
        copy.setMonitor(watcher);
        return copy;
    }

    
Resumes an upload operation. This upload operation uses the same configuration TransferManagerConfiguration as the original upload. Any data already uploaded will be skipped, and only the remaining data will be uploaded to Amazon S3.

Parameters:
persistableUpload the upload to resume.
Returns:
A new Upload object to use to check the state of the upload, listen for progress notifications, and otherwise manage the upload.
Throws:
com.amazonaws.AmazonClientException If any errors are encountered in the client while making the request or handling the response.
com.amazonaws.AmazonServiceException If any errors occurred in Amazon S3 while processing the request.
    public Upload resumeUpload(PersistableUpload persistableUpload) {
        assertParameterNotNull(persistableUpload,
                "PauseUpload is mandatory to resume a upload.");
        configuration.setMinimumUploadPartSize(persistableUpload.getPartSize());
        configuration.setMultipartUploadThreshold(persistableUpload
                .getMutlipartUploadThreshold());
        return doUpload(new PutObjectRequest(persistableUpload.getBucketName(),
                persistableUpload.getKey(), new File(persistableUpload.getFile())), null, null,
                persistableUpload);
    }

    
Resumes a download operation. This download operation uses the same configuration as the original download. Any data already fetched will be skipped, and only the remaining data is retrieved from Amazon S3.

Parameters:
persistableDownload the download to resume.
Returns:
A new Download object to use to check the state of the download, listen for progress notifications, and otherwise manage the download.
Throws:
com.amazonaws.AmazonClientException If any errors are encountered in the client while making the request or handling the response.
com.amazonaws.AmazonServiceException If any errors occurred in Amazon S3 while processing the request.
    public Download resumeDownload(PersistableDownload persistableDownload) {
        assertParameterNotNull(persistableDownload,
                "PausedDownload is mandatory to resume a download.");
        GetObjectRequest request = new GetObjectRequest(
                persistableDownload.getBucketName(), persistableDownload.getKey(),
                persistableDownload.getVersionId());
        if (persistableDownload.getRange() != null
                && persistableDownload.getRange().length == 2) {
            long[] range = persistableDownload.getRange();
            request.setRange(range[0], range[1]);
        }
        request.setRequesterPays(persistableDownload.isRequesterPays());
        request.setResponseHeaders(persistableDownload.getResponseHeaders());
        return doDownload(request, new File(persistableDownload.getFile()), null, null,
                APPEND_MODE);
    }

    

Asserts that the specified parameter value is not null and if it is, throws an IllegalArgumentException with the specified error message.

Parameters:
parameterValue The parameter value being checked.
errorMessage The error message to include in the IllegalArgumentException if the specified parameter is null.
    private void assertParameterNotNull(Object parameterValue, String errorMessage) {
        if (parameterValue == null) throw new IllegalArgumentException(errorMessage);
    }
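The guard clause above fails fast with a caller-friendly message instead of letting a NullPointerException surface deep inside the request path. The same pattern, standalone (class name is illustrative, not SDK API):

```java
// Same guard-clause shape as assertParameterNotNull above: validate inputs
// at the public API boundary and report which parameter was missing.
public class ParamCheck {
    public static void assertNotNull(Object parameterValue, String errorMessage) {
        if (parameterValue == null) {
            throw new IllegalArgumentException(errorMessage);
        }
    }
}
```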

    
Releases all resources created by TransferManager before it is garbage collected.
    @Override
    protected void finalize() throws Throwable {
        shutdownThreadPools();
    }