   /*
    * Copyright 2010-2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    * 
    * Licensed under the Apache License, Version 2.0 (the "License").
    * You may not use this file except in compliance with the License.
    * A copy of the License is located at
    * 
    *  http://aws.amazon.com/apache2.0
    * 
    * or in the "license" file accompanying this file. This file is distributed
    * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
    * express or implied. See the License for the specific language governing
    * permissions and limitations under the License.
   */
  package com.amazonaws.services.machinelearning;
  
  import java.net.*;
  import java.util.*;

  import org.apache.commons.logging.*;

  import com.amazonaws.*;
  import com.amazonaws.auth.*;
  import com.amazonaws.handlers.*;
  import com.amazonaws.http.*;
  import com.amazonaws.internal.*;
  import com.amazonaws.metrics.*;
  import com.amazonaws.transform.*;
  import com.amazonaws.util.*;
  import com.amazonaws.util.AWSRequestMetrics.Field;
  import com.amazonaws.services.machinelearning.model.*;
  import com.amazonaws.services.machinelearning.model.transform.*;
  
Client for accessing AmazonMachineLearning. All service calls made using this client are blocking, and will not return until the service call completes.

Definition of the public APIs exposed by Amazon Machine Learning

  
  public class AmazonMachineLearningClient extends AmazonWebServiceClient implements AmazonMachineLearning {

    
Provider for AWS credentials.
  
  
      private AWSCredentialsProvider awsCredentialsProvider;

      private static final Log log = LogFactory.getLog(AmazonMachineLearning.class);

    
List of exception unmarshallers for all AmazonMachineLearning exceptions.
  
      protected List<JsonErrorUnmarshaller> jsonErrorUnmarshallers;

    
Constructs a new client to invoke service methods on AmazonMachineLearning. A credentials provider chain will be used that searches for credentials in this order:
  • Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_KEY
  • Java System Properties - aws.accessKeyId and aws.secretKey
  • Instance profile credentials delivered through the Amazon EC2 metadata service

All service calls made using this new client object are blocking, and will not return until the service call completes.

  
      public AmazonMachineLearningClient() {
          this(new DefaultAWSCredentialsProviderChain(), new ClientConfiguration());
      }
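A minimal construction sketch for the two simplest constructors above. This is illustrative only; `BasicAWSCredentials` is from `com.amazonaws.auth`, and the key strings are placeholders, not real credentials:

```java
// Default chain: environment variables, then system properties, then EC2 instance profile.
AmazonMachineLearning client = new AmazonMachineLearningClient();

// Or supply credentials explicitly (key strings below are placeholders):
AWSCredentials credentials = new BasicAWSCredentials("ACCESS_KEY_ID", "SECRET_KEY");
AmazonMachineLearning explicitClient = new AmazonMachineLearningClient(credentials);
```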

    
Constructs a new client to invoke service methods on AmazonMachineLearning. A credentials provider chain will be used that searches for credentials in this order:
  • Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_KEY
  • Java System Properties - aws.accessKeyId and aws.secretKey
  • Instance profile credentials delivered through the Amazon EC2 metadata service

All service calls made using this new client object are blocking, and will not return until the service call completes.

Parameters:
clientConfiguration The client configuration options controlling how this client connects to AmazonMachineLearning (ex: proxy settings, retry counts, etc.).
See also:
com.amazonaws.auth.DefaultAWSCredentialsProviderChain
  
      public AmazonMachineLearningClient(ClientConfiguration clientConfiguration) {
         this(new DefaultAWSCredentialsProviderChain(), clientConfiguration);
     }

    
Constructs a new client to invoke service methods on AmazonMachineLearning using the specified AWS account credentials.

All service calls made using this new client object are blocking, and will not return until the service call completes.

Parameters:
awsCredentials The AWS credentials (access key ID and secret key) to use when authenticating with AWS services.
 
     public AmazonMachineLearningClient(AWSCredentials awsCredentials) {
         this(awsCredentials, new ClientConfiguration());
     }

    
Constructs a new client to invoke service methods on AmazonMachineLearning using the specified AWS account credentials and client configuration options.

All service calls made using this new client object are blocking, and will not return until the service call completes.

Parameters:
awsCredentials The AWS credentials (access key ID and secret key) to use when authenticating with AWS services.
clientConfiguration The client configuration options controlling how this client connects to AmazonMachineLearning (ex: proxy settings, retry counts, etc.).
 
     public AmazonMachineLearningClient(AWSCredentials awsCredentials, ClientConfiguration clientConfiguration) {
         super(adjustClientConfiguration(clientConfiguration));
         
         this.awsCredentialsProvider = new StaticCredentialsProvider(awsCredentials);
         
         init();
     }

    
Constructs a new client to invoke service methods on AmazonMachineLearning using the specified AWS account credentials provider.

All service calls made using this new client object are blocking, and will not return until the service call completes.

Parameters:
awsCredentialsProvider The AWS credentials provider which will provide credentials to authenticate requests with AWS services.
 
     public AmazonMachineLearningClient(AWSCredentialsProvider awsCredentialsProvider) {
         this(awsCredentialsProvider, new ClientConfiguration());
     }

    
Constructs a new client to invoke service methods on AmazonMachineLearning using the specified AWS account credentials provider and client configuration options.

All service calls made using this new client object are blocking, and will not return until the service call completes.

Parameters:
awsCredentialsProvider The AWS credentials provider which will provide credentials to authenticate requests with AWS services.
clientConfiguration The client configuration options controlling how this client connects to AmazonMachineLearning (ex: proxy settings, retry counts, etc.).
 
     public AmazonMachineLearningClient(AWSCredentialsProvider awsCredentialsProvider, ClientConfiguration clientConfiguration) {
         this(awsCredentialsProvider, clientConfiguration, null);
     }

    
Constructs a new client to invoke service methods on AmazonMachineLearning using the specified AWS account credentials provider, client configuration options and request metric collector.

All service calls made using this new client object are blocking, and will not return until the service call completes.

Parameters:
awsCredentialsProvider The AWS credentials provider which will provide credentials to authenticate requests with AWS services.
clientConfiguration The client configuration options controlling how this client connects to AmazonMachineLearning (ex: proxy settings, retry counts, etc.).
requestMetricCollector optional request metric collector
 
     public AmazonMachineLearningClient(AWSCredentialsProvider awsCredentialsProvider,
             ClientConfiguration clientConfiguration,
             RequestMetricCollector requestMetricCollector) {
         super(adjustClientConfiguration(clientConfiguration), requestMetricCollector);
         
         this.awsCredentialsProvider = awsCredentialsProvider;
         
         init();
     }
 
     private void init() {
         jsonErrorUnmarshallers = new ArrayList<JsonErrorUnmarshaller>();
         jsonErrorUnmarshallers.add(new JsonErrorUnmarshaller());
         
         // calling this.setEndPoint(...) will also modify the signer accordingly
         this.setEndpoint("machinelearning.us-east-1.amazonaws.com");
         
         HandlerChainFactory chainFactory = new HandlerChainFactory();
         requestHandlers.addAll(chainFactory.newRequestHandlerChain(
                 "/com/amazonaws/services/machinelearning/request.handlers"));
         requestHandler2s.addAll(chainFactory.newRequestHandler2Chain(
                 "/com/amazonaws/services/machinelearning/request.handler2s"));
     }

     private static ClientConfiguration adjustClientConfiguration(ClientConfiguration orig) {
         ClientConfiguration config = orig;
         
         return config;
     }

    

Updates the EvaluationName of an Evaluation .

You can use the GetEvaluation operation to view the contents of the updated data element.

Parameters:
updateEvaluationRequest Container for the necessary parameters to execute the UpdateEvaluation service method on AmazonMachineLearning.
Returns:
The response from the UpdateEvaluation service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.ResourceNotFoundException
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
 
     public UpdateEvaluationResult updateEvaluation(UpdateEvaluationRequest updateEvaluationRequest) {
         ExecutionContext executionContext = createExecutionContext(updateEvaluationRequest);
         AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
         awsRequestMetrics.startEvent(Field.ClientExecuteTime);
         Request<UpdateEvaluationRequest> request = null;
         Response<UpdateEvaluationResult> response = null;
         
         try {
             awsRequestMetrics.startEvent(Field.RequestMarshallTime);
             try {
                 request = new UpdateEvaluationRequestMarshaller().marshall(super.beforeMarshalling(updateEvaluationRequest));
                 // Binds the request metrics to the current request.
                 request.setAWSRequestMetrics(awsRequestMetrics);
             } finally {
                 awsRequestMetrics.endEvent(Field.RequestMarshallTime);
             }

             Unmarshaller<UpdateEvaluationResult, JsonUnmarshallerContext> unmarshaller =
                 new UpdateEvaluationResultJsonUnmarshaller();
             JsonResponseHandler<UpdateEvaluationResult> responseHandler =
                 new JsonResponseHandler<UpdateEvaluationResult>(unmarshaller);

             response = invoke(request, responseHandler, executionContext);

             return response.getAwsResponse();
         } finally {
             
             endClientExecution(awsRequestMetrics, request, response);
         }
     }

    

Creates a new MLModel using the data files and the recipe as information sources.

An MLModel is nearly immutable. Users can only update the MLModelName and the ScoreThreshold in an MLModel without creating a new MLModel .

CreateMLModel is an asynchronous operation. In response to CreateMLModel , Amazon Machine Learning (Amazon ML) immediately returns and sets the MLModel status to PENDING . After the MLModel is created and ready for use, Amazon ML sets the status to COMPLETED .

You can use the GetMLModel operation to check progress of the MLModel during the creation operation.

CreateMLModel requires a DataSource with computed statistics, which can be created by setting ComputeStatistics to true in CreateDataSourceFromRDS, CreateDataSourceFromS3, or CreateDataSourceFromRedshift operations.

Parameters:
createMLModelRequest Container for the necessary parameters to execute the CreateMLModel service method on AmazonMachineLearning.
Returns:
The response from the CreateMLModel service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.IdempotentParameterMismatchException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
 
     public CreateMLModelResult createMLModel(CreateMLModelRequest createMLModelRequest) {
         ExecutionContext executionContext = createExecutionContext(createMLModelRequest);
         AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
         awsRequestMetrics.startEvent(Field.ClientExecuteTime);
         Request<CreateMLModelRequest> request = null;
         Response<CreateMLModelResult> response = null;
         
         try {
             awsRequestMetrics.startEvent(Field.RequestMarshallTime);
             try {
                 request = new CreateMLModelRequestMarshaller().marshall(super.beforeMarshalling(createMLModelRequest));
                 // Binds the request metrics to the current request.
                 request.setAWSRequestMetrics(awsRequestMetrics);
             } finally {
                 awsRequestMetrics.endEvent(Field.RequestMarshallTime);
             }

             Unmarshaller<CreateMLModelResult, JsonUnmarshallerContext> unmarshaller =
                 new CreateMLModelResultJsonUnmarshaller();
             JsonResponseHandler<CreateMLModelResult> responseHandler =
                 new JsonResponseHandler<CreateMLModelResult>(unmarshaller);

             response = invoke(request, responseHandler, executionContext);

             return response.getAwsResponse();
         } finally {
             
             endClientExecution(awsRequestMetrics, request, response);
         }
     }
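Because CreateMLModel is asynchronous, callers typically poll GetMLModel until the status leaves PENDING. The wait loop itself can be sketched independently of the service; the status iterator below is a stand-in for repeated `getMLModel(...).getStatus()` calls:

```java
import java.util.Iterator;

class StatusPoller {
    /**
     * Consumes successive status strings (as GetMLModel would report them)
     * until a terminal one (COMPLETED or FAILED) appears, and returns it.
     */
    static String waitForTerminal(Iterator<String> statuses) {
        String status = "PENDING";
        while (statuses.hasNext()) {
            status = statuses.next();
            if (status.equals("COMPLETED") || status.equals("FAILED")) {
                return status; // model is ready for use, or creation failed
            }
            // In real code, sleep between successive GetMLModel calls here.
        }
        return status;
    }
}
```

In a real client loop each iteration would issue a fresh GetMLModel request and back off between polls.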

    

Creates a real-time endpoint for the MLModel . The endpoint contains the URI of the MLModel ; that is, the location to send real-time prediction requests for the specified MLModel .

Parameters:
createRealtimeEndpointRequest Container for the necessary parameters to execute the CreateRealtimeEndpoint service method on AmazonMachineLearning.
Returns:
The response from the CreateRealtimeEndpoint service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.ResourceNotFoundException
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
 
     public CreateRealtimeEndpointResult createRealtimeEndpoint(CreateRealtimeEndpointRequest createRealtimeEndpointRequest) {
         ExecutionContext executionContext = createExecutionContext(createRealtimeEndpointRequest);
         AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
         awsRequestMetrics.startEvent(Field.ClientExecuteTime);
         Request<CreateRealtimeEndpointRequest> request = null;
         Response<CreateRealtimeEndpointResult> response = null;
         
         try {
             awsRequestMetrics.startEvent(Field.RequestMarshallTime);
             try {
                 request = new CreateRealtimeEndpointRequestMarshaller().marshall(super.beforeMarshalling(createRealtimeEndpointRequest));
                 // Binds the request metrics to the current request.
                 request.setAWSRequestMetrics(awsRequestMetrics);
             } finally {
                 awsRequestMetrics.endEvent(Field.RequestMarshallTime);
             }

             Unmarshaller<CreateRealtimeEndpointResult, JsonUnmarshallerContext> unmarshaller =
                 new CreateRealtimeEndpointResultJsonUnmarshaller();
             JsonResponseHandler<CreateRealtimeEndpointResult> responseHandler =
                 new JsonResponseHandler<CreateRealtimeEndpointResult>(unmarshaller);

             response = invoke(request, responseHandler, executionContext);

             return response.getAwsResponse();
         } finally {
             
             endClientExecution(awsRequestMetrics, request, response);
         }
     }

    

Creates a DataSource object. A DataSource references data that can be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

CreateDataSourceFromS3 is an asynchronous operation. In response to CreateDataSourceFromS3 , Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING . After the DataSource is created and ready for use, Amazon ML sets the Status parameter to COMPLETED . DataSource in COMPLETED or PENDING status can only be used to perform CreateMLModel, CreateEvaluation or CreateBatchPrediction operations.

If Amazon ML cannot accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.

The observation data used in a DataSource should be ready to use; that is, it should have a consistent structure, and missing data values should be kept to a minimum. The observation data must reside in one or more CSV files in an Amazon Simple Storage Service (Amazon S3) bucket, along with a schema that describes the data items by name and type. The same schema must be used for all of the data files referenced by the DataSource .

After the DataSource has been created, it's ready to use in evaluations and batch predictions. If you plan to use the DataSource to train an MLModel , the DataSource requires another item: a recipe. A recipe describes the observation variables that participate in training an MLModel . A recipe describes how each input variable will be used in training. Will the variable be included or excluded from training? Will the variable be manipulated, for example, combined with another variable, or split apart into word combinations? The recipe provides answers to these questions. For more information, see the Amazon Machine Learning Developer Guide .

Parameters:
createDataSourceFromS3Request Container for the necessary parameters to execute the CreateDataSourceFromS3 service method on AmazonMachineLearning.
Returns:
The response from the CreateDataSourceFromS3 service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.IdempotentParameterMismatchException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
 
     public CreateDataSourceFromS3Result createDataSourceFromS3(CreateDataSourceFromS3Request createDataSourceFromS3Request) {
         ExecutionContext executionContext = createExecutionContext(createDataSourceFromS3Request);
         AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
         awsRequestMetrics.startEvent(Field.ClientExecuteTime);
         Request<CreateDataSourceFromS3Request> request = null;
         Response<CreateDataSourceFromS3Result> response = null;
         
         try {
             awsRequestMetrics.startEvent(Field.RequestMarshallTime);
             try {
                 request = new CreateDataSourceFromS3RequestMarshaller().marshall(super.beforeMarshalling(createDataSourceFromS3Request));
                 // Binds the request metrics to the current request.
                 request.setAWSRequestMetrics(awsRequestMetrics);
             } finally {
                 awsRequestMetrics.endEvent(Field.RequestMarshallTime);
             }

             Unmarshaller<CreateDataSourceFromS3Result, JsonUnmarshallerContext> unmarshaller =
                 new CreateDataSourceFromS3ResultJsonUnmarshaller();
             JsonResponseHandler<CreateDataSourceFromS3Result> responseHandler =
                 new JsonResponseHandler<CreateDataSourceFromS3Result>(unmarshaller);

             response = invoke(request, responseHandler, executionContext);

             return response.getAwsResponse();
         } finally {
             
             endClientExecution(awsRequestMetrics, request, response);
         }
     }
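A hypothetical request sketch, assuming the `withXxx` builders on `CreateDataSourceFromS3Request` and `S3DataSpec` from the model package; the IDs, bucket, and schema path are placeholders, and `client` is an already-constructed AmazonMachineLearning client:

```java
// All identifiers and S3 locations below are placeholders.
CreateDataSourceFromS3Request request = new CreateDataSourceFromS3Request()
    .withDataSourceId("ds-example-id")
    .withDataSourceName("example training data")
    .withDataSpec(new S3DataSpec()
        .withDataLocationS3("s3://example-bucket/training.csv")
        .withDataSchemaLocationS3("s3://example-bucket/training.csv.schema"))
    .withComputeStatistics(true); // required if this DataSource will train an MLModel

CreateDataSourceFromS3Result result = client.createDataSourceFromS3(request);
// The call returns immediately with the DataSource in PENDING status;
// poll getDataSource(...) until it reaches COMPLETED.
```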

    

Assigns the DELETED status to an MLModel , rendering it unusable.

After using the DeleteMLModel operation, you can use the GetMLModel operation to verify that the status of the MLModel changed to DELETED.

Caution: The result of the DeleteMLModel operation is irreversible.

Parameters:
deleteMLModelRequest Container for the necessary parameters to execute the DeleteMLModel service method on AmazonMachineLearning.
Returns:
The response from the DeleteMLModel service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.ResourceNotFoundException
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
 
     public DeleteMLModelResult deleteMLModel(DeleteMLModelRequest deleteMLModelRequest) {
         ExecutionContext executionContext = createExecutionContext(deleteMLModelRequest);
         AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
         awsRequestMetrics.startEvent(Field.ClientExecuteTime);
         Request<DeleteMLModelRequest> request = null;
         Response<DeleteMLModelResult> response = null;
         
         try {
             awsRequestMetrics.startEvent(Field.RequestMarshallTime);
             try {
                 request = new DeleteMLModelRequestMarshaller().marshall(super.beforeMarshalling(deleteMLModelRequest));
                 // Binds the request metrics to the current request.
                 request.setAWSRequestMetrics(awsRequestMetrics);
             } finally {
                 awsRequestMetrics.endEvent(Field.RequestMarshallTime);
             }

             Unmarshaller<DeleteMLModelResult, JsonUnmarshallerContext> unmarshaller =
                 new DeleteMLModelResultJsonUnmarshaller();
             JsonResponseHandler<DeleteMLModelResult> responseHandler =
                 new JsonResponseHandler<DeleteMLModelResult>(unmarshaller);

             response = invoke(request, responseHandler, executionContext);

             return response.getAwsResponse();
         } finally {
             
             endClientExecution(awsRequestMetrics, request, response);
         }
     }

    

Generates a prediction for the observation using the specified MLModel .

NOTE: Not all response parameters will be populated, because this depends on the type of the requested model.

Parameters:
predictRequest Container for the necessary parameters to execute the Predict service method on AmazonMachineLearning.
Returns:
The response from the Predict service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.ResourceNotFoundException
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.PredictorNotMountedException
com.amazonaws.services.machinelearning.model.LimitExceededException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
 
     public PredictResult predict(PredictRequest predictRequest) {
         ExecutionContext executionContext = createExecutionContext(predictRequest);
         AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
         awsRequestMetrics.startEvent(Field.ClientExecuteTime);
         Request<PredictRequest> request = null;
         Response<PredictResult> response = null;
         
         try {
             awsRequestMetrics.startEvent(Field.RequestMarshallTime);
             try {
                 request = new PredictRequestMarshaller().marshall(super.beforeMarshalling(predictRequest));
                 // Binds the request metrics to the current request.
                 request.setAWSRequestMetrics(awsRequestMetrics);
             } finally {
                 awsRequestMetrics.endEvent(Field.RequestMarshallTime);
             }

             Unmarshaller<PredictResult, JsonUnmarshallerContext> unmarshaller =
                 new PredictResultJsonUnmarshaller();
             JsonResponseHandler<PredictResult> responseHandler =
                 new JsonResponseHandler<PredictResult>(unmarshaller);

             response = invoke(request, responseHandler, executionContext);

             return response.getAwsResponse();
         } finally {
             
             endClientExecution(awsRequestMetrics, request, response);
         }
     }
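A hypothetical usage sketch, assuming the `withXxx` builders on `PredictRequest` from the model package. The model ID and record keys are placeholders; the endpoint URI comes from an earlier CreateRealtimeEndpoint call, and `client` is an already-constructed AmazonMachineLearning client:

```java
// Record keys must match the variable names in the DataSource schema (placeholders here).
Map<String, String> record = new HashMap<String, String>();
record.put("feature1", "value1");
record.put("feature2", "value2");

PredictRequest request = new PredictRequest()
    .withMLModelId("ml-example-id")                              // placeholder model ID
    .withPredictEndpoint("https://realtime.machinelearning.us-east-1.amazonaws.com")
    .withRecord(record);

PredictResult result = client.predict(request);
// Which fields of result.getPrediction() are populated depends on the model type.
```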

    

Returns a list of BatchPrediction operations that match the search criteria in the request.

Parameters:
describeBatchPredictionsRequest Container for the necessary parameters to execute the DescribeBatchPredictions service method on AmazonMachineLearning.
Returns:
The response from the DescribeBatchPredictions service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
 
     public DescribeBatchPredictionsResult describeBatchPredictions(DescribeBatchPredictionsRequest describeBatchPredictionsRequest) {
         ExecutionContext executionContext = createExecutionContext(describeBatchPredictionsRequest);
         AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
         awsRequestMetrics.startEvent(Field.ClientExecuteTime);
         Request<DescribeBatchPredictionsRequest> request = null;
         Response<DescribeBatchPredictionsResult> response = null;
         
         try {
             awsRequestMetrics.startEvent(Field.RequestMarshallTime);
             try {
                 request = new DescribeBatchPredictionsRequestMarshaller().marshall(super.beforeMarshalling(describeBatchPredictionsRequest));
                 // Binds the request metrics to the current request.
                 request.setAWSRequestMetrics(awsRequestMetrics);
             } finally {
                 awsRequestMetrics.endEvent(Field.RequestMarshallTime);
             }

             Unmarshaller<DescribeBatchPredictionsResult, JsonUnmarshallerContext> unmarshaller =
                 new DescribeBatchPredictionsResultJsonUnmarshaller();
             JsonResponseHandler<DescribeBatchPredictionsResult> responseHandler =
                 new JsonResponseHandler<DescribeBatchPredictionsResult>(unmarshaller);

             response = invoke(request, responseHandler, executionContext);

             return response.getAwsResponse();
         } finally {
             
             endClientExecution(awsRequestMetrics, request, response);
         }
     }
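A hypothetical pagination sketch, assuming `DescribeBatchPredictionsResult` exposes `getResults()` and `getNextToken()` and the request exposes matching `withLimit`/`withNextToken` builders; `client` is an already-constructed AmazonMachineLearning client:

```java
// Page through all matching BatchPrediction records (limit value is arbitrary).
DescribeBatchPredictionsRequest request = new DescribeBatchPredictionsRequest().withLimit(100);
String nextToken = null;
do {
    DescribeBatchPredictionsResult page =
        client.describeBatchPredictions(request.withNextToken(nextToken));
    for (BatchPrediction bp : page.getResults()) {
        System.out.println(bp.getBatchPredictionId());
    }
    nextToken = page.getNextToken();   // null when there are no more pages
} while (nextToken != null);
```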

    

Returns an Evaluation that includes metadata as well as the current status of the Evaluation .

Parameters:
getEvaluationRequest Container for the necessary parameters to execute the GetEvaluation service method on AmazonMachineLearning.
Returns:
The response from the GetEvaluation service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.ResourceNotFoundException
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
 
     public GetEvaluationResult getEvaluation(GetEvaluationRequest getEvaluationRequest) {
         ExecutionContext executionContext = createExecutionContext(getEvaluationRequest);
         AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
         awsRequestMetrics.startEvent(Field.ClientExecuteTime);
         Request<GetEvaluationRequest> request = null;
         Response<GetEvaluationResult> response = null;
         
         try {
             awsRequestMetrics.startEvent(Field.RequestMarshallTime);
             try {
                 request = new GetEvaluationRequestMarshaller().marshall(super.beforeMarshalling(getEvaluationRequest));
                 // Binds the request metrics to the current request.
                 request.setAWSRequestMetrics(awsRequestMetrics);
             } finally {
                 awsRequestMetrics.endEvent(Field.RequestMarshallTime);
             }

             Unmarshaller<GetEvaluationResult, JsonUnmarshallerContext> unmarshaller =
                 new GetEvaluationResultJsonUnmarshaller();
             JsonResponseHandler<GetEvaluationResult> responseHandler =
                 new JsonResponseHandler<GetEvaluationResult>(unmarshaller);

             response = invoke(request, responseHandler, executionContext);

             return response.getAwsResponse();
         } finally {
             
             endClientExecution(awsRequestMetrics, request, response);
         }
     }

    

Updates the MLModelName and the ScoreThreshold of an MLModel .

You can use the GetMLModel operation to view the contents of the updated data element.

Parameters:
updateMLModelRequest Container for the necessary parameters to execute the UpdateMLModel service method on AmazonMachineLearning.
Returns:
The response from the UpdateMLModel service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.ResourceNotFoundException
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
 
     public UpdateMLModelResult updateMLModel(UpdateMLModelRequest updateMLModelRequest) {
         ExecutionContext executionContext = createExecutionContext(updateMLModelRequest);
         AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
         awsRequestMetrics.startEvent(Field.ClientExecuteTime);
         Request<UpdateMLModelRequest> request = null;
         Response<UpdateMLModelResult> response = null;
         
         try {
             awsRequestMetrics.startEvent(Field.RequestMarshallTime);
             try {
                 request = new UpdateMLModelRequestMarshaller().marshall(super.beforeMarshalling(updateMLModelRequest));
                 // Binds the request metrics to the current request.
                 request.setAWSRequestMetrics(awsRequestMetrics);
             } finally {
                 awsRequestMetrics.endEvent(Field.RequestMarshallTime);
             }

             Unmarshaller<UpdateMLModelResult, JsonUnmarshallerContext> unmarshaller =
                 new UpdateMLModelResultJsonUnmarshaller();
             JsonResponseHandler<UpdateMLModelResult> responseHandler =
                 new JsonResponseHandler<UpdateMLModelResult>(unmarshaller);

             response = invoke(request, responseHandler, executionContext);

             return response.getAwsResponse();
         } finally {
             
             endClientExecution(awsRequestMetrics, request, response);
         }
     }

    

Returns a DataSource that includes metadata and data file information, as well as the current status of the DataSource .

GetDataSource provides results in normal or verbose format. The verbose format adds the schema description and the list of files pointed to by the DataSource to the normal format.

Parameters:
getDataSourceRequest Container for the necessary parameters to execute the GetDataSource service method on AmazonMachineLearning.
Returns:
The response from the GetDataSource service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.ResourceNotFoundException
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
 
     public GetDataSourceResult getDataSource(GetDataSourceRequest getDataSourceRequest) {
         ExecutionContext executionContext = createExecutionContext(getDataSourceRequest);
         AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
         awsRequestMetrics.startEvent(Field.ClientExecuteTime);
         Request<GetDataSourceRequest> request = null;
         Response<GetDataSourceResult> response = null;
         
         try {
             awsRequestMetrics.startEvent(Field.RequestMarshallTime);
             try {
                 request = new GetDataSourceRequestMarshaller().marshall(super.beforeMarshalling(getDataSourceRequest));
                 // Binds the request metrics to the current request.
                 request.setAWSRequestMetrics(awsRequestMetrics);
             } finally {
                 awsRequestMetrics.endEvent(Field.RequestMarshallTime);
             }
 
             Unmarshaller<GetDataSourceResult, JsonUnmarshallerContext> unmarshaller =
                 new GetDataSourceResultJsonUnmarshaller();
             JsonResponseHandler<GetDataSourceResult> responseHandler =
                 new JsonResponseHandler<GetDataSourceResult>(unmarshaller);
 
             response = invoke(request, responseHandler, executionContext);
 
             return response.getAwsResponse();
         } finally {
             
             endClientExecution(awsRequestMetrics, request, response);
         }
     }

    

Returns a list of DataSource that match the search criteria in the request.

Parameters:
describeDataSourcesRequest Container for the necessary parameters to execute the DescribeDataSources service method on AmazonMachineLearning.
Returns:
The response from the DescribeDataSources service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
 
     public DescribeDataSourcesResult describeDataSources(DescribeDataSourcesRequest describeDataSourcesRequest) {
         ExecutionContext executionContext = createExecutionContext(describeDataSourcesRequest);
         AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
         awsRequestMetrics.startEvent(Field.ClientExecuteTime);
         Request<DescribeDataSourcesRequest> request = null;
         Response<DescribeDataSourcesResult> response = null;
         
         try {
             awsRequestMetrics.startEvent(Field.RequestMarshallTime);
             try {
                 request = new DescribeDataSourcesRequestMarshaller().marshall(super.beforeMarshalling(describeDataSourcesRequest));
                 // Binds the request metrics to the current request.
                 request.setAWSRequestMetrics(awsRequestMetrics);
             } finally {
                 awsRequestMetrics.endEvent(Field.RequestMarshallTime);
             }
 
             Unmarshaller<DescribeDataSourcesResult, JsonUnmarshallerContext> unmarshaller =
                 new DescribeDataSourcesResultJsonUnmarshaller();
             JsonResponseHandler<DescribeDataSourcesResult> responseHandler =
                 new JsonResponseHandler<DescribeDataSourcesResult>(unmarshaller);
 
             response = invoke(request, responseHandler, executionContext);
 
             return response.getAwsResponse();
         } finally {
             
             endClientExecution(awsRequestMetrics, request, response);
         }
     }

    

Assigns the DELETED status to an Evaluation, rendering it unusable.

After invoking the DeleteEvaluation operation, you can use the GetEvaluation operation to verify that the status of the Evaluation changed to DELETED.

Caution: The results of the DeleteEvaluation operation are irreversible.

Parameters:
deleteEvaluationRequest Container for the necessary parameters to execute the DeleteEvaluation service method on AmazonMachineLearning.
Returns:
The response from the DeleteEvaluation service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.ResourceNotFoundException
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
 
     public DeleteEvaluationResult deleteEvaluation(DeleteEvaluationRequest deleteEvaluationRequest) {
         ExecutionContext executionContext = createExecutionContext(deleteEvaluationRequest);
         AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
         awsRequestMetrics.startEvent(Field.ClientExecuteTime);
         Request<DeleteEvaluationRequest> request = null;
         Response<DeleteEvaluationResult> response = null;
         
         try {
             awsRequestMetrics.startEvent(Field.RequestMarshallTime);
             try {
                 request = new DeleteEvaluationRequestMarshaller().marshall(super.beforeMarshalling(deleteEvaluationRequest));
                 // Binds the request metrics to the current request.
                 request.setAWSRequestMetrics(awsRequestMetrics);
             } finally {
                 awsRequestMetrics.endEvent(Field.RequestMarshallTime);
             }
 
             Unmarshaller<DeleteEvaluationResult, JsonUnmarshallerContext> unmarshaller =
                 new DeleteEvaluationResultJsonUnmarshaller();
             JsonResponseHandler<DeleteEvaluationResult> responseHandler =
                 new JsonResponseHandler<DeleteEvaluationResult>(unmarshaller);
 
             response = invoke(request, responseHandler, executionContext);
 
             return response.getAwsResponse();
         } finally {
             
             endClientExecution(awsRequestMetrics, request, response);
         }
     }

    

Updates the BatchPredictionName of a BatchPrediction.

You can use the GetBatchPrediction operation to view the contents of the updated data element.

Parameters:
updateBatchPredictionRequest Container for the necessary parameters to execute the UpdateBatchPrediction service method on AmazonMachineLearning.
Returns:
The response from the UpdateBatchPrediction service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.ResourceNotFoundException
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
    public UpdateBatchPredictionResult updateBatchPrediction(UpdateBatchPredictionRequest updateBatchPredictionRequest) {
        ExecutionContext executionContext = createExecutionContext(updateBatchPredictionRequest);
        AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
        awsRequestMetrics.startEvent(Field.ClientExecuteTime);
        Request<UpdateBatchPredictionRequest> request = null;
        Response<UpdateBatchPredictionResult> response = null;
        
        try {
            awsRequestMetrics.startEvent(Field.RequestMarshallTime);
            try {
                request = new UpdateBatchPredictionRequestMarshaller().marshall(super.beforeMarshalling(updateBatchPredictionRequest));
                // Binds the request metrics to the current request.
                request.setAWSRequestMetrics(awsRequestMetrics);
            } finally {
                awsRequestMetrics.endEvent(Field.RequestMarshallTime);
            }
            Unmarshaller<UpdateBatchPredictionResult, JsonUnmarshallerContext> unmarshaller =
                new UpdateBatchPredictionResultJsonUnmarshaller();
            JsonResponseHandler<UpdateBatchPredictionResult> responseHandler =
                new JsonResponseHandler<UpdateBatchPredictionResult>(unmarshaller);
            response = invoke(request, responseHandler, executionContext);
            return response.getAwsResponse();
        } finally {
            
            endClientExecution(awsRequestMetrics, request, response);
        }
    }

    

Generates predictions for a group of observations. The observations to process exist in one or more data files referenced by a DataSource. This operation creates a new BatchPrediction, and uses an MLModel and the data files referenced by the DataSource as information sources.

CreateBatchPrediction is an asynchronous operation. In response to CreateBatchPrediction, Amazon Machine Learning (Amazon ML) immediately returns and sets the BatchPrediction status to PENDING. After the BatchPrediction completes, Amazon ML sets the status to COMPLETED.

You can poll for status updates by using the GetBatchPrediction operation and checking the Status parameter of the result. After the COMPLETED status appears, the results are available in the location specified by the OutputUri parameter.
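The polling pattern described above can be sketched without any SDK dependency. In a real program each status string would come from a fresh GetBatchPrediction call (something like `getBatchPrediction(request).getStatus()`); here an Iterator of canned statuses stands in for those calls, so the sketch is self-contained:

```java
import java.util.Iterator;
import java.util.List;

public class PollSketch {
    // Polls until a terminal status (COMPLETED or FAILED) is seen, or maxPolls
    // attempts are exhausted. PENDING means the operation is still running.
    static String waitForTerminalStatus(Iterator<String> statusSource, int maxPolls) {
        for (int i = 0; i < maxPolls && statusSource.hasNext(); i++) {
            String status = statusSource.next();
            if (status.equals("COMPLETED") || status.equals("FAILED")) {
                return status;
            }
            // A real loop would sleep (ideally with backoff) before the next poll.
        }
        return "TIMED_OUT";
    }

    public static void main(String[] args) {
        Iterator<String> fake = List.of("PENDING", "PENDING", "COMPLETED").iterator();
        System.out.println(waitForTerminalStatus(fake, 10)); // prints COMPLETED
    }
}
```

Once COMPLETED is observed, the prediction output can be read from the S3 location given by OutputUri.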

Parameters:
createBatchPredictionRequest Container for the necessary parameters to execute the CreateBatchPrediction service method on AmazonMachineLearning.
Returns:
The response from the CreateBatchPrediction service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.IdempotentParameterMismatchException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
    public CreateBatchPredictionResult createBatchPrediction(CreateBatchPredictionRequest createBatchPredictionRequest) {
        ExecutionContext executionContext = createExecutionContext(createBatchPredictionRequest);
        AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
        awsRequestMetrics.startEvent(Field.ClientExecuteTime);
        Request<CreateBatchPredictionRequest> request = null;
        Response<CreateBatchPredictionResult> response = null;
        
        try {
            awsRequestMetrics.startEvent(Field.RequestMarshallTime);
            try {
                request = new CreateBatchPredictionRequestMarshaller().marshall(super.beforeMarshalling(createBatchPredictionRequest));
                // Binds the request metrics to the current request.
                request.setAWSRequestMetrics(awsRequestMetrics);
            } finally {
                awsRequestMetrics.endEvent(Field.RequestMarshallTime);
            }
            Unmarshaller<CreateBatchPredictionResult, JsonUnmarshallerContext> unmarshaller =
                new CreateBatchPredictionResultJsonUnmarshaller();
            JsonResponseHandler<CreateBatchPredictionResult> responseHandler =
                new JsonResponseHandler<CreateBatchPredictionResult>(unmarshaller);
            response = invoke(request, responseHandler, executionContext);
            return response.getAwsResponse();
        } finally {
            
            endClientExecution(awsRequestMetrics, request, response);
        }
    }

    

Returns a list of MLModel that match the search criteria in the request.

Parameters:
describeMLModelsRequest Container for the necessary parameters to execute the DescribeMLModels service method on AmazonMachineLearning.
Returns:
The response from the DescribeMLModels service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
    public DescribeMLModelsResult describeMLModels(DescribeMLModelsRequest describeMLModelsRequest) {
        ExecutionContext executionContext = createExecutionContext(describeMLModelsRequest);
        AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
        awsRequestMetrics.startEvent(Field.ClientExecuteTime);
        Request<DescribeMLModelsRequest> request = null;
        Response<DescribeMLModelsResult> response = null;
        
        try {
            awsRequestMetrics.startEvent(Field.RequestMarshallTime);
            try {
                request = new DescribeMLModelsRequestMarshaller().marshall(super.beforeMarshalling(describeMLModelsRequest));
                // Binds the request metrics to the current request.
                request.setAWSRequestMetrics(awsRequestMetrics);
            } finally {
                awsRequestMetrics.endEvent(Field.RequestMarshallTime);
            }
            Unmarshaller<DescribeMLModelsResult, JsonUnmarshallerContext> unmarshaller =
                new DescribeMLModelsResultJsonUnmarshaller();
            JsonResponseHandler<DescribeMLModelsResult> responseHandler =
                new JsonResponseHandler<DescribeMLModelsResult>(unmarshaller);
            response = invoke(request, responseHandler, executionContext);
            return response.getAwsResponse();
        } finally {
            
            endClientExecution(awsRequestMetrics, request, response);
        }
    }

    

Assigns the DELETED status to a BatchPrediction, rendering it unusable.

After using the DeleteBatchPrediction operation, you can use the GetBatchPrediction operation to verify that the status of the BatchPrediction changed to DELETED.

Caution: The result of the DeleteBatchPrediction operation is irreversible.

Parameters:
deleteBatchPredictionRequest Container for the necessary parameters to execute the DeleteBatchPrediction service method on AmazonMachineLearning.
Returns:
The response from the DeleteBatchPrediction service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.ResourceNotFoundException
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
    public DeleteBatchPredictionResult deleteBatchPrediction(DeleteBatchPredictionRequest deleteBatchPredictionRequest) {
        ExecutionContext executionContext = createExecutionContext(deleteBatchPredictionRequest);
        AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
        awsRequestMetrics.startEvent(Field.ClientExecuteTime);
        Request<DeleteBatchPredictionRequest> request = null;
        Response<DeleteBatchPredictionResult> response = null;
        
        try {
            awsRequestMetrics.startEvent(Field.RequestMarshallTime);
            try {
                request = new DeleteBatchPredictionRequestMarshaller().marshall(super.beforeMarshalling(deleteBatchPredictionRequest));
                // Binds the request metrics to the current request.
                request.setAWSRequestMetrics(awsRequestMetrics);
            } finally {
                awsRequestMetrics.endEvent(Field.RequestMarshallTime);
            }
            Unmarshaller<DeleteBatchPredictionResult, JsonUnmarshallerContext> unmarshaller =
                new DeleteBatchPredictionResultJsonUnmarshaller();
            JsonResponseHandler<DeleteBatchPredictionResult> responseHandler =
                new JsonResponseHandler<DeleteBatchPredictionResult>(unmarshaller);
            response = invoke(request, responseHandler, executionContext);
            return response.getAwsResponse();
        } finally {
            
            endClientExecution(awsRequestMetrics, request, response);
        }
    }

    

Updates the DataSourceName of a DataSource.

You can use the GetDataSource operation to view the contents of the updated data element.

Parameters:
updateDataSourceRequest Container for the necessary parameters to execute the UpdateDataSource service method on AmazonMachineLearning.
Returns:
The response from the UpdateDataSource service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.ResourceNotFoundException
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
    public UpdateDataSourceResult updateDataSource(UpdateDataSourceRequest updateDataSourceRequest) {
        ExecutionContext executionContext = createExecutionContext(updateDataSourceRequest);
        AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
        awsRequestMetrics.startEvent(Field.ClientExecuteTime);
        Request<UpdateDataSourceRequest> request = null;
        Response<UpdateDataSourceResult> response = null;
        
        try {
            awsRequestMetrics.startEvent(Field.RequestMarshallTime);
            try {
                request = new UpdateDataSourceRequestMarshaller().marshall(super.beforeMarshalling(updateDataSourceRequest));
                // Binds the request metrics to the current request.
                request.setAWSRequestMetrics(awsRequestMetrics);
            } finally {
                awsRequestMetrics.endEvent(Field.RequestMarshallTime);
            }
            Unmarshaller<UpdateDataSourceResult, JsonUnmarshallerContext> unmarshaller =
                new UpdateDataSourceResultJsonUnmarshaller();
            JsonResponseHandler<UpdateDataSourceResult> responseHandler =
                new JsonResponseHandler<UpdateDataSourceResult>(unmarshaller);
            response = invoke(request, responseHandler, executionContext);
            return response.getAwsResponse();
        } finally {
            
            endClientExecution(awsRequestMetrics, request, response);
        }
    }

    

Creates a DataSource object from an Amazon Relational Database Service (Amazon RDS). A DataSource references data that can be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

CreateDataSourceFromRDS is an asynchronous operation. In response to CreateDataSourceFromRDS, Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING. After the DataSource is created and ready for use, Amazon ML sets the Status parameter to COMPLETED. A DataSource in COMPLETED or PENDING status can only be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

If Amazon ML cannot accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.
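The status handling described above can be sketched without an SDK dependency. In real code the status and message would be read from the GetDataSource response (roughly `getDataSource(request).getStatus()` and `.getMessage()`); here they are plain parameters so the sketch is self-contained:

```java
public class DataSourceOutcome {
    // Maps the DataSource lifecycle statuses described above to a human-readable
    // outcome. FAILED carries the error message from the Message attribute.
    static String describe(String status, String message) {
        switch (status) {
            case "COMPLETED": return "ready for use";
            case "PENDING":   return "still being created";
            case "FAILED":    return "creation failed: " + message;
            default:          return "unexpected status: " + status;
        }
    }

    public static void main(String[] args) {
        System.out.println(describe("FAILED", "cannot read input location"));
    }
}
```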

Parameters:
createDataSourceFromRDSRequest Container for the necessary parameters to execute the CreateDataSourceFromRDS service method on AmazonMachineLearning.
Returns:
The response from the CreateDataSourceFromRDS service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.IdempotentParameterMismatchException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
    public CreateDataSourceFromRDSResult createDataSourceFromRDS(CreateDataSourceFromRDSRequest createDataSourceFromRDSRequest) {
        ExecutionContext executionContext = createExecutionContext(createDataSourceFromRDSRequest);
        AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
        awsRequestMetrics.startEvent(Field.ClientExecuteTime);
        Request<CreateDataSourceFromRDSRequest> request = null;
        Response<CreateDataSourceFromRDSResult> response = null;
        
        try {
            awsRequestMetrics.startEvent(Field.RequestMarshallTime);
            try {
                request = new CreateDataSourceFromRDSRequestMarshaller().marshall(super.beforeMarshalling(createDataSourceFromRDSRequest));
                // Binds the request metrics to the current request.
                request.setAWSRequestMetrics(awsRequestMetrics);
            } finally {
                awsRequestMetrics.endEvent(Field.RequestMarshallTime);
            }
            Unmarshaller<CreateDataSourceFromRDSResult, JsonUnmarshallerContext> unmarshaller =
                new CreateDataSourceFromRDSResultJsonUnmarshaller();
            JsonResponseHandler<CreateDataSourceFromRDSResult> responseHandler =
                new JsonResponseHandler<CreateDataSourceFromRDSResult>(unmarshaller);
            response = invoke(request, responseHandler, executionContext);
            return response.getAwsResponse();
        } finally {
            
            endClientExecution(awsRequestMetrics, request, response);
        }
    }

    

Creates a DataSource from Amazon Redshift. A DataSource references data that can be used to perform either CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

CreateDataSourceFromRedshift is an asynchronous operation. In response to CreateDataSourceFromRedshift, Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING. After the DataSource is created and ready for use, Amazon ML sets the Status parameter to COMPLETED. A DataSource in COMPLETED or PENDING status can only be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

If Amazon ML cannot accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.

The observations should exist in the database hosted on an Amazon Redshift cluster and should be specified by a SelectSqlQuery. Amazon ML executes an Unload command in Amazon Redshift to transfer the result set of SelectSqlQuery to S3StagingLocation.

After the DataSource is created, it's ready for use in evaluations and batch predictions. If you plan to use the DataSource to train an MLModel, the DataSource requires another item: a recipe. A recipe describes the observation variables that participate in training an MLModel and how each input variable will be used in training. Will the variable be included or excluded from training? Will the variable be manipulated, for example, combined with another variable or split apart into word combinations? The recipe provides answers to these questions. For more information, see the Amazon Machine Learning Developer Guide.
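As an illustration of the recipe concept, a minimal recipe might look like the following. The variable names (`age`, `income`) are invented for this sketch; the `groups`/`assignments`/`outputs` structure and the `quantile_bin` transformation follow the Amazon ML recipe format, which the Developer Guide documents in full:

```json
{
  "groups": {
    "NUMERIC_VARS_QB": "group('age','income')"
  },
  "assignments": {},
  "outputs": [
    "ALL_CATEGORICAL",
    "quantile_bin(NUMERIC_VARS_QB, 50)"
  ]
}
```

Here all categorical variables are passed through as-is, while the grouped numeric variables are bucketed into 50 quantile bins before training.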

Parameters:
createDataSourceFromRedshiftRequest Container for the necessary parameters to execute the CreateDataSourceFromRedshift service method on AmazonMachineLearning.
Returns:
The response from the CreateDataSourceFromRedshift service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.IdempotentParameterMismatchException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
    public CreateDataSourceFromRedshiftResult createDataSourceFromRedshift(CreateDataSourceFromRedshiftRequest createDataSourceFromRedshiftRequest) {
        ExecutionContext executionContext = createExecutionContext(createDataSourceFromRedshiftRequest);
        AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
        awsRequestMetrics.startEvent(Field.ClientExecuteTime);
        Request<CreateDataSourceFromRedshiftRequest> request = null;
        Response<CreateDataSourceFromRedshiftResult> response = null;
        
        try {
            awsRequestMetrics.startEvent(Field.RequestMarshallTime);
            try {
                request = new CreateDataSourceFromRedshiftRequestMarshaller().marshall(super.beforeMarshalling(createDataSourceFromRedshiftRequest));
                // Binds the request metrics to the current request.
                request.setAWSRequestMetrics(awsRequestMetrics);
            } finally {
                awsRequestMetrics.endEvent(Field.RequestMarshallTime);
            }
            Unmarshaller<CreateDataSourceFromRedshiftResult, JsonUnmarshallerContext> unmarshaller =
                new CreateDataSourceFromRedshiftResultJsonUnmarshaller();
            JsonResponseHandler<CreateDataSourceFromRedshiftResult> responseHandler =
                new JsonResponseHandler<CreateDataSourceFromRedshiftResult>(unmarshaller);
            response = invoke(request, responseHandler, executionContext);
            return response.getAwsResponse();
        } finally {
            
            endClientExecution(awsRequestMetrics, request, response);
        }
    }

    

Returns a list of DescribeEvaluations that match the search criteria in the request.

Parameters:
describeEvaluationsRequest Container for the necessary parameters to execute the DescribeEvaluations service method on AmazonMachineLearning.
Returns:
The response from the DescribeEvaluations service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
    public DescribeEvaluationsResult describeEvaluations(DescribeEvaluationsRequest describeEvaluationsRequest) {
        ExecutionContext executionContext = createExecutionContext(describeEvaluationsRequest);
        AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
        awsRequestMetrics.startEvent(Field.ClientExecuteTime);
        Request<DescribeEvaluationsRequest> request = null;
        Response<DescribeEvaluationsResult> response = null;
        
        try {
            awsRequestMetrics.startEvent(Field.RequestMarshallTime);
            try {
                request = new DescribeEvaluationsRequestMarshaller().marshall(super.beforeMarshalling(describeEvaluationsRequest));
                // Binds the request metrics to the current request.
                request.setAWSRequestMetrics(awsRequestMetrics);
            } finally {
                awsRequestMetrics.endEvent(Field.RequestMarshallTime);
            }
            Unmarshaller<DescribeEvaluationsResult, JsonUnmarshallerContext> unmarshaller =
                new DescribeEvaluationsResultJsonUnmarshaller();
            JsonResponseHandler<DescribeEvaluationsResult> responseHandler =
                new JsonResponseHandler<DescribeEvaluationsResult>(unmarshaller);
            response = invoke(request, responseHandler, executionContext);
            return response.getAwsResponse();
        } finally {
            
            endClientExecution(awsRequestMetrics, request, response);
        }
    }

    

Returns an MLModel that includes detailed metadata and data source information, as well as the current status of the MLModel.

GetMLModel provides results in normal or verbose format.

Parameters:
getMLModelRequest Container for the necessary parameters to execute the GetMLModel service method on AmazonMachineLearning.
Returns:
The response from the GetMLModel service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.ResourceNotFoundException
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
    public GetMLModelResult getMLModel(GetMLModelRequest getMLModelRequest) {
        ExecutionContext executionContext = createExecutionContext(getMLModelRequest);
        AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
        awsRequestMetrics.startEvent(Field.ClientExecuteTime);
        Request<GetMLModelRequest> request = null;
        Response<GetMLModelResult> response = null;
        
        try {
            awsRequestMetrics.startEvent(Field.RequestMarshallTime);
            try {
                request = new GetMLModelRequestMarshaller().marshall(super.beforeMarshalling(getMLModelRequest));
                // Binds the request metrics to the current request.
                request.setAWSRequestMetrics(awsRequestMetrics);
            } finally {
                awsRequestMetrics.endEvent(Field.RequestMarshallTime);
            }
            Unmarshaller<GetMLModelResult, JsonUnmarshallerContext> unmarshaller =
                new GetMLModelResultJsonUnmarshaller();
            JsonResponseHandler<GetMLModelResult> responseHandler =
                new JsonResponseHandler<GetMLModelResult>(unmarshaller);
            response = invoke(request, responseHandler, executionContext);
            return response.getAwsResponse();
        } finally {
            
            endClientExecution(awsRequestMetrics, request, response);
        }
    }

    

Assigns the DELETED status to a DataSource, rendering it unusable.

After using the DeleteDataSource operation, you can use the GetDataSource operation to verify that the status of the DataSource changed to DELETED.

Caution: The results of the DeleteDataSource operation are irreversible.

Parameters:
deleteDataSourceRequest Container for the necessary parameters to execute the DeleteDataSource service method on AmazonMachineLearning.
Returns:
The response from the DeleteDataSource service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.ResourceNotFoundException
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
    public DeleteDataSourceResult deleteDataSource(DeleteDataSourceRequest deleteDataSourceRequest) {
        ExecutionContext executionContext = createExecutionContext(deleteDataSourceRequest);
        AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
        awsRequestMetrics.startEvent(Field.ClientExecuteTime);
        Request<DeleteDataSourceRequest> request = null;
        Response<DeleteDataSourceResult> response = null;

        try {
            awsRequestMetrics.startEvent(Field.RequestMarshallTime);
            try {
                request = new DeleteDataSourceRequestMarshaller().marshall(super.beforeMarshalling(deleteDataSourceRequest));
                // Binds the request metrics to the current request.
                request.setAWSRequestMetrics(awsRequestMetrics);
            } finally {
                awsRequestMetrics.endEvent(Field.RequestMarshallTime);
            }
            Unmarshaller<DeleteDataSourceResult, JsonUnmarshallerContext> unmarshaller =
                new DeleteDataSourceResultJsonUnmarshaller();
            JsonResponseHandler<DeleteDataSourceResult> responseHandler =
                new JsonResponseHandler<DeleteDataSourceResult>(unmarshaller);
            response = invoke(request, responseHandler, executionContext);
            return response.getAwsResponse();
        } finally {
            endClientExecution(awsRequestMetrics, request, response);
        }
    }
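As the documentation above notes, you can verify a deletion by polling GetDataSource until the status reads DELETED. A minimal sketch of such a poll loop, using a plain status supplier in place of a real GetDataSource call (all names here are hypothetical, not SDK API):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.function.Supplier;

// Hypothetical helper: polls a status source until it reports the expected
// value or the attempt budget is spent. In real code the supplier would
// wrap a GetDataSource call and `expected` would be "DELETED".
final class StatusPoller {
    static boolean awaitStatus(Supplier<String> status, String expected, int maxAttempts) {
        for (int i = 0; i < maxAttempts; i++) {
            if (expected.equals(status.get())) {
                return true;
            }
            // A production version would sleep or back off between attempts.
        }
        return false;
    }

    public static void main(String[] args) {
        // Simulated status sequence standing in for successive service calls.
        Iterator<String> fake = Arrays.asList("PENDING", "DELETED").iterator();
        System.out.println(awaitStatus(fake::next, "DELETED", 5)); // prints true
    }
}
```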

    

Returns a BatchPrediction that includes detailed metadata, status, and data file information for a Batch Prediction request.

Parameters:
getBatchPredictionRequest Container for the necessary parameters to execute the GetBatchPrediction service method on AmazonMachineLearning.
Returns:
The response from the GetBatchPrediction service method, as returned by AmazonMachineLearning.
Throws:
com.amazonaws.services.machinelearning.model.ResourceNotFoundException
com.amazonaws.services.machinelearning.model.InvalidInputException
com.amazonaws.services.machinelearning.model.InternalServerException
com.amazonaws.AmazonClientException If any internal errors are encountered inside the client while attempting to make the request or handle the response. For example if a network connection is not available.
com.amazonaws.AmazonServiceException If an error response is returned by AmazonMachineLearning indicating either a problem with the data in the request, or a server side issue.
    public GetBatchPredictionResult getBatchPrediction(GetBatchPredictionRequest getBatchPredictionRequest) {
        ExecutionContext executionContext = createExecutionContext(getBatchPredictionRequest);
        AWSRequestMetrics awsRequestMetrics = executionContext.getAwsRequestMetrics();
        awsRequestMetrics.startEvent(Field.ClientExecuteTime);
        Request<GetBatchPredictionRequest> request = null;
        Response<GetBatchPredictionResult> response = null;

        try {
            awsRequestMetrics.startEvent(Field.RequestMarshallTime);
            try {
                request = new GetBatchPredictionRequestMarshaller().marshall(super.beforeMarshalling(getBatchPredictionRequest));
                // Binds the request metrics to the current request.
                request.setAWSRequestMetrics(awsRequestMetrics);
            } finally {
                awsRequestMetrics.endEvent(Field.RequestMarshallTime);
            }
            Unmarshaller<GetBatchPredictionResult, JsonUnmarshallerContext> unmarshaller =
                new GetBatchPredictionResultJsonUnmarshaller();
            JsonResponseHandler<GetBatchPredictionResult> responseHandler =
                new JsonResponseHandler<GetBatchPredictionResult>(unmarshaller);
            response = invoke(request, responseHandler, executionContext);
            return response.getAwsResponse();
        } finally {
            endClientExecution(awsRequestMetrics, request, response);
        }
    }

    

Creates a new Evaluation of an MLModel. An MLModel is evaluated on a set of observations associated with a DataSource. Like a DataSource for an MLModel, the DataSource for an Evaluation contains values for the Target Variable. The Evaluation compares the predicted result for each observation to the actual outcome and provides a summary so that you know how effectively the MLModel performs on the test data. Evaluation generates a relevant performance metric, such as BinaryAUC, RegressionRMSE, or MulticlassAvgFScore, based on the corresponding MLModelType: BINARY, REGRESSION, or MULTICLASS.

CreateEvaluation is an asynchronous operation. In response to CreateEvaluation, Amazon Machine Learning (Amazon ML) immediately returns and sets the evaluation status to PENDING. After the Evaluation is created and ready for use, Amazon ML sets the status to COMPLETED.

You can use the GetEvaluation operation to check the progress of the evaluation during the creation operation.
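The metric reported by an Evaluation depends on the MLModelType. As a quick reference sketched from the description above (this map is illustrative only; it is not part of the SDK):

```java
import java.util.HashMap;
import java.util.Map;

// Metric reported by an Evaluation for each MLModelType, per the
// CreateEvaluation description above. Illustrative only, not an SDK API.
final class EvaluationMetricSketch {
    static final Map<String, String> METRIC_BY_TYPE = new HashMap<>();
    static {
        METRIC_BY_TYPE.put("BINARY", "BinaryAUC");
        METRIC_BY_TYPE.put("REGRESSION", "RegressionRMSE");
        METRIC_BY_TYPE.put("MULTICLASS", "MulticlassAvgFScore");
    }

    public static void main(String[] args) {
        System.out.println(METRIC_BY_TYPE.get("REGRESSION")); // prints RegressionRMSE
    }
}
```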

Parameters:
createEvaluationRequest Container for the necessary parameters to execute the CreateEvaluation service method on AmazonMachineLearning.
Returns:
The response from the CreateEvaluation service method, as returned by AmazonMachineLearning.