
Welcome to the Knowledge Base!

KB at your fingertips

This is a one-stop global knowledge base where you can learn about all of the products, solutions, and support features.

Category: Cloud-AWS
Use Amazon EMR for processing data

How can I use Amazon EMR to process data?

Last updated: 2022-11-17

I want to use Amazon EMR to process data.

Resolution

Amazon EMR processes data using Amazon Elastic Compute Cloud (Amazon EC2) instances and open-source applications such as Apache Spark, HBase, Presto, and Flink.

To launch your first EMR cluster, see Tutorial: Getting started with Amazon EMR.
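
For example, you can launch a small Spark cluster from the AWS CLI. The following is a minimal sketch, not the tutorial's exact steps: the cluster name, release label, and instance settings are placeholders, and it assumes that the EMR default roles already exist in your account (you can create them with aws emr create-default-roles).

# Launch a three-node cluster running Apache Spark (placeholder name and sizes)
aws emr create-cluster --name "my-first-emr-cluster" --release-label emr-6.9.0 --applications Name=Spark --instance-type m5.xlarge --instance-count 3 --use-default-roles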


Related information

Overview of Amazon EMR

Overview of Amazon EMR architecture

Overview of Amazon Mechanical Turk

What is Amazon Mechanical Turk?

Last updated: 2022-10-18

What is Amazon Mechanical Turk?

Resolution

Amazon Mechanical Turk is a forum where Requesters post work as Human Intelligence Tasks (HITs). Workers complete HITs in exchange for a reward. You write, test, and publish your HIT using the Mechanical Turk developer sandbox, Amazon Mechanical Turk APIs, and AWS SDKs.

Here are some common tasks posted by Requesters:

  • Localization and transcription services
  • Audio editing
  • Information gathering tasks (surveys)
  • Machine learning
  • Photo and video processing
  • Data collection

To set up your AWS account and develop solutions using the Mechanical Turk web service, see Setting up accounts and tools. For other common scenarios and resources, see Common use scenarios.
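
For example, after you link your AWS account to a Requester sandbox account, you can verify access from the AWS CLI by calling the Requester API against the sandbox endpoint. The following is a minimal sketch; the endpoint shown is the Mechanical Turk Requester sandbox endpoint, which is available only in the us-east-1 Region.

# Check the Requester account balance in the sandbox
aws mturk get-account-balance --endpoint-url https://mturk-requester-sandbox.us-east-1.amazonaws.com --region us-east-1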


Related information

Amazon Mechanical Turk

Amazon Mechanical Turk FAQs

Amazon Mechanical Turk Worker FAQs

Implementing Amazon Mechanical Turk

Stop and start instances using the AWS Instance Scheduler with CloudFormation

How do I use Instance Scheduler with CloudFormation to schedule EC2 instances?

Last updated: 2022-10-28

I want to use AWS Instance Scheduler with AWS CloudFormation to schedule Amazon Elastic Compute Cloud (Amazon EC2) instances.

Short description

Use CloudFormation templates to automate the deployment of AWS Instance Scheduler.

Note: Currently, you can't use the templates in the Asia Pacific (Jakarta) and Asia Pacific (Osaka) AWS Regions.

Important: If you use the Instance Scheduler with EC2 instances that have encrypted Amazon Elastic Block Store (Amazon EBS) volumes, then the scheduler can't start those instances by default. To start the instances, you must grant the Instance Scheduler's key user role permission to encrypt and decrypt the EBS volumes. To do this, add a statement to the key policy of the AWS Key Management Service (AWS KMS) key that allows the key user role to use the key.

Resolution

Install the Instance Scheduler command line interface (CLI).

To verify that the installation is successful, run the following command:

$ scheduler-cli --version

Create a CloudFormation stack with the Instance Scheduler template

The stack deploys an AWS Lambda function, an Amazon DynamoDB table, an Amazon EventBridge rule, and Amazon CloudWatch custom metrics.

  1. Open the AWS Management Console.
  2. Open CloudFormation with the Instance Scheduler template. Or, go to the Step 1. Launch the instance scheduler stack page, and choose Launch Solution .
    Note: The template is launched in the US East (N. Virginia) Region by default.
  3. In the navigation bar, select the AWS Region where you want to launch your stack with the template, and then choose Next .
  4. For Stack name , name your stack.
  5. For Instance Scheduler TagName , you can keep the default value as Schedule , or customize it.
  6. For Frequency , choose a frequency in minutes to run your scheduler. For example, you can choose 5 minutes.
    Note: The frequency is the number of minutes that pass before EventBridge initiates the Lambda function again for the Instance Scheduler. If you have a large number of instances, then use the highest frequency possible to avoid throttling. If the frequency isn't often enough for your needs, then you can adjust the Frequency property later.
  7. For Enable CloudWatch Logs , choose Yes .
  8. For Started tags , enter state=started .
  9. For Stopped tags , enter state=stopped .
  10. For cross-account scheduling, provide the Cross-account roles parameter. Put in the ARNs for every role from the secondary accounts, separated by commas. If you aren't using cross-account scheduling, then leave the parameter empty.
  11. For all other parameters, customize the stack for your needs.
  12. Choose Next .
  13. On the Options page, choose Next .
  14. Review your settings, and then select I acknowledge that AWS CloudFormation might create IAM resources .
  15. Choose Create .
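
If you prefer to deploy the stack from the AWS CLI instead of the console, you can script the same launch. The following is a rough sketch: the template URL is a placeholder for the Instance Scheduler solution template, and the TagName parameter key is an assumption based on the console parameter name, so check the template's Parameters section for the exact keys.

aws cloudformation create-stack --stack-name my-instance-scheduler --template-url https://<instance-scheduler-template-location>/instance-scheduler.template --capabilities CAPABILITY_NAMED_IAM --parameters ParameterKey=TagName,ParameterValue=Schedule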

Create the periods

To create periods, you can use the Instance Scheduler CLI, DynamoDB console, or Custom resources. For more information on time periods, see Start and stop times.

The following example shows you how to create instances that:

  • Start at 9 AM and stop at 5 PM on Monday through Friday
  • Start at 9 AM and stop at 12 PM on Saturday

For this example, you must create two periods. For your own scenario, create the appropriate number of periods.

Using the Instance Scheduler CLI

Connect to the Instance Scheduler CLI, and then run the following commands:

$ scheduler-cli create-period --stack your_stack_name --region eu-west-1 --name mon-fri-9-5 --begintime 9:00 --endtime 16:59 --weekdays mon-fri
$ scheduler-cli create-period --stack your_stack_name --region eu-west-1 --name sat-9-12 --begintime 9:00 --endtime 11:59 --weekdays sat

Note: Replace your_stack_name with the stack name that you chose in step 4 and eu-west-1 with your own Region.
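
To confirm that the periods were stored in the configuration table, you can list them from the CLI. This assumes that your version of the Instance Scheduler CLI includes the describe-periods subcommand:

$ scheduler-cli describe-periods --stack your_stack_name --region eu-west-1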

Using the DynamoDB console

  1. Open the DynamoDB console.
  2. Choose Tables , and then choose the configuration table.
    Note: The Instance Scheduler template automatically creates two DynamoDB tables: state and configuration. The state table stores the state of instances that the template stops and starts. The configuration table allows you to specify the periods and schedules for your requirements.
  3. Choose Explore Table Items .
  4. Choose Create Item .
  5. Choose the JSON view, and then use the following JavaScript Object Notation (JSON) template:
{
  "type": {
    "S": "period"
  },
  "name": {
    "S": "mon-fri-9-5"
  },
  "begintime": {
    "S": "9:00"
  },
  "endtime": {
    "S": "16:59"
  },
  "weekdays": {
    "SS": [
      "mon-fri"
    ]
  }
}

Note: The preceding JSON template creates the first period. Use a similar JSON template for the second period. Make sure to edit the templates for your requirements.
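
For reference, the second period in this example (start at 9 AM and stop at 12 PM on Saturday) looks like the following:

{
  "type": {
    "S": "period"
  },
  "name": {
    "S": "sat-9-12"
  },
  "begintime": {
    "S": "9:00"
  },
  "endtime": {
    "S": "11:59"
  },
  "weekdays": {
    "SS": [
      "sat"
    ]
  }
}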

Create a schedule

To create a schedule, you can use the Instance Scheduler CLI, DynamoDB console, or Custom resources.

Using the Instance Scheduler CLI

Run the following command:

$ scheduler-cli create-schedule --stack your_stack_name --name m-f9-5-sat9-12 --region eu-west-1 --periods mon-fri-9-5,sat-9-12 --timezone UTC

Using the DynamoDB console

  1. Open the DynamoDB console.
  2. Choose Tables , and then choose the configuration table.
  3. Choose Explore Table Items .
  4. Choose Create Item .
  5. Choose the JSON view, and then use the following JSON template:
{
  "type": {
    "S": "schedule"
  },
  "name": {
    "S": "m-f9-5-sat9-12"
  },
  "timezone": {
    "S": "UTC"
  },
  "periods": {
    "SS": [
      "mon-fri-9-5",
      "sat-9-12"
    ]
  }
}

Tag the instance and test the schedule

When you use a CloudFormation stack with the Instance Scheduler, you must define the Instance Scheduler TagName parameter. The default value for this parameter is Schedule .

The Instance Scheduler monitors tags on instances. If the instance tag key matches the defined scheduler tag, then the Instance Scheduler applies the schedule that's set for the instance tag value. For example, a tag's key is set to Schedule and the value is set to m-f9-5-sat9-12 . In this example, the instances start at 9 AM and stop at 5 PM on Monday through Friday. The instances also start at 9 AM and stop at 12 PM on Saturday.

Note: Tag keys and values are case sensitive. The Instance Scheduler doesn't stop running instances if they're manually started outside of the running period. The Instance Scheduler also doesn't start an instance if the instance is stopped manually during the running period, unless the schedule is enforced. For more information, see Schedule definitions.
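
For example, you can apply the schedule from the AWS CLI instead of the Amazon EC2 console. The instance ID in the following sketch is a placeholder:

aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=Schedule,Value=m-f9-5-sat9-12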

Use predefined schedules

In addition to custom schedules, you can also use any of the predefined schedules from the configuration table. For example, the following steps test the predefined schedule named running :

  1. Open the Amazon EC2 console.
  2. Choose the stopped instances that you want to tag.
  3. Choose the Tags view, and then choose Manage Tags .
  4. Choose Add Tag .
  5. For Key , enter Schedule .
  6. For Value , enter running .
  7. Choose Save .
  8. Refresh the Amazon EC2 console, and then wait for the Lambda function to be initiated.
    Note: When the Lambda function is initiated and runs without errors, the Instance State displays as running or stopped, depending on the schedule that you're testing. In the CloudWatch console, you can check Lambda metrics for invocations and errors.
  9. Open the DynamoDB console.
  10. Choose Tables , and then choose the state table.
  11. Choose the Explore Table Items and confirm that the tagged instance is started.
    Note: The state data is stored in the state table.
    Important: You can be charged additional costs based on the frequency and duration of the Lambda function that you're using. You can also be charged additional costs for the DynamoDB tables or EventBridge rules that you create.

For cross-account scheduling: Launch the remote stack in secondary accounts

To schedule instances in secondary accounts using the Instance Scheduler, deploy the aws-instance-scheduler-remote CloudFormation template. This template creates the role that allows the Instance Scheduler from the primary account to manage instances in the secondary account.

Note: You must provide the role's ARN as a parameter for the Instance Scheduler stack in the primary account. Make sure to create or update your Instance Scheduler stack with the correct parameter.

  1. Open the AWS Management Console of the secondary account and launch the aws-instance-scheduler-remote CloudFormation template. You can also download the template for future use.
    Note: The template is launched in the US East (N. Virginia) Region by default.
  2. In the navigation bar, select the AWS Region where you want to launch your stack with the template, and then choose Next .
  3. On the Select Template page, verify that you selected the correct template, and then choose Next .
  4. On the Specify Details page, assign a name to your remote stack.
  5. Under Parameters , review and modify the Primary account parameter. Put in the account number of the primary account.
  6. Choose Next .
  7. On the Options page, choose Next .
  8. Review your settings, and then select I acknowledge that AWS CloudFormation might create IAM resources .
  9. Choose Create .
  10. Choose the stack Outputs tab, and then copy the CrossAccountRole value.
  11. From the primary account, choose your CloudFormation stack, and then choose Update .
  12. On the Update stack page, choose Use current template .
  13. In the Cross-account roles parameter, paste the CrossAccountRole value.
  14. Choose Next , and then select I acknowledge that AWS CloudFormation might create IAM resources .
  15. Choose Update Stack .
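
If you manage the primary stack from the AWS CLI instead of the console, you can script the same parameter update. The following is only a rough sketch: the CrossAccountRoles parameter key is an assumption based on the console parameter name, and in a real update every other template parameter must also be listed with UsePreviousValue=true.

# Update only the cross-account roles parameter; repeat UsePreviousValue=true for each remaining template parameter
aws cloudformation update-stack --stack-name my-instance-scheduler --use-previous-template --capabilities CAPABILITY_NAMED_IAM --parameters ParameterKey=CrossAccountRoles,ParameterValue="arn:aws:iam::<secondary-account-id>:role/<cross-account-role-name>" ParameterKey=TagName,UsePreviousValue=true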

Create an IAM policy to control access to EC2 resources using tags

How do I create an IAM policy to control access to Amazon EC2 resources using tags?

Last updated: 2021-09-27

How do I create an AWS Identity and Access Management (IAM) policy that controls access to Amazon Elastic Compute Cloud (Amazon EC2) instances using tags?

Short description

You can control access to smaller deployments of Amazon EC2 instances as follows:

  1. Add a specific tag to the instances you want to grant the users or groups access to.
  2. Create an IAM policy that grants access to any instances with the specific tag.
  3. Attach the IAM policy to the users or groups that you want to access the instances.

Resolution

Add a tag to your group of EC2 instances

Open the Amazon EC2 console, and then add tags to the group of EC2 instances that you want the users or groups to be able to access. If you don't already have a tag, create a new tag.

Note: Be sure to read and understand the tag restrictions before tagging your resources. Amazon EC2 tags are case-sensitive.
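
For example, to tag an instance so that it matches the policy in the next section (which keys on a tag named UserName whose value must equal the IAM user's friendly name), you can use the console or the AWS CLI. The instance ID and user name in the following sketch are placeholders:

aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=UserName,Value=richard-roe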

Create an IAM policy that grants access to instances with the specific tag

Create an IAM policy that does the following:

  • Allows control over the instances with the tag.
  • Contains a conditional statement that allows access to Amazon EC2 resources if the value of the condition key ec2:ResourceTag/UserName matches the policy variable aws:username . The policy variable ${aws:username} is replaced with the friendly name of the current IAM user when the policy is evaluated by IAM.
  • Allows access to the ec2:Describe* actions for Amazon EC2 resources.
  • Explicitly denies access to the ec2:CreateTags and ec2:DeleteTags actions to prevent users from creating or deleting tags.
    Note: This prevents the user from taking control of an EC2 instance by adding the specific tag to it.

The finished policy looks similar to the following:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/UserName": "${aws:username}"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": [
        "ec2:CreateTags",
        "ec2:DeleteTags"
      ],
      "Resource": "*"
    }
  ]
}

Note: This policy applies to Amazon EC2 instances that use the ec2:ResourceTag condition key. To restrict launching new Amazon EC2 instances using tags, see How can I use IAM policy tags to restrict how an EC2 instance or EBS volume can be created?

Attach the IAM policy to the users or groups you want to access the instances

Finally, attach the IAM policy that you created to the users or groups you want to access the instances. You can attach the IAM policy using the AWS Management Console, AWS CLI, or AWS API.
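
If you script this step, the following AWS CLI sketch creates the policy from a local JSON file and attaches it to a user. The policy name, file name, account ID, and user name are placeholders:

# Create the customer managed policy from the JSON document shown above
aws iam create-policy --policy-name ec2-tag-based-access --policy-document file://ec2-tag-based-access.json

# Attach the policy to an IAM user
aws iam attach-user-policy --user-name richard-roe --policy-arn arn:aws:iam::123456789012:policy/ec2-tag-based-access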


Related information

Granting IAM users required permissions for Amazon EC2 resources

IAM policies for Amazon EC2

View AWS Activate promotional credits

I received an email with my AWS Activate Founders or Portfolio package information. Where do I find my AWS promotional credit?

Last updated: 2022-03-04

I received an email with my AWS Activate Founders or Portfolio package information. Where do I find my AWS promotional credit?

Resolution

If you receive an email welcoming you to AWS Activate along with benefit information, your AWS Activate Founders or Portfolio package application is approved and processed. Your AWS promotional credits are directly added to the AWS account that you specified on your application.

Check the Credits page of the Billing and Cost Management console to see your account's active credits and promotions.


Related information

Getting started with AWS Activate

AWS Activate FAQ

Resolve issues with an AWS Business support charge for an AWS Activate portfolio package

Why was I charged for AWS Business Support when I have an AWS Activate Portfolio package?

Last updated: 2022-03-04

I was charged for my AWS Support plan, even though I signed up for an AWS Activate Portfolio package that includes a credit for AWS Business Support. How can I resolve this issue?

Resolution

If you had an AWS Support subscription other than a Business-level Support plan before you were approved for an AWS Activate Portfolio package, see AWS Premium Support FAQs, and follow the instructions in Q: How do I cancel my AWS Support subscription? to cancel your support subscription. Then, follow the instructions in the welcome email you received from the AWS Activate team.

If you can't locate the email from the AWS Activate team, or if you have questions about the AWS Activate program, then contact the AWS Activate team at AWS Activate Contact Us.


Related information

AWS Activate

AWS Support

Sign up for an AWS Activate package

How do I sign up for an AWS Activate package?

Last updated: 2022-03-04

I'm interested in an AWS Activate package. How do I sign up?

Resolution

AWS Activate offers two packages: the Founders package and the Portfolio package.

  • The AWS Activate Founders package is available for startups that aren't associated with an AWS Activate Provider. The AWS Activate Providers include select venture capital firms, accelerators, incubators, and other startup-enabling organizations. For more information on how to qualify for the AWS Founders package, see Getting Started with AWS Activate.
  • The AWS Activate Portfolio package is available to startups that are associated with an AWS Activate Provider. For a non-exhaustive list of AWS Activate Providers, see AWS Activate Providers. You can contact your AWS Activate Provider for more information on how to qualify for the AWS Activate Portfolio package.

For more information about these packages, see AWS Activate.

Note: If you're an agency, IT shop, or a consultancy, consider the AWS Partner Network instead.


Related information

Apply for AWS Activate

AWS Activate FAQ

Redeem your AWS Promotional Credit

Troubleshoot CloudFormation stack issues in AWS Amplify

How do I troubleshoot CloudFormation stack issues in my AWS Amplify project?

Last updated: 2022-04-05

When I try to deploy my AWS Amplify application, I receive an AWS CloudFormation error similar to the following: "Resource is not in the state stackUpdateComplete". How do I troubleshoot the issue?

Short description

To troubleshoot CloudFormation stack issues in your Amplify project, first identify what's causing the issue by reviewing the following in the CloudFormation console:

  • The Status code and Status reason of the backend stack.
  • The Status , Status reason , and Logical ID values of the backend stack's recent Events .
  • The Status , Status reason , and Logical ID values of the backend stack's Resources .

Note: The Status reason value contains an error message returned by CloudFormation that identifies what's causing the error.

Then, remediate the issue based on the Status , Status reason , and Logical ID values listed in the console.

Resolution

Note: The CloudFormation stacks that Amplify provisions or updates can return errors for many reasons. The following are the most common reasons why CloudFormation stacks return errors associated with Amplify projects:

  • Misconfigurations in the associated Amplify project
  • Missing files in the associated Amplify project
  • Using an outdated version of the Amplify Command Line Interface (Amplify CLI)

Identify what's causing the issue by reviewing the stack's status codes and status reasons in the CloudFormation console

1.    Open the Amplify console.

2.    Choose the Backend environments tab. Then choose your application's backend environment.

3.    Choose the Overview tab. Then, choose View in CloudFormation . The backend environment's associated CloudFormation stack's Stack info page opens in the CloudFormation console.

4.    In the Overview pane, review the Status and Status reason values. These are the backend stack's status code and status reason.

Note: If the project's root stack is in the UPDATE_ROLLBACK_FAILED status, then follow the instructions in this article: How can I get my CloudFormation stack to update if it's stuck in the UPDATE_ROLLBACK_FAILED state?

5.    Choose the Events tab. Review the Status , Status reason , and Logical ID values for all of the recent events that are in a failed status.

Note: Make sure that you identify any events with the UPDATE_FAILED status.

6.    Choose the Resources tab. Review the Status , Status reason , and Logical ID values for all of the resources that are in a failed status.

7.    (For nested stacks only) On the Resources pane, look for resources of type AWS::CloudFormation::Stack . Then, review the Status reason values for the nested stacks that are in a failed status.

Important: When troubleshooting, ignore resources that failed with a Resource update cancelled status. This status signifies a dependent, downstream resource that didn't fail, but also wasn't updated because of another resource failure.
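
If you prefer the AWS CLI over the console for steps 5 and 6, you can list only the failed events for the stack. The stack name in the following sketch is a placeholder for the backend stack name shown in the Amplify console:

aws cloudformation describe-stack-events --stack-name <amplify-root-stack-name> --query "StackEvents[?ResourceStatus=='UPDATE_FAILED'].[LogicalResourceId,ResourceStatusReason]" --output table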

Remediate the issue based on the Status, Status reason, and Logical ID values listed in the console

Follow the instructions in the Amplify CLI Troubleshooting guide. For more information, you can also search for specific Status reasons in the Amplify CLI Issues page in GitHub.

Note: It's a best practice to test solutions in a nonproduction environment first.


Delete an AWS Amplify application

How do I delete an application in AWS Amplify?

Last updated: 2022-04-01

I want to delete my application in AWS Amplify, including all of the application's backend resources. How can I delete an Amplify application?

Short description

To delete an Amplify application, it's a best practice to use one of the following:

  • Amplify console
  • Amplify Command Line Interface (Amplify CLI)

If your application isn't deleted after using either of these methods, use the AWS Command Line Interface (AWS CLI) as a workaround.

Note: The AWS CloudFormation stack is deleted first. Then, any associated Amazon Simple Storage Service (Amazon S3) buckets are deleted. The application is deleted from the Amplify console last. The CloudFormation stack deletes all of the application's associated backend resources, except the Amazon S3 buckets. The time that it takes to delete an application from Amplify depends on the size of the application's backend resources.

Resolution

Important: When you delete an Amplify application, all of the application's backend resources are also deleted. You can't recover your Amplify application's resources after they're deleted.

To delete an Amplify application using the Amplify console

1.    Open the AWS Amplify console.

2.    In the left navigation pane, choose the name of the application that you want to delete. The App page opens.

3.    On the App page, select the Actions dropdown list. Then, choose Delete app .

To delete an Amplify application using the Amplify CLI

If you haven't already done so, install the Amplify CLI. Then, do one of the following, based on whether your project is locally accessible or cloud based.

For locally accessible projects

Within the project directory that you want to delete, run the following amplify delete command:

amplify delete

For cloud-based projects

1.    Pull the backend environment associated with your application to your local environment by running the following amplify pull command:

amplify pull

2.    Within the project directory that you want to delete, run the following amplify delete command:

amplify delete

3.    (For applications with multiple backend environments) Repeat steps 1 and 2 for each of your application's backend environments.

Note: Deleting an Amplify application using the Amplify console or Amplify CLI can fail for many reasons. If you receive an error when trying to delete your application, use the AWS CLI as a workaround to delete the application instead.

To delete an Amplify application using the AWS CLI

Manually delete the project's Amazon S3 buckets and CloudFormation stack from the AWS Management Console

1.    Open the Amplify console.

2.    In the left navigation pane, choose the name of the application that you want to delete. The App page opens.

3.    Copy and save the App ID value and the backend environment's name. You need these values to delete the application using the AWS CLI.

4.    Delete the CloudFormation stack's Amazon S3 deployment bucket. For instructions, see Deleting a bucket in the Amazon S3 User Guide.

Note: The CloudFormation stack's S3 deployment bucket name is listed in the following format:

amplify-<application-name>-<backend-environment-name>-<random-number>-deployment

5.    (If your project uses the Amplify Storage category) Delete the project's storage S3 bucket.

Note: The storage bucket name is listed in the project's root stack resources, under the Storage nested stack.

6.    Delete the project's CloudFormation root stack. For instructions, see Deleting a stack on the AWS CloudFormation console in the CloudFormation User Guide.

Note: The CloudFormation stack's root stack name is listed in the following format:

amplify-<application-name>-<backend-environment-name>-<random-number>

7.    (For applications with multiple backend environments) Repeat steps 4-6 for each of your application's backend environments.
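
If you prefer to script steps 4-6, the equivalent AWS CLI calls look like the following sketch. The bucket and stack names follow the formats shown above and are placeholders:

# Empty and delete the deployment bucket (this is irreversible)
aws s3 rb s3://amplify-<application-name>-<backend-environment-name>-<random-number>-deployment --force

# Delete the project's CloudFormation root stack
aws cloudformation delete-stack --stack-name amplify-<application-name>-<backend-environment-name>-<random-number>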

Delete the Amplify application using the AWS CLI

Note: If you receive errors when running AWS CLI commands, make sure that you're using the most recent version of the AWS CLI.

Run the following delete-app AWS CLI command:

Important: Replace your-app-id with your application's App ID. Replace application-region with the AWS Region that your application is in.

aws amplify delete-app --app-id <your-app-id> --region <application-region>

Note: You can also run the delete-backend-environment command to delete all of your application's backend environments first. Then, delete your application from the Amplify console.
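
A sketch of that command follows; it assumes the --app-id and --environment-name parameter names, and both values are placeholders:

aws amplify delete-backend-environment --app-id <your-app-id> --environment-name <backend-environment-name>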


Troubleshoot API Gateway SSL certificate errors

How do I troubleshoot errors with SSL certificates that are generated by API Gateway?

Last updated: 2022-12-15

I'm experiencing issues with self-signed and expired SSL certificates installed on my backend system. How do I fix these errors?

Short description

When Amazon API Gateway performs an SSL handshake with the backend, API Gateway expects the backend to provide certificates that are obtained from trusted issuers. API Gateway expects the certificates to be valid, and not expired. API Gateway also expects the chain of trust to be intact. This means that API Gateway expects the certificate to contain a root certificate authority (CA), intermediate CAs, and the parent certificate details. With this information, API Gateway can complete certificate validation by going through the chain of certificates.
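
One way to see exactly which certificate chain your backend presents is to inspect it with OpenSSL from any client machine. In the following sketch, the host name is a placeholder for your integration endpoint:

# Print the certificate chain that the backend returns during the TLS handshake
openssl s_client -connect <your-backend-domain>:443 -servername <your-backend-domain> -showcerts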

Resolution

Test HTTP proxy integration

To familiarize yourself with HTTP proxy integrations, test bad SSL certificates from the API Gateway console. Use the external website badssl.com, which provides bad SSL certificates for testing.

1.    Create a resource named "/selfsigned" with a GET method. Then, configure an HTTP proxy integration with the URL https://self-signed.badssl.com/.

From the API Gateway console, test the API. You receive the following error:

Thu Dec 15 16:05:05 UTC 2022 : Sending request to https://self-signed.badssl.com/
Thu Dec 15 16:05:05 UTC 2022 : Execution failed due to configuration error: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

2.    Create a resource named "/expiredcert" with a GET method. Then, configure an HTTP proxy integration with the URL https://expired.badssl.com/.

From the API Gateway console, test the API. You receive the following error:

Thu Dec 15 16:06:02 UTC 2022 : Sending request to https://expired.badssl.com/
Thu Dec 15 16:06:02 UTC 2022 : Execution failed due to configuration error: PKIX path validation failed: java.security.cert.CertPathValidatorException: validity check failed

3.    Create a resource named "/untrustedRootCA" with a GET method. Configure an HTTP proxy integration with the URL https://untrusted-root.badssl.com/.

From the API Gateway console, test the API. You receive the following error:

Thu Dec 15 16:06:28 UTC 2022 : Sending request to https://untrusted-root.badssl.com/
Thu Dec 15 16:06:28 UTC 2022 : Execution failed due to configuration error: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

With VPC link integration, API Gateway performs certificate validation with the next hop that performs TLS termination.

When a Network Load Balancer has a TLS listener, the Network Load Balancer performs a TLS termination and creates another connection to the target. The certificate attached to the Network Load Balancer must meet all the requirements. A Network Load Balancer doesn't perform certificate validation during the SSL handshake with the target. The Network Load Balancer accepts expired or self-signed certificates that are installed on the target instances. The Network Load Balancer and the target groups are bound within a VPC and communications are secure. If the Network Load Balancer is using a TCP listener, the TLS handshake happens end-to-end. In these cases, the backend application must comply with the SSL requirements.

API Gateway supports Server Name Indication (SNI) during an SSL handshake over a VPC link integration.

If the backend Network Load Balancer has a self-signed or private certificate that hasn't been issued by a CA, you receive the following error:

Execution failed due to configuration error: PKIX path building failed:
sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

The workaround for the execution failed error is to set insecureSkipVerification to true in the integration's tlsConfig object:

aws apigateway update-integration --rest-api-id abcde --resource-id abcd --http-method GET --patch-operations "op='replace',path='/tlsConfig/insecureSkipVerification',value=true"
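
To confirm that the change is in place, you can retrieve the integration and check its tlsConfig. This uses the same placeholder IDs as the preceding command:

aws apigateway get-integration --rest-api-id abcde --resource-id abcd --http-method GET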

Related information

Generate and configure an SSL certificate for backend authentication

API Gateway-supported certificate authorities for HTTP and HTTP proxy integrations

Target groups for your Network Load Balancers

