This repository was archived by the owner on Mar 23, 2026. It is now read-only.

Fix notifications for s3 uploads made with presigned post requests#1640

Merged
whummer merged 3 commits into localstack:master from thomaschaaf:patch-1
Feb 29, 2020
Conversation

@thomaschaaf
Contributor

Fixes #1225 and #945

@coveralls

coveralls commented Oct 9, 2019

Coverage Status

Coverage decreased (-0.4%) to 50.351% when pulling 7157790 on thomaschaaf:patch-1 into ae1da5b on localstack:master.

@whummer
Member

whummer commented Oct 10, 2019

Thanks for this PR @thomaschaaf ! Can you please check whether this is now already covered by the changes in #1639? Thanks!

@thomaschaaf
Contributor Author

Hello @whummer. No, #1639 does not fix the problem for me. The function self.is_query_allowable is truthy in my case, so that is not the issue. The problem is that the path of a presigned post request looks like this:

POST http://localhost:4572/bucketname

comparing it to the PUT request

PUT http://localhost:4572/bucketname/filename.ext

you can see that the POST request has no filename in the path; it is a multipart request, and the object key is carried in the form body instead.
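The distinction can be sketched in a few lines. This is only an illustration of the two URL shapes, not the actual localstack listener code; extract_bucket_and_key is a hypothetical helper:

```python
from urllib.parse import urlparse


def extract_bucket_and_key(method, url, form_fields):
    """Return (bucket, key) for both PUT and presigned-POST uploads."""
    path = urlparse(url).path.lstrip('/')
    if method == 'PUT':
        # PUT /bucketname/filename.ext -> the key sits in the path
        bucket, _, key = path.partition('/')
        return bucket, key
    # POST /bucketname -> the key travels in the multipart form field 'key'
    return path, form_fields.get('key')


print(extract_bucket_and_key('PUT', 'http://localhost:4572/bucketname/filename.ext', {}))
# -> ('bucketname', 'filename.ext')
print(extract_bucket_and_key('POST', 'http://localhost:4572/bucketname', {'key': 'filename.ext'}))
# -> ('bucketname', 'filename.ext')
```

Both requests target the same object, but a notification handler that only inspects the path will miss the POST case.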

Here is a script I am using to test locally. Save it as create_presigned_post.py and run it; the upload succeeds, but the notification is not sent.

# Copyright 2010-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# This file is licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License. A copy of the
# License is located at
#
# http://aws.amazon.com/apache2.0/
#
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
# OF ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.

import base64
import json
import logging
import requests
import boto3
from botocore.exceptions import ClientError


def use_presigned_url_in_html_page(url, fields):
    """Demonstrate how to use a presigned S3 URL to upload a file using an HTML page

    :param url: 'url' value returned by S3Client.generate_presigned_post()
    :param fields: 'fields' dictionary returned by S3Client.generate_presigned_post()

    Copy the URL and fields key:values into an HTML form as demonstrated below.

    <html>
      <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
      </head>
      <body>
        <!-- Copy the 'url' value returned by S3Client.generate_presigned_post() -->
        <form action="proxy.php?url=https%3A%2F%2Fgithub.com%2FURL_VALUE" method="post" enctype="multipart/form-data">
          <!-- Copy the 'fields' dictionary key:values returned by S3Client.generate_presigned_post() -->
          <input type="hidden" name="key" value="VALUE" />
          <input type="hidden" name="AWSAccessKeyId" value="VALUE" />
          <input type="hidden" name="policy" value="VALUE" />
          <input type="hidden" name="signature" value="VALUE" />
        File:
          <input type="file"   name="file" /> <br />
          <input type="submit" name="submit" value="Upload to Amazon S3" />
        </form>
      </body>
    </html>

    """
    pass


def create_presigned_post(bucket_name, object_name,
                          fields=None, conditions=None, expiration=3600):
    """Generate a presigned URL S3 POST request to upload a file

    :param bucket_name: string
    :param object_name: string
    :param fields: Dictionary of prefilled form fields
    :param conditions: List of conditions to include in the policy
    :param expiration: Time in seconds for the presigned URL to remain valid
    :return: Dictionary with the following keys:
        url: URL to post to
        fields: Dictionary of form fields and values to submit with the POST
    :return: None if error.
    """

    # Generate a presigned S3 POST URL
    s3_client = boto3.client('s3',
                             endpoint_url='http://localhost:4572',
                             use_ssl=False,
                             aws_access_key_id='ACCESS_KEY',
                             aws_secret_access_key='SECRET_KEY',
                             region_name='eu-central-1')
    try:
        response = s3_client.generate_presigned_post(Bucket=bucket_name,
                                                     Key=object_name,
                                                     Fields=fields,
                                                     Conditions=conditions,
                                                     ExpiresIn=expiration)
    except ClientError as e:
        logging.error(e)
        return None

    # The response contains the presigned URL and required fields
    return response


def main():
    """Exercise create_presigned_post()"""

    # Set these values before running the program
    bucket_name = 'biz-dev-document-upload-files'
    object_name = 'create_presigned_post.py'
    # If the presigned URL is used in an HTML page, the object name
    # can include a subdirectory prefix, as shown below.
    # object_name = 'presigned-uploads/${filename}'
    fields = {}
    conditions = []
    expiration = 60*60*24  # Upload must occur within 24 hours

    # Set up logging
    logging.basicConfig(level=logging.DEBUG,
                        format='%(levelname)s: %(asctime)s: %(message)s')

    # Generate a presigned S3 POST URL
    response = create_presigned_post(bucket_name, object_name,
                                     fields, conditions, expiration=expiration)
    if response is None:
        exit(1)
    logging.info(f'Presigned S3 POST URL: {response["url"]}')
    logging.info("Contents of 'fields' dictionary:")
    logging.info(json.dumps(response['fields']))

    # Write presigned URL and fields to files
    with open('post_url.txt', 'w') as f:
        f.write(response['url'])
    with open('post_fields.json', 'w') as f:
        f.write(json.dumps(response['fields']))

    # FYI: The generated policy can be examined by decoding it
    policy_decoded = base64.b64decode(response['fields']['policy'])

    # Demonstrate how an HTML page can use the presigned URL to upload a file
    use_presigned_url_in_html_page(response['url'], response['fields'])

    # Demonstrate how another Python program can use the presigned URL to upload a file
    # Use the Python requests package, which must be installed manually.
    #    pip install requests
    with open(object_name, 'rb') as f:
        files = {'file': (object_name, f)}
        http_response = requests.post(response['url'], data=response['fields'], files=files)
    # If successful, returns HTTP Status Code 204
    logging.info(f'File upload HTTP status code: {http_response.status_code}')


if __name__ == '__main__':
    main()
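As an aside, the policy_decoded line near the end of the script can be demonstrated standalone: the 'policy' form field is just base64-encoded JSON describing the upload conditions. The sample values below are made up to mirror the script, not output captured from localstack:

```python
import base64
import json

# Build a sample policy the way S3 would embed it in the 'fields' dict
sample_policy = base64.b64encode(json.dumps({
    'expiration': '2020-02-29T00:00:00Z',
    'conditions': [{'bucket': 'biz-dev-document-upload-files'},
                   {'key': 'create_presigned_post.py'}],
}).encode()).decode()

# Decoding it recovers the JSON policy document
decoded = json.loads(base64.b64decode(sample_policy))
print(decoded['conditions'][0]['bucket'])  # -> biz-dev-document-upload-files
```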

@thomaschaaf thomaschaaf changed the title Allow presigned urls to send notification Allow presigned post urls to send notification Oct 10, 2019
@thomaschaaf thomaschaaf changed the title Allow presigned post urls to send notification Fix notifications for s3 uploads made with presigned post requests Oct 10, 2019
@whummer
Member

whummer commented Oct 11, 2019

Ok, thanks for digging into this @thomaschaaf. Looks like there is currently a small code linter error in the TravisCI build - can we please fix it?

Also, I think it would be good to slightly extend the test in this PR to actually test the new functionality (possibly we can reuse some of the code you've shared in the snippet above.) Thanks!

@thomaschaaf
Contributor Author

thomaschaaf commented Feb 21, 2020

@whummer I fixed the linting error.

Sadly I don't really know how to improve the test further, since it already exercises the main change: the upload is a POST request instead of a PUT request. I added comments to show where the new functionality is tested.

body = 'something body'
# get presigned URL
object_key = 'test-presigned-post-key'
presigned_request = self.s3_client.generate_presigned_post(  # <- new functionality tested here
)
# put object
files = {'file': body}
response = requests.post(presigned_request['url'], data=presigned_request['fields'], files=files, verify=False)  # <- new functionality tested here
@thomaschaaf thomaschaaf requested a review from whummer February 21, 2020 16:54
@whummer
Member

whummer commented Feb 29, 2020

Thanks for updating the PR @thomaschaaf . The change looks good to me - would be great if we could extend the test in test_s3_post_object_on_presigned_post() to include an assertion that the notification is actually received.

Do you think you could add this in a follow-up PR? That would help us prevent regressions in the future. Thanks!
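The assertion requested here could look roughly like the sketch below. Polling the notification queue itself needs a running localstack instance, so this only shows the parsing/assertion step against a sample message body; the field names follow the standard S3 event message format, and assert_upload_notified is a hypothetical helper:

```python
import json


def assert_upload_notified(message_body, expected_key):
    """Check that an SQS message body is an ObjectCreated event for expected_key."""
    event = json.loads(message_body)
    record = event['Records'][0]
    # Presigned-POST uploads produce ObjectCreated:Post events
    assert record['eventName'].startswith('ObjectCreated'), record['eventName']
    assert record['s3']['object']['key'] == expected_key


# Sample body shaped like an S3 notification delivered to SQS
sample = json.dumps({'Records': [{
    'eventName': 'ObjectCreated:Post',
    's3': {'object': {'key': 'test-presigned-post-key'}},
}]})
assert_upload_notified(sample, 'test-presigned-post-key')
print('notification assertion passed')  # -> notification assertion passed
```

In the real test, the message body would come from sqs_client.receive_message() on the queue configured in the bucket's notification settings.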

@whummer whummer merged commit f8ddf25 into localstack:master Feb 29, 2020
jgbmattos pushed a commit to jgbmattos/localstack that referenced this pull request Mar 10, 2020


Development

Successfully merging this pull request may close these issues.

Presigned S3 url doesnt notify sqs

3 participants