How to compress image and then upload it to AWS S3 bucket using FastAPI?

Question:

Here is my code for uploading the image to AWS S3:

@app.post("/post_ads")
async def create_upload_files(files: list[UploadFile] = File(description="Multiple files as UploadFile")):
    main_image_list = []
    for file in files:
        s3 = boto3.resource(
            's3',
            aws_access_key_id=aws_access_key_id,
            aws_secret_access_key=aws_secret_access_key
        )
        bucket = s3.Bucket(aws_bucket_name)
        bucket.upload_fileobj(file.file, file.filename, ExtraArgs={"ACL": "public-read"})

Is there any way to compress the image size and upload the image to a specific folder using boto3? I have this function for compressing the image, but I don’t know how to integrate it into boto3.

    for file in files:
        im = Image.open(file.file)
        im = im.convert("RGB")
        im_io = BytesIO()
        im.save(im_io, 'JPEG', quality=50)

        s3 = boto3.resource(
            's3',
            aws_access_key_id=aws_access_key_id,
            aws_secret_access_key=aws_secret_access_key
        )
        bucket = s3.Bucket(aws_bucket_name)
        bucket.upload_fileobj(file.file, file.filename, ExtraArgs={"ACL": "public-read"})

Update #1

After following Chris’s recommendation, my problem was resolved.

Here is Chris’s solution:

im_io.seek(0)
bucket.upload_fileobj(im_io, file.filename, ExtraArgs={"ACL": "public-read"})
Asked By: hawaj


Answers:

You seem to be saving the image bytes to a BytesIO stream that is never used, as you upload the original file object to the S3 bucket instead, as shown in this line of your code:

bucket.upload_fileobj(file.file, file.filename, ExtraArgs={"ACL":"public-read"}) 

Hence, you need to pass the BytesIO object to the upload_fileobj() function, and make sure to call .seek(0) beforehand, in order to rewind the cursor (or "file pointer") to the start of the buffer. The reason for calling .seek(0) is that the im.save() method uses the cursor to iterate through the buffer, and when it reaches the end, it does not reset the cursor to the beginning. Hence, any subsequent read operation would start at the end of the buffer. The same applies to reading from the original file, as described in this answer: you would need to call file.file.seek(0) if the file contents have already been read and you need to read the file again.
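The cursor behaviour can be demonstrated with a plain BytesIO buffer, without any image involved:

```python
import io

buf = io.BytesIO()
buf.write(b"hello")  # cursor is now at position 5 (the end of the buffer)
print(buf.read())    # b'' - reading from the end yields nothing
buf.seek(0)          # rewind the cursor to the start
print(buf.read())    # b'hello'
```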

An example of how to load the image into a BytesIO stream and use it to upload the file/image is given below. Please remember to properly close the file and BytesIO objects, in order to release their memory (see this related answer as well).

im = buf = None
try:
    im = Image.open(file.file)
    if im.mode in ("RGBA", "P"):
        im = im.convert("RGB")
    buf = io.BytesIO()
    im.save(buf, 'JPEG', quality=50)
    buf.seek(0)  # rewind to the start of the buffer before uploading
    bucket.upload_fileobj(buf, 'out.jpg', ExtraArgs={"ACL": "public-read"})
except Exception:
    raise  # ...
finally:
    file.file.close()
    if buf is not None:
        buf.close()
    if im is not None:
        im.close()
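
As for uploading to a specific folder: S3 has no real directories, so a "folder" is simply a key prefix, and you can prepend it to the key passed to upload_fileobj(). A minimal sketch, where the folder name "ads" is just an example placeholder:

```python
def object_key(folder: str, filename: str) -> str:
    # S3 "folders" are just key prefixes separated by slashes.
    return f"{folder.strip('/')}/{filename}"

# e.g. bucket.upload_fileobj(buf, object_key("ads", file.filename),
#                            ExtraArgs={"ACL": "public-read"})
print(object_key("ads", "out.jpg"))  # ads/out.jpg
```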

As for the URL, using ExtraArgs={"ACL":"public-read"} should work as expected and make your resource (file) publicly accessible. Hence, please make sure you are accessing the correct URL.
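A publicly readable object can typically be reached through S3's virtual-hosted-style URL format. A small sketch for building it (bucket name and region below are placeholders):

```python
def public_url(bucket_name: str, region: str, key: str) -> str:
    # Virtual-hosted-style URL format used by S3.
    return f"https://{bucket_name}.s3.{region}.amazonaws.com/{key}"

print(public_url("my-bucket", "us-east-1", "out.jpg"))
# https://my-bucket.s3.us-east-1.amazonaws.com/out.jpg
```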

Answered By: Chris

    aws s3 sync s3://your-pics .
    for file in $(find . -name "*.jpg"); do gzip "$file"; echo "$file"; done
    aws s3 sync . s3://your-pics --content-encoding gzip --dryrun

This will download all the files in the S3 bucket to the machine (or EC2 instance), compress the image files, and upload them back to the S3 bucket.

This should help you.

Answered By: Arun singh