
Currently I use three steps to gzip some static assets and then upload them with s3cmd to an S3 bucket (technically it's a Digital Ocean Spaces bucket). Here's what I do:

  1. $ find . -type f -name '*.css' | xargs -I{} gzip -k -9 {}
  2. $ find . -type f -name '*.css.gz' | xargs -I{} s3cmd put --acl-public --add-header='Content-Encoding: gzip' {} s3://mybucket/assets/{}
  3. But then I have to manually change all of the extensions in my bucket to remove the .gz extension.

Is there a way to avoid doing step 3 manually? I'd love to know if it's possible, in step 2, to remove the .gz extension in the destination. I do want to keep the original files on my server, though; deleting them would be a deal breaker.
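For reference, the `-k` (`--keep`) flag in step 1 is what preserves the originals: gzip writes `file.css.gz` alongside `file.css` instead of replacing it. A quick self-contained demo (using a throwaway temp directory, not the real asset tree):

```shell
# Create a sample stylesheet in a temp dir and compress it.
tmp=$(mktemp -d)
echo 'body { color: red; }' > "$tmp/style.css"

# -k keeps the original; -9 selects maximum compression.
gzip -k -9 "$tmp/style.css"

ls "$tmp"   # both style.css and style.css.gz are present
```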


1 Answer


You can use find's -exec action to run a shell for each file, which lets you do string manipulation on the filename. The parameter expansion "${var%.*}" removes the final extension. Below is an example.

find . -type f -name '*.css.gz' -exec bash -c 's3cmd put --acl-public --add-header="Content-Encoding: gzip" "$1" "s3://mybucket/assets/${1%.*}"' -- {} \;
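To see why this works: `${1%.*}` strips the shortest trailing match of `.*`, i.e. only the last extension, so `style.css.gz` becomes `style.css`. A quick illustration (the filename is hypothetical):

```shell
# "%.*" removes the shortest suffix matching ".*" — just the final extension.
f="assets/style.css.gz"
echo "${f%.*}"    # -> assets/style.css
```

Because `%` is non-greedy, the earlier `.css` part of the name is untouched; `%%.*` would instead strip everything after the first dot.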
