Protect S3 Data from Deletion using Multi-Factor Authentication

One of the interesting features available in Amazon S3 is Multi-Factor Authentication (MFA) Delete.

This feature allows you to specify a TOTP-compatible virtual MFA device or a physical MFA device and link it to a versioned S3 bucket. Any future attempt to change the versioning state of the bucket, or to permanently delete any version of an object, must then be accompanied by a valid token code. (You can’t delete the bucket itself either, as a bucket holding any objects/versions can’t be deleted.)
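
To illustrate, once MFA Delete is enabled, permanently deleting a specific object version has to carry the same “serial-number token-code” string the script below uses. A minimal sketch with the Ruby SDK (the bucket, key and version ID here are placeholders):

require 'aws-sdk'

# Credentials are picked up from the environment in this sketch.
client = Aws::S3::Client.new(region: 'ap-southeast-2')

# Without a valid MFA string, this call is rejected on a bucket with MFA Delete enabled.
client.delete_object(
  bucket: 'my-first-s3-bucket',   # placeholder
  key: 'audit/some-event-log.gz', # placeholder
  version_id: 'some-version-id',  # placeholder
  mfa: 'arn:aws:iam::123456789:mfa/root-account-mfa-device 123456'
)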

This is especially useful in environments where you want to store security-related audit/event/log data – especially where some level of dual-control and/or physical segregation is required.

When I looked at what was available to do this, I found a couple of GUI solutions that supported it – however I wanted something scriptable that could be used in automation. In my case, it was designed to be manually executed by two custodians (one holding the AWS root account credentials, and one holding the physical MFA token). Theoretically, this could be fully automated with a virtual TOTP application or an HSM.

The process to set-up prerequisites is simple:

  1. Install Ruby – ensure it can run Ruby scripts (.rb files) and is available in PATH.
  2. Install the AWS SDK for Ruby by running gem install aws-sdk from a command line.
  3. Save the script provided below as a Ruby (.rb) script file.
  4. Execute the script by following the usage instructions described below.

Usage

Structure: 
> enable-mfa-delete-s3.rb ACCESS_KEY SECRET_KEY S3_BUCKET_NAME AWS_ACCOUNT_ID AWS_REGION AWS_TOKEN_CODE

Example: 
> enable-mfa-delete-s3.rb AAAAAAAA BBBBBBBB my-first-s3-bucket 123456789 ap-southeast-2 123456

Script

require 'rubygems'
require 'aws-sdk'

# Fix SSL problem on Windows + Ruby: https://github.com/aws/aws-sdk-core-ruby/issues/166
Aws.use_bundled_cert!

# Read script arguments.
aws_access_key  = ARGV[0].to_s
aws_secret_key  = ARGV[1].to_s
aws_bucket_name = ARGV[2].to_s
aws_account_id  = ARGV[3].to_s
aws_region      = ARGV[4].to_s
aws_token_code  = ARGV[5].to_s

# Create a client object to interface with S3.
client = Aws::S3::Client.new(region: aws_region, access_key_id: aws_access_key, secret_access_key: aws_secret_key)

# Assemble the MFA string.
aws_mfa_device_string = 'arn:aws:iam::' + aws_account_id + ':mfa/root-account-mfa-device ' + aws_token_code

# Update the bucket versioning policy.
client.put_bucket_versioning({
  bucket: aws_bucket_name, # required
  mfa: aws_mfa_device_string,
  versioning_configuration: { # required
    mfa_delete: "Enabled", # accepts Enabled, Disabled
    status: "Enabled", # accepts Enabled, Suspended
  },
  use_accelerate_endpoint: false,
})

# Output the new state.
resp = client.get_bucket_versioning({
  bucket: aws_bucket_name, # required
  use_accelerate_endpoint: false,
})

print 'Bucket Name: ' + aws_bucket_name + "\n"
print 'Versioning Status: ' + resp.status + "\n"
print 'MFA Delete Status: ' + resp.mfa_delete + "\n"

Full credit to the people who’ve posted their Ruby-based solutions [1] [2] previously (although these no longer seem to work given changes to the AWS SDK and/or Ruby).

ECDSA Certificates with Apache 2.4 & Let’s Encrypt

Update: Based on feedback, I’ve modified the configuration so that instead of whitelisting allowed TLS versions, it blacklists the insecure ones. Thanks Johannes Pfrang.

A little while ago, I wrote a post about running dual RSA and ECDSA certificates on my websites. Since then, I’ve found that there is little to no impact of running my websites with only an ECDSA certificate. You can also get these certificates for free, since Let’s Encrypt is now signing ECDSA certificates with their RSA root.

Infrastructure Set-Up

The infrastructure supporting this all is as follows:

  1. Ubuntu 15.10
  2. Ondřej Surý’s Apache 2.4.x PPA (added as per the commands after this list)
  3. OpenSSL 1.0.2 (via the PPA above)
  4. Let’s Encrypt Auto Script
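
For the PPA in item 2, adding it is the usual process (assuming the standard ppa:ondrej/apache2 archive name):

sudo add-apt-repository ppa:ondrej/apache2
sudo apt-get update
sudo apt-get install apache2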

Capturing TLS Usage Logs

One of the things that helped in deciding how to proceed with this setup was configuring a new Apache access log to capture TLS request data.

LogFormat "%t,%h,%H,%v,%{SSL_PROTOCOL}x,%{SSL_CIPHER}x,\"%{User-agent}i\"" ssl
CustomLog /var/log/apache2/ssl.log ssl

This outputs a log file that shows the time of the request, the source IP address, the HTTP version, Apache Virtual Host, TLS Version, TLS Cipher and the Browser User Agent. Example:

[28/Feb/2016:13:42:20 +1100],1.2.3.4,HTTP/2,blog.joelj.org,TLSv1.2,ECDHE-ECDSA-AES128-GCM-SHA256,"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36"
[28/Feb/2016:13:45:31 +1100],4.5.6.7,HTTP/1.1,blog.joelj.org,TLSv1.2,ECDHE-ECDSA-AES128-GCM-SHA256,"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
[28/Feb/2016:13:45:52 +1100],1.2.3.4,HTTP/2,blog.joelj.org,TLSv1.2,ECDHE-ECDSA-AES128-GCM-SHA256,"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:44.0) Gecko/20100101 Firefox/44.0"
[28/Feb/2016:13:46:06 +1100],1.2.3.4,HTTP/2,blog.joelj.org,TLSv1.2,ECDHE-ECDSA-AES256-GCM-SHA384,"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.10240"

It turned out to be fairly easy to run some basic queries against this data. I ended up doing the following:

  1. Using the AWS CloudWatch Logs Agent to send the whole log file in real-time to AWS CloudWatch Logs (a sample agent configuration is sketched after this list).
  2. Using the AWS Management Console to export all logs to Amazon S3 and then download them locally.
  3. Using q – Text as Data to run SQL queries directly against the GZIP’d log data.
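
For the first step, a minimal CloudWatch Logs agent (awslogs) configuration for shipping that log might look like the following – the log group, stream name and state file path are just examples, and the datetime format may need adjusting to your LogFormat:

[general]
state_file = /var/lib/awslogs/agent-state

[/var/log/apache2/ssl.log]
file = /var/log/apache2/ssl.log
log_group_name = apache-ssl
log_stream_name = {hostname}
datetime_format = %d/%b/%Y:%H:%M:%S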

Once I had all the AWS data from S3 in a folder, I could run a simple query such as:

q -d"," -T "SELECT c6, count(*) FROM ..\data-q\*.gz GROUP BY c6 ORDER BY count(*) DESC"

This gives me some nice, basic statistics (percentages added in post-processing):

Ciphersuite                   Requests   %
ECDHE-RSA-AES128-GCM-SHA256   45557      93.5
ECDHE-RSA-AES256-GCM-SHA384   1914       3.9
ECDHE-RSA-AES128-SHA          625        1.3
ECDHE-RSA-AES128-SHA256       293        0.6
ECDHE-RSA-AES256-SHA          184        0.4
ECDHE-RSA-AES256-SHA384       149        0.3
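
The same approach works for any other column – for example, grouping by TLS protocol version (c5 in the log format above) rather than ciphersuite:

q -d"," -T "SELECT c5, count(*) FROM ..\data-q\*.gz GROUP BY c5 ORDER BY count(*) DESC"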

Apache TLS Hardening

I had already decided that DHE support wasn’t needed on my website. This was largely because I was running multiple SSL virtual hosts on a single IP (so lots of the older clients that didn’t support SNI wouldn’t work anyway). The only SNI-supporting client I would lose, according to the Qualys SSL Labs Server test, was OpenSSL 0.9.8. Given it’s not a typical user-facing client, I decided this wasn’t a major loss.

Therefore, the move to ECDSA certificates had no noticeable impact on my client compatibility. The ciphersuites (and order) I landed on are as follows:

  1. ECDHE-ECDSA-AES256-GCM-SHA384
  2. ECDHE-ECDSA-CHACHA20-POLY1305
  3. ECDHE-ECDSA-AES128-GCM-SHA256
  4. ECDHE-ECDSA-AES256-SHA384
  5. ECDHE-ECDSA-AES256-SHA
  6. ECDHE-ECDSA-AES128-SHA256
  7. ECDHE-ECDSA-AES128-SHA

Note: ChaCha20-Poly1305 is in there for a bit of future proofing once OpenSSL 1.1 is released.

In the end, all the SSL hardening (which I’ve adapted from the CIS Apache Server Benchmark, the Mozilla Server-Side TLS recommendations and various online sources) ends up looking like this:

# Disable insecure renegotiation.
SSLInsecureRenegotiation Off

# Disable Compression.
SSLCompression Off

# Set up OCSP stapling.
SSLUseStapling On
SSLStaplingResponderTimeout 5
SSLStaplingReturnResponderErrors off
SSLStaplingCache shmcb:logs/ssl_stapling(32768)

# Disable Session Tickets.
SSLSessionTickets Off

# Turn on TLS.
SSLEngine on

# Disallow insecure protocols; allow current and future protocols. (TLS 1.0 is arguably still required.)
SSLProtocol all -SSLv2 -SSLv3

# Prefer the server's cipher order over the client's advertised preference.
SSLHonorCipherOrder On

# The actual ciphersuite and order to use.
SSLCipherSuite "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:!aNULL:!eNULL:!EXPORT:!RC4:!DES:!SSLv2:!MD5:!SSLv3:!3DES:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA"

# Bump up the strength of the ECDHE key exchange.
SSLOpenSSLConfCmd ECDHParameters secp384r1
SSLOpenSSLConfCmd Curves secp384r1
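
A quick way to sanity-check what a client actually negotiates against this configuration (using one of my own hostnames as the example) is something along these lines – with OpenSSL 1.0.2 this should report the protocol, the chosen ECDHE-ECDSA ciphersuite and the P-384 temporary key:

openssl s_client -connect blog.joelj.org:443 -servername blog.joelj.org < /dev/null 2>/dev/null | grep -E "Protocol|Cipher|Server Temp Key"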

This pretty much sums up all of the TLS-related hardening. The only thing I can think of that’s missing would be support for Certificate Transparency by providing Signed Certificate Timestamps (SCTs). It looks like there isn’t a way to do this on Apache 2.4 – the current Apache module (mod_ssl_ct) seems to require you to compile a development/trunk version of Apache (and exists in the documentation as an unreleased Apache 2.5).

Manually Generating Signed ECDSA Certificates from Let’s Encrypt

I’m sure an option to generate ECDSA certificates is on its way with the automatic Let’s Encrypt tool, but here is how you can manually generate certificates for now:

# Create the private key.
openssl ecparam -genkey -name secp384r1 > "/letsencrypt/ecdsa/blog.joelj.org/privkey.pem"

# Create OpenSSL configuration file.
cat /etc/ssl/openssl.cnf > "/letsencrypt/ecdsa/blog.joelj.org/openssl.cnf"
echo "[SAN]" >> "/letsencrypt/ecdsa/blog.joelj.org/openssl.cnf"

# Pick one based on whether it's a root domain or not, e.g.:
echo "subjectAltName=DNS:blog.joelj.org" >> "/letsencrypt/ecdsa/blog.joelj.org/openssl.cnf"
# OR, for a root domain plus www:
echo "subjectAltName=DNS:joelj.org,DNS:www.joelj.org" >> "/letsencrypt/ecdsa/joelj.org/openssl.cnf"
# Create Certificate Signing Request.
openssl req -new -sha256 -key "/letsencrypt/ecdsa/blog.joelj.org/privkey.pem" -nodes -out "/letsencrypt/ecdsa/blog.joelj.org/request.csr" -outform pem -subj "/O=blog.joelj.org/emailAddress=someone@joelj.org/CN=blog.joelj.org" -reqexts SAN -config "/letsencrypt/ecdsa/blog.joelj.org/openssl.cnf"

# Get signed certificate.
/letsencrypt/letsencrypt-auto certonly --webroot -w /var/www/blog.joelj.org/ -d blog.joelj.org --email "someone@joelj.org" --csr "/letsencrypt/ecdsa/blog.joelj.org/request.csr"

If this is the first time you’ve done this, all you need to do now is point to the latest certificate chain and private key in your relevant Apache configuration location.

SSLCertificateFile /letsencrypt/ecdsa/blog.joelj.org/0001_chain.pem
SSLCertificateKeyFile /letsencrypt/ecdsa/blog.joelj.org/privkey.pem
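
To confirm that the certificate being served is actually the ECDSA one (paths as per the example above), something like this should show an id-ecPublicKey / 384-bit public key:

openssl x509 -in /letsencrypt/ecdsa/blog.joelj.org/0001_chain.pem -noout -text | grep -A1 "Public Key Algorithm"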

Qualys SSL Labs Testing

I was fairly satisfied with the range of flexibility and client support this configuration allowed for!

[Screenshot: Qualys SSL Labs score]

[Screenshot: Qualys SSL Labs client simulation]

Using Amazon Web Services to Capture CSP and HPKP Reports

I’ve been working to review and harden the security on my personal websites lately (maybe some other posts about cipher suite choices and server logging to AWS coming up).

One thing I’ve never utilised before is the reporting functionality available in both Content Security Policy (CSP) and HTTP Public Key Pinning (HPKP). This reporting helps you tune your policies (in the case of CSP) and lets you see violations of both CSP and HPKP. For both forms of reporting, you provide a URL in the CSP/HPKP header; when the client’s browser detects a violation of either policy, it simply sends an HTTP POST request containing a report to that URL.
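
As a rough illustration (the endpoint here is a made-up API Gateway invoke URL, and the pin values are placeholders), the reporting URL simply ends up embedded in the response headers – e.g. via mod_headers:

Header always set Content-Security-Policy "default-src 'self'; report-uri https://abcd1234.execute-api.ap-southeast-2.amazonaws.com/prod/"
Header always set Public-Key-Pins "pin-sha256=\"PLACEHOLDER_PRIMARY_PIN=\"; pin-sha256=\"PLACEHOLDER_BACKUP_PIN=\"; max-age=5184000; report-uri=\"https://abcd1234.execute-api.ap-southeast-2.amazonaws.com/prod/\""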

You generally have two choices here – develop your own mini-application to capture these reports, or use somebody else’s service. I wanted to keep the number of live web applications I had to maintain down to a bare minimum (so option 1 was not looking good), and I didn’t want a third party receiving my reports in perpetuity (so option 2 was struck out).

Having an Amazon Web Services account, I decided to see whether it was possible to utilise some of the tooling available there to help me out. The idea was to provision something in my personal AWS account that could maintain an endpoint to collect the reports, and then export/view/search/alert on them as needed – while keeping the cost down as much as possible (this is only a personal endeavour, after all!)

To summarise, my first solution to this problem is as follows:

AWS API Gateway allows you to create an “API” to receive requests. It’s reasonably cheap – you pay roughly the price of a cup of coffee per million API requests, plus a few cents per GB of data transfer. There’s also a free tier for the first 12 months, which means I’ll be able to estimate roughly how much this will cost me longer term.

Some good things I’ve discovered about the AWS API Gateway service:

  • The AWS API Gateway service can be configured using the AWS Management Console GUI to log all requests to CloudWatch Logs (including full headers and responses).
  • You only need to set up a single POST API on the root URL for receiving the report requests – this is only a couple of quick clicks in the AWS Management Console GUI.
  • You can set that single POST API to either do nothing (mock execution) or act as an HTTP proxy for another service (theoretically, I think this means you could daisy-chain to a secondary service as well, like Report-URI).

The raw logs from the API Gateway service (which include the violation reports) then get thrown into a Log Group in AWS CloudWatch Logs. You can then use the inbuilt search/parse tool in CloudWatch Logs in the Management Console, export the entire log file to Amazon S3 to dice up offline, or pass it on to a couple of other AWS services (which I haven’t looked into just yet).
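
The reports themselves are just small JSON documents POSTed by the browser. A CSP violation report, for example, looks roughly like this (values illustrative, using the same made-up endpoint as above):

{
  "csp-report": {
    "document-uri": "https://blog.joelj.org/",
    "referrer": "",
    "violated-directive": "script-src 'self'",
    "effective-directive": "script-src",
    "original-policy": "default-src 'self'; report-uri https://abcd1234.execute-api.ap-southeast-2.amazonaws.com/prod/",
    "blocked-uri": "https://example.com/external.js"
  }
}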

Setting up the API:

[Screenshot: API Gateway – API set-up]

Configuring the logging:

[Screenshot: API Gateway – logging configuration]

Viewing the reports:

[Screenshot: CloudWatch Logs – report entries]

What I’d like to do in the future:

  1. See if I can use a custom domain with Amazon API Gateway (Definitely possible, but I’d want to see if the new AWS Certificate Manager would provide a free SSL certificate for that endpoint.)
  2. See if I can stream this data into an AWS service for live analysis when I need to search it (more than the string-based search the AWS Management Console provides on CloudWatch Logs data).
  3. See whether I can get alerting in place based on particular keywords of interest and/or volumes of violations seen per day, or on a certain page.