Setting up pfSense with IPv6

I recently had the opportunity to redesign my home network (including dual-stack IPv6). My setup is as follows:

Physical Devices

Logical Setup

  • A “secure” VLAN – for ‘trusted’ devices.
  • A “restricted” VLAN – for devices on my wireless network and ‘untrusted’ wired devices (e.g. smart TVs).

On the IPv4 side, this is the usual NAT setup, with each network having a separate private /24 subnet. IPv6 was a little different – essentially my ISP delegates a /56 IPv6 prefix via DHCPv6 prefix delegation. Given the IPv6 minimum subnet size of /64, this gives me 8 bits to play with (i.e. I can have up to 256 /64 networks). My IPv6 network therefore followed this model:
[ISP 56-Bits]:[My Subnet 8-Bits]:[My Device Address 64-Bits] = 128-bit IPv6 address.
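
As a purely hypothetical illustration (using the 2001:db8::/32 documentation range rather than my real allocation): if the ISP delegates 2001:db8:aa00::/56, the 8 subnet bits give me 2001:db8:aa00:0::/64 through 2001:db8:aa00:ff::/64 to hand out, for example:

2001:db8:aa00::/56    – prefix delegated by the ISP
2001:db8:aa00:1::/64  – “secure” VLAN
2001:db8:aa00:2::/64  – “restricted” VLAN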

Interesting Findings

  • Chromecast support across multiple VLANs on both IPv4 and IPv6 required:
    • Installing the Avahi pfSense package. The default settings seem to work fine to make the Chromecasts discoverable – but I noticed Avahi can crash every now and then, so I set up the separate Service Watchdog pfSense package to restart it if it crashes.
    • Ensuring each Chromecast received a static IP address, ensuring no firewall rules block access to the Chromecast devices across subnets, and ensuring multicast UDP traffic was allowed to flow freely to the IPv4 (224.0.0.0/4) and IPv6 (ff00::/8) multicast ranges.
  • To ensure IPv6 works effectively when you’re running in stateless mode (SLAAC), you can no longer set Windows desktops / servers to “block all incoming” connections on the host firewall. If you do, you’ll end up blocking the inbound router advertisement (RA) packets that set up the IPv6 default gateway.
  • When I first set up IPv6 DNS servers for IPv6 connections, I noticed that there was a big latency difference between IPv4 Google Public DNS (<4ms from Sydney) and IPv6 Google Public DNS (~200ms latency). Between my first draft of this post and publishing it, this appears to have been resolved – Google confirmed they’ve extended the geographic range of Google Public DNS out to Australia (my IPv6 DNS pings are now ~4ms, the same as IPv4).
  • When I first tested my IPv6 connectivity, I was using the two major test sites (Exhibit A | Exhibit B). I noticed one generally gave me an 18/20 despite everything looking good. When I did some more reading, I discovered that IPv6 requires you to allow certain types of ICMP traffic inbound from the public internet (which you didn’t need in an IPv4-only world). Most operating-system host firewalls still block it though(!). Based on section 4.3.1 of the IETF spec (RFC 4890), I’ve unblocked the 6 types of ICMPv6 traffic that “MUST NOT be blocked” (see the sketch after this list).
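
As a hedged sketch (the post doesn’t say which tool was used to unblock these), the six RFC 4890 section 4.3.1 message types could be allowed on a Windows host firewall with netsh along these lines:

netsh advfirewall firewall add rule name="ICMPv6 Destination Unreachable" dir=in action=allow protocol=icmpv6:1,any
netsh advfirewall firewall add rule name="ICMPv6 Packet Too Big" dir=in action=allow protocol=icmpv6:2,any
netsh advfirewall firewall add rule name="ICMPv6 Time Exceeded" dir=in action=allow protocol=icmpv6:3,any
netsh advfirewall firewall add rule name="ICMPv6 Parameter Problem" dir=in action=allow protocol=icmpv6:4,any
netsh advfirewall firewall add rule name="ICMPv6 Echo Request" dir=in action=allow protocol=icmpv6:128,any
netsh advfirewall firewall add rule name="ICMPv6 Echo Reply" dir=in action=allow protocol=icmpv6:129,any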

Protect S3 Data from Deletion using Multi-Factor Authentication

One of the interesting features available in Amazon S3 is Multi-Factor Authentication (MFA) Delete.

This feature allows you to specify a TOTP-compatible virtual MFA device or a physical MFA device and link it to a versioned S3 bucket. This means that any future attempt to change the versioning state of the bucket, or to delete any version of an object from the bucket, MUST be accompanied by a valid token code. (You can’t delete the bucket itself either, as a bucket still holding objects/versions can’t be deleted.)

This is especially useful in environments where you want to store security-related audit/event/log data – especially where some level of dual-control and/or physical segregation is required.

When I was trying to see what was available to do this, I looked at a couple of GUI solutions which supported it – however, I wanted a scriptable version that could be used in automation. In my case, it was designed to be manually executed by two custodians (one holding the AWS root account credentials, and one holding the physical MFA token). Theoretically, this could be fully automated by using a virtual TOTP application or an HSM.

The process to set up the prerequisites is simple:

  1. Install Ruby – ensure it can run Ruby scripts (.rb files) and that it’s available on your PATH.
  2. Install the AWS SDK for Ruby by typing gem install aws-sdk in a command line.
  3. Save the script provided below as a Ruby (.rb) script file
  4. Execute the script by following the usage instructions described below.



> enable-mfa-delete-s3.rb AAAAAAAA BBBBBBBB my-first-s3-bucket 123456789 ap-southeast-2 123456
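
The arguments are, in order: the AWS access key ID, the AWS secret access key, the S3 bucket name, the AWS account ID, the AWS region and the current six-digit MFA token code (the values above are placeholders).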


require 'rubygems'
require 'aws-sdk'

# Fix SSL certificate verification problems on Windows + Ruby by using the SDK's bundled CA certificates.
Aws.use_bundled_cert!

# Read script arguments.
aws_access_key 		= ARGV[0].to_s
aws_secret_key 		= ARGV[1].to_s
aws_bucket_name		= ARGV[2].to_s
aws_account_id 		= ARGV[3].to_s
aws_region 		= ARGV[4].to_s
aws_token_code 		= ARGV[5].to_s

# Create a client object to interface with S3.
client = Aws::S3::Client.new(:region => aws_region, :access_key_id => aws_access_key, :secret_access_key => aws_secret_key)

# Assemble the MFA string.
aws_mfa_device_string = 'arn:aws:iam::' + aws_account_id + ':mfa/root-account-mfa-device ' + aws_token_code

# Update the bucket versioning policy.
  bucket: aws_bucket_name, # required
  mfa: aws_mfa_device_string,
  versioning_configuration: { # required
    mfa_delete: "Enabled", # accepts Enabled, Disabled
    status: "Enabled", # accepts Enabled, Suspended
  use_accelerate_endpoint: false,

# Output the new state.
resp = client.get_bucket_versioning({
  bucket: aws_bucket_name, # required
  use_accelerate_endpoint: false,
})

print 'Bucket Name: ' + aws_bucket_name + "\n"
print 'Versioning Status: ' + resp.status + "\n"
print 'MFA Delete Status: ' + resp.mfa_delete + "\n"
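
For reference (and as a sanity check – this is my own sketch rather than part of the original script), the same two operations map to the AWS CLI, with placeholder bucket name, account ID and token code:

aws s3api put-bucket-versioning --bucket my-first-s3-bucket --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"
aws s3api get-bucket-versioning --bucket my-first-s3-bucket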

Full credit to the people who’ve posted their Ruby-based solutions [1] [2] previously (which unfortunately no longer seem to work, given changes to the AWS SDK and/or Ruby).

ECDSA Certificates with Apache 2.4 & Let’s Encrypt

Update: Based on feedback, I’ve modified the configuration so that instead of whitelisting TLS versions, it blacklists insecure TLS versions. Thanks Johannes Pfrang.

A little while ago, I wrote a post about running dual RSA and ECDSA certificates on my websites. Since then, I’ve found that there is little to no impact of running my websites with only an ECDSA certificate. You can also get these certificates for free, since Let’s Encrypt is now signing ECDSA certificates with their RSA root.

Infrastructure Set-Up

The infrastructure supporting this all is as follows:

  1. Ubuntu 15.10
  2. Ondřej Surý’s Apache 2.4.x PPA
  3. OpenSSL 1.0.2 (via the PPA above)
  4. Let’s Encrypt Auto Script

Capturing TLS Usage Logs

One of the things that helped me decide how to proceed with this setup was configuring a new Apache access log to capture TLS request data.

LogFormat "%t,%h,%H,%v,%{SSL_PROTOCOL}x,%{SSL_CIPHER}x,\"%{User-agent}i\"" ssl
CustomLog /var/log/apache2/ssl.log ssl

This outputs a log file that shows the time of the request, the source IP address, the HTTP version, the Apache virtual host, the TLS version, the TLS cipher and the browser User-Agent. Example:

[28/Feb/2016:13:42:20 +1100],,HTTP/2,,TLSv1.2,ECDHE-ECDSA-AES128-GCM-SHA256,"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36"
[28/Feb/2016:13:45:31 +1100],,HTTP/1.1,,TLSv1.2,ECDHE-ECDSA-AES128-GCM-SHA256,"Mozilla/5.0 (compatible; Googlebot/2.1; +"
[28/Feb/2016:13:45:52 +1100],,HTTP/2,,TLSv1.2,ECDHE-ECDSA-AES128-GCM-SHA256,"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:44.0) Gecko/20100101 Firefox/44.0"
[28/Feb/2016:13:46:06 +1100],,HTTP/2,,TLSv1.2,ECDHE-ECDSA-AES256-GCM-SHA384,"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.10240"

In the end, I discovered it was fairly easy to run some basic queries against this data. I ended up doing the following:

  1. Using the AWS CloudWatch Logs agent to send the whole log file in real-time to AWS CloudWatch Logs (a config sketch follows this list).
  2. Using the AWS Management Console to export all logs to Amazon S3 and then download them locally.
  3. Using q – Text as Data to run SQL queries directly against the GZIP’d log data.
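
For step 1, a minimal sketch of what the CloudWatch Logs agent configuration could look like (assuming the classic awslogs agent – the state file path, log group and stream names below are placeholders I’ve picked):

[general]
state_file = /var/lib/awslogs/agent-state

[/var/log/apache2/ssl.log]
file = /var/log/apache2/ssl.log
log_group_name = apache-ssl-access
log_stream_name = {instance_id}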

Once I had all the AWS data from S3 in a folder, I could run a simple query such as:

q -d"," -T "SELECT c6, count(*) FROM ..\data-q\*.gz GROUP BY c6 ORDER BY count(*) DESC"

This gives me some nice, basic statistics (percentages added in post-processing):

Ciphersuite                   Requests   %
ECDHE-RSA-AES128-GCM-SHA256   45557      93.5
ECDHE-RSA-AES256-GCM-SHA384   1914       3.9
ECDHE-RSA-AES128-SHA          625        1.3
ECDHE-RSA-AES128-SHA256       293        0.6
ECDHE-RSA-AES256-SHA          184        0.4
ECDHE-RSA-AES256-SHA384       149        0.3
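
The same approach works for the other columns – for example, this variant of the query (my own sketch) groups by column c5 instead, giving a count of requests per TLS protocol version:

q -d"," -T "SELECT c5, count(*) FROM ..\data-q\*.gz GROUP BY c5 ORDER BY count(*) DESC"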

Apache TLS Hardening

I had already decided that DHE support wasn’t needed on my website. This was largely due to the fact I was running multiple SSL virtual hosts on a single IP (so lots of the older clients which didn’t support SNI wouldn’t work anyway). The only SNI-supporting client I would lose, according to the Qualys SSL Labs Server Test, was OpenSSL 0.9.8. Given it’s not a typical user-facing client, I decided this wasn’t a major loss.

Therefore, the move to ECDSA certificates had no noticeable impact on my client compatibility. The ciphersuites (and order) I landed on are as follows:


Note: ChaCha20-Poly1305 is in there for a bit of future proofing once OpenSSL 1.1 is released.

In the end, all the following SSL hardening (which I’ve adapted from the CIS Apache Server Benchmark, Mozilla Server-Side TLS recommendations and various online sources) ends up looking like this:

# Disable insecure renegotiation.
SSLInsecureRenegotiation Off

# Disable Compression.
SSLCompression Off

# Set up OCSP stapling.
SSLUseStapling On
SSLStaplingResponderTimeout 5
SSLStaplingReturnResponderErrors off
SSLStaplingCache shmcb:logs/ssl_stapling(32768)

# Disable Session Tickets.
SSLSessionTickets Off

# Turn on TLS.
SSLEngine on

# Disallow insecure protocols, allow current and future protocols. (TLS 1.0 is arguably still required.)
SSLProtocol all -SSLv2 -SSLv3

# Prefer the server's cipher order over the client's preference.
SSLHonorCipherOrder On

# The actual ciphersuite and order to use.

# Bump up the strength of the ECDHE key exchange.
SSLOpenSSLConfCmd ECDHParameters secp384r1
SSLOpenSSLConfCmd Curves secp384r1
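
To verify that OCSP stapling is actually being returned (a quick check I’d suggest – the hostname below is a placeholder), you can use openssl s_client:

echo | openssl s_client -connect www.example.com:443 -servername www.example.com -status 2>/dev/null | grep -A 3 "OCSP Response Status"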

This pretty much sums up all of the TLS-related hardening. The only thing I can think of that’s missing would be to support Certificate Transparency by providing Signed Certificate Timestamps (SCTs). It looks like there isn’t a way to do this on Apache 2.4. The current Apache module (mod_ssl_ct) seems to require you to compile a development/trunk version of Apache (and exists in the documentation as an unreleased Apache 2.5).

Manually Generating Signed ECDSA Certificates from Let’s Encrypt

I’m sure an option to generate ECDSA certificates is on its way with the automatic Let’s Encrypt tool, but here is how you can manually generate certificates for now:

# Create the private key.
openssl ecparam -genkey -name secp384r1 > "/letsencrypt/ecdsa/"

# Create OpenSSL configuration file.
cat /etc/ssl/openssl.cnf > "/letsencrypt/ecdsa/"
echo "[SAN]" >> "/letsencrypt/ecdsa/"

# Pick one based on whether it's a root domain or not. E.g.:
echo "" >> "/letsencrypt/ecdsa/"
echo "," >> "/letsencrypt/ecdsa/"	
# Create Certificate Signing Request.
openssl req -new -sha256 -key "/letsencrypt/ecdsa/" -nodes -out "/letsencrypt/ecdsa/" -outform pem -subj "/" -reqexts SAN -config "/letsencrypt/ecdsa/"

# Get signed certificate.
/letsencrypt/letsencrypt-auto certonly --webroot -w /var/www/ -d --email "" --csr "/letsencrypt/ecdsa/"

If this is the first time you’ve done this, all you need to do now is point to the latest certificate chain and private key in your relevant Apache configuration location.

SSLCertificateFile /letsencrypt/ecdsa/
SSLCertificateKeyFile /letsencrypt/ecdsa/
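
Once Apache is reloaded, you can confirm an ECDSA certificate is actually being served (a quick check of my own – the hostname below is a placeholder):

echo | openssl s_client -connect www.example.com:443 -servername www.example.com 2>/dev/null | openssl x509 -noout -text | grep -E "Public Key Algorithm|ASN1 OID"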

Qualys SSL Labs Testing

I was fairly satisfied with the range of flexibility and client support this configuration allowed for!