Justin Bachus

Thoughts on Travel, Technology, and Entrepreneurship

Automated EC2 Snapshots

I’ve been running several VMs on Amazon’s EC2 and need automated backups just in case something goes wrong. Luckily, AWS has a nice API that makes this really easy.

EBS snapshots are incremental, so they don’t take a ton of space if you’re doing daily snapshots. Root snapshots should only be made while the host is down, but user data should be partitioned off of the root filesystem anyway.

We have just two volumes to snapshot – /home and /data. One of them houses MySQL database files, so we want the database flushed while the snapshot runs. I found a tool called ec2-consistent-snapshot that makes it easy to freeze the filesystem, flush the DB, and take the snapshot.

Step 1: Install the tool

yum --enablerepo=epel install perl-Net-Amazon-EC2 perl-File-Slurp perl-DBI perl-DBD-MySQL perl-Net-SSLeay perl-IO-Socket-SSL perl-Time-HiRes perl-Params-Validate perl-DateTime-Format-ISO8601 perl-Date-Manip perl-Moose ca-certificates
git clone https://github.com/alestic/ec2-consistent-snapshot.git ec2-consistent-snapshot

Step 2: Gather your data
Gather your volume IDs from the AWS console and inventory any database instances, usernames, passwords, etc. Create your .awscredentials file and .my.cnf with your credentials for AWS and MySQL.

Step 3: Create the scripts

ec2-consistent-snapshot \
  --freeze-filesystem /home \
  --region ap-southeast-1 \
  --description "Snapshot for home partition $(date +'%Y-%m-%d %H:%M:%S')" \
  vol-XXXXXXXX

ec2-consistent-snapshot \
  --freeze-filesystem /data \
  --region ap-southeast-1 \
  --mysql \
  --description "Snapshot for data partition $(date +'%Y-%m-%d %H:%M:%S')" \
  vol-XXXXXXXX

Step 4: Put it in a script and schedule it in cron
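A sketch of what that could look like (the script path, schedule, and log path here are hypothetical):

```shell
# Save the two ec2-consistent-snapshot commands above into a script,
# e.g. /usr/local/bin/ebs-snapshots.sh, then schedule it with crontab -e.
# Run nightly at 02:30 under the user that holds the AWS credentials:
30 2 * * * /usr/local/bin/ebs-snapshots.sh >> /var/log/ebs-snapshots.log 2>&1
```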

Easy!

Progress Review

My last day of employment at my former job was March 18th – about 5 months ago. I haven’t really set aside any time since then to take stock of my goals, so I’d like to review what I’ve accomplished in those 5 months. There’s still quite a lot that I’d like to do, but it’s nice to see how far I’ve come.

Accomplishments

BlastHosting

For my web hosting and server administration business, I’ve used the extra time to cut many expenses, land additional sales for existing customers, and automate much of the business. So far through July, I’ve increased profits by 43% over last year and have taken the time to eliminate or reduce the root causes of many pain points in running the business.

Having a majority of my clients in third-world countries with a 12-hour time difference obviously presents a unique set of problems: email clients constantly infected with viruses or malware, less-than-stellar coding security practices, and the fact that most of those problems surface in the middle of the night for me. By adding many anti-spam measures, working with clients to harden their passwords, blocking unusual network traffic, and increasing the monitoring and notification for worst-case outbreak scenarios, everything is now running quite smoothly with very little ongoing maintenance required.

AirBNB

I’ve redecorated the spare bedroom of my condo and listed it on airbnb.com, a service that allows individuals to provide lodging to travelers for a fee. Being located downtown, business has been better than expected, and I’ve had a steady stream of guests. Over the next few months, I’ll be at full occupancy due to several long-term guests. Though I did take advantage of the free professional photography service provided, I didn’t feel the photos added much over my existing ones. You can’t beat free, though! If you’re interested in staying, the listing is at https://www.airbnb.com/rooms/3168336.

This Blog!

I didn’t start with a clear mission of this blog other than a place to write an entry whenever I felt like it, but it has been a great outlet for some technical notes I’ve accumulated. It has helped me to document some of the challenges I have encountered and the solutions I’ve devised for them. I’m glad I chose the platform (Octopress on AWS) as it has kept things extremely simple and quick.

SLCFlightDeals.com

Continuing my hobby of sharing cheap flight deals with friends and family, and later on a Facebook page (http://www.facebook.com/SLCflightdeals), I created the website to distribute flight deals from Salt Lake City to a much larger audience. I’ve built up a small email distribution list, have gained over 150 Facebook followers, and get some decent traffic to the site. However, as I knew when I started it, the revenue options are very limited. I experimented with credit card affiliate programs for a while, but generated only one lead, so I was dropped from the program. Down the line, I think a subscription service with more timely alerts could generate a small amount of revenue, but generally the site has always been a way to promote some of my other travel-related projects.

BookItWithMiles.com

This was planned to be one of my first travel-related websites to get up and running, but it took significantly longer to publish than expected. Once I got the site up and listed on FlyerTalk, I did receive a few leads, but nothing serious. I did finally get a customer a few weeks ago, and I was able to get their family to Budapest next summer in business class using Delta miles. They also decided on a week-long stopover in Italy on the way back, which they didn’t know was possible. They came away extremely pleased and left me a very good review on FlyerTalk, but now I need to make sure I have a good way to collect reviews on my website. I was quite happy to have saved them over $18,000, and except for having to argue with the Delta agent for an hour, I enjoyed the planning as well. I will eventually have to raise prices to account for how time-consuming it is, but I do enjoy it quite a bit. I’m hoping to help more and more people, but ultimately the scalability of this venture is limited.

Travel

I have taken some unforgettable trips to Chile, Colombia, New Orleans, Hong Kong, Indonesia, and many areas of Utah and the western US. Though somewhat limited in duration because of AirBNB guest obligations, I’ve spent at least 2 of the past 5 months traveling! I do constantly feel a bit of guilt that I should be traveling when I’m working and working when I’m traveling, but that’s a balance I’ll have to work more on. I’ve also got plans booked for a trip to Mexico in a few weeks and a trip to South Africa in November. Maybe some other opportunities will come up as well.

Exporting Zimbra Domains From LDAP to Cbpolicyd

The Problem

In the ongoing fight against compromised user accounts, I set up cbpolicyd to rate limit outgoing emails from my servers. However, since I never set up a list of local domains, I occasionally get false positives triggered by users either moving many messages to spam (and automatically being forwarded to the spam autolearn address) or sending to local users. Since the list of domains can change often, I wanted a way to populate this list on a schedule.

Devising a Solution

First, I started with the question of how to extract the list of domains from LDAP. To get the LDAP password, log in as the zimbra user and run zmlocalconfig -s | grep ldap. Since ldapsearch outputs in LDIF format, some text sanitization had to be done to leave only the list of domains. The final command looks like this:

ldapsearch -H ldap://<domain name>:389 -LLL -w <ldap password> -D "uid=zimbra,cn=admins,cn=zimbra" "(objectClass=zimbraDomain)" zimbraDomainName | grep -e "zimbraDomainName: " | sed -e 's/zimbraDomainName: //g' | sort | uniq
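The grep/sed sanitization at the end of that pipeline can be sanity-checked against a sample LDIF line (the domain here is hypothetical):

```shell
# Simulate one line of ldapsearch LDIF output and run it through
# the same sanitization pipeline as above.
echo "zimbraDomainName: example.com" \
  | grep -e "zimbraDomainName: " \
  | sed -e 's/zimbraDomainName: //g' \
  | sort | uniq
# Leaves just the bare domain name: example.com
```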

Next, I had to work on getting this list populated into the sqlite database used by cbpolicyd. First I located where this database was stored by looking in /opt/zimbra/conf/cbpolicyd.conf at the [database] parameter. Once I had it located, I needed to check which group ID I was using, so I logged in and checked like so:

$ sqlite3 cbpolicyd.sqlitedb
SQLite version 3.6.20
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> select * from policy_groups;
1|internal_ips|0|
2|internal_domains|0|

It looks like my policy group ID is 2, so I’ll use that in my scripts.

I created a bash script that first gets the list of domains and redirects the output to a temporary file. Then it deletes any existing policy group members from the internal_domains group, and finally re-populates it with the domains from the ldap list (the domains need to be prefixed by @). In order to avoid any locking issues, I’m shutting down cbpolicyd while I delete and repopulate the domain list. Then I delete the temp file.

The Final Script

#!/bin/bash
PATH=/opt/zimbra/bin:/opt/zimbra/postfix/sbin:/opt/zimbra/openldap/bin:/opt/zimbra/snmp/bin:/opt/zimbra/rsync/bin:/opt/zimbra/bdb/bin:/opt/zimbra/openssl/bin:/opt/zimbra/java/bin:/usr/sbin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin

#Generate domain list
ldapsearch -H ldap://<domain name>:389 -LLL -w <password> -D "uid=zimbra,cn=admins,cn=zimbra" "(objectClass=zimbraDomain)" zimbraDomainName | grep -e "zimbraDomainName: " | sed -e 's/zimbraDomainName: //g' | sort | uniq > /opt/zimbra/data/cbpolicyd/curdomains

#Shut down cbpolicyd to avoid locking issues
zmcbpolicydctl stop

#Purge existing internal_domains entries
sqlite3 /opt/zimbra/data/cbpolicyd/db/cbpolicyd.sqlitedb "DELETE FROM policy_group_members WHERE PolicyGroupID='<Policy Group ID>'";

#Repopulate the db with the domain list
for a in $(cat /opt/zimbra/data/cbpolicyd/curdomains); do
  B="@$a"
  sqlite3 /opt/zimbra/data/cbpolicyd/db/cbpolicyd.sqlitedb "INSERT INTO policy_group_members VALUES (NULL,'<Policy Group ID>','$B', '0', NULL)"
done

#We're done with the db, start cbpolicyd
zmcbpolicydctl start

#Delete our temp file
rm /opt/zimbra/data/cbpolicyd/curdomains

Implementation

I have this running daily in cron, which should be sufficient for how frequently domains are added and deleted on my servers.

Book It With Miles Launched!

I launched my airline miles award booking service, Book it with Miles, today! You can see it at http://bookitwithmiles.com.

Now for some of the technical details. It was built using Jekyll, a static site generator, with the theme bundled with Jekyll. It is hosted on Amazon AWS, specifically on S3 behind the CloudFront CDN for speed around the world (since it has an international audience). I chose zopim for live chat and created a basic form-to-email script to email me the contents of the contact form. After being contacted, I will work with the client in either a Campfire or HipChat chat room, then collect payment with PayPal.

First OS X App

One of my to-dos for today was to create a “Hello World” OS X app, since I plan to learn Swift development through a tool I want to create to search for award seats using airline miles. After watching a few YouTube videos, particularly this one: https://www.youtube.com/watch?v=REjj1ruFQww, I was able to create my own Hello World app fairly easily. It didn’t take nearly as long as I expected. I also wanted to make sure I know how to build and distribute an app, so I tried that and got it uploaded at https://s3.amazonaws.com/jbuploads/HelloWorld.zip. Simple enough!

I was also able to get a basic design for http://www.bookitwithmiles.com up and running, though it still needs a lot of work. This will be the site for my award ticket booking service.

WHMCS Domain Due Dates

Problem

I use WHMCS as my billing system for the small web hosting company I own and operate. One annoying thing about WHMCS is that the due date for domains is set by default to the expiration date. I also like to process domain renewals manually, since it is a very low-margin product for me, and even a few mistakes like renewing a domain that was not paid for can erase my profit. Thus, I want to give myself a little buffer time between the deadline for payment and the expiration of the domain name.

Solution

Since WHMCS is a database-driven application, I can simply update the database with the dates I’d like to use for the invoice date and due date. Since the application already syncs expiration dates with the registrar, I can base my dates on the expiry date. Looking at the database, there’s a table called tbldomains with columns labeled expirydate, nextduedate, and nextinvoicedate. These are the columns we will be working with. In my case, I want the invoice date to be 1 month before the expiry date and the due date to be 2 weeks before the expiry date. Thus, I simply execute the following SQL:

update tbldomains set nextinvoicedate=DATE_SUB(expirydate, INTERVAL 1 MONTH), nextduedate=DATE_SUB(expirydate, INTERVAL 2 WEEK);
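The same offsets can be sanity-checked from the shell (assuming GNU date; the expiry date here is hypothetical):

```shell
expiry="2015-03-15"
# Invoice date: 1 month before expiry
date -d "$expiry -1 month" +%Y-%m-%d   # 2015-02-15
# Due date: 2 weeks before expiry
date -d "$expiry -2 weeks" +%Y-%m-%d   # 2015-03-01
```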

In order to run this every day, I want to set up a cron job to run my query on the appropriate database. The mysql -e flag will do what I want, but I don’t want my password in plaintext inside the crontab, so I create a .my.cnf inside my home directory with the following contents:

[client]
password=your_password

To ensure only you can read and edit it, chmod 600 the file as well. My final cron command looks like so:

/usr/bin/mysql -udb_user db_name -e "update tbldomains set nextinvoicedate=DATE_SUB(expirydate, INTERVAL 1 MONTH), nextduedate=DATE_SUB(expirydate, INTERVAL 2 WEEK);" > ~/duedate.log 2>&1

Securing Zimbra SMTP and Installing Policyd for Throttling

Why?

Running an email server is a constant battle against spammers and hackers. I’ve enabled many settings and installed several tools to help prevent these attacks on Zimbra servers I administer. I’m documenting them here so I don’t forget!

Some Local Configs

# switch to zimbra user
su - zimbra
# immediately fail messages to over-quota inboxes
zmprov mcf zimbraLmtpPermanentFailureWhenOverQuota TRUE
# retrying bounced messages for 5 days is excessive
zmlocalconfig -e postfix_bounce_queue_lifetime=1d
# I want to see the SASL username in the headers
zmlocalconfig -e postfix_smtpd_sasl_authenticated_header=yes

Also, in the Zimbra admin, under “Global Settings”, I have “reject_non_fqdn_sender” and “reject_unknown_sender_domain” enabled. I also use the b.barracudacentral.org RBL.

Enabling policyd for Throttling

Let’s enable policyd through Zimbra’s handy provisioning:

zmprov ms <servername> +zimbraServiceEnabled cbpolicyd

Wait a few minutes for provisioning to finish. After that, we want to enable the Web UI. This must be done as root:

cd /opt/zimbra/httpd/htdocs/ && ln -s ../../cbpolicyd/share/webui

After that, edit the ./webui/includes/config.php. Comment out anything that’s not commented out and then make this the only active line:

$DB_DSN="sqlite:////opt/zimbra/data/cbpolicyd/db/cbpolicyd.sqlitedb";

If you don’t use spellcheck, httpd may be down. Start it with zmapachectl start as the zimbra user. Then you should be able to navigate to http://hostname:7780/webui/index.php. You’ll want to secure this or leave apache down after you’re finished configuring it.

Policyd Configuration for Throttling

I’m still adjusting my configuration, but to get a basic throttling setup going, I did the following:

  1. In Policies>Main, I disabled all policies except “Default System Policy” and “Default Outbound System Policy”
  2. For the “Default Outbound System Policy”, I modified the Members to make a Source of %internal_ips and Destination of any.
  3. In Policies>Groups, edit the Members of the internal_ips group and enter your subnet as the only member.
  4. In Quotas>Configure, I set up two quotas, one for Sender:user@domain and one for SASLUsername. Both of mine are configured to dump excess email into the Hold queue. Once those are set up, set the limit for each one. Everything you create will default to disabled and must be edited to change that to enabled.
  5. Monitor your hold queue very closely. I provided a script at /blog/2014/07/02/zimbra-abuse-alerts/ that I’m using.

Zimbra Abuse Alerts

Why?

I administer several Zimbra servers for one of my clients located in Asia. Since the client computers typically don’t adhere to the best security practices – like system updates and antivirus software – a recurring problem is that user email accounts are routinely compromised and used to send spam. I have taken many precautions like preventing brute force login attempts, enabling very strict SMTP restrictions, forcing SSL/SASL on all connections, and most recently enabling policyd to prevent any email floods that might indicate a compromised client. Policyd is configured to divert email volume over 100 messages per hour to the held queue in Postfix, so I want to monitor the held queue and lock any accounts whose email ends up in this queue. I also want notification when this happens, or if something falls through the cracks and the deferred queue grows to over 100 messages, which is usually indicative of a compromised account. One additional requirement is that the alerts need to be sent through a separate SMTP server so they still get delivered in the event of a backlog on the local SMTP server.

Gathering Tools

Since the stock mail command can’t send mail to a remote SMTP server, I needed to find a simple tool to make this easy. I could have written something in Perl or PHP, but I knew there was something simple out there. A Google search turned up a few tools like ssmtp and mailx, but they weren’t in my yum repositories and I didn’t want to maintain separate packages, so I searched the repositories directly instead. A simple yum search smtp turned up a tool called simply email that fit my needs. A quick yum install email got me up and running, and the man page has all the information I need.

Configuring and Testing email

The default configuration file for email is at /etc/email/email.conf. I wanted to connect directly to the Gmail SMTP server to ensure delivery, so I set SMTP_SERVER to ‘smtp.gmail.com’ and SMTP_PORT to 587, filled out the MY_NAME and MY_EMAIL variables, set USE_TLS to ‘true’, commented out the signature and address book files, set SMTP_AUTH to ‘LOGIN’, set SMTP_AUTH_USER and SMTP_AUTH_PASS to my Gmail credentials, and saved the file. Be sure to chmod 600 /etc/email/email.conf as well to protect your credentials. To test that everything worked, I ran echo "Test123" | email -s "Test email" myaddress@gmail.com and verified that it was sent successfully and appeared in my inbox.
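Pieced together from the settings above, the relevant portion of /etc/email/email.conf would look roughly like this (the names and values here reflect my notes; check the email.conf man page on your version):

```shell
SMTP_SERVER = 'smtp.gmail.com'
SMTP_PORT = 587
MY_NAME = 'Admin'
MY_EMAIL = 'myaddress@gmail.com'
USE_TLS = 'true'
SMTP_AUTH = 'LOGIN'
SMTP_AUTH_USER = 'myaddress@gmail.com'
SMTP_AUTH_PASS = 'your_gmail_password'
```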

Set Postfix to Print SASL User in Headers

In order to more easily keep track of which SASL username is compromised, we need to configure Postfix to print this information in the message headers. This can be done using the smtpd_sasl_authenticated_header config value. Since Zimbra wraps the Postfix configs, we have to set it like so:

zmlocalconfig -e postfix_smtpd_sasl_authenticated_header=yes
#Restart Postfix to apply changes
zmmtactl restart
#Validate config variable is set
postconf | grep smtpd_sasl_authenticated_header
> smtpd_sasl_authenticated_header = yes

Simple Alert Script

This could be improved, especially tying in to a nagios or other monitoring/alerting system, but this does fine for my needs at the moment. I set it up to run every hour in my crontab.

#!/bin/bash

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

MAXDEFERRED=100
CURDEFERRED=`find /opt/zimbra/data/postfix/spool/deferred -type f | wc -l`

CURHELD=`find /opt/zimbra/data/postfix/spool/hold -type f | wc -l`

if [[ $CURDEFERRED -gt $MAXDEFERRED ]]; then
  email -b -s "Server has $CURDEFERRED deferred messages in the queue" admin@domain.com
fi

if [ $CURHELD -gt 0 ]; then
  find /opt/zimbra/data/postfix/spool/hold -type f | xargs postcat > /tmp/heldmsg.txt
  SASL_SENDER=`grep "Authenticated sender:" /tmp/heldmsg.txt`
  #Send the full held queue to the admin
  email -s "Server has $CURHELD held messages in the queue" admin@domain.com < /tmp/heldmsg.txt

  #Lock the account if we have the username
  if [ -n "${SASL_SENDER}" ]; then
    #Extract the logins (sort -u to dedupe non-adjacent repeats)
    grep "Authenticated sender:" /tmp/heldmsg.txt | sed 's/[^@]* \([a-zA-Z0-9.]*@[^ ]*\).*)/\1/' | sort -u > /tmp/disableaccts
    #Lock the accounts in Zimbra
    cat /tmp/disableaccts | xargs -i su - zimbra -c "/opt/zimbra/bin/zmprov ma {} zimbraAccountStatus locked"
    #Send a notification email
    cat /tmp/disableaccts | xargs -i email -b -s "Server has locked {} for sending messages too fast" admin@domain.com
    #Restart Postfix to force reauthentication
    su - zimbra -c "zmmtactl restart"
  fi

  #Cleanup
  rm -f /tmp/heldmsg.txt
  rm -f /tmp/disableaccts
fi
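As a sanity check, the sed expression that extracts the SASL login can be exercised on a sample header line (the address here is hypothetical):

```shell
# A held message's Received header contains a line like this;
# the sed pulls out just the authenticated address.
echo "(Authenticated sender: bob@example.com)" \
  | sed 's/[^@]* \([a-zA-Z0-9.]*@[^ ]*\).*)/\1/'
# Prints: bob@example.com
```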

Creating Shortcuts for Octopress

Why?

Now that I have Octopress installed along with the s3_website plugin so that I can easily publish to Amazon S3 and CloudFront, I wanted to make common tasks a little easier: creating a new blog post or page, generating the static pages, and publishing the site to the web.

Prompting for Page and Post Names

To ensure all appropriate characters are converted for the URL names, I wanted the new post and new page tools to prompt for the title rather than take it as an argument. Passing the title as an argument can cause problems with special characters and spaces, and it makes aliasing the commands much more complicated. Luckily the new_post function already has this ability, but the new_page tool does not, so it needs to be added to the Rakefile. Here’s the diff to add that to the Octopress Rakefile:

diff --git a/Rakefile b/Rakefile
index 7c15332..654cc54 100644
--- a/Rakefile
+++ b/Rakefile
@@ -122,12 +122,16 @@ end
 # usage rake new_page[my-new-page] or rake new_page[my-new-page.html] or rake new_
 desc "Create a new page in #{source_dir}/(filename)/index.#{new_page_ext}"
 task :new_page, :filename do |t, args|
+  if args.filename
+    pagename = args.filename
+  else
+    pagename = get_stdin("Enter a name for your page: ")
+  end
   raise "### You haven't set anything up yet. First run `rake install` to set up a
-  args.with_defaults(:filename => 'new-page')
   page_dir = [source_dir]
-  if args.filename.downcase =~ /(^.+\/)?(.+)/
+  title = pagename
+  if pagename.downcase =~ /(^.+\/)?(.+)/
     filename, dot, extension = $2.rpartition('.').reject(&:empty?)         # Get f
-    title = filename
     page_dir.concat($1.downcase.sub(/^\//, '').split('/')) unless $1.nil?  # Add p
     if extension.nil?
       page_dir << filename

Shell Shortcuts

In addition, I wanted to create some terminal shortcuts for creating a new post or page, generating the static pages, and publishing the changes to the web. To do that, since I use bash as my shell, I simply added the following to my ~/.profile:

alias newpost="rake new_post"
alias newpage="rake new_page"
alias generate="rake generate"
alias publish="s3_website push --site=public"

Now every step can be accomplished with a single word, so I no longer have to remember the syntax and commands, even though they aren’t terribly complex anyway. Remember, you must source your ~/.profile to activate it after making any changes. This is simply done by executing . ~/.profile.

Welcome to My New Blog

Why?

For a long time, I’ve wanted a place to put random snippets of my work or things I’ve learned for the day. Something simple. I could have put up a WordPress blog fairly easily, but I wanted something more minimalist. Something fast and secure. Something I wouldn’t have to endlessly maintain and fix. I finally decided to go static.

Behind the Scenes

I searched for a while for a simple static blog and eventually settled on Octopress, which is written in Ruby and has a mature codebase with enough contributions to make it easy to get up and running quickly. To fill the requirement of being maintenance-free and fast, I decided to publish the blog to a bucket on Amazon’s S3 service and use that bucket as a source for Amazon’s CloudFront CDN. This makes the blog extremely quick anywhere in the world: it is mirrored across many edge locations, and each user is directed to the fastest one for them. The DNS is hosted on Amazon’s Route 53 service, which ensures quick name resolution globally as well. I also settled on the Whitespace theme which, while quite minimalist already, will require some further optimization for my needs.

What I’ll Be Writing About

This site will be used as a personal and professional repository of things I’ve learned and want to share. Of course, my main interests of travel, technology, and entrepreneurship will be most prevalent. More travel-related topics will be posted at Miles of Adventure, another blog I’m working on. On this site I will discuss more of the how and why of my various projects, especially my journey of self-employment and my attempts to start several businesses. Please feel free to follow my writing and send along any comments you feel would be helpful. I’m always looking for feedback.