A common task when configuring app servers with Chef is creating a ‘deploy’ user. This user is involved in deploying code and often needs read-only access to the source repository. In my case this was Bitbucket, but the procedure should carry across to GitHub and most other providers with a few tweaks.
In the case of Bitbucket (and GitHub), a deploy user is given read-only access to a repository through their ssh key. Because we’re creating our deploy user through Chef anyway, along with their ssh key, it makes a lot of sense to send that key off to Bitbucket at the same time, and that’s what this little recipe does:
# create the deploy user
user "deploy" do
  shell "/bin/bash"
  home "/home/deploy"
  supports :manage_home => true
end

chef_gem 'httparty'

# create their ssh key
execute 'generate ssh key for deploy' do
  user 'deploy'
  creates '/home/deploy/.ssh/id_rsa'
  command 'ssh-keygen -t rsa -q -f /home/deploy/.ssh/id_rsa -P ""'
  notifies :create, "ruby_block[add_ssh_key_to_bitbucket]"
  notifies :run, "execute[add_bitbucket_to_known_hosts]"
end

# add bitbucket.org to known hosts, so future deploys won't be interrupted
execute "add_bitbucket_to_known_hosts" do
  action :nothing # only run when the ssh key is created
  user 'deploy'
  command 'ssh-keyscan -H bitbucket.org >> /home/deploy/.ssh/known_hosts'
end

# send id_rsa.pub over to Bitbucket as a new deploy key
ruby_block "add_ssh_key_to_bitbucket" do
  action :nothing # only run when the ssh key is created
  block do
    require 'httparty'

    url = "https://api.bitbucket.org/1.0/repositories/#{node['bitbucket_user']}/repo-name/deploy-keys"
    response = HTTParty.post(url, {
      :basic_auth => {
        :username => node['bitbucket_user'],
        :password => node['bitbucket_pass']
      },
      :body => {
        :label => 'deploy@' + node['fqdn'],
        :key => File.read('/home/deploy/.ssh/id_rsa.pub')
      }
    })

    unless response.code == 200 || response.code == 201
      Chef::Log.warn("Could not add deploy key to Bitbucket, response: #{response.body}")
      Chef::Log.warn("Add the key manually:")
      Chef::Log.info(File.read('/home/deploy/.ssh/id_rsa.pub'))
    end
  end
end
The bitbucket_user and bitbucket_pass attributes should be set somewhere, and in the URL you’ll want to change repo-name to the actual repo you’re deploying to. Bitbucket only lets you add deploy keys per repository, so if this user will be deploying from multiple repositories, update the Ruby block so it loops through all your repositories and sends a deploy key off for each one.
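Here’s a minimal sketch of that loop, assuming a node['deploy_repos'] attribute (hypothetical; name yours however you like) that lists the repository slugs:

# inside the ruby_block: post the deploy key to every repo this user deploys from.
# node['deploy_repos'] is an assumed attribute, e.g. ['repo-one', 'repo-two']
node['deploy_repos'].each do |repo|
  url = "https://api.bitbucket.org/1.0/repositories/#{node['bitbucket_user']}/#{repo}/deploy-keys"
  response = HTTParty.post(url, {
    :basic_auth => {
      :username => node['bitbucket_user'],
      :password => node['bitbucket_pass']
    },
    :body => {
      :label => 'deploy@' + node['fqdn'],
      :key => File.read('/home/deploy/.ssh/id_rsa.pub')
    }
  })
  unless response.code == 200 || response.code == 201
    Chef::Log.warn("Could not add deploy key to #{repo}: #{response.body}")
  end
end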
You’ll most likely want to run this only on production or staging environments, otherwise you could end up adding dozens of junk deploy keys to Bitbucket while you’re spinning up all those Vagrant VMs!
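One way to guard against that, assuming you’re using Chef environments (the environment names below are examples), is an only_if guard on the Bitbucket resource:

# a sketch: only send the key to Bitbucket from real environments
ruby_block "add_ssh_key_to_bitbucket" do
  action :nothing
  only_if { %w(production staging).include?(node.chef_environment) }
  block do
    # same HTTParty call as in the recipe above
  end
end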
I ran into a little hiccup when trying to configure Munin to monitor PostgreSQL. After linking the ‘postgres_’ plugins and restarting munin-node, no Postgres stats were appearing and I was seeing error messages in the munin-node.log like this:
Service 'postgres_size_ALL' exited with status 1/0
Service 'postgres_locks_ALL' exited with status 1/0
Service 'postgres_cache_ALL' exited with status 1/0
Not very helpful but, it turns out, easy to fix. The Munin Postgres plugins use Perl and the DBD::Pg module to talk to your PostgreSQL database, so if either of these is missing you’ll get errors like the above. The solution is to install the DBD::Pg module from CPAN. If you’re using Chef, add the perl cookbook and then run cpan_module 'DBD::Pg' in a recipe somewhere.
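For reference, the Chef side is only a couple of lines. This assumes the community perl cookbook, which provides the cpan_module definition:

# install DBD::Pg so the Munin postgres_* plugins can talk to PostgreSQL
include_recipe 'perl'
cpan_module 'DBD::Pg'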
Hetzner is a German server provider and has some great prices for leasing a dedicated server. For example, you can grab the EX 4S with a Core i7 and 32GB of memory for 60€ a month.
With this much CPU and memory available, it makes sense to turn one of these into your own personal VPS provider. This is easy to do and shouldn’t take you too long. I’ll show you how to replicate my setup: an Ubuntu 12.04 (Precise Pangolin) host and guests, where each guest has a static IP and is externally accessible.
This guide doesn’t need any prior Xen knowledge but familiarity with Linux, ssh and the terminal is assumed.
It’s pretty common these days to let your users either sign up with or connect to Twitter from within your application. The typical way to do this is to redirect the user to Twitter, have them authorise and then bring them back.
Although this works fine, I wanted it to take place in a popup window, which avoids having the user leave your page and also means the whole thing can be handled in Javascript (invite the user to connect, wait for them to finish, and then act accordingly without a page refresh).
Facebook has a handy Javascript SDK for this situation and it works great. With Twitter, we need to do this manually, but even so it’s not too difficult. I’ll explain how to do this using the Ruby OmniAuth gem, but it’ll be easy to adapt for other libraries.
Spree is a nifty e-commerce platform on Rails. It’s open-source and fairly easy to customise. In particular, the ordering process uses state_machine, which lets you hook into any part you need. I’m using Spree v1.2.0, which is the latest version right now.
Adding a post-purchase hook is easy. First you’ll need to have the following in a file in your lib/ folder. I just used lib/spree_site.rb:
Dir.glob(File.join(File.dirname(__FILE__), "../app/**/*_decorator*.rb")) do |c|
  Rails.configuration.cache_classes ? require(c) : load(c)
end
This tells Spree to look for files with _decorator in their name in your app/ directory and load them in.
Next up we want to override the Order model. Create the file app/models/spree/order_decorator.rb and stick this in:
Spree::Order.class_eval do
  def say_hello
    puts 'Hello!'
    puts "This order cost #{total}"
    # do something interesting, like notify an external webservice about this order
  end
end

Spree::Order.state_machine.after_transition :to => :complete, :do => :say_hello
The first part uses Ruby’s class_eval to add a new method to the Order model. The second part tells the state machine to run the new ‘say_hello’ method after the :complete (end of checkout) transition occurs.
And that’s all there is to it!
While using this bundle of techs (easyXDM with CoffeeScript) I hit a curious problem. I had an RPC declaration on my producer which looked something like this:
local:
  sendPost: (post, fn, errorFn) ->
    $.ajax
      type: 'post'
      url: '/posts'
      dataType: 'json'
      data:
        post: post
      success: (data) ->
        fn data
And in my consumer I had this:
rpc.sendPost post, (data) ->
  if data.accepted
    alert 'Post was accepted'
  else
    alert 'Post not accepted'
You’d expect the callback in that call to fire once the ajax request finished back on the producer. However, the callback fires immediately and you actually get an error: Uncaught TypeError: Cannot read property 'accepted' of undefined
The reason is so simple I almost forgot it could happen: CoffeeScript always adds an implicit return to your functions. But easyXDM explicitly looks for a return value in an RPC function and, if it gets one, it’ll run your callback immediately. So in this case, CoffeeScript causes sendPost to return the result of the $.ajax call, a jqXHR object that looks something like { readyState: 1 }. Not so good.
Luckily it’s easy to stop CoffeeScript from doing this: simply add either ‘return’ or ‘undefined’ as the last statement in the function, and easyXDM will wait for the callback instead. Be sure to comment this, because otherwise it’ll probably look like a mistake ;)
While playing around with PostgreSQL’s hstore in Rails, I kept running into this error despite having run CREATE EXTENSION hstore;
Closer inspection of CREATE EXTENSION shows that it installs an extension into the current database. I ran it as my superuser (postgres) in the main postgres database, which meant Rails and its application database couldn’t see it.
Rather than manually installing hstore in each application database, you can install it in the template1 database. Postgres copies this database when creating a new one, so every new database will have hstore installed by default.
psql template1 -c 'create extension hstore;'
Any application database created from now on will have hstore installed by default. To install it in your existing databases, use psql as a superuser:
psql application_db -c 'create extension hstore;'
These methods avoid giving your application user superuser permissions, which would be required if you wanted to install hstore as part of your migrations.
A long-term MySQL user, I’ve recently taken to using PostgreSQL on a few projects. Coming from a MySQL background, Postgres can seem a little confusing. I decided to write down exactly how the basic stuff works, alongside the way you might do it in MySQL for comparison.
You don’t actually need to grok MySQL for this guide to be of use, but it’ll probably help. I’m also using PostgreSQL 9.1, so older versions may not match up with my instructions.
Most solutions for providing Retina, or Hi-DPI, images to a client involve media queries or a bit of Javascript to replace standard images with Retina ones when appropriate. Both of these approaches result in the standard images being downloaded by every client, and with media queries the Retina images often end up downloaded by every client too!
If you’re willing to require Javascript (which may already be the case, especially for mobile apps), you can avoid the multi-download problem and serve exactly the images that are required, saving both you and your users bandwidth.
Although nanoc’s great, when you have a bunch of gems all doing their thing and you’re just trying to fix a CSS bug or tweak some markup, the compile times can be unmanageable.
This may seem like a no-brainer, but a simple trick for this is to use conditionals in your Rules file tied to an environment variable. Something like this works a treat:
if ENV['NANOC_ENV'] == 'production'
  filter :colorize_syntax, :default_colorizer => :pygmentize
end
For normal development, just continue to run nanoc compile and those slow filters will be ignored (seriously, pygmentize is great but it’s unbelievably slow for me). When you’re ready to see what it looks like for real, run NANOC_ENV=production nanoc compile and wait it out.
A few months ago I blogged about building a static blog using nanoc, and as I recently finished off my little site with a portfolio section, I figured I’d throw up a guide on how to do that (it’s really easy!).
The concept is almost identical to how blog posts work in nanoc: each portfolio entry is its own file, with some kind of identifier for the nanoc parser to pick up. This is combined with a custom helper to pull out these entries and from there they can be displayed in whatever way makes sense.
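As a rough sketch (the :kind and :position attributes are conventions I’m assuming here, not anything nanoc mandates), the helper could live in lib/ and look like this:

# lib/helpers/portfolio.rb
module PortfolioHelper
  # every item flagged as a portfolio entry, in display order
  def portfolio_entries
    @items.select { |item| item[:kind] == 'portfolio' }
          .sort_by { |item| item[:position] || 0 }
  end
end

include PortfolioHelper

A layout or page can then loop over portfolio_entries and render each entry however it likes.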
I won’t be as hands-on with this guide as I was with the previous, so if you feel lost give that one a read first.
I recently had the unfortunate task of setting up a basic SOAP server for the purposes of some cross-University communication. Java tends to be very good at this (or as good as you can be, dealing with SOAP) but it’s still quite long-winded and, to save time, I also wanted something I could easily deploy to Heroku.
After spending a little while looking at some options, I settled on Python and rpclib. This let me create a SOAP server without any pain, and of course it was simple to deploy to Heroku as well. The biggest time-saver is that rpclib doesn’t force you to be contract-first: instead of writing your own WSDL, you can simply write a service class in Python and have rpclib autogenerate the WSDL.
I can’t imagine there are many people who’ll need to know how to do this (a SOAP server sitting on Heroku? How common is that?) but I’m throwing it up here for future reference. Although if I ever have to work with SOAP again I may shoot myself.
Having a static site may feel a bit like a throwback, but the benefits are well noted and there are various frameworks around to turn your text and templates into HTML. For my site, I opted for nanoc, which is Ruby based and extremely flexible.
nanoc is simple to set up and use, but because it’s so generic it doesn’t (by default) do the things you might expect from a blog, like tags, archives, timestamps and the like. For something a bit more ‘out of the box’, I’d suggest looking at Jekyll, or Octopress (which is even more feature-packed).
I wanted to use nanoc as it doesn’t restrict your choice of template/rendering engine, and because it’s lightweight and gets out of the way, making it easy to hammer into shape. In this post I’ll explain how to flex nanoc into a simple blogging platform.
Like most people, I’m a routine user of internet banking. Although my bank, First Direct, do have a banking web application, I want to get at my financial data on my own terms so I can use it for more interesting projects. Since First Direct don’t offer any sort of API, I decided to use NodeJS and Zombie (a headless web browser) to do the job instead.
So, if you’re a First Direct customer and a programmer, and want to get your data out too, this might help. If you’re a member of a different bank, you might still find this helpful as the advice should be fairly generic (although, if your banking website is very Javascript-heavy, it’ll be harder).