A Map Tile Server on CentOS
Do you need to build a map tile server that uses OpenStreetMap’s mod_tile on CentOS 6.4? You’re in luck!
Halvard and I put together a set of Ansible playbooks that builds one for you.
Enjoy.
Like a gentleman I use chruby and Bundler to manage Ruby versions and gems in my projects.
Instead of typing bundle exec to run gem executables within a project, I prefer saving keystrokes and using an executable’s name on its own¹. I also want to avoid installing another tool like gem_home.
So off to binstubs land I go. Bundler generates them for you and these days Rails even ships with a few as standard. These stub files live in your project and ensure the right set of gems for your project are loaded when they’re executed.
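To make the idea concrete, here’s a sketch of what bundle binstubs rspec-core drops into a project’s bin directory. The stub contents below are an illustrative approximation of Bundler’s output, not a verbatim copy, and rspec-core is just an example gem:

```shell
# Approximate the stub that `bundle binstubs rspec-core` generates at
# ./bin/rspec (the real file is written by Bundler and varies by version).
mkdir -p bin
cat > bin/rspec <<'RUBY'
#!/usr/bin/env ruby
# Point Bundler at this project's Gemfile ...
ENV["BUNDLE_GEMFILE"] ||= File.expand_path("../Gemfile", __dir__)
# ... load the locked gem set, then hand off to the gem's executable.
require "bundler/setup"
load Gem.bin_path("rspec-core", "rspec")
RUBY
chmod +x bin/rspec
```

With ./bin on your PATH, typing rspec then runs the project’s pinned version with no bundle exec in sight.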
Security risks aside, I could just prepend my PATH with ./bin: and walk away, except that chruby’s auto-switching spoils the party. When I enter a project directory with a .ruby-version file, chruby prepends the current Ruby version’s paths to PATH, so they match before my previously prepended ./bin:.
Chruby recommends using rubygems-bundler, but I don’t want to install another gem to get this working. So I tweaked my zsh setup to patch my PATH with a preexec function, just as chruby does. I register my function in preexec_functions after chruby loads, so that my code patches the PATH after chruby has done its work.
As for security, I use the same scheme as Tim Pope: add a git alias for marking a git repository as trusted, and then only add a project’s bin directory to PATH if it is marked as such.
Now I just mark a repo as trusted via git trust, and its local binstubs are automatically added to my path.
Changes in my .zshenv:
# Remove the need for bundle exec ... or ./bin/...
# by adding ./bin to path if the current project is trusted
function set_local_bin_path() {
  # Replace any existing local bin paths with our new one
  export PATH="${1:-""}`echo "$PATH"|sed -e 's,[^:]*\.git/[^:]*bin:,,g'`"
}

function add_trusted_local_bin_to_path() {
  if [[ -d "$PWD/.git/safe" ]]; then
    # We're in a trusted project directory so update our local bin path
    set_local_bin_path "$PWD/.git/safe/../../bin:"
  fi
}

# Make sure add_trusted_local_bin_to_path runs after chruby so we
# prepend the default chruby gem paths
if [[ -n "$ZSH_VERSION" ]]; then
  if [[ ! "$preexec_functions" == *add_trusted_local_bin_to_path* ]]; then
    preexec_functions+=("add_trusted_local_bin_to_path")
  fi
fi
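To see what set_local_bin_path’s sed expression actually does, here’s a small standalone demo (the sample paths are made up). It strips any existing entry pointing into a repository’s .git/safe/../../bin before the new one is prepended, so switching projects never leaves a stale bin entry behind:

```shell
# A PATH with a stale entry from a previously visited project
# (paths are made up for the demo):
demo_path="/home/me/oldproject/.git/safe/../../bin:/usr/local/bin:/usr/bin"

# The same sed used by set_local_bin_path: drop any *.git/*bin: entries,
# then prepend the new project's bin directory.
demo_path="/home/me/newproject/.git/safe/../../bin:`echo "$demo_path"|sed -e 's,[^:]*\.git/[^:]*bin:,,g'`"

echo "$demo_path"
# -> /home/me/newproject/.git/safe/../../bin:/usr/local/bin:/usr/bin
```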
The git trust alias from my .gitconfig:
[alias]
  # Mark a repo as trusted
  trust = "!mkdir -p .git/safe"
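The moving parts fit together simply: git trust creates a marker directory inside the repository, and the zsh hook just checks for it. A throwaway demo of that handshake (using mkdir directly instead of the alias, with a made-up /tmp path):

```shell
# Set up a fake repository layout under /tmp (made-up path for the demo).
mkdir -p /tmp/trust-demo/.git
cd /tmp/trust-demo

# `git trust` expands to exactly this:
mkdir -p .git/safe

# ... and this mirrors the check add_trusted_local_bin_to_path performs:
if [ -d "$PWD/.git/safe" ]; then
  echo "trusted: would prepend $PWD/.git/safe/../../bin to PATH"
fi
```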
Even though I’ve aliased bundle exec to “be” in my shell, I still feel like an animal when I have to type it. ↩︎
Recent posts from Glen Maddern and Thoughtbot inspired me to try my hand at some ES6.
I put together a toy app using jspm and liked what I saw.
I then noticed that the latest beta release of React.js supports ES6 classes.
This led me to dust off an old side project that uses React.js and add jspm to it. The app is built with Rails, so I spent a little time working out a way to add jspm to that.
I’ve stayed away from the asset pipeline and placed the jspm-managed libraries and application Javascript directly in the app’s public folder.
I’ve extracted the results and put them up on GitHub.
The Microservices train is leaving the station baby and everyone is getting on board. Lots of folks have written about Microservices and their benefits but a recent project experience has left me more interested in when you should use the approach.
Here are two posts which jibe with some of what I’ve recently felt.
When you’re starting out, and when you’re small, the speed at which you can make changes and improvements makes all the difference in the world. Having a bunch of separate services with interfaces and contracts just means that you have to make the same change in more places and have to do busywork to share code.
What can you do to reduce the friction required to push out that new feature or fix that bug? How can you reduce the number of steps that it takes to get a change into the hands of your users? Having code in a single repository, using an established web framework like Rails or Django can help a lot in reducing those steps. Don’t be scared of monolithic web apps when you’re small. Being small can be an advantage. Use it.
I joined Netflix in ‘07 and the architecture then was a monolithic development; a two week Agile sort of train model sprint if you like. And every two weeks the code would be given to QA for a few days and then Operations would try to make it work, and eventually … every two weeks we would do that again; go through that cycle. And that worked fine for small teams and that is the way most people should start off. I mean if you’ve got a handful of people who are building a monolith, you don’t know what you are doing, you are trying to find your business model, and so the ability to just keep rapidly throwing code at the customer base is really important.
Once you figure out how… Once you’ve got a large number of customers, and assuming that you are building some Web-based, SaaS-based kind of service, you start to get a bigger team, you start to need more availability.
Large projects with long-term timelines seem like good candidates for using the Microservices approach¹.
On the other hand, new products or services may not be the right situation to immediately dive in with a Microservices approach. It’s likely that the idea itself is being fleshed out and investing anywhere outside of that core goal is ultimately waste. Carving process boundaries throughout your domain in this early turbulent stage is going to slow you down when you inevitably need to move them.
Pushing infrastructure style functionality—such as logging or email delivery—out into services makes sense, but waiting to see how things develop seems worthwhile when it comes to the core domain. Initially focussing on understanding the domain and investing in getting changes out to production as quickly as possible is likely more important than developing loads of cross-process plumbing.
A monolithic application isn’t such a bad place to start. The trick, as always, is to know when to change that plan.
In fact, a brownfield or system refresh project seems like an ideal situation to test the waters of implementing them. These projects have a runway long enough to justify the investment required to put all of the necessary communication, deployment, and monitoring ligatures in place. ↩︎
Programming well is hard. Here are a few books that have helped me improve, and that I recommend.
This contains plenty of great advice even if you don’t code in Ruby. It focusses on the message passing aspect of OO and how to structure your code around that ideal whilst keeping it amenable to change.
This is really about all code and is full of strategies to isolate and deal with problematic code in large untested code bases.
A meditation on what makes code “good”. General advice that covers many aspects of code including readability, clarity of intention, and separation of responsibilities.
Short, sharp, and to the point advice for writing Javascript. Points out the rough edges in the language and gives you concise advice on how to deal with them.
A touch dated in areas but the core principles it espouses are still good and will hold true for a while to come.
A book focussed on “the last mile” in software. Getting your code out the door and setup in a way that you can monitor and change it. It also provides interesting techniques for dealing with production issues in distributed systems such as cascading failures.
A look at techniques to improve the readability and style of your code. Tips on eliminating conditionals, using null objects, and more.
I’ve used Vim for a long time and this book taught me plenty. A must read if you use Vim as your editor.
A great way to learn recursion and some Lisp.