Comparing two technologies on their configuration style


At IP Street, most of our technology stack is open-source. Something happened last week that threw our components’ different design philosophies into stark relief.

We use Solr (with Zookeeper) for many of our search and pivot tasks, and Redis as a Swiss Army knife. They do different things and have different consistency requirements. You could easily critique any juxtaposition as comparing apples to oranges, but I think this one is instructive, because Solr and Redis are both high-performance, production-quality, powerful tools.

Working on them within the same day, I experienced exact opposites in configuration philosophy!

Let’s meet contestant number 1

Solr is a powerful search engine. Its SolrCloud feature lets you shard and scale your index, and Solr will do the internal shard and node routing. Or you can direct your queries to the appropriate node for a small performance win. Being short-handed (understaffed? let’s say frugal with our people), we let Solr do the routing. “Here’s a document, store it.” “I want this document.” “Here’s a pivot within a search, do it and assemble the results for me, pronto.” Etc.

Solr nodes are peers, though internally there are leaders and replicas. Solr uses Zookeeper, an Apache technology for distributed persistent configuration. Nodes do the right thing when other nodes come and go.

Let’s meet contestant number 2

Redis is a fast, reliable, straightforward key:value store. Its commands remind me of assembly language opcodes. “Here’s a key and a value, remember it.” “Give me the value at this key.” “Give me the hashed value at this key.” Etc.
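A typical exchange looks like this in a redis-cli session; the keys and values are illustrative, and the trailing comments just map each line back to the sentences above:

```
SET user:42 "Ada"            -- "Here's a key and a value, remember it."
GET user:42                  -- "Give me the value at this key."
HGET user:42:profile email   -- "Give me the hashed value at this key."
```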

Redis has asynchronous non-blocking master-slave replication. A master can have any number of slaves, and a slave can have any number of slaves. Slaves are read-only by default, though you can make a tree of cascading read/write slaves if you’re game.

Third-party packages exist to do read/write routing, and even hot failover. For now, we’re keeping things simple and doing our own read/write splitting.

Deployment automation

Because of the relatively small number of servers in our production system, we don’t use a configuration management system (e.g., Chef, Puppet, Ansible) yet. We use Fabric for deployment automation, and we just upgrade in place using Fabric tasks. It’s not as automated as a CM tool, but OTOH it’s one fewer technology with which to grapple. We don’t support hundreds of nodes.

So last week…

We added a Redis slave, which I wanted to codify in our operations fabfile. The slave was for redundancy (though not hot-failover) and future scaling. We send writes to the master, and we read from a load balancer VIP that consists of the master and the slave. I needed to update our fabfile.py to account for this.
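A minimal sketch of that split, with made-up addresses (ours differ), looks like this:

```python
# Writes go to the master; reads go through the load balancer VIP
# that fronts the master and the slave. Addresses are placeholders.
REDIS_MASTER = "10.0.1.10"
REDIS_READ_VIP = "10.0.1.100"

def redis_endpoint(write=False):
    """Pick the right Redis endpoint for a connection."""
    return REDIS_MASTER if write else REDIS_READ_VIP
```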

We upgraded to Solr 4.0 in January, and then to 4.1. We did it all interactively. We’ve got an awesome Solr consultant advising us, and sometimes doing the work.  The Solr 4.x installation and configuration instructions were documented, but our Fabric scripts hadn’t yet been upgraded. So, I also wanted to codify the installation and configuration in our fabfile.py.

Off I went to upgrade our Fabric script.

First, Redis

Every Redis node has two configuration files. One is unique for the node, and the other has parameters that are common to every node. The former links to the latter. (Of course, you could also just have one (larger) file on each node.)

To set up a slave, you include a SLAVEOF command in your configuration file:

SLAVEOF <master_ip> <master_port>

If your master node has a password, there’s one other directive (masterauth) to supply it.
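In redis.conf form, the whole slave setup is just this (the IP, port, and password here are placeholders):

```conf
# Node-specific configuration on the slave
slaveof 10.0.1.10 6379
# Only needed if the master has requirepass set
masterauth s3kr1t
```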

When a node knows it’s a slave, it sends the master a SYNC command. It doesn’t matter if it’s the first time it has connected or if it’s a reconnection. The master then starts background saving, and collects all new commands received that will change the dataset. When the background saving is complete, the master transfers the database file to the slave, which saves it on disk, and then loads it into memory. The master then sends the slave all accumulated commands, and all new commands received from clients that will change the dataset. This command stream is in the format of the Redis protocol itself. (This paragraph is shamelessly copied.)

The most obvious critique is the lack of partial resyncs and timestamps. If a slave loses its connection, it re-syncs, and the master responds by re-sending the entire database. This is inefficient; there may have been few (or no) database changes since the slave was last synced. OTOH, it’s simple. You might say, foolproof.

Upgrading our Fabric script was easy. The “provision redis” task knows whether it’s provisioning a master or a slave. (The master is list entry [0].) When provisioning a slave, it uses sed to insert a SLAVEOF directive into the configuration file.
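A sketch of that logic, simplified from our fabfile (the host list, config path, and port are assumptions):

```python
# The master is, by convention, list entry [0]. Addresses are made up.
REDIS_HOSTS = ["10.0.1.10", "10.0.1.11"]

def is_master(host, hosts=REDIS_HOSTS):
    """A host is the master iff it's first in the list."""
    return hosts.index(host) == 0

def slaveof_sed(host, hosts=REDIS_HOSTS,
                conf="/etc/redis/redis.conf", port=6379):
    """Build the sed command that appends a SLAVEOF directive to a
    slave's config file. Returns None for the master, which needs none."""
    if is_master(host, hosts):
        return None
    return "sed -i '$ a slaveof %s %d' %s" % (hosts[0], port, conf)
```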

Then, Solr

Solr is written in Java, so its configuration includes the usual Javaesque parameters to Jetty or Tomcat.

I started codifying our Solr 4.1 installation and configuration into our fabfile. First, we need to edit /etc/security/limits.conf to increase the maximum file descriptors. That was only a couple of sed commands.
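The appended lines are roughly these (the user name and limit are assumptions; size them for your own install):

```conf
# /etc/security/limits.conf
solr soft nofile 65000
solr hard nofile 65000
```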

Next up was changes to /etc/pam.d/su. Also easy.

Next, for Zookeeper, we have to put this node’s unique identifying number into /etc/zookeeper/conf/myid.

Wait, WTAF? Each Solr node needs a unique identifying number — 1, 2, 3, etc. A node finds its number by looking in /etc/zookeeper/conf/myid. But, erm, every node has a unique IP address. You’re running two instances on the same server? Fine, toss in the port number. Why can’t Zookeeper use that?!

I got a can of Coke and wrote a simple function that returned a node’s unique identifying number. This wasn’t the end of the world, but it was unnecessary.
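The helper boils down to something like this; that we keep an ordered host list in the fabfile is an assumption, and the addresses are made up:

```python
SOLR_HOSTS = ["10.0.2.11", "10.0.2.12", "10.0.2.13"]

def zookeeper_myid(host, hosts=SOLR_HOSTS):
    """Derive this node's unique id (1, 2, 3, ...) for
    /etc/zookeeper/conf/myid from its position in the host list."""
    return hosts.index(host) + 1
```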

Then, we have to update /etc/default/zookeeper to adjust JAVA_OPTS.

Then we have to edit /etc/zookeeper/conf/zoo.cfg to insert the server’s IP address in some commands. Holy crap, it has a unique integer and a unique IP address?!
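The offending stanza looks roughly like this; every server is listed with both its integer id and its address (the addresses are illustrative, though 2888/3888 are Zookeeper’s conventional quorum/election ports):

```conf
# /etc/zookeeper/conf/zoo.cfg
server.1=10.0.2.11:2888:3888
server.2=10.0.2.12:2888:3888
server.3=10.0.2.13:2888:3888
```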

I was now getting a headache.

Are we done? No. Then we run /usr/share/zookeeper/bin/cli_mt to do something. Then /usr/share/zookeeper/bin/zkCli.sh to do something else. Then run zktreeutil. Then check /var/log/zookeeper/zookeeper.log to see if the world has exploded.

Are we done? No. Then we install Jetty, which includes editing /etc/default/jetty to set a smorgasbord of JAVA_OPTIONS and define JAVA_HOME.

Then we install Solr. For this we have to mkdir, jar, cp, and chown various files. Then edit /etc/solr/solr.xml. (Using XML in configuration files is idiotic, but there we are.)

Then we can start Jetty.

But we’re not done. Although Solr’s configuration files have shard information, Solr doesn’t use it. You have to use curl to tell Solr about its shards.
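In Solr 4.x that’s a call to the Collections API; the collection name and counts below are placeholders. A sketch of the URL we hit with curl:

```python
def solr_create_collection_url(host, name, num_shards, replicas):
    """Build the Solr 4.x Collections API URL that creates a
    collection and declares its shard layout."""
    return ("http://%s:8983/solr/admin/collections?action=CREATE"
            "&name=%s&numShards=%d&replicationFactor=%d"
            % (host, name, num_shards, replicas))
```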

Now are we done? Beats the shit out of me, because I gave up five paragraphs ago.

OMG my head exploded

It seems to me that Solr and Zookeeper’s configuration was designed without any consideration of deployment automation, or what an IT staff must do to install and configure them. They were designed (I use that word loosely) for an interactive deployment.

Many files must be edited, in the right order, with no easy markers on which to hang sed commands. Each step can be explained to a human, but is non-trivial to code even in a powerful language like Python. The total number of steps makes you throw up your hands.

Almost every step is individually reasonable. Although I itch at Javaesque parameters and XML configuration files, I don’t question any one step, except for the “myid”.

But think about how this stuff must be deployed, repeatedly, in real-world situations. When you consider what must be done to install and configure Zookeeper/Solr, your head explodes like those poor SOBs in Scanners. Not all of the configuration is amenable to automation with commonplace tools.

I’m not sure what to do about this. For now, I’ve moved on to another task, and I’ll mull what to do about Solr this week. Maybe just stash the post-edited copies of the configuration files in our pool?
