My spouse traveled to Canada for a few days. She just went a few miles over the border into Vancouver, BC.
She neglected to add an international data plan to her mobile number before she left. Because of this, she racked up $300 of data charges in 24 hours.
Every wireless carrier has at least one, and you have to add it to your account before you travel outside the country, and then delete it when you return home. But, why? My carrier knows when I’m out of the country! In fact, multiple systems between my cellphone and my account know it!
We had more fun with a vendor today.
We license a vendor’s services for corporate information, like annual revenue and office locations. Their name shall be kept confidential. I’ve written about them before.
About two weeks ago, we noticed a slowdown in our API calls into their system.
We asked them about it, and they replied that they would take a look. A bit later, they said they had found the problem and were working on a solution.
Today, after working on new code, I ran my unit tests. A few tests make calls to this vendor. (Yeah, I could have mocked out the calls. But there are good reasons to not mock out calls in unit tests.) I was surprised to see those tests now fail.
Curiously, they failed because the API calls returned the response, “Customer Disabled”.
I immediately switched to a browser window and tried a part of our product that used their API. I found that our product now failed with the same error. Uh oh.
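To picture how the failure surfaced, here’s a minimal sketch of the kind of thin wrapper our tests exercise. The payload shape, names, and status field are hypothetical; the only thing taken from the real incident is the “Customer Disabled” text that every call started returning:

```python
import json

class VendorApiError(Exception):
    """Raised when the vendor returns an error body instead of data."""

def parse_vendor_response(body):
    """Parse a vendor response body (hypothetical shape), raising on errors."""
    data = json.loads(body)
    if data.get("status") != "OK":
        # This is what started failing: every call suddenly came back
        # with the status "Customer Disabled".
        raise VendorApiError(data.get("status", "unknown error"))
    return data["result"]
```

With a wrapper like this, the unit tests and the product fail loudly and identically, which is exactly how we noticed.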
I e-mailed the vendor and asked what’s up. Their answer:
We found that our service was being slowed down by your API calls. So we disabled your API key.
I am not kidding. Continue reading after you’ve caught your breath.
At IP Street, most of our technology stack is open-source. Something happened last week that threw our components’ different design philosophies into stark relief.
We use Solr (with Zookeeper) for many of our search and pivot tasks, and Redis as a Swiss Army Knife. They do different things and have different consistency requirements. You can easily critique any juxtaposition as comparing apples to oranges. I think it’s instructive, because Solr and Redis are both high-performance, production-quality, and powerful tools.
Working on them within the same day, I experienced polar opposites in configuration philosophy!
Let’s meet contestant number 1
Solr is a powerful search engine. Its Cloud feature lets you shard and scale your index, and Solr will do the internal shard and node routing. Or you can direct your queries to the appropriate node for a small performance win. Being short-handed understaffed frugal with our peons worker bees people, we let Solr do the routing. “Here’s a document, store it.” “I want this document.” “Here’s a pivot within a search, do it and assemble the results for me, pronto.” Etc.
Solr nodes are peers, though internally there are leaders and replicas. Solr uses Zookeeper, an Apache technology for distributed persistent configuration. Nodes do the right thing when other nodes come and go.
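That hand-off can be sketched as plain HTTP against any node you like. The host and collection names below are hypothetical; the update/select endpoints and the facet.pivot parameter are standard Solr, and the point is that the receiving node consults Zookeeper and routes internally:

```python
from urllib.parse import urlencode

# Any SolrCloud node will accept these requests; Solr routes to the
# right shard internally. Hosts and collection name are hypothetical.
SOLR_NODES = ["http://solr1:8983", "http://solr2:8983"]

def update_url(node, collection):
    """'Here's a document, store it' -- POST JSON documents to this URL."""
    return f"{node}/solr/{collection}/update?commit=true"

def query_url(node, collection, q, pivot):
    """A facet-pivot query; the receiving node fans out to the shards
    and assembles the results for us."""
    params = urlencode({"q": q, "facet": "true", "facet.pivot": pivot})
    return f"{node}/solr/{collection}/select?{params}"
```

We pay a small routing cost versus aiming at the right node ourselves, and in exchange the client code stays oblivious to the cluster topology.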
We license a vendor’s services for corporate information, like annual revenue and office locations. Their name shall be kept confidential in this story.
We access their API via http calls. They call it a REST API. But like 95% of the “REST” APIs in the world, it’s not REST at all, and in fact nowhere near REST. The term “REST” has been corrupted to become synonymous with “web API”.
But whatever. It’s an API accessed with http calls.
One of the service calls has a parameter called “countryCode”, which was documented as an ISO 3166 country code.
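A call looked roughly like this. The endpoint and the other parameter name are made up; “countryCode” and its ISO 3166 documentation are the vendor’s, and this sketch assumes the common two-letter (alpha-2) flavor of ISO 3166:

```python
from urllib.parse import urlencode

# Hypothetical endpoint; the real host, path, and auth are omitted.
BASE = "https://api.example-vendor.com/v1/companies"

def company_search_url(name, country_code):
    """Build the query URL, insisting on an ISO 3166 alpha-2 code."""
    if len(country_code) != 2 or not country_code.isalpha():
        raise ValueError(f"not an ISO 3166 alpha-2 code: {country_code!r}")
    return BASE + "?" + urlencode({"name": name,
                                   "countryCode": country_code.upper()})
```

So, per the documentation, you’d pass “US”, “CA”, and so on.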
My friend Kirk runs his dev team in a mostly Agile fashion: code sprints, agreeing on tickets for the sprint, declaring victory at the end of the sprint, etc.
But now Kirk’s boss says:
I need you to commit to achieve certain goals by various dates over the next year. Once you agree to them, you need to commit to delivering them on time.
How is this situation silly? Let me count the ways…
A friend, whom I’ll call “Kirk,” works in a startup. A really good developer, whom I’ll call “Amy,” reports to him.
Kirk lobbied his boss for a big raise for Amy. He thought about this the right way:
I’ve researched the current market rates for developers of Amy’s level and abilities. She’s very good, she’s worked hard for us, and I expect great things from her this year. The plan calls for raising her salary to $X, but I suggest we raise it to $(X + n), because that’s the going salary for someone like her in this area.
Kirk’s boss thought about it the wrong way:
A raise to $(X + n/2) would be better. It’ll be a large increase over her current salary.
I talked with Erik Carlin of Rackspace about last week’s Rackspace post.
He explained that I experienced a bug in their dynamic image configuration. When you instantiate a VM, a number of things happen behind the scenes to the base server image. It’s not as simple as copying a directory tree from A to B. A bug was introduced into their code, and they caught the bug and fixed it, but not before it bit some users.
So, Rackspace didn’t intentionally change the server image this time. I apologize for drawing that conclusion.
My November 2011 post about mutating server bits is still correct. We talked about Rackspace’s challenge in balancing “simplicity of use” vs. “power users’ information needs” when a server image changes.
Once again, Rackspace has changed the contents of an already-published server image without any notice to its users.
22 days ago, I provisioned a staging system with Ubuntu 11.10. In upgrading from 11.04, I had the typical difficulties — e.g., removing 11.04 package workarounds, and upgrading some software that we built from sources. When I finished, my Fabric script provisioned my 11.10 servers, and I wouldn’t have to futz with it again until we advanced to Ubuntu 12.04.
So imagine my surprise when I tried re-provisioning our staging system yesterday, and the script threw an oddball installation failure for PostgreSQL, and all the servers had oddball network flakiness.
Daylight Saving Time is a gimmick and a crock and flipping stupid and I hate it.
Personality cults are odd. At a conference, I see this most often in the backchannels. Like on Twitter. If Fred tweets XYZ, it probably won’t be RT’d; and if it is, it’ll be RT’d at most twice. But if one of the community cognoscenti tweets the same thing, it’s RT’d 18 times as a gem of profound wisdom. That this phenomenon is so obvious only adds to its oddness.
Rackspace changed their Ubuntu 11.04 (Natty) server image without telling their customers. Our installation scripts unexpectedly broke. In the cloud, the rug can be pulled out from underneath you without warning, even in a very simple setup.
My employer is a small shop, and we use Rackspace Cloud Servers for our QA and Production systems. We use unmanaged VMs, from 256 MB to 16 GB in size, running Ubuntu.
Rackspace has generally been a very good hosting provider. My only significant complaint is with their cloud administrative dashboard — it’s slow, clunky, and often hangs. But we’ve learned to live with it.
When we upgraded from Ubuntu 10.10 to 11.04, we had some typical upgrade pain with our Operations scripts. We had to remove some 10.10 package workarounds, and we switched some software from source builds to packages, because the 11.04 repository’s version was now acceptable.
We got past all that, and moved our systems to 11.04. Since then, re-building servers meant selecting Ubuntu 11.04 as the server image, running our Fabric scripts, and everything working predictably without surprises.
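One cheap guard a provisioning script could run first: refuse to proceed unless the server reports the Ubuntu release you selected. Here’s a sketch that parses the standard /etc/lsb-release format. To be fair, it only catches gross mismatches — silent content churn within the same release, the harder case here, would sail right through:

```python
def parse_lsb_release(text):
    """Parse /etc/lsb-release KEY=value lines into a dict."""
    fields = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            fields[key.strip()] = value.strip().strip('"')
    return fields

def check_image(text, expected_release="11.04"):
    """Bail out unless the image is the Ubuntu release we provisioned for."""
    found = parse_lsb_release(text).get("DISTRIB_RELEASE")
    if found != expected_release:
        raise RuntimeError(f"expected Ubuntu {expected_release}, got {found}")
```

A Fabric task can run `cat /etc/lsb-release` and feed the output through this before touching anything else.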
Until November 21…
How badly can you build and QA an application? If you’re WordPress, you can do a bang-up horrible job with your crap iPhone app. It changes titles, inserts and removes newlines, and applies other wonderful transforms to your blog’s posts at will.
It’s a pity Apple doesn’t allow negative stars in a review. The WordPress app is less than worthless.