06 Aug 2018
Some years ago when I was living in an Ashram in rural Virginia, I met
a wise, old man. I knew he was a wise, old man because he embodied
certain stereotypes about wise, old men. First, he was a Gray-Bearded
Yogi. Before this he was a New Yorker and a practicing Freudian
Psychoanalyst. Sometimes he would say a lot of interesting and funny
things, and at other times he would smile and nod and say nothing at
all. I can’t remember if he ever stroked his beard.
One day he said to me, “Asoka,” (the name under which I was going at
the time). “Asoka,” he said, “do you know what is the single driving
force behind all our desires, motives and actions?” I thought about
this for some time. I had my own ideas but, knowing he was a Freudian,
suspected that the answer was going to be something to do with the
libido.
“You probably suspect that the answer is going to be something to do
with the libido,” he said. “But it’s not.” I listened patiently. “It’s
the need … to be right.” I laughed. While I wasn’t totally surprised
not to have got the right answer, this particular one for some reason
blew me away. I wasn’t prepared. I had never framed human nature in
those terms before.
I wouldn’t expect anyone else to have the same reaction. I suspect
others would find this to be either obvious, banal, or plainly wrong,
and if this is you, I don’t intend to convince you otherwise (there
might be a certain irony in trying to do so). What I want to do
instead is document what became for me a personal manifesto, and a
lens through which I began to look at the world. As a lens, you are
free to pick it up, take a look through it, and ultimately discard it
if you wish. But I rather like it a lot.
What happened that day was really only the start of a long
process. Eventually I would see that a preoccupation with being right
was essentially an expression of power and that rectifying (from the
Late Latin rectificare - to “make right”) was about exerting power
over others. I would also see that this preoccupation had perhaps more
to do with the appearance of being right, and that the cost of
maintaining it would be in missed opportunities for learning. And I
would also see that, while the rectification obsession was not a
uniquely male problem, there seemed to be a general movement of
rightness from that direction, and we would do well to examine that
too.
I was the principal subject of my examination, and it has become a
goal to continue to examine and dismantle the ways in which I assert
“rightness” in the world.
A little bit about myself
Allegedly I come from a long line of know-it-alls. Unsurprisingly,
it’s a behavior that passes down the male side of my family. Of
course, I don’t really believe this is a genetic predisposition; it’s
easy to see how the behavior might be passed down without one.
As a child I remember my family’s praising me for being ‘brainy’. They
gave me constant positive feedback for being right. As long as I
appeared to be right all the time I felt like I was winning. In
actuality, though, I was losing. I learned to hide my ignorance of
things so as never to appear wrong. I’ve spent most of my life missing
answers to questions I didn’t ask. I became lazy, unconsciously
thinking that my smarts would allow me to coast through life.
Once I left school, and with it a culture principally concerned with
measuring and rewarding rightness, I had a hard time knowing how to
fit in or do well. It would take years of adjustments before I felt
any kind of success. Whenever something became hard, I’d try something
new, and I was always disappointed to find that opportunities were not
handed to me simply because I was ‘smart’. When I didn’t get into the
top colleges I applied to it devastated me. I would later drop out of
a perfectly good college, get by on minimum wage jobs when I was lucky
enough even to have one, fail to understand why I didn’t get any of
the much better jobs I applied for.
I stumbled upon a section in Richard Wiseman’s 59 Seconds: Think a
Little, Change a Lot that claimed that children who are praised for
hard work will be more successful than those who are praised for
correctness or cleverness (there is some research that supports
this). It came as a small comfort to learn that I was not alone. More
importantly, it planted in me a seed whose growth I continue to
nurture today.
I still don’t fully grasp the extent to which these early experiences
have shaped my thinking and my behavior, but I have understood it well
enough to have turned things around somewhat, applied myself, and have
some awareness of my rectifying behavior, even if I can’t always
anticipate it.
It is one thing to intervene in your own actions toward others, to
limit your own harmful behavior. It is quite another when dealing with
the dynamics of a group of people all competing for rightness. What
I’m especially interested in currently is the fact that I don’t
believe I’ve ever seen such a high concentration of people who are
utterly driven by the need to be right all the time as in the tech
industry.
Let’s look at some of the different ways that being right has
manifested itself negatively in the workplace.
On Leadership and Teamwork
There is a well-known meme about the experience of being a programmer,
and it looks like this:
[image: a two-panel meme contrasting the feeling of being a
programming god with having no idea what one is doing]
There is some truth to this illustration of the polarized feelings
that coding provokes. However, it is all too common for individuals to
wholly identify with one state or the other. On one side we have our
rock stars, our 10x developers and brogrammers. On the other we have
people dogged by imposter syndrome. In reality, the two states are
exaggerated ends of a continuum we all sit on - and I believe everyone
sits somewhere in the middle, much closer to the second state than the
first. All of us.
In my personal experience there is a strong sense of camaraderie when
I’m working with people who all humbly admit they don’t really
know what they’re doing. This qualification is important - nobody is
saying they are truly incompetent, just that there are distinct limits
to their knowledge and understanding. There is the sense that we don’t
have all the answers, but we will nonetheless figure it out
together. It promotes a culture of learning and teamwork. When
everyone makes themselves vulnerable in this way great things can
happen. The problem is that it only takes one asshole to fuck all that
up.
When a team loses its collective vulnerability as one person starts to
exert rightness (and therefore power) downwards onto it, we lose all
the positive effects I’ve listed above. I’ve seen people become
competitive and sometimes downright hostile under these
conditions. Ultimately it rewards the loudest individuals who can make
the most convincing semblance of being right to their peers and
stifles all other voices.
This is commonly what we call “leadership”, and while I don’t want to
suggest that leadership and teamwork are antagonistic to each other, I
do want to suggest that a certain style of leadership, one concerned
principally with correctness, is harmful to it. A good leader will
make bold decisions, informed by their team, to move forward in some
direction, even if sometimes that turns out to be the wrong one. It’s
OK to acknowledge this and turn things around.
On Productivity
A preoccupation with being right can have a directly negative effect
on productivity. One obvious way is what I will call refactoring
hypnosis - a state wherein the programmer forgets the original intent
of their refactoring efforts and continues to rework code into a more
“right” state, often with no tangible benefit while risking
breakages at every step.
Style is another area that is particularly prone to pointless
rectification. It is not unusual for developers to have a preference
for a certain style in whatever language they are using.
Interestingly, while an opposing style can seem utterly “wrong” to a
developer, style is the area of software development with the fewest
agreements over what we consider good or “right”. In Ruby there have
been attempts to unify divergent opinion in the Ruby Style Guide, but
it has been known to go back and forth on some of its specifics (or
merely to state that there are competing styles), and the fact that
teams and communities eventually grow their own style guides (AirBnb,
GitHub, thoughtbot, Seattle.rb) suggests that perhaps the only thing
we can agree on is that a codebase be consistent. Where it lacks
consistency there lie opportunities to rectify, but rectifying for its
own sake is almost always a bad idea.
Finally, being right simply isn’t agile. One of the core tenets of the
Agile Manifesto is that while there is value in following a plan,
there is more value in responding to change. This seems to suggest
that our plans, while useful, will inevitably be wrong in crucial
ways. An obsession with rightness will inevitably waste time -
accepting that we will be wrong encourages us to move quickly, get
feedback early on and iterate to build the right thing in the shortest
time.
On Culture
As I’ve asserted above, none of us really knows what we are doing
(for different values of “really”), and indeed this sentiment has been
commonly expressed even among some of the most experienced and
celebrated engineers. I think that there is both humor and truth in
this but, while I believe the sentiment is well-intentioned, words are
important and can sometimes undermine what’s being expressed
here. I’ve seen people I look up to utter something of the form, look,
I wrote [some technology you’ve probably heard of], and I still do
[something stupid/dumb] - what an idiot! This doesn’t reassure me at
all. All I think is, wow, if you have such a negative opinion of
yourself, I can’t imagine what you’d think of me.
Perhaps instead of fostering a culture of self-chastisement we can
celebrate our wrongness. We know that failure can sometimes come at
great cost, but it’s almost always because of flaws in the systems we
have in place. A good system will tolerate certain mistakes well, and
simply not let us make other kinds of mistakes. A mistake really is a
cause for celebration because it is also a lesson, and celebrating
creates an opportunity to share that lesson with others while
destigmatizing its discovery. I am happy that my team
has recently formalized this process as part of our weekly
retrospectives - I would encourage everyone to do this.
One of the most harmful ways I’ve seen the rectification obsession
play out is in code reviews. The very medium of the code review
(typically GitHub) is not well set up for managing feelings when
providing close criticism of one’s work. We can exacerbate this with
an obsession with being right, especially when there are multiple
contenders in the conversation.
I have been on teams where this obsession extends into code review to
the point where, in order for one to get one’s code merged, a reviewer
has to deem it “perfect”. Ironically, this seems less an indicator of
high code quality in the codebase and more of the difficulty of ever
making changes to the code subsequently. Having your work routinely
nitpicked can be a gruelling experience - worse still when reviews
take place across multiple timezones and discussions go back and forth
over multiple days or even weeks. Personally, I’ve been much happier when
the team’s standard for merging is “good enough”, encouraging
iterative changes and follow up work for anything less crucial.
It is hard to overstate the importance of language when looking at
these interactions. There has been much talk recently about the use of
the word “just” (as in “just do it this way”) in code review, and I am
glad that this is undergoing scrutiny. It seems to suggest that not
only is the recipient wrong, but deeply misguided - the “right” way is
really quite simple. This serves to exert power in a humiliating way,
one that minimizes our effort and intellect along the way. Of course,
there are countless more ways that we can do harm through poorly
chosen words, but I am glad that we have started to examine this.
On Mansplaining
It is telling to me that the standard introduction to any
mansplanation - well, actually… - is almost the ultimate expression
of rectification. It is appropriate that we have identified this
behavior as an expression of masculine insecurity - the man uses sheer
volume and insistence to counter a position he poorly
understands. More innocent mansplanations still work in the same way -
without contradicting anyone, a man may simply offer some explanation
(I am right!), believing it to be helpful to the person whose
ignorance he has assumed.
I am aware that there could be some irony in trying to frame the whole
of this phenomenon in terms of my manifesto, but that is not my
intention. It is rather that mansplaining reveals a great deal about
the harm done by, and the intentions behind, rectifying behavior.
Doing the Right Thing
I do not want to suggest any smug superiority - just about every
harmful behavior I have described above I have also engaged in at some
point. I know I will continue to do so, too. But I want to do better,
and I want to work with people who are also committed to these goals.
Looking back to the start of my journey, I have to question now the
intent of the wise, old man in his original assertion about human
behavior. Was this yet another example of some unsolicited advice from
a person who exploited their maleness and seniority to add more weight
to their pronouncements than perhaps they deserved? Is this all that
wise, old men do? Almost certainly.
As it turned out, I did not wholly embrace it as truth (none of the
above makes any claims to social science or psychology), but neither
did I reject it wholesale. I discovered that while it may not be
literally true, I might arrive at smaller truths by entertaining it as
an idea (the contradiction is probably what made me laugh). I’m
grateful that it was shared with me.
One of those smaller truths: there is nothing wrong with being
right. Rather, it is the desire to be right that colors our judgment
and leads us down the wrong path. Being right is also not the same
thing as doing the right thing. It is on this that I want to focus my
efforts now, while trying to free myself from the tyranny of being
right.
19 Feb 2018
When I started my first job I was programming in Ruby sans Rails,
and knew I had to get up to speed quickly. I purchased Russ Olsen’s
Eloquent Ruby (2011) and read it cover to cover, though I struggled
a bit towards the end. I tried some things out, and then I read it
again. Then I played around some more. I learned the joys of
metaprogramming and then burnt my fingers. I went back and read it a
third time, now with the understanding that some of the things in the
back were dangerous.
Up until Eloquent Ruby, the resources I had used to learn all had
the same agenda: how to tell the computer to do stuff. In other words,
they would describe the features of things without spending too much
time on the attitudes toward them. And it is much harder to do the
latter.
The concept of Eloquent Ruby came as a revelation to me at the
time - the idea that there were not just rules that make up a
language, but idioms too, attitudes that might even change over
time. I loved the idea that I could learn to speak Ruby not just
adequately but like a native.
By this time I felt bold enough to call myself a Rubyist, and I owed
much of the enthusiasm I felt toward the language, and the success I
had early on in my career, to this book. I bought another of Olsen’s
books on design patterns and read it cover to cover, again, multiple
times. I was ambitious and knew that “good” programmers had experience
and expertise working in different programming paradigms, though I was
still not sure in which direction I would go. So I learned with great
interest that he was either working on, or had at least declared an
intention to write, a book about Clojure.
I had no idea at this point what Clojure or even Lisp was, but the
author had gained my trust enough for me to want to read about his new
book, whatever it was about.
And of course I had no clue at the time that this book would be in the
pipeline for years. I understand; these things take time. But, being
impatient, when I felt confident enough to start learning another
language, I decided to go ahead with Clojure anyway.
I have now played with it for about three years, have pored over some
books that were good at what they set out to do (exhaustively
surveying the features of the language), and have built some things of
a certain size. Alas, not getting to code and think in my second language every
day, I have never felt that I really “got” Clojure, that I really knew
it in the same way that I knew Ruby. I could not properly call myself
a Clojurist (do they even call themselves that? See, I don’t know).
So I was pretty psyched when I learned that Olsen’s book was nearing
completion, and that its title was, perfectly, Getting Clojure. When
it came out in beta form, I did something I almost never do - I bought
the eBook (I typically like to do my reading away from the computer).
And it has not disappointed. I am so happy that all the elements of
Olsen’s style are present and on top form - his gentle humor, his
compassion for the learner. He knows, crucially, what to leave out and
what to explain more deeply or illustrate. There are examples that are
contrived yet so much more compelling than most (it’s hard to
formulate fresh examples that are sufficiently complex yet small
enough to sustain interest). There are lots of details on why you
might want to use a particular thing or, more importantly, why you
might not want to - in Olsen’s words, “staying out of trouble” -
details that are so vital to writing good, idiomatic code in any
language. And there are examples of real Clojure in use, often pulled
from the Clojure source itself, that not only illustrate but give me
the confidence to dive in myself, something that wouldn’t have
occurred to me on my own.
It seems ironic that Getting Clojure isn’t the book I wanted way
back when I first heard about it, but it is the book that I need now.
I enjoyed going over the earlier chapters, cementing knowledge along
the way, forming new narratives that simplify the surface area of the
language while giving me the things I need to know to find out
more. And it gave me the confidence to dive way deeper than I thought
would be comfortable. For example, Olsen encourages you to write your
own Lisp implementation as an aside while he looks at the core of
Clojure’s. I went ahead and did this and am so glad that I did - I
feel like I have gained a much deeper understanding of computer
programs in general, something that may have been missing given that I
don’t come from a Computer Science background.
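For a taste of that exercise - in Ruby rather than Clojure, and
entirely my own sketch, not Olsen’s - the core of such an interpreter
is just a small recursive function over nested arrays standing in for
s-expressions:

```ruby
# A toy evaluator for a tiny Lisp-like language (my own sketch).
# Symbols are variables, arrays are forms, anything else is a literal.
def lisp_eval(expr, env = {})
  case expr
  when Symbol # variable lookup
    env.fetch(expr)
  when Array  # a form: [operator, *arguments]
    op, *args = expr
    case op
    when :if
      cond, then_expr, else_expr = args
      lisp_eval(cond, env) ? lisp_eval(then_expr, env) : lisp_eval(else_expr, env)
    when :let
      (name, value), body = args
      lisp_eval(body, env.merge(name => lisp_eval(value, env)))
    else # built-in operators such as :+ and :*
      args.map { |a| lisp_eval(a, env) }.reduce(op)
    end
  else # literals evaluate to themselves
    expr
  end
end

lisp_eval([:let, [:x, 2], [:+, :x, 3]]) # => 5
```

Tiny as it is, writing something like this makes evaluation, scope and
special forms stop being abstractions and start being code you own.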
I have no doubt that this book will appeal to many others from
different backgrounds, different places in their development. But I
can confidently say that if, like me, you are self-taught or don’t
come from a “traditional” background - perhaps Ruby or a similar
dynamic language is your bread and butter but you are trying to expand
your horizons - and you need materials that focus on the learner and
build up understanding in a structured way, then Getting Clojure
gets my highest possible recommendation.
01 Jul 2017
Without much warning I recently decided to learn Colemak.
What?
Colemak is an alternative layout for keyboards. It aims to improve on
both the traditional QWERTY and the only slightly better-known Dvorak
by placing the commonest keys on the home row, along with certain
other considerations, to improve ergonomics and comfort while typing.
Why?
This came as a bit of a surprise to me as I have always felt somewhat
opposed to learning a new keyboard layout. This may have stemmed from
my own frustration in the past in doubling on Clarinet and
Saxophone. While the two are keyed similarly, they correspond to
different “notes” as they are written down. Though it is very common
for people to do this, I really don’t enjoy the feeling of
disorientation at all.
The drawbacks I identified were:
- the initial effort of learning
- having to “double” when confronted with a QWERTY keyboard
- really, having to collaborate with anyone on anything ever again
The supposed benefits of faster typing speed and prevention of RSI I
never saw as a net gain. Which is not to say that I don’t care about
those things (I take injury prevention very seriously, having blogged
about this before). It’s just such an inexact science that I would
welcome both of those benefits if they came, but couldn’t reasonably
expect them as guaranteed.
But I think there was one other factor that has completely swung this
for me that has probably not been present at any other time that I’ve
been thinking about this. It is that I am incredibly bored. So bored
that I don’t want to learn anything exciting like a new programming
language, or even a new natural language, or how to ride a unicycle or
spin poi. I’ve been craving the dull repetition that I’ve felt as a
musician, a quiet confidence that if I do this dance with my hands
slowly and correctly enough times, I’ll program myself to perform a
new trick. I’ve been actually longing for the brain ache you get when
you’re trying to do something different and your muscle memory won’t
quit.
How?
There are many typing tutors online, but I found The Typing Cat
particularly good for getting started. Not wanting to take the plunge
straight away, I used it to emulate the new layout while I went
through the exercises, preserving QWERTY for everything else. For the
first couple of weeks I’d use QWERTY during the day and practice 1-2
hours of Colemak in the evening, until I got up to an acceptable
typing speed (for me, 30 wpm, while still very slow, would not
interfere too much).
Once I was ready to take the leap, I was confronted by a great number
of ways to do this, ranging from reconfiguring the keyboard at the
system level (useless, since X ignores it), to configuring X from the
command line (annoying, because those changes aren’t preserved when I
make any customizations in the Gnome Tweak Tool), to discovering I
could do most of it by adjusting settings in the UI. I’ll describe in
detail only what I eventually settled on, in case you are trying to do
this yourself and are running a setup similar to mine (Debian
9/Stretch, Gnome 3, US keyboard).
To set up Colemak, simply open Settings, go to Region & Language, hit
the + under Input Sources, click English, then English (Colemak), and
you’re done. You should now see a new thing on the top right that you
can click on to select the input source you wish to use. You can also
rotate input sources by hitting Super (aka the Windows key) and Space.
Unfortunately I wasn’t done there because I had a few issues with some
of the design choices in the only variant of Colemak offered. Namely,
I didn’t want Colemak to reassign my Caps Lock key to Backspace (as I
was already reassigning it to Escape), and I wanted to use my right
Alt key as Meta, something I use all the time in Emacs and pretty much
everything that supports the basic Emacs keybindings (see: everything
worth using). While there may have been a way to customize this from
the command line, I never found out what that was, and besides I
wanted to find a solution that jelled as much as possible with the
general solution I’ve outlined above. It was with this spirit that I
decided to add my own, customized keyboard layout. If you’re having
similar grumbles, read on.
First, a word of caution. You’re going to have to edit some
configuration files that live in /usr/share. If that makes you
queasy, I understand. I don’t especially love this solution, but I
think it is the best of all solutions known to me. Either way, as a
precautionary measure, I’d go ahead and back up the files we’re going
to touch:
sudo cp /usr/share/X11/xkb/symbols/us{,.backup}
sudo cp /usr/share/X11/xkb/rules/evdev.xml{,.backup}
Next we’re going to add a keyboard layout to the
/usr/share/X11/xkb/symbols/us file. It’ll be an edited version of the
X.Org configuration, which you can find here. It can probably go
anywhere, but I inserted it immediately after the existing entry for
Colemak:
// /usr/share/X11/xkb/symbols/us
partial alphanumeric_keys
xkb_symbols "colemak-custom" {
    include "us"
    name[Group1]= "English (Colemak Custom)";

    key <TLDE> { [ grave, asciitilde ] };
    key <AE01> { [ 1, exclam ] };
    key <AE02> { [ 2, at ] };
    key <AE03> { [ 3, numbersign ] };
    key <AE04> { [ 4, dollar ] };
    key <AE05> { [ 5, percent ] };
    key <AE06> { [ 6, asciicircum ] };
    key <AE07> { [ 7, ampersand ] };
    key <AE08> { [ 8, asterisk ] };
    key <AE09> { [ 9, parenleft ] };
    key <AE10> { [ 0, parenright ] };
    key <AE11> { [ minus, underscore ] };
    key <AE12> { [ equal, plus ] };
    key <AD01> { [ q, Q ] };
    key <AD02> { [ w, W ] };
    key <AD03> { [ f, F ] };
    key <AD04> { [ p, P ] };
    key <AD05> { [ g, G ] };
    key <AD06> { [ j, J ] };
    key <AD07> { [ l, L ] };
    key <AD08> { [ u, U ] };
    key <AD09> { [ y, Y ] };
    key <AD10> { [ semicolon, colon ] };
    key <AD11> { [ bracketleft, braceleft ] };
    key <AD12> { [ bracketright, braceright ] };
    key <BKSL> { [ backslash, bar ] };
    key <AC01> { [ a, A ] };
    key <AC02> { [ r, R ] };
    key <AC03> { [ s, S ] };
    key <AC04> { [ t, T ] };
    key <AC05> { [ d, D ] };
    key <AC06> { [ h, H ] };
    key <AC07> { [ n, N ] };
    key <AC08> { [ e, E ] };
    key <AC09> { [ i, I ] };
    key <AC10> { [ o, O ] };
    key <AC11> { [ apostrophe, quotedbl ] };
    key <AB01> { [ z, Z ] };
    key <AB02> { [ x, X ] };
    key <AB03> { [ c, C ] };
    key <AB04> { [ v, V ] };
    key <AB05> { [ b, B ] };
    key <AB06> { [ k, K ] };
    key <AB07> { [ m, M ] };
    key <AB08> { [ comma, less ] };
    key <AB09> { [ period, greater ] };
    key <AB10> { [ slash, question ] };
    key <LSGT> { [ minus, underscore ] };
    key <SPCE> { [ space, space ] };
};
Next you need to register it as a variant of the US keyboard layout:
<!-- /usr/share/X11/xkb/rules/evdev.xml -->
<xkbConfigRegistry version="1.1">
  <!-- ... -->
  <layoutList>
    <layout>
      <!-- ... -->
      <configItem>
        <name>us</name>
        <!-- ... -->
      </configItem>
      <variantList>
        <!-- Insert this variant: -->
        <variant>
          <configItem>
            <name>colemak-custom</name>
            <description>English (Colemak Custom)</description>
          </configItem>
        </variant>
        <!-- ... -->
      </variantList>
    </layout>
  </layoutList>
</xkbConfigRegistry>
Finally, you’ll need to bust the xkb cache. I read about how to do
this here, but it didn’t seem to work for me (most likely differences
between Ubuntu and Debian, or different versions). So to save you the
same disappointment, I’m going to tell you the one way that is sure to
work: restart your damn computer. If you can figure out a better way,
that’s great.
Having done all the above, you should now be able to select your
English (Colemak Custom) layout in the same way, through the settings
UI.
Since I’ve made the switch, I’ve seen my speed steadily increasing up
to 50-60 wpm. That’s still kind of slow for me, but I have every
confidence that it will continue to increase. I think doing drills has
helped with that. Since I have no need for emulation anymore, I’ve
found the CLI utility gtypist to be particularly good. I try to do
the “Lesson C16/Frequent Words” exercises for Colemak every day.
20 Feb 2017
Disclaimer! The title of this piece is actually a bit of a lie,
because factories, or rather the things that they build, are
technically fixtures, depending on your definition of "fixture". In
the terminology of Gerard Meszaros, author of xUnit Test
Patterns, the default Rails fixtures are more specifically
shared fixtures, meaning they are created in the database at
the start of your test suite and hang around until the end. Factories,
on the other hand, are persistent fresh fixtures, meaning
that they still live in the database (persistent), but their lifecycle
is confined to individual tests (fresh).
But not everyone uses this terminology, and I'm going to go with
another convention of referring to the first kind as "fixtures" from
hereon, and the second kind as "factories".
As someone who learned both to program and to test for the first time
with Rails, I was quickly exposed to a lot of opinions about testing
at once, with a lot of hand-waving. One of these was, as I remember
it, that Rails tests with fixtures by default, that fixtures are
problematic, that Factory Girl is a solution to those problems, so we
just use Factory Girl. I probably internalized this at the time as
“use Factory Girl to build objects in tests” without really
questioning why.
Some years later now, I sincerely regret not learning to use
fixtures first, to experience those pains for myself (or not), to find
out to what problem exactly Factory Girl was a solution. For, I’ve
come to discover, Factory Girl doesn’t prevent you from having some of
the same issues that you’d find with fixtures.
To understand this a bit better, let’s do a simple refactoring from
fixtures to factories to demonstrate what problems we are solving
along the way.
Consider the following:
# app/models/user.rb
class User < ApplicationRecord
  validates :name, presence: true
  validates :date_of_birth, presence: true

  def adult?
    # adult means the 21st birthday is today or has passed
    date_of_birth + 21.years <= Date.today
  end
end

# spec/fixtures/users.yml
Alice:
  name: "Alice"
  date_of_birth: <%= 21.years.ago %>

Bob:
  name: "Bob"
  date_of_birth: <%= 21.years.ago + 1.day %>

# spec/models/user_spec.rb
specify "a person of >= 21 years is an adult" do
  user = users(:Alice)
  expect(user).to be_adult
end

specify "a person of < 21 years is not an adult" do
  user = users(:Bob)
  expect(user).not_to be_adult
end
Here we have two fixtures that contrast two different kinds of
user. If done well, your fixtures will be a set of objects that live
in the database that together weave a kind of narrative that is
revealed in tiny installments through your unit tests. Elsewhere in
our test suite, we’d continue with this knowledge that Alice is an
adult and Bob is a minor.
So what’s the problem? Well, one is what Meszaros calls the “mystery
guest”, a kind of “obscure test” smell. What that means is that the
main players in our tests - Alice and Bob - are defined far off in the
spec/fixtures/users.yml file. Just looking at the test body, it’s
hard to know exactly what it was about Alice and Bob that made one an
adult, the other not. (Sure, we should know the rules about adulthood
in whatever country we’re in, but it’s easy to see how a slightly more
complicated example might not be so clear).
Let’s try to address that concern head on by removing the fixtures:
# spec/models/user_spec.rb
specify "a person of >= 21 years is an adult" do
  user = User.create!(name: "Alice", date_of_birth: 21.years.ago)
  expect(user).to be_adult
end

specify "a person of < 21 years is not an adult" do
  user = User.create!(name: "Bob", date_of_birth: 21.years.ago + 1.day)
  expect(user).not_to be_adult
end
We’ve solved the mystery guest problem! Now we can see at a glance
what the relationship is between the attributes of each user and the
behavior exhibited by them.
Unfortunately, we have a new problem. Because a user requires a :name
attribute, we have to specify a name in order to build a valid user
object in each test (we might in certain instances be able to get away
with using invalid objects, but it is probably not a good idea). Here,
the fact that we’ve had to give our users names has introduced another
obscure test smell - noise: it’s not clear at a glance which
attributes are relevant to the behavior being exercised.
Another problem might emerge if we added a new validated attribute to
User - every test that builds a user could fail for reasons wholly
unrelated to the behavior it is trying to exercise.
Let’s try this again, extracting out a factory method:
# spec/models/user_spec.rb
specify "a person of >= 21 years is an adult" do
  user = create_user(date_of_birth: 21.years.ago)
  expect(user).to be_adult
end

specify "a person of < 21 years is not an adult" do
  user = create_user(date_of_birth: 21.years.ago + 1.day)
  expect(user).not_to be_adult
end

def create_user(attributes = {})
  # sensible defaults; tests override only what matters to them
  User.create!({ name: "Alice", date_of_birth: 30.years.ago }.merge(attributes))
end
Problem solved! We have some sensible defaults in the factory method,
meaning that we don’t have to specify attributes that are not relevant
in every test, and we’ve overridden the one that we’re testing,
date_of_birth, in those tests on adulthood. If new validations are
added, we have one place to update to make our tests pass again.
I’m going to pause here for some reflection before we complete our
refactoring. There is another thing I regret about the way I learned
to test: reaching straight for Factory Girl instead of writing my own
factory methods, as I have above, and so never finding out what
problem Factory Girl was actually trying to solve. Nothing about the
code above strikes me yet as needing a custom DSL, or a gem to
extract. Ruby already does a great job of making this stuff easy.
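To underline the point, the defaults-plus-overrides trick is plain Ruby and works outside Rails entirely. Here is a gem-free sketch; the Struct and the default values are assumptions standing in for the real model.

```ruby
require "date"

# Struct stands in for the ActiveRecord model in this sketch (an assumption).
User = Struct.new(:name, :date_of_birth, keyword_init: true)

# Sensible defaults that any individual test can override.
USER_DEFAULTS = { name: "Alice", date_of_birth: Date.new(1988, 1, 1) }.freeze

def create_user(attributes = {})
  User.new(**USER_DEFAULTS.merge(attributes))
end
```

A test then overrides only what it cares about, e.g. create_user(date_of_birth: Date.new(2010, 1, 1)).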
Sure, the above is a deliberately simple and contrived example. If we
find ourselves doing more complicated logic inside a factory method,
maybe a well-maintained and feature-rich gem such as Factory Girl can
help us there. Let’s assume that we’ve reached that point and plough
on so we can complete the refactoring.
# spec/factories/user.rb

FactoryGirl.define do
  factory :user do
    name "Alice"
    date_of_birth 30.years.ago
  end
end

# spec/models/user_spec.rb

specify "a person of > 21 years is an adult" do
  user = create(:user, date_of_birth: 21.years.ago)
  expect(user).to be_adult
end

specify "a person of < 21 years is not an adult" do
  user = create(:user, date_of_birth: 21.years.ago + 1.day)
  expect(user).not_to be_adult
end
This is fine. Our tests look pretty much the same as before, but
instead of a factory method we have a Factory Girl factory. We haven’t
solved any immediate problems in this last step, but if our User
model gets more complicated to set up, Factory Girl will be there with
lots more features for handling just about anything we might want to
throw at it.
It seems clear to me now that the problem that Factory Girl solved
wasn’t anything to do with fixtures, since it’s straightforward to
create your own factory methods. It was presumably the problem of
having cumbersome factory methods that you had to write yourself.
However, this is not quite the end of the story for some folks, and
there’s a further refactoring we can seize upon:
# spec/factories/user.rb

FactoryGirl.define do
  factory :user do
    name "Alice"
    date_of_birth 30.years.ago

    trait :adult do
      date_of_birth 21.years.ago
    end

    trait :minor do
      date_of_birth 21.years.ago + 1.day
    end
  end
end

# spec/models/user_spec.rb

specify "a person of > 21 years is an adult" do
  user = create(:user, :adult)
  expect(user).to be_adult
end

specify "a person of < 21 years is not an adult" do
  user = create(:user, :minor)
  expect(user).not_to be_adult
end
Here, we’ve used Factory Girl’s traits API to define what it means to
be both an adult and a minor in the factory itself, so if we ever have
to use that concept again the knowledge for how to do that is
contained in one place. Well done to us!
But hang on. Haven’t we just reintroduced the mystery guest smell that
we were trying so hard to get away from? You might observe that these
tests look fundamentally the same as the ones that we started out
with.
Used in this way, factories are just a different kind of shared
fixture. We have the same drawback of test obscurity, and we’ve taken
the penalty of slower tests, because these objects have to be built
afresh for every single example. What was the point?
Okay, okay. Traits are more of an advanced feature in Factory
Girl. They might be useful, but they don’t solve any problems that we
have at this point. How about we just keep things simple:
# spec/factories/user.rb

FactoryGirl.define do
  factory :user do
    name "Alice"
    date_of_birth 30.years.ago
  end
end

# spec/models/user_spec.rb

it "tests adulthood" do
  user = create(:user)
  expect(user).to be_adult
end
This example is actually worse, and it’s quite a popular
anti-pattern. An obvious problem is that if I needed to change one of
the factory’s default values, tests would break, which should never
happen. The goal of factories is to build an object that passes
validation with the minimum number of required attributes, so you
don’t have to keep specifying every required attribute in every single
test you write. If a test depends on the specific value of any of the
attributes set in the factory, you’re Doing It Wrong ™️.
You’ll also notice that the test provides little value, because it
doesn’t test around the edges (in this case, dates of birth around 21
years ago).
Let’s compare with our earlier example (the one before things started
to go wrong):
# spec/factories/user.rb

FactoryGirl.define do
  factory :user do
    name "Alice"
    date_of_birth 30.years.ago
  end
end

# spec/models/user_spec.rb

specify "a person of > 21 years is an adult" do
  user = create(:user, date_of_birth: 21.years.ago)
  expect(user).to be_adult
end

specify "a person of < 21 years is not an adult" do
  user = create(:user, date_of_birth: 21.years.ago + 1.day)
  expect(user).not_to be_adult
end
Crucially, we don’t use the default date_of_birth value in any of our
tests that exercise it. This means that if I changed the default value
to literally anything else that still resulted in a valid user object,
my tests would still pass. By using specific values for date_of_birth
around the edge of adulthood, I know that I have better tests. And by
providing those values in the test body, I can see the direct
relationship between those values and the behavior exercised.
Like a lot of sharp tools in Ruby, Factory Girl is rich with features
that are very powerful and expressive. But in my opinion, its more
advanced features are prone to overuse. It’s also easy to mistake
Factory Girl for a library for creating shared fixtures; Rails
already comes with one, and it’s better at the job. Neither of these
is a fault of Factory Girl; rather, I believe they are faults in the
way we teach testing.
So don’t use Factory Girl to create shared fixtures. If that’s the
style you like, you may want to consider going back to Rails’
fixtures instead.
01 Aug 2016
Testing JSON structures with arbitrarily deep nesting can be
hard. Fortunately, RSpec comes with some lesser-known composable
matchers that not only make for some very readable expectations but
can be built up quite arbitrarily too, mirroring the structure of your
JSON. They can provide you with a single expectation on your response
body that is diffable and will give you a pretty decent report on what
failed.
While I don’t necessarily recommend you test every aspect of your API
through full-stack request specs, you are probably going to have to
write a few of them, and they can be painful to write. Fortunately
RSpec offers a few ways to make your life easier.
First, though, I’d like to touch on a couple of other things I do when
writing request specs to get the best possible experience when working
with these slow, highly integrated tests.
Order of expectations
Because request specs are expensive, you’ll often want to combine a
few expectations into a single example if they are essentially testing
the same behavior. You’ll commonly see expectations on the response
body, headers and status within a single test. If you do this,
however, it’s important to bear in mind that by default the first
failing expectation will short-circuit the others. So you’ll want to
put the expectations that provide the best feedback on what went
wrong first. I’ve found the expectation on the status to be the least
useful, so I always put it last. I’m usually most interested in the
response body, so I’ll put that first.
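The short-circuiting is just how exceptions work. Here is a plain-Ruby sketch; the check helper is a made-up stand-in for an RSpec expectation, showing why only the first failure gets reported:

```ruby
# A made-up stand-in for an RSpec expectation: raises on failure, which is
# exactly why a failed expectation hides any that come after it.
def check(label, condition)
  raise "#{label} failed" unless condition
end

reported = []
begin
  # Most informative check first: if the body is wrong, we hear about it.
  check("response body", { "error" => "boom" } == { "foo" => "bar" })
  check("status code", 500 == 200) # never reached; the body check raised
rescue RuntimeError => e
  reported << e.message
end

reported # => ["response body failed"]
```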
Using failure aggregation
One way to get around the expectation order problem is to use failure
aggregation, a feature first introduced in RSpec 3.3. Examples that
are configured to aggregate failures will execute all the expectations
and report on all the failures so you aren’t stuck with just the
rather opaque “expected 200, got 500”. You can enable this in a few
ways, including in the example itself:
it "will report on both these expectations should they fail", aggregate_failures: true do
  expect(response.parsed_body).to eq("foo" => "bar")
  expect(response).to have_http_status(:ok)
end
Or in your RSpec configuration. Here’s how to enable it for all your
API specs:
# spec/rails_helper.rb

RSpec.configure do |c|
  c.define_derived_metadata(:file_path => %r{spec/api}) do |meta|
    meta[:aggregate_failures] = true
  end
end
Using response.parsed_body
Since I’ve been testing APIs I’ve always written my own JSON parsing
helper. But in version 5.0.0.beta3, Rails added a method to the
response object to do this for you. You’ll see me using
response.parsed_body throughout the examples below.
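If you’re on an older Rails, a hand-rolled helper of the kind I mean is a one-liner. The helper name here is my own choice, mirroring the Rails 5 method:

```ruby
require "json"

# Hand-rolled equivalent of response.parsed_body for older Rails versions:
# parse the raw response body string into Ruby hashes and arrays.
def parsed_body(raw_body)
  JSON.parse(raw_body)
end
```

In a request spec you would call it as parsed_body(response.body).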
Using RSpec composable matchers to test nested structures
I’ve outlined a few common scenarios below, indicating which matchers
to use when they come up.
Use eq when you want to verify everything
expected = {
  "data" => [
    {
      "type" => "posts",
      "id" => "1",
      "attributes" => {
        "title" => "Post the first"
      },
      "links" => {
        "self" => "http://example.com/posts/1"
      }
    }
  ],
  "links" => {
    "self" => "http://example.com/posts",
    "next" => "http://example.com/posts?page[offset]=2",
    "last" => "http://example.com/posts?page[offset]=10"
  },
  "included" => [
    {
      "type" => "comments",
      "id" => "1",
      "attributes" => {
        "body" => "Comment the first"
      },
      "relationships" => {
        "author" => {
          "data" => { "type" => "people", "id" => "2" }
        }
      },
      "links" => {
        "self" => "http://example.com/comments/1"
      }
    }
  ]
}

expect(response.parsed_body).to eq(expected)
Not a composable matcher, but shown here to contrast with the examples
that follow. I typically don’t want to use this - it can make for some
painfully long-winded tests. If I wanted to check every aspect of the
serialization, I’d probably want to write a unit test on the
serializer anyway. Most of the time I just want to check that a few
things are there in the response body.
Use match when you want to be more flexible
expected = {
  "data" => kind_of(Array),
  "links" => kind_of(Hash),
  "included" => anything
}

expect(response.parsed_body).to match(expected)
match is a bit fuzzier than eq, but not as fuzzy as include
(below). match verifies that the expected values are not only correct
but also sufficient: any superfluous attributes will fail the above
example.
Note that match allows us to start composing expectations out of
other matchers such as kind_of and anything (see below), something
we couldn’t do with eq.
Use include/a_hash_including when you want to verify certain key/value pairs, but not all
expected = {
  "data" => [
    a_hash_including(
      "attributes" => a_hash_including(
        "title" => "Post the first"
      )
    )
  ]
}

expect(response.parsed_body).to include(expected)
include is similar to match but doesn’t care about superfluous
attributes. As we’ll see, it’s incredibly flexible and is my go-to
matcher for testing JSON APIs.
a_hash_including is just an alias for include, added for
readability. It will probably make most sense to use include at the
top level, and a_hash_including for things inside it, as above.
Use include/a_hash_including when you want to verify certain keys are present
expect(response.parsed_body).to include("links", "data", "included")
The include matcher will happily take a list of keys instead of
key/value pairs.
Use a hash literal when you want to verify everything at that level
expected = {
  "data" => [
    {
      "type" => "posts",
      "id" => "1",
      "attributes" => {
        "title" => "Post the first"
      },
      "links" => {
        "self" => "http://example.com/posts/1"
      }
    }
  ]
}

expect(response.parsed_body).to include(expected)
Here we only care about the root node "data", since we are using the
include matcher, but we want to verify everything explicitly under it.
Use a_collection_containing_exactly when you have an array, but can’t determine the order of elements
expected = {
  "data" => a_collection_containing_exactly(
    a_hash_including("id" => "1"),
    a_hash_including("id" => "2")
  )
}

expect(response.parsed_body).to include(expected)
Use a_collection_including when you have an array, but don’t care about all the elements
expected = {
  "data" => a_collection_including(
    a_hash_including("id" => "1"),
    a_hash_including("id" => "2")
  )
}

expect(response.parsed_body).to include(expected)
Guess what? a_collection_including is just another alias for the
incredibly flexible include, but can be used to indicate an array
for expressiveness.
Use an array literal when you care about the order of elements
expected = {
  "data" => [
    a_hash_including("id" => "1"),
    a_hash_including("id" => "2")
  ]
}

expect(response.parsed_body).to include(expected)
Use all when you want to verify something about every element

expected = {
  "data" => all(a_hash_including("type" => "posts"))
}

expect(response.parsed_body).to include(expected)
Here we don’t have to say how many elements "data" contains, but we
do want to make sure they all have some things in common.
Use anything when you don’t care about some of the values, but do care about the keys
expected = {
  "data" => [
    {
      "type" => "posts",
      "id" => "1",
      "attributes" => {
        "title" => "Post the first"
      },
      "links" => {
        "self" => "http://example.com/posts/1"
      }
    }
  ],
  "links" => anything,
  "included" => anything
}

expect(response.parsed_body).to match(expected)
Use a_string_matching when you want to verify part of a string value, but don’t care about the rest
expected = {
  "links" => a_hash_including(
    "self" => a_string_matching(%r{/posts})
  )
}

expect(response.parsed_body).to include(expected)
Yep, another alias for include.
Use kind_of if you care about the type, but not the content
expected = {
  "data" => [
    a_hash_including(
      "id" => kind_of(String)
    )
  ]
}

expect(response.parsed_body).to include(expected)
That’s about it! Composable matchers are one of my favorite things
about RSpec. I hope you will love them too!