19 Feb 2018
When I started my first job I was programming in Ruby sans Rails,
and knew I had to get up to speed quickly. I purchased Russ Olsen’s
Eloquent Ruby (2011) and read it cover to cover, though I struggled
a bit towards the end. I tried some things out, and then I read it
again. Then I played around some more. I learned the joys of
metaprogramming and then burnt my fingers. I went back and read it a
third time, now with the understanding that some of the things in the
back were dangerous.
Up until Eloquent Ruby, the resources I had used to learn all had
the same agenda: how to tell the computer to do stuff. In other words,
they would describe the features of things without spending too much
time on the attitudes toward them. And it is much harder to do the
latter.
The concept of Eloquent Ruby came as a revelation to me at the
time - the idea that there were not just rules that make up a
language, but idioms too, attitudes that might even change over
time. I loved the idea that I could learn to speak Ruby not just
adequately but like a native.
By this time I felt bold enough to call myself a Rubyist, and I owed
much of the enthusiasm I felt toward the language, and the success I
had early on in my career, to this book. I bought another of Olsen’s
books on design patterns and read it cover to cover, again, multiple
times. I was ambitious, and although I was still not sure what direction I would take, I knew that “good” programmers had experience working in different programming paradigms. So I learned with great interest that he was working on, or had at least declared an intention to write, a book about Clojure.
I had no idea at this point what Clojure or even Lisp was, but the
author had gained my trust enough for me to want to read about his new
book, whatever it was about.
And of course I had no clue at the time that this book would be in the
pipeline for years. I understand; these things take time. But, being
impatient, when I felt confident enough to start learning another
language, I decided to go ahead with Clojure anyway.
I have now played with it for about three years, have pored over some books that were good at what they set out to do (exhaustively surveying the features of the language), and have built some things of a certain size. Alas, not getting to code and think in my second language every day, I have never felt that I really “got” Clojure, that I knew it the way I knew Ruby. I could not properly call myself a Clojurist (do they even call themselves that? See, I don’t know).
So I was pretty psyched when I learned that Olsen’s book was nearing
completion, and that its title was, perfectly, Getting Clojure. When
it came out in beta form, I did something I almost never do - I bought
the eBook (I typically like to do my reading away from the computer).
And it has not disappointed. I am so happy that all the elements of Olsen’s style are present and on top form: his gentle humor, his compassion for the learner. Crucially, he knows what to leave out, what to explain more deeply, and what to illustrate. The examples are contrived, yet far more compelling than most (it’s hard to formulate examples that are sufficiently complex yet small enough to sustain interest). There are lots of details on
why you might want to use a particular thing, or more importantly,
why you might not want to - in Olsen’s words “staying out of
trouble” - details that are so vital to writing good, idiomatic code
in whatever language. And there are examples of real Clojure in use,
often pulled from the Clojure source itself, that not only illustrate
but give me the confidence to dive in myself, something that wouldn’t
have occurred to me alone.
It seems ironic that Getting Clojure isn’t the book I wanted way
back when I first heard about it, but it is the book that I need now.
I enjoyed going over the earlier chapters, cementing knowledge along
the way, forming new narratives that simplify the surface area of the
language while giving me the things I need to know to find out
more. And it gave me the confidence to dive way deeper than I thought
would be comfortable. For example, Olsen encourages you to write your
own Lisp implementation as an aside while he looks at the core of
Clojure’s. I went ahead and did this and am so glad that I did - I
feel like I have gained a much deeper understanding of computer
programs in general, something that may have been lacking in my not
coming from a Computer Science background.
I have no doubt that this book will appeal to many others from
different backgrounds, different places in their development. But I
can confidently say that if, like myself, you are self taught, or
don’t come from a “traditional” background, perhaps Ruby or a similar
dynamic language is your bread and butter but you are trying to expand
your horizons, if you need materials that focus on the learner and
building up understanding in a more structured way, Getting Clojure
gets my highest possible recommendation.
01 Jul 2017
Without much warning I recently decided to learn Colemak.
What?
Colemak is an alternative layout for keyboards. It aims to improve on
both the traditional QWERTY and the only slightly better-known Dvorak
by placing the commonest keys on the home row, along with certain
other considerations, to improve ergonomics and comfort while typing.
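The home-row claim is easy to check for yourself. Here is a rough sketch of my own (a deliberately simplified, letters-only comparison, not taken from any Colemak material) that counts how much of a sample text falls on each layout's home row:

```ruby
# Home-row letters for each layout (Colemak's home row is a r s t d h n e i o).
QWERTY_HOME  = %w[a s d f g h j k l].freeze
COLEMAK_HOME = %w[a r s t d h n e i o].freeze

# Fraction of the letters in `text` that are typed on the given home row.
def home_row_share(text, home_row)
  letters = text.downcase.scan(/[a-z]/)
  letters.count { |c| home_row.include?(c) }.to_f / letters.size
end

sample = "the quick brown fox jumps over the lazy dog"
home_row_share(sample, QWERTY_HOME)  # noticeably lower...
home_row_share(sample, COLEMAK_HOME) # ...than this
```

On most English text Colemak's share comes out well above QWERTY's, which is the whole point of the layout.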
Why?
This came as a bit of a surprise to me as I have always felt somewhat opposed to learning a new keyboard layout. This may have stemmed from my own frustration in the past with doubling on Clarinet and Saxophone. While the two are keyed similarly, the keys correspond to different “notes” as they are written down. Though it is very common for people to double, I really don’t enjoy the feeling of disorientation at all.
The drawbacks I identified were:
- the initial effort of learning
- having to “double” when confronted with a QWERTY keyboard
- really, having to collaborate with anyone on anything ever again
The supposed benefits - faster typing and prevention of RSI - I never saw as a net gain. Which is not to say that I don’t care about those things (I take injury prevention very seriously, having blogged about this before). It’s just such an inexact science that I would welcome both of those benefits if they came, but couldn’t reasonably expect them as guaranteed.
But I think there was one other factor that completely swung this for me, one that has probably not been present at any other time I’ve considered switching: I am incredibly bored. So bored
that I don’t want to learn anything exciting like a new programming
language, or even a new natural language, or how to ride a unicycle or
spin poi. I’ve been craving the dull repetition that I’ve felt as a
musician, a quiet confidence that if I do this dance with my hands
slowly and correctly enough times, I’ll program myself to perform a
new trick. I’ve been actually longing for the brain ache you get when
you’re trying to do something different and your muscle memory won’t
quit.
How?
There are many typing tutors online, but I found The Typing Cat particularly good for getting started. Not wanting to take the plunge straight away, I used it to emulate the new layout in the browser while I went through the exercises, preserving QWERTY for everything else. For the first couple of weeks
I’d do QWERTY during the day and practice 1-2 hours of Colemak in the
evening, until I got up to an acceptable typing speed (for me, 30 wpm,
while still very slow, would not interfere too much).
Once I was ready to take the leap, I was confronted by a great number of ways to do this, ranging from reconfiguring the keyboard at the system level (useless, since X ignores it), through configuring X from the command line (annoying, because those changes aren’t preserved when I make any customizations in the Gnome Tweak Tool), to discovering I could do most of this by adjusting settings in the UI. I’ll describe
only what I eventually settled on in detail, in case you are trying to
do this yourself and are running a similar setup to me (Debian
9/Stretch, Gnome 3, US keyboard).
To set up Colemak, simply open Settings, go to Region & Language, hit
the + under Input Sources, click English then English (Colemak)
and you’re done. You should now see a new thing on the top right that
you can click on and select the input source you wish to use. You can
also rotate input sources by hitting Super (aka Windows key) and
Space.
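If you prefer the command line, the same thing can be done with gsettings (a sketch; I'm assuming a stock GNOME 3 setup, and the exact source names may vary on your system):

```shell
# Show the currently configured input sources
gsettings get org.gnome.desktop.input-sources sources

# Make QWERTY and Colemak available (Super+Space rotates between them)
gsettings set org.gnome.desktop.input-sources sources \
  "[('xkb', 'us'), ('xkb', 'us+colemak')]"
```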
Unfortunately I wasn’t done there because I had a few issues with some
of the design choices in the only variant of Colemak offered. Namely,
I didn’t want Colemak to reassign my Caps Lock key to Backspace (as I
was already reassigning it to Escape), and I wanted to use my right
Alt key as Meta, something I use all the time in Emacs and pretty much
everything that supports the basic Emacs keybindings (see: everything
worth using). While there may have been a way to customize this from
the command line, I never found out what that was, and besides I
wanted to find a solution that jelled as much as possible with the
general solution I’ve outlined above. It was with this spirit that I
decided to add my own, customized keyboard layout. If you’re having
similar grumbles, read on.
First, a word of caution. You’re going to have to edit some
configuration files that live in /usr/share. If that makes you
queasy, I understand. I don’t especially love this solution, but I
think it is the best of all solutions known to me. Either way, as a
precautionary measure, I’d go ahead and backup the files we’re going
to touch:
sudo cp /usr/share/X11/xkb/symbols/us{,.backup}
sudo cp /usr/share/X11/xkb/rules/evdev.xml{,.backup}
Next we’re going to add a keyboard layout to the
/usr/share/X11/xkb/symbols/us file. It’ll be an edited version of
the X.Org configuration which you can
find here. It can
probably go anywhere, but I inserted it immediately after the existing
entry for Colemak:
// /usr/share/X11/xkb/symbols/us
partial alphanumeric_keys
xkb_symbols "colemak-custom" {
    include "us"

    name[Group1]= "English (Colemak Custom)";

    key <TLDE> { [ grave, asciitilde ] };
    key <AE01> { [ 1, exclam ] };
    key <AE02> { [ 2, at ] };
    key <AE03> { [ 3, numbersign ] };
    key <AE04> { [ 4, dollar ] };
    key <AE05> { [ 5, percent ] };
    key <AE06> { [ 6, asciicircum ] };
    key <AE07> { [ 7, ampersand ] };
    key <AE08> { [ 8, asterisk ] };
    key <AE09> { [ 9, parenleft ] };
    key <AE10> { [ 0, parenright ] };
    key <AE11> { [ minus, underscore ] };
    key <AE12> { [ equal, plus ] };

    key <AD01> { [ q, Q ] };
    key <AD02> { [ w, W ] };
    key <AD03> { [ f, F ] };
    key <AD04> { [ p, P ] };
    key <AD05> { [ g, G ] };
    key <AD06> { [ j, J ] };
    key <AD07> { [ l, L ] };
    key <AD08> { [ u, U ] };
    key <AD09> { [ y, Y ] };
    key <AD10> { [ semicolon, colon ] };
    key <AD11> { [ bracketleft, braceleft ] };
    key <AD12> { [ bracketright, braceright ] };
    key <BKSL> { [ backslash, bar ] };

    key <AC01> { [ a, A ] };
    key <AC02> { [ r, R ] };
    key <AC03> { [ s, S ] };
    key <AC04> { [ t, T ] };
    key <AC05> { [ d, D ] };
    key <AC06> { [ h, H ] };
    key <AC07> { [ n, N ] };
    key <AC08> { [ e, E ] };
    key <AC09> { [ i, I ] };
    key <AC10> { [ o, O ] };
    key <AC11> { [ apostrophe, quotedbl ] };

    key <AB01> { [ z, Z ] };
    key <AB02> { [ x, X ] };
    key <AB03> { [ c, C ] };
    key <AB04> { [ v, V ] };
    key <AB05> { [ b, B ] };
    key <AB06> { [ k, K ] };
    key <AB07> { [ m, M ] };
    key <AB08> { [ comma, less ] };
    key <AB09> { [ period, greater ] };
    key <AB10> { [ slash, question ] };
    key <LSGT> { [ minus, underscore ] };
    key <SPCE> { [ space, space ] };
};
Next you need to register it as a variant of the US keyboard layout:
<!-- /usr/share/X11/xkb/rules/evdev.xml -->
<xkbConfigRegistry version="1.1">
  <!-- ... -->
  <layoutList>
    <layout>
      <!-- ... -->
      <configItem>
        <name>us</name>
        <!-- ... -->
      </configItem>
      <variantList>
        <!-- Insert this variant: -->
        <variant>
          <configItem>
            <name>colemak-custom</name>
            <description>English (Colemak Custom)</description>
          </configItem>
        </variant>
        <!-- ... -->
      </variantList>
    </layout>
  </layoutList>
</xkbConfigRegistry>
Finally, you’ll need to bust the xkb cache. I read about how to do
this
here,
but it didn’t seem to work for me (most likely differences between
Ubuntu and Debian, or different versions). So to prevent giving you
the same disappointment, I’m going to tell you the best way to get
this done that is sure to work: restart your damn computer. If you can
figure out a better way, that’s great.
Having done all the above, you should now be able to select your English (Colemak Custom) layout in the same way, by going through the settings in the UI.
Since I’ve made the switch, I’ve seen my speed steadily increasing up
to 50-60 wpm. That’s still kind of slow for me, but I have every
confidence that it will continue to increase. I think doing drills has
helped with that. Since I have no need for emulation anymore, I’ve
found the CLI utility gtypist to be particularly good. I try to do
the “Lesson C16/Frequent Words” exercises for Colemak every day.
20 Feb 2017
As someone who learned both to program and to test for the first time
with Rails, I was quickly exposed to a lot of opinions about testing
at once, with a lot of hand-waving. One of these was, as I remember
it, that Rails tests with fixtures by default, that fixtures are
problematic, that Factory Girl is a solution to those problems, so we
just use Factory Girl. I probably internalized this at the time as
“use Factory Girl to build objects in tests” without really
questioning why.
Some years later now, I sincerely regret not learning to use
fixtures first, to experience those pains for myself (or not), to find
out to what problem exactly Factory Girl was a solution. For, I’ve
come to discover, Factory Girl doesn’t prevent you from having some of
the same issues that you’d find with fixtures.
To understand this a bit better, let’s do a simple refactoring from
fixtures to factories to demonstrate what problems we are solving
along the way.
Consider the following:
# app/models/user.rb
class User < ApplicationRecord
  validates :name, presence: true
  validates :date_of_birth, presence: true

  def adult?
    date_of_birth + 21.years <= Date.today
  end
end
# spec/fixtures/users.yml
Alice:
  name: "Alice"
  date_of_birth: <%= 21.years.ago %>

Bob:
  name: "Bob"
  date_of_birth: <%= 21.years.ago + 1.day %>
# spec/models/user_spec.rb
specify "a person of at least 21 years is an adult" do
  user = users(:Alice)
  expect(user).to be_adult
end

specify "a person of less than 21 years is not an adult" do
  user = users(:Bob)
  expect(user).not_to be_adult
end
Here we have two fixtures that contrast two different kinds of
user. If done well, your fixtures will be a set of objects that live
in the database that together weave a kind of narrative that is
revealed in tiny installments through your unit tests. Elsewhere in
our test suite, we’d continue with this knowledge that Alice is an
adult and Bob is a minor.
So what’s the problem? Well, one is what Meszaros calls the “mystery guest”, a kind of “obscure test” smell. What that means is that the main players in our tests - Alice and Bob - are defined far off in the spec/fixtures/users.yml file. Just looking at the test body, it’s
hard to know exactly what it was about Alice and Bob that made one an
adult, the other not. (Sure, we should know the rules about adulthood
in whatever country we’re in, but it’s easy to see how a slightly more
complicated example might not be so clear).
Let’s try to address that concern head on by removing the fixtures:
# spec/models/user_spec.rb
specify "a person of at least 21 years is an adult" do
  user = User.create!(name: "Alice", date_of_birth: 21.years.ago)
  expect(user).to be_adult
end

specify "a person of less than 21 years is not an adult" do
  user = User.create!(name: "Bob", date_of_birth: 21.years.ago + 1.day)
  expect(user).not_to be_adult
end
We’ve solved the mystery guest problem! Now we can see at a glance
what the relationship is between the attributes of each user and the
behavior exhibited by them.
Unfortunately, we have a new problem. Because a user requires a
:name attribute, we have to specify a name in order to build a valid
user object in each test (we might in certain instances be able to get
away with using invalid objects, but it is probably not a good
idea). Here, the fact that we’ve had to give our users names has given
us another obscure test smell - we have introduced some noise in that
it’s not clear at a glance which attributes were relevant to the
behavior that’s getting exercised.
Another problem might emerge if we added a new attribute to User that was validated against - every test that builds a user could fail for reasons wholly unrelated to the behavior it is trying to exercise.
Let’s try this again, extracting out a factory method:
# spec/models/user_spec.rb
specify "a person of at least 21 years is an adult" do
  user = create_user(date_of_birth: 21.years.ago)
  expect(user).to be_adult
end

specify "a person of less than 21 years is not an adult" do
  user = create_user(date_of_birth: 21.years.ago + 1.day)
  expect(user).not_to be_adult
end

def create_user(attributes = {})
  User.create!({ name: "Alice", date_of_birth: 30.years.ago }.merge(attributes))
end
Problem solved! We have some sensible defaults in the factory method,
meaning that we don’t have to specify attributes that are not relevant
in every test, and we’ve overridden the one that we’re testing -
date_of_birth - in those tests on adulthood. If new validations are
added, we have one place to update to make our tests pass again.
I’m going to pause here for some reflection before we complete our
refactoring. There is another thing that I regret about the way I
learned to test. And it is simply not using my own factory methods as
I have above, before finding out what problem Factory Girl was trying
to address with doing that. Nothing about the code above strikes me
yet as needing a custom DSL, or a gem to extract. Ruby already does a
great job of making this stuff easy.
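To make that point concrete outside of Rails, here is the same factory-method idea in plain Ruby, using a Struct as a stand-in for the ActiveRecord model (my own sketch, not from the original examples - no database or gems required):

```ruby
require "date"

# A stand-in for the User model; no ActiveRecord needed.
User = Struct.new(:name, :date_of_birth, keyword_init: true)

# Sensible defaults, overridable per test - just a Hash#merge away.
def create_user(attributes = {})
  defaults = { name: "Alice", date_of_birth: Date.today << (12 * 30) } # ~30 years ago
  User.new(**defaults.merge(attributes))
end

create_user.name              # => "Alice"
create_user(name: "Bob").name # => "Bob"
```

Everything Factory Girl gives us at this stage is a dozen lines of vanilla Ruby.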
Sure, the above is a deliberately simple and contrived example. If we
find ourselves doing more complicated logic inside a factory method,
maybe a well-maintained and feature-rich gem such as Factory Girl can
help us there. Let’s assume that we’ve reached that point and plough
on so we can complete the refactoring.
# spec/factories/user.rb
FactoryGirl.define do
  factory :user do
    name "Alice"
    date_of_birth 30.years.ago
  end
end
# spec/models/user_spec.rb
specify "a person of at least 21 years is an adult" do
  user = create(:user, date_of_birth: 21.years.ago)
  expect(user).to be_adult
end

specify "a person of less than 21 years is not an adult" do
  user = create(:user, date_of_birth: 21.years.ago + 1.day)
  expect(user).not_to be_adult
end
This is fine. Our tests look pretty much the same as before, but
instead of a factory method we have a Factory Girl factory. We haven’t
solved any immediate problems in this last step, but if our User
model gets more complicated to set up, Factory Girl will be there with
lots more features for handling just about anything we might want to
throw at it.
It seems clear to me now that the problem that Factory Girl solved
wasn’t anything to do with fixtures, since it’s straightforward to
create your own factory methods. It was presumably the problem of
having cumbersome factory methods that you had to write yourself.
However, this is not quite the end of the story for some folks; there’s a further refactoring we can seize upon:
# spec/factories/user.rb
FactoryGirl.define do
  factory :user do
    name "Alice"
    date_of_birth 30.years.ago

    trait :adult do
      date_of_birth 21.years.ago
    end

    trait :minor do
      date_of_birth 21.years.ago + 1.day
    end
  end
end
# spec/models/user_spec.rb
specify "a person of at least 21 years is an adult" do
  user = create(:user, :adult)
  expect(user).to be_adult
end

specify "a person of less than 21 years is not an adult" do
  user = create(:user, :minor)
  expect(user).not_to be_adult
end
Here, we’ve used Factory Girl’s traits API to define what it means to
be both an adult and a minor in the factory itself, so if we ever have
to use that concept again the knowledge for how to do that is
contained in one place. Well done to us!
But hang on. Haven’t we just reintroduced the mystery guest smell that
we were trying so hard to get away from? You might observe that these
tests look fundamentally the same as the ones that we started out
with.
Used in this way, factories are just a different kind of shared
fixture. We have the same drawback of having test obscurity, and we’ve
taken the penalty of slower tests because these objects have to be
built afresh for every single example. What was the point?
Okay, okay. Traits are more of an advanced feature in Factory
Girl. They might be useful, but they don’t solve any problems that we
have at this point. How about we just keep things simple:
# spec/factories/user.rb
FactoryGirl.define do
  factory :user do
    name "Alice"
    date_of_birth 30.years.ago
  end
end
# spec/models/user_spec.rb
it "tests adulthood" do
  user = create(:user)
  expect(user).to be_adult
end
This example is actually worse, and is quite a popular
anti-pattern. An obvious problem is that if I needed to change one of
the factory default values, tests are going to break, which should
never happen. The goal of factories is to build an object that passes
validation with the minimum number of required attributes, so you
don’t have to keep specifying every required attribute in every single
test you write. But if you’re depending on the specific value of any
of those attributes set in the factory in your test, you’re Doing It
Wrong ™️.
You’ll also notice that the test provides little value because it doesn’t test around the edges (in this case, dates of birth around 21 years ago).
Let’s compare with our earlier example (the one before things started
to go wrong):
# spec/factories/user.rb
FactoryGirl.define do
  factory :user do
    name "Alice"
    date_of_birth 30.years.ago
  end
end
# spec/models/user_spec.rb
specify "a person of at least 21 years is an adult" do
  user = create(:user, date_of_birth: 21.years.ago)
  expect(user).to be_adult
end

specify "a person of less than 21 years is not an adult" do
  user = create(:user, date_of_birth: 21.years.ago + 1.day)
  expect(user).not_to be_adult
end
Crucially we don’t use the default date_of_birth value in any of our
tests that exercise it. This means that if I changed the default value
to literally anything else that still resulted in a valid user object,
my tests would still pass. By using specific values for
date_of_birth around the edge of adulthood, I know that I have
better tests. And by providing those values in the test body, I can
see the direct relationship between those values and the behavior
exercised.
Like a lot of sharp tools in Ruby, Factory Girl is rich with features
that are very powerful and expressive. But in my opinion, its more
advanced features are prone to overuse. It’s also easy to confuse
Factory Girl for a library for creating shared fixtures - Rails
already comes with one, and it’s better at doing that. Neither of these is a fault of Factory Girl; rather, I believe they are faults in the way we teach testing.
So don’t use Factory Girl to create shared fixtures - if that’s the
style you like then you may want to consider going back to Rails’
fixtures instead.
01 Aug 2016
Testing JSON structures with arbitrarily deep nesting can be hard. Fortunately, RSpec comes with some lesser-known composable matchers that not only make for very readable expectations but can also be nested to mirror the structure of your JSON. They can provide you with a single expectation on your response body that is diffable and will give you a pretty decent report on what failed.
While I don’t necessarily recommend you test every aspect of your API
through full-stack request specs, you are probably going to have to
write a few of them, and they can be painful to write. Fortunately
RSpec offers a few ways to make your life easier.
First, though, I’d like to touch on a couple of other things I do when
writing request specs to get the best possible experience when working
with these slow, highly integrated tests.
Order of expectations
Because request specs are expensive, you’ll often want to combine a
few expectations into a single example if they are essentially testing
the same behavior. You’ll commonly see expectations on the response
body, headers and status within a single test. If you do this,
however, it’s important to bear in mind that the first expectation to
fail will short circuit the others by default. So you’ll want to put
the expectations that provide the best feedback on what went wrong
first. I’ve found the expectation on the status to be the least useful, so I always put it last. I’m usually most interested in the response body, so I’ll put that first.
Using failure aggregation
One way to get around the expectation order problem is to use failure
aggregation, a feature first introduced in RSpec 3.3. Examples that
are configured to aggregate failures will execute all the expectations
and report on all the failures so you aren’t stuck with just the
rather opaque “expected 200, got 500”. You can enable this in a few
ways, including in the example itself:
it "will report on both these expectations should they fail", aggregate_failures: true do
  expect(response.parsed_body).to eq("foo" => "bar")
  expect(response).to have_http_status(:ok)
end
Or in your RSpec configuration. Here’s how to enable it for all your
API specs:
# spec/rails_helper.rb
RSpec.configure do |c|
  c.define_derived_metadata(:file_path => %r{spec/api}) do |meta|
    meta[:aggregate_failures] = true
  end
end
Using response.parsed_body
Since I’ve been testing APIs I’ve always written my own JSON parsing
helper. But in version 5.0.0.beta3 Rails added a method to the
response object to do this for you. You’ll see me using
response.parsed_body throughout the examples below.
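If you're stuck on an older Rails, a hand-rolled equivalent is tiny (a sketch of the kind of helper I used to write; note that Rails' parsed_body also takes the content type into account, which this doesn't):

```ruby
require "json"

# Minimal stand-in for response.parsed_body on older Rails versions.
def parsed_body(response_body)
  JSON.parse(response_body)
end

parsed_body('{"data":[{"id":"1"}]}') # => {"data"=>[{"id"=>"1"}]}
```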
Using RSpec composable matchers to test nested structures
I’ve outlined a few common scenarios below, indicating which matchers
to use when they come up.
Use eq when you want to verify everything
expected = {
  "data" => [
    {
      "type" => "posts",
      "id" => "1",
      "attributes" => {
        "title" => "Post the first"
      },
      "links" => {
        "self" => "http://example.com/posts/1"
      }
    }
  ],
  "links" => {
    "self" => "http://example.com/posts",
    "next" => "http://example.com/posts?page[offset]=2",
    "last" => "http://example.com/posts?page[offset]=10"
  },
  "included" => [
    {
      "type" => "comments",
      "id" => "1",
      "attributes" => {
        "body" => "Comment the first"
      },
      "relationships" => {
        "author" => {
          "data" => { "type" => "people", "id" => "2" }
        }
      },
      "links" => {
        "self" => "http://example.com/comments/1"
      }
    }
  ]
}

expect(response.parsed_body).to eq(expected)
Not a composable matcher, but shown here to contrast with the examples
that follow. I typically don’t want to use this - it can make for some
painfully long-winded tests. If I wanted to check every aspect of the
serialization, I’d probably want to write a unit test on the
serializer anyway. Most of the time I just want to check that a few
things are there in the response body.
Use match when you want to be more flexible
expected = {
  "data" => kind_of(Array),
  "links" => kind_of(Hash),
  "included" => anything
}

expect(response.parsed_body).to match(expected)
match is a bit fuzzier than eq, but not as fuzzy as include
(below). match verifies that the expected values are not only
correct but also that they are sufficient - any superfluous attributes
will fail the above example.
Note that match allows us to start composing expectations out of
other matchers such as kind_of and anything (see below), something
we couldn’t do with eq.
Use include/a_hash_including when you want to verify certain key/value pairs, but not all
expected = {
  "data" => [
    a_hash_including(
      "attributes" => a_hash_including(
        "title" => "Post the first"
      )
    )
  ]
}

expect(response.parsed_body).to include(expected)
include is similar to match but doesn’t care about superfluous
attributes. As we’ll see, it’s incredibly flexible and is my go-to
matcher for testing JSON APIs.
a_hash_including is just an alias for include added for
readability. It will probably make most sense to use include at the
top level, and a_hash_including for things inside it, as above.
Use include/a_hash_including when you want to verify certain keys are present
expect(response.parsed_body).to include("links", "data", "included")
The include matcher will happily take a list of keys instead of
key/value pairs.
Use a hash literal when you want to verify everything at that level
expected = {
  "data" => [
    {
      "type" => "posts",
      "id" => "1",
      "attributes" => {
        "title" => "Post the first"
      },
      "links" => {
        "self" => "http://example.com/posts/1"
      }
    }
  ]
}

expect(response.parsed_body).to include(expected)
Here we only care about the root node "data" since we are using the
include matcher, but want to verify everything explicitly under it.
Use a_collection_containing_exactly when you have an array, but can’t determine the order of elements
expected = {
  "data" => a_collection_containing_exactly(
    a_hash_including("id" => "1"),
    a_hash_including("id" => "2")
  )
}

expect(response.parsed_body).to include(expected)
Use a_collection_including when you have an array, but don’t care about all the elements
expected = {
  "data" => a_collection_including(
    a_hash_including("id" => "1"),
    a_hash_including("id" => "2")
  )
}

expect(response.parsed_body).to include(expected)
Guess what? a_collection_including is just another alias for the
incredibly flexible include, but can be used to indicate an array
for expressiveness.
Use an array literal when you care about the order of elements
expected = {
  "data" => [
    a_hash_including("id" => "1"),
    a_hash_including("id" => "2")
  ]
}

expect(response.parsed_body).to include(expected)
Use all when you want to verify something about every element
expected = {
  "data" => all(a_hash_including("type" => "posts"))
}

expect(response.parsed_body).to include(expected)
Here we don’t have to say how many elements "data" contains, but we do want to make sure they all have some things in common.
Use anything when you don’t care about some of the values, but do care about the keys
expected = {
  "data" => [
    {
      "type" => "posts",
      "id" => "1",
      "attributes" => {
        "title" => "Post the first"
      },
      "links" => {
        "self" => "http://example.com/posts/1"
      }
    }
  ],
  "links" => anything,
  "included" => anything
}

expect(response.parsed_body).to match(expected)
Use a_string_matching when you want to verify part of a string value, but don’t care about the rest
expected = {
  "links" => a_hash_including(
    "self" => a_string_matching(%r{/posts})
  )
}

expect(response.parsed_body).to include(expected)
Yep, a_string_matching is another alias - this time for the match matcher.
Use kind_of if you care about the type, but not the content
expected = {
  "data" => [
    a_hash_including(
      "id" => kind_of(String)
    )
  ]
}

expect(response.parsed_body).to include(expected)
That’s about it! Composable matchers are one of my favorite things
about RSpec. I hope you will love them too!
20 Jun 2016
For the uninitiated, The Moomins is a series of books and a comic strip by the wonderful Tove Jansson. The Moomins live in the fictional and idyllic Moominvalley, set somewhere in the forests of Finland. It is a complex landscape rich in imagery, symbolism, and archetypes, and their world has been reimagined many times since Jansson first wrote about it. One such reimagining was Moomin, a show from the 90s that fused the best of this Finnish folklore with zany Japanese animation. And it is my favorite from childhood.
So enamored was I with this show that I continue to watch it
unironically to this day, and not just for the feeling of
nostalgia. Though it is full of action and occasionally disturbing (the Groke!), I nonetheless find it really calming to lose myself in the otherwise zen-like serenity of Moominvalley for 20 minutes or so.
Adventure is of course central to every episode, and sure enough the
Moomins meet lots of interesting and occasionally magical creatures, and
one of these is The Hobgoblin.
I didn’t remember much about the Hobgoblin from childhood, but I was
struck watching it more recently with the following:
- He is a powerful magician.
- He collects Rubies.
- He is in search of the King’s Ruby.
- He rides a puma through the sky.
I am so surprised the Ruby community has not picked up on this yet!