Archive for the ‘Fog Creek’ Category

Random Meet-ups to Maintain Company Culture with Remote Workers

January 27th, 2015 by Gareth Wilson

Company culture is really important. It’s something we know you have to actively work on to build and maintain. This is especially true when you have remote employees. More than half of Fog Creek’s staff now work remotely. This change has come about pretty quickly: the move to allow remote working happened less than two years ago. Since then we’ve taken on many new hires, and existing staff have moved to working remotely too. It has forced us to re-think a few things. The old bag of tricks, like private offices and catered lunches, doesn’t help remote employees. So we set about coming up with new ways to make sure everyone still feels involved and part of a great company.

Meet and Greet Random People in your Organization

One initiative we’re trying at the moment is CoffeeTime. CoffeeTime is an app, created in less than a day by Daniel, one of our developers. It works by randomly pairing people up to meet and greet each other, often someone you wouldn’t normally interact with. It doesn’t matter what level in the org chart, or what role each person plays. Anyone can be matched up for a 30-minute chat (though people can choose to opt out, of course). It aims to encourage the cross-team communication and serendipitous learning which otherwise happen naturally when co-workers share an office.
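The core pairing step is simple enough to sketch in a couple of lines of shell (a hypothetical sketch, not CoffeeTime’s actual Ruby implementation; the names are invented examples):

```shell
# Shuffle the participant list, then pair adjacent names.
# A real version would also handle an odd person out and opt-outs.
printf '%s\n' alice bob carol dave | shuf | paste - -
```

Each run produces a different set of two tab-separated pairs.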

At its heart is the idea that the most important things to learn are often those you didn’t even know you needed to learn. Making more connections with the people you work with increases the likelihood that you’ll have access to someone who can help you further down the line. Maybe that person is having a similar problem, or has experienced it before and can point you in the right direction. Or maybe you just end up making a new friend!

Either way, once a week CoffeeTime runs and you’re matched up with someone else in the organization. Each of you receives an email telling you who that person is. You then take it from there and arrange to meet in person or over a Hangout, to eat lunch or just chat.


So How is it Working Out?

Well, it will take time to tell whether it works and whether it’s something we’ll stick with. But the initial feedback has been positive. It has resulted in a bunch of meet-ups between people who hadn’t previously had much opportunity to speak to each other.


Try It Out – CoffeeTime is Open Source

Interested in trying it out yourself? Sure thing, we’ve open-sourced it. It’s written in Ruby, runs on Heroku, and uses Redis, with Mandrill to handle the emails. Daniel says the implementation is not the prettiest, but as a quick way to test out the idea, it works well. If you give it a go, let us know how you get on at @FogCreek.

Introduction to Docker – Tech Talk and Demo

January 23rd, 2015 by Blake Caldwell


In this Tech Talk, Blake, a Software Developer here at Fog Creek, gives an introduction to Docker. He covers what Docker is, why you might want to use it, how it works, as well as explaining some key terminology. He finishes up with a few demos demonstrating the functionality of Docker.


About Fog Creek Tech Talks

At Fog Creek, we have weekly Tech Talks from our own staff and invited guests. These are short, informal presentations on something of interest to those involved in software development. We try to share these with you whenever we can.


Content and Timings

  • What is Docker? (0:00)
  • Basic Terms (3:00)
  • Why Use Docker? (6:45)
  • How Does Docker Work? (10:08)
  • Docker Artifact Server (11:01)
  • Docker Demos (14:45)



What is Docker?

At its core, Docker is a platform for developing, deploying and running services with Linux containers. So what does that mean? Linux containers – these are a feature of the kernel as of, I believe, 2.6.24 or so; they’ve been around for a few years. It’s a way of isolating processes from each other. And you can do a lot of cool things with it.

So one way to look at it is as chroot on steroids. It’s not just filesystem rooting, it’s also isolating you from all of the other processes on the machine, and this is a pretty cool thing when you think about it. You can do things like running unsafe code, running lots of stuff on your server without really vetting it, or just running multiple instances of something on the same machine and having them be isolated from each other.

It also seems, when I’m describing it like this, that it’s a VM. That’s usually the first way people think about Docker. ‘Oh, it’s a VM, let’s go ahead and set one up. I need SSH, I need to run Apache, I need to run my Python service, let’s also install those things on this VM and let’s SSH into it.’ Well, it’s not a VM. It’s really, like I said, a lightweight way of running a process in total isolation. So you can treat it like a VM, but that would be wrong.
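You can see that difference in a single command. Assuming Docker is installed, this runs one isolated process: no VM boots, no SSH daemon or init system is involved, and the container exits as soon as the process does:

```shell
# Run a single command in a fresh container from the stock Debian image.
# The process sees its own root filesystem, then the container goes away.
docker run --rm debian ls /
```

Run it twice and each invocation gets its own fresh, isolated filesystem.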

Their metaphor, and the true metaphor for Docker, is the problem of shipping a long time ago. You have rocking chairs, and you have couches, and you have cars, and you have golf balls… and you have all of these things. How am I going to ship all of these things? You can pile them into a giant pile on a big ship, but that isn’t going to work. I’m sure they started using boxes, then they started using crates, and then they realized that all of these crates are different shapes, and so they came up with a standard: the containers that we see passing us on the road all of the time and on ships, if you get a chance to go to the docks. There’s a standard size for these things, a standard place for where the doors go, for how the locks work, and for where the mount points are to pick it up. So they all fit on ships, and they can get them on and off efficiently, and there are weight restrictions and all this. The shipping company doesn’t need to know what’s inside these things, so long as each one hits all of the specs and adheres to all of the standards. And there’s a lot that you can do with that, right? You can watch the orchestration of these containers off trucks and onto ships, and you can do some really cool things.

So that’s what Docker is. It takes all your services, your apps, or other people’s apps and it bundles them up in a standard way where you can now have orchestration at the software level. You can deploy these to different machines, you can do all kinds of cool things with this. So that’s the Docker metaphor, it’s containers on container ships.

Basic Terms

So, to give you some basic terms, just so we’re clear from the start. A Docker Image is a static filesystem, and in every case except the top level, it’s going to have a parent image. So my basic filesystem might just have, let’s say, a home directory. So I put a home directory in there. It might also have, like, a /usr/bin; it’s going to hold a whole bunch of binaries, I don’t know. And then I create another image on top of that, where I add another layer to the filesystem. And another layer on that, where I’m adding this and that and that. And then at some point I’m installing a bunch of services, and every one of these actions is another layer on top. The filesystem is copy-on-write. So if I’m going to overwrite something from a parent image, that’s no problem, it’s just going to overlay on top of that. So, there are benefits to them being static, which I’m going to get to in a little bit.
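A Dockerfile makes this layering concrete: each instruction below records one more copy-on-write layer on top of its parent (a minimal sketch; the service and paths are invented examples):

```dockerfile
# Parent image: the base Debian filesystem is the bottom layer
FROM debian
# Installing a service records a new layer on top of it
RUN apt-get update && apt-get install -y redis-server
# Adding files records yet another layer; writes only shadow the parent
COPY app/ /opt/app/
```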

An Image is like, you can think of it like this: if you installed Debian or something, and you snapshotted it and put it on a DVD, there’s your image. It’s something that can’t be modified at this point. And now when I put that DVD in the machine (and this is kind of a weird metaphor, because I don’t want anyone to think of this as a machine), you say you’re going to boot from DVD. Well, now you have the equivalent of a Docker Container. A Container is a writable instance of an Image. It’s based on an Image, and so in this metaphor (if it holds out, we’ll see), as you’re running you’re able to overwrite files that are effectively on the DVD, but at the time when you’re running it, they’re in memory or in some kind of mount. So when you’re running a Container you can write to any file, and again it’s copy-on-write; it’s going to be writing its files somewhere. But in another metaphor here, you can think of an Image like a Class. It’s a definition, it’s ‘here’s what this thing is’, and then a Container is an instance of that Class. So just like in programming, where you can have a hundred different Person objects, in Docker you can have a hundred different Containers based on one Image. And that might just be that I want to run /bin/bash inside a stock Debian image. Right, so I’m going to run /bin/bash inside a Container. And that bash command is going to see a fresh filesystem that no other instance sees, because each has its own instance of that image. And then you can throw away the Container, but the Image always exists in your registry.
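The Class/instance metaphor maps directly onto the CLI, assuming Docker and the stock Debian image are available (the container names here are invented):

```shell
# Two containers from the same image; each gets its own writable layer.
docker run --name one debian touch /created-in-one   # exists only in "one"
docker run --name two debian ls /                    # "two" starts clean
# The containers can be thrown away; the image stays in your registry.
docker rm one two
```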

And one more point about Containers is that they should be ephemeral. An Image is something that is recorded, archived, shared, deployed to different machines. A Container is just a running service on top of an Image, and when you’re done you should be able to delete the Container; there should be nothing special about it. And so you start thinking, you know, in production, you have logs – I can’t throw away logs. You have secrets – I can’t throw away secrets. Those things aren’t stored in the Container, or in the Image; those things are separate – I’ll get to that later. But start thinking of Images as definitions that are deployed, shared, checked in somewhere. Containers are just started up, and then I should be able to delete the Container. You shouldn’t design Containers in such a way that you’re holding on to them; they’re ephemeral.

Why Use Docker?

So why Docker? It’s a way to ship software in a standard way. And the system requirement is only Linux, with a minimum kernel of, I think, 3.8. Then you install Docker, and that’s all you need. I can create an image of Postgres, of Redis, of Apache, of my own custom software, and I can just give it to you as an image and say ‘just run this image as a container on your machine, oh, and all you need is Docker.’ There’s a whole bunch of cool things you can do with that by running multiple containers at the same time. And it’s not just a deployment tool; there are benefits at all stages in the lifecycle: in dev, in test, in continuous integration, in integration tests, staging, and then obviously in production.

For test and QA, they’re able to run multiple versions of your containers, so they can run multiple versions of your app at the same time on the same machine, without worrying about port collisions, without worrying about library collisions. So you can run Python 3 and Python 2.7 at the same time. And I know that you can do that with virtual environments; this is just a way of stepping back and making a standardized virtual environment that works for any kind of script. So you can run a bash script, Python, whatever you want.
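For example, two versions of the same service can run side by side by mapping each container’s internal port to a different host port (a sketch, assuming Docker and these Redis image tags from the Docker Hub):

```shell
# Both containers listen on 6379 internally; the host sees 6379 and 6380.
docker run -d -p 6379:6379 redis:2.8
docker run -d -p 6380:6379 redis:3.0
```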

Another thing, is that they can run test suites, in parallel without worrying about what else is running on that system because you can set them up in such a way that they don’t interfere with each other.

And one thing that is pretty cool, though I’m not entirely sure yet whether it’s an anti-pattern; I’ll have a better idea after spending more time using it and playing with it. But if you have your whole back-end system in containers, you can actually set your system up in a particular state and snapshot it. Say you have a particular customer that gets into a situation, and we want to write some integration test to test what happens when I do this or do that. You can snapshot it, you can archive that, and then you can run tests specifically for that state. It’s pretty spooky when you see it work. In production, there are a lot of cool things you can do. You can limit the resources of a container: limit the memory, the CPU, device access and read/write speed, which is pretty cool.
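Those resource limits are just flags on `docker run` (a sketch using flags that were available around the time of this talk):

```shell
# Cap memory at 256 MB and give this container half the default CPU weight.
docker run -m 256m --cpu-shares 512 redis
```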

Also, Docker has a remote API, so you can start querying different boxes and asking what containers they have on there. And I’ll talk a little bit more about that in a minute.
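For instance, the remote API can list a box’s running containers. This sketch queries the daemon over its local Unix socket (it needs curl 7.40+ for `--unix-socket`; the daemon can also be exposed over TCP for querying other boxes):

```shell
# Ask the Docker daemon for its running containers, as JSON.
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```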

Also, since you have a standard format for shipping these things, there are orchestration tools. And then you can do things like adding new containers to a resource pool behind an HAProxy server, dynamically scaling up and down, if you set up dynamic container discovery.

How Does Docker Work?

So how does this work? I mentioned briefly that it’s using LXC, the Linux container technology. That laid the groundwork; apparently working with LXC directly is kind of cumbersome, so Docker decided to put a nice abstraction over it and make the thing easier to work with. I mentioned that you have filesystem and process isolation, and I mentioned also that this isn’t a VM. But what’s nice is, when you start playing with this, at first you’re thinking there’s an overhead here, right? Well, there’s not as much overhead as you might think, and in many cases it’s almost zero. All Docker is doing is orchestrating: it’s saying ‘hey Kernel, I’m going to run this process, I want you to run it in isolation and here’s the filesystem you’re going to use,’ and then Docker steps away. Processes are still running in the host system’s kernel, which is great, because then there’s very little latency to access memory and very little CPU overhead. And no hypervisor.

Docker Artifact Server

And now I have all of these images running everywhere, so what do I do with them? Let’s say that we have a developer on our team building these images and we plan on using them in staging, prod, test, dev, all of these places. So what do we do with them? How do I give you my image?

Well, you push them to a thing called the Docker Registry. Registry is a terrible word; it conjures up images of the Windows Registry. So I’m calling it the Docker Artifact Server. It’s just a web server that you can run and push versioned images to. So what does that mean? I can download the Debian image, which is a series of layers, but in the end I don’t care, I just see the layer stack. I can build on top of that, and then I can push that, as a version 1.0, into our own copy of the artifact server. Now I can make a change and push a 1.1, same thing.

So the Docker Registry is open source, and they have a public hosted version of it, called the Docker Hub. And basically, anyone who wants you to use their software on a server is creating an image (you can create any image) and posting it on there, publishing it for free. If you want to do private images you can pay them some money; it’s the same business model as GitHub. So public is free, and you can push your own private repos in there. Here’s an example of what you see on their main page. So if you went there now and said ‘I want to download pre-made Docker images’: Redis, Ubuntu, MySQL, they’re all just sitting there. There are thousands; anything you want is in there. If you want to prototype something real quick, and you know you’re going to need a Redis server, a MySQL, WordPress, whatever, you can put together a series of images that are talking to each other as containers. And they come with documentation, so you can be up and running with a Redis server in minutes. You’re not installing Redis on your machine, which is awesome. You’re just spinning up an instance of Redis. Say you don’t like the version, you can roll back to a version. You can have multiple versions on your system all running at the same time, for any of these things. This is a standard way to install whatever you need.
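The push itself is just a tag plus a push (a sketch; the registry address and image name here are invented examples):

```shell
# Tag a local image with a version and our own artifact server's address,
# then push it so any environment can pull the exact same build.
docker tag myapp registry.example.com:5000/myapp:1.0
docker push registry.example.com:5000/myapp:1.0
```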

Now, pushing makes sense, right? This is easy to understand. During development, I’m going to make my version of my Python app, box it up in a container, image it, check in the definition of the image (which is called a Dockerfile), and then push it to our Artifact Server. Then I tell the testers, and hopefully I’ll have an automated system that’s running all of the tests on these. Once I’ve got their blessing, all that means is production would then just say ‘oh, I need to start up these five containers from these images: I need Blake’s Docker Demo v1.1, I need the official MySQL 1.2, whatever,’ and it’ll go out onto the Internet, onto the Docker Registry, and just pull down those images for you. So it’s a nice way of pushing stuff to different environments and having the exact same build.

It’s easy to get carried away with this. I’m still trying to figure out best practices, but you can definitely go too far. There are some guidelines that I’ve read, like ‘would you want 50 instances of this running on this machine?’ If the answer is ‘hell no’, then you probably shouldn’t be putting it in a Docker container. So some things are better off not in containers. Maybe it’s your database server, which is always going to be on one machine; maybe it’s Redis. I don’t know exactly, but I think when you have a new tech like this you have to be careful that you don’t overdo it.

Docker Demos

Demo Time!

Ending Your Remote Meetings with Style & Panache

January 14th, 2015 by Rich Armstrong

New media take a while to develop their own conventions. Alexander Graham Bell suggested “Ahoy” as the standard telephone greeting before English speakers settled on “Hello.”

As Fog Creek has gone from a single location to a worldwide remote team, we’ve shifted our meetings mostly to Google Hangout. If you do a lot of these meetings, you might notice that it’s often awkward to end them. There’s the “ok, that’s it,” then, “bye, talk to you later”, followed by fumbling for the close button and hoping nobody had just one more thing to add.

Nobody walks away feeling awesome.

Well, forget all that. Here’s how the FogBugz team ends their stand-ups:

Features: unanimous agreement that the meeting is over, a point after which additional talking is moot, plus the bonus of getting to feel for one moment like a Power Ranger.

Design Critiques at Fog Creek

December 18th, 2014 by Pirijan Ketheswaran

No matter how clearly defined the problem, designing an elegant, intuitive and hopefully enjoyable solution is rarely straightforward. Maybe you’ve felt the frustration of days spent circling the drain of a bad idea, or the disappointment that comes with polishing something to perfection that ended up being out of scope? We all have.

And for those reasons, and more, we’ve started experimenting with weekly design critiques at Fog Creek. These involve designers coming together with marketing and development to present what they’re working on in order to gather early feedback and ask for advice.


Why Do We Do Design Critiques?

Critiques done right open design up to an organization. They give non-designers a window into how we approach problems, the questions we ask, and how our processes have been shaped. For designers, that early feedback from domain experts can be invaluable.

Regular critiques are also a great way of keeping a design team connected even when they’re working on totally separate products.

When everybody has a voice and a clear view into how stuff is made, the result is more consistent, higher quality products.


How We Run a Design Critique

Each week, around the same time, someone in our #design team chat room will ask if anyone has work they’d like to present to other designers. Anyone outside the team is also more than welcome to attend if they’re interested in what’s being shown that week.

The design work shared at this stage is usually pretty rough, work in progress type stuff. We don’t look for anything resembling pixel perfection or polish at this point. Sessions are kept short and informal, critiques and feedback are just suggestions.

Designers presenting explain what they’re working on by:

  • Defining the problem(s) to be solved or the jobs to be done
  • Defining what success looks like (a simpler user flow, the ability to do something fun and new, etc.)
  • Sharing any insights from user research, business, or technical considerations that have influenced the design

The other participants then:

  • Ask questions to clarify anything they don’t understand
  • Ask about the reasoning behind specific design/interaction decisions
  • Suggest other possible approaches the designer could take or explore

Those with functional area expertise also raise questions and provide feedback relating to their field. For example:

  • Designers ask about Visual Design and User Experience issues

    Such considerations include usability, consistency and accessibility issues, as well as specific questions around aesthetics and visual choices.

  • Developers provide insight on technical feasibility

    They often provide feedback on design changes and implementation considerations like development time, performance etc.

  • Marketers ensure it meets business goals and is fitting for the audience

    Feedback often relates to the value propositions and calls to action, as well as whether it will work from an SEO perspective, whether key product elements are represented, etc.

So far, the feedback we’ve gotten to our Fog Creek design critiques has been really encouraging. Of course, when it comes to improving transparency and quality, there’s always more we can do.

How do you run your critiques? Let us know @FogCreek and we’ll retweet the best tips.

Building with Best Practice at Button – Interview with Chris Maddern

December 3rd, 2014 by Gareth Wilson


In this interview with Chris Maddern, Button Co-Founder, we discuss how they have taken a best-practice-led approach to building Button. We dive into what best practices they have implemented, including things like code reviews and building as if for open source. He tells us about the impact of this approach on velocity and morale, and what benefits they’ve seen.


Content and Timings

  • Introduction (0:00)
  • About Button (1:10)
  • Software Development Best Practices (4:10)
  • Code Reviews (5:50)
  • Building Like It Is Open Source (9:40)
  • Failure and Morale (11:30)
  • Moving Faster with Best Practices (12:42)
  • Benefits of a Best Practice Led Approach (15:53)




Today we have Chris Maddern, Co-Founder at Button and formerly engineering lead at Venmo, a payments startup which was acquired by Braintree in 2012. Button’s a mobile acquisition and retention platform. Chris is going to talk to us about what the engineering team at Button have put together as best practices. I’m your host Derrick, formerly a Support Engineer for Fog Creek’s FogBugz and Kiln developer tools. So Chris, thanks very much for joining us today.

So yeah, I’m Chris. Originally from London, and moved here [New York] about four years ago. Prior to moving here I was involved in building a company called iCarCheck, which is essentially a Carfax product for the European market. So when I moved out here, I started really focussing down on mobile. Spent a year and a half with a company called Animoto, building out their mobile product. Moved to Venmo and spent a little under a year and a half there heading up mobile engineering, and then co-founded Button about six months ago.

We’re the only way that I know of to monetize that actually makes your experience better

About Button

And what is Button?

The way that I look at Button is, we’re building a framework for apps to work together. At a really high level I like to say that we’re the only way that I know of to monetize that actually makes your experience better. So the idea is: how do we help these apps to work together in a way that extends one app’s functionality into another app, while driving users, installs and commerce in the second app from the first app?

So like for example, if I make a reservation in Resy, it’s intuitive that I need to book an Uber to get to my reservation. So we provide Resy with the ability to directly productise offering an Uber ride inside of Resy and then when they get driven over to Uber obviously there’s economics in driving that ride.

That’s very cool there, you know it sounds like it’s offering an extra level of service. And you said that the other product that it’s integrating with is called Resy?

Yeh, so Resy is one example, Resy is a great little app that lets you pay to reserve tables that otherwise you and I would find it kind of difficult to get tables at. So like a table at Rosemary’s, which doesn’t really take reservations. A table at , which is typically pretty packed. And so the idea is that we help them to give a value-add piece of functionality which encourages you to use them again, which also gives them a monetisation to use. And we’ve been super fortunate to be able to work with Uber so early on, and they’ve been very very supportive and helpful with us, in trying to bring these amazing productisations to market.

Yeh, that Uber integration is pretty new right for you?

Yeh, so we launched this, kind of our second product to market, around four or five weeks ago, so we’re just getting to see the first data coming in on that.

The integration, which is sort of just putting all of the pieces together, there are not too many mysteries there, right?

Yeh, so the kind of generic technology that we’ve been trying to build is: how do you move a user from one application to another application in a totally attributed way, and be able to understand and build around that. And then in the future, potentially tie that back into our original loyalty card, so that when you move from one app to another app you get some kind of points incentive, or some other incentive around that. So yeh, in terms of what was actually new: we built an SDK around Google’s API, and probably the thing that I’m proudest of when I say we’ve built it is the ride picker that we drop into our partners’ apps: a really beautiful way of choosing the Uber ride that you want to take, sitting basically on top of the rails that we’ve built for everything else.

Software Development Best Practices at Button

There was a post on your engineering blog from October about some best practices. What kind of best practices have you adopted at Button?

Yeh, so I was super excited that you guys reached out about this. I think that one of the best things that you can do when you try and adopt best practices is actually to just try and define what that means to you. And to write it down, maybe share it publicly, maybe share it privately, or just, like, set it as your desktop wallpaper. But really commit to what that means to you. And so that blog post was originally an internal GitHub wiki that I’d written basically for myself, as the only person writing code at Button at the time. And then it translated into the blog post that you see today.

So in terms of the best practices that we have right now, we try and start with the really lofty goals that I’ve outlined in that blog post. Like 100% code test coverage, everything being code reviewed. The kind of stuff that you hear that’s very standard, but being dogmatic about it is rare. And starting with the goal of being extremely obsessive about it, with the acceptance that, you know, you’ll only get 80-85% there, because sometimes you just need to ship something. You know, honestly, it’s terrible to say this, but sometimes you just can’t be bothered, like sometimes you just can’t be bothered to spend that extra five hours to get that extra 10% code coverage or something. But you set these goals, and if you make it 80% of the way there then you feel like you’re going to be better off.

So one of the most significant of the ones we follow absolutely religiously is code review. We code review everything, no matter how small the change is. Just to get two sets of eyes on it. I would ask pretty much anyone who has any context on coding at all to review my code, because it’s amazing what new people will see that you just don’t, even if they don’t have much context on what it is that you’re trying to do. Some things are just obvious, but you just don’t see them.

I would say somewhere between 30% and 40% of new code that was being written at Venmo was actually being contributed to open-source

Code Reviews

Yeh, that’s one of the great benefits of code reviews. So it sounds like everything gets code reviewed there. But, you know, maybe not tagging specific changesets. Would you say something like that gets code reviewed? Probably not?

So you mean actually, like, creating the tag. For that stuff we try and follow a pretty strict branching process. Anything that’s being marked as ready for a particular release will go down to a feature-release branch, and then that branch will be end-of-lifed with a last commit that increments the version number and a custom tag created. So we try and create some process around that, because it’s kind of amazing how many times you follow a really robust process 99% of the way there, and then a tiny little thing at the last minute, like how we do code signing or how you implement a version number, causes an issue that’s very real. So we try and go the whole mile.
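That end-of-life step can be sketched with plain git (a hypothetical sketch of the flow described; the branch, tag and version numbers are invented, and the throwaway repo just makes it self-contained):

```shell
set -e
# Throwaway repo so the sketch stands alone
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m 'initial'

git checkout -q -b release/1.1         # feature-release branch for 1.1
echo '1.1' > VERSION                   # last commit increments the version
git add VERSION
git commit -q -m 'Bump version to 1.1'
git tag v1.1                           # custom tag ends the branch's life
git tag --list                         # prints: v1.1
```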

It sounds like you just do 1 on 1 code reviews, in the sense that there’s one other person looking at it. Do you ever find yourself in the situation where you’re using more than one person?

Yeh, so this is a new and unique problem for us now that we’re actually growing. Originally there were two of us, so if we were reviewing code, it would be the other person. We’re now at four, and looking to grow more in the coming months too. And so we try and create sideways visibility wherever we can. We have some concept of assigned reviewers, but by default everything communicates via groups that go to all of our engineering teams, so people know what pull requests are open, and there’s a room for new requests so people can review things. And we try and encourage just an awareness of what’s going on and the code culture in each of the codebases. In practice, the number of people who aren’t actually working on a codebase contributing to its code reviews is really low, and I think that’s simply because everyone is so heads-down. I would hope to see more of that; certainly whilst I was at Venmo, we would massively encourage pull requests which would have half a dozen people commenting on them. And what’s interesting is, your pull requests then become one of your biggest signals into your style guides and into your conventions. Because you’ll start discussing things and you’ll realise that there is no codified way of doing something. And if it’s something people clearly feel strongly enough about to weigh in on the pull request, you should start to think about how you codify it into your style guides.

It’s really nice to hear that you value code reviews; it’s something that a lot of people, you know, aren’t sure what to do about yet.

The thing is just to start. Have someone look at your code. As soon as you take off the hat that says ‘I want to get this committed as soon as possible’ and put on the hat that says ‘I want to protect the codebase’, which is the hat that you should be wearing when code reviewing, then you just start seeing benefits. Even if it’s just you who takes off the one hat and puts on the other. I’ll routinely review my own pull requests before anyone else, to make sure there’s nothing that I’m going to be embarrassed about.

I would rather have a 70% completion rate from sprints with zero hot fixes, than 100% completion with even one hot fix

Building Like It Is Open Source

It sounds like your experience in the past has helped you to move really quickly here at Button. Is there something sort of specific from your experience at Venmo that has helped you here?

Yeh, so at Venmo we built with open source in mind in pretty much anything that we would do. It’s something of an off-the-cuff estimate, but I would say somewhere between 30% and 40% of new code that was being written at Venmo was actually being contributed to open source. Whether that be in one of our container libraries or one of our core SDKs. We tried very much to run a kind of ‘eat your own dogfood’ principle with our SDKs, so the Venmo app is built on the Venmo SDK that anyone can build on top of. When you start thinking about open source, it guides many of the principles that you use. Because then you have broad visibility of your code and your code quality, so open standards and documentation inside of code are kind of a must-have, rather than a nice-to-have. And then the other thing, and this sounds kind of hyperbolic, but just an absolute focus on quality. You just can’t ship anything that’s not ridiculously high quality, because there are always quality problems that you’re not aware of, so you need to find all of the ones that you can and fix them.

Failure and Morale

Yeh, as you’re pretty new in this space, quality is important to your reputation.

Yeh, and it becomes a lot more apparent when you have problems. I remember looking into Crashlytics at Venmo, and it only takes a very, very small percentage of sessions crashing before that number becomes very depressing. And then you have to stop everything and get into hot fix mode.

Yeh, right and we all know the sort of snowball effect hot fix mode has.

Right, not just on software quality and velocity, but also on morale. It sucks to be hot fixing stuff.

Yeh, because you know, then you have monitoring 24/7, your cellphone’s going off and you’re not getting family time or whatever. So you’re right, it’s not just code; team morale is important too.

Yeh. One of the things that I think commonly gets mistaken for failure inside organisations that run Agile is moving things off at the end of a sprint. I tend to disagree with that, but what I do say is that when you have to hot fix, that’s failure. I would rather have a 70% completion rate from sprints with zero hot fixes, than 100% completion with even one hot fix.

Moving Faster with Best Practices

Starting with best practices early on, did that sort of restrict how fast you can move at all?

Yeh, so there’s an overhead. For the first several weeks of trying to build Button, I didn’t really build anything. The truth is that once you’ve done that a few times, once you’ve set up CI and you’ve set up the tools of the trade, that stuff isn’t so expensive. You should definitely be doing it inside of new projects. The truth is, every piece of overhead, or every piece of friction that exists along the way, that makes you a little bit slower as you’re doing things, more than gets made up for at the end. So, I mention this in the post, but this is something that I’ve noticed in software projects: it’s true that often the last 20% really does take 80% of the time. And I’ve found that best practice can reduce that 80% to something more like 40% or 50%. Because when you’re not building with best practices, everything you’ve moved across the Kanban board, I’ve found, is never truly done. It’s either not quite to spec, or not quite right. Or there’s a quality issue – it’s not working 100% correctly, or there’s, you know, something you’ve missed in code coverage. The savings at the end allowed us to get our product to market faster than if we had just started coding, never written a test and never reviewed any pull requests, because it does pay off a lot.

Building with best practices and emphasising the time to do it, emphasising that the last 20% of the task is really 80% of the work. Did you find that you sort of had to sell that to the rest of the business?

There are always moments where it does seem to other people that you could just get on with it and stop worrying about this stuff. I got really fortunate, in that all of my co-founders kind of get it and are genuinely respectful of deferring to us on matters of engineering. So I think we have a really healthy balance between engineering and business. I’m really fortunate to have partnered with a couple of big business heavyweights, and yet we haven’t become this big business-run company where we build stuff to spec and don’t care about engineering. Our platform is our product and our SDKs are our product, and we invest very heavily in things that we want to maintain for years to come.

Right, and the reason that you have that perspective is not only to be around as a business, but because you also have that open source mentality. You know, you’re building for open source in a sense.

Yeh, and while we haven’t gotten the chance to do a lot of that yet, the way that we’ve designed a lot of what we have built allows us to very soon start to open source quite a number of projects.

Benefits of a Best Practice-Led Approach

6 to 7 months in now for Button, it sounds like treating things as open source and putting in those preferred best practices is really working for you.

Yeh, we’ve had lots of benefits. I’d say there are two key ones. There’s a difference between building a product and taking a product to market, and once you’ve built the product you have to take it to market. Over the last couple of months we’ve been talking to a lot of partners and creating a lot of integrations, and I think that the way that we built really helped us. Firstly, when you step back and think about things first, your design is much more modular and intuitive. And secondly, it’s fully documented. So creating our integration guides, the documentation that we needed to be able to share with developers in order to do integrations, was significantly easier because we had a lot of that stuff built already. The other main way is that with any software project you change along the way. You know, you switch focus a bit, new shiny things come at you, old shiny things become less shiny. Along the way we’ve made several course corrections in what we really want to build and what we think is really important. And it turns out that we’ve built a very modular framework, so the core of what we created never stopped being useful. And I think that when you think about building open source products, and about the layers of maintainability, this approach lends itself to doing that. So I genuinely believe that if we hadn’t gone down this path, then some of the shifts in what we were thinking is important, or what we think is the highest priority, would have basically meant we would have had to start again. The way that we approached it gave us this very modular, layer-based approach, which meant that we could just think about changing the interfaces versus needing to re-do the whole thing. So yeah, massively beneficial.
Which is why I chose to share the document publicly. Because originally, when you start and you kind of have an idea, you think, ‘if I live really anally by these things then my life is probably going to be better’. And in the end I was like ‘oh wow’, it really kind of was better.

Thank you, for your time today, I really appreciate it!

Awesome, thanks guys, it has been a pleasure.

Likewise Chris, take care.


Looking for ways to review your code? Try Kiln Code Reviews.

9 Integration Testing Do’s and Don’ts

December 1st, 2014 by Andre Couturier

Integration tests check that your application works and presents properly to a customer. They seek to verify your performance, reliability and of course, functional requirements. Integration tests should be able to run against any of your developer, staging and production environments at any time.

Writing good tests that prove your solution works can be a challenge. Ensuring that these tests perform the intended actions and prove the required outcomes requires careful thought. You should consider what you are testing, and how to prove it works – both now and in the future. To help you create tests that work and are maintainable, here are 9 Do’s and 9 Don’ts to consider:

When Creating Integration Tests Do…


1. Consider the cost/benefit of each test

Should this be a unit test? How much time will it save to write this test over a manual test? Is it run often? If a test takes 30 seconds to run manually every few weeks, taking 12 hours to automate it may not be the best use of resources.

2. Use intention revealing test names

You should be able to work out what a test is doing from its name, or at least get a good idea.
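For instance, a name that encodes the scenario and the expected outcome reads far better than a generic one. A minimal sketch in Python, with an invented function under test:

```python
# A trivial function under test, invented for illustration.
def checkout_message(card_expired):
    return "Payment declined" if card_expired else "Order confirmed"

# Vague: the name tells you nothing about the scenario or expected outcome.
def test_checkout():
    assert checkout_message(True) == "Payment declined"

# Intention revealing: scenario and expectation are clear from the name alone.
def test_checkout_with_expired_card_shows_payment_declined():
    assert checkout_message(card_expired=True) == "Payment declined"
```

When the second test fails in a report, you know what broke without opening the file.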

3. Use your public API as much as possible

Otherwise, it’s just more endpoints and calls to maintain when application changes are made.

4. Create a new API when one isn’t available

Rather than relying on one of the Don’ts below.

5. Use the same UI as your customers

Or you might miss visual issues that your customers won’t.

6. Use command line parameters for values that will change when tests are re-run

Some examples include items like site name, user name, password etc.
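One way to sketch this, assuming a plain Python test runner rather than any particular framework, is to parse those values from the command line so the same suite can run against dev, staging or production:

```python
import argparse

# Environment-specific values arrive as command-line arguments rather
# than being hard-coded into the tests themselves.
def parse_test_args(argv):
    parser = argparse.ArgumentParser(description="Integration test settings")
    parser.add_argument("--site-name", required=True)
    parser.add_argument("--user-name", required=True)
    parser.add_argument("--password", required=True)
    return parser.parse_args(argv)

# e.g. python run_tests.py --site-name staging.example.com \
#          --user-name tester --password s3cret
```

The script name, flags and values here are all hypothetical; the point is that re-running against a different environment never requires editing a test.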

7. Test using all the same steps your customers will perform

The closer your tests are to the real thing, the more valuable they’ll become.

8. Return your system under test to the original state

Or at least as close to it as you can. If you create a lot of things, try to delete them all.
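One way to make that reliable, sketched here in Python with `api` standing in for whatever client your tests use, is to track everything a test creates so it all gets deleted afterwards, even if the test fails part-way:

```python
import contextlib

# A sketch of returning the system under test to its original state:
# every created resource is registered, then deleted in the finally
# block, so cleanup runs even when the test body raises.
@contextlib.contextmanager
def tracked_resources(api):
    created = []
    try:
        yield created.append  # tests register each resource they create
    finally:
        # Delete in reverse order, so dependent items are removed first.
        for resource in reversed(created):
            api.delete(resource)
```

A test then wraps its setup in `with tracked_resources(api) as track:` and calls `track(...)` for each item it creates; teardown happens automatically.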

9. Listen to your customers and support team

They will find ways to use your systems that you will never expect. Use this to your advantage in creating real world tests.

When Creating Integration Tests Don’t…


1. Write an integration test when a unit test will suffice

It’ll be extra effort for no benefit.

2. Use anything that a customer cannot use

Databases, web servers, system configurations are all off limits. If your customer can’t touch it, your tests have no business touching it either.

3. Access any part of the system directly

Shortcuts like this just reduce the quality of your tests.

4. Use constants in the body of your tests

If you must use constants, put them in a block at the top of your test file, or a configuration file. There is nothing worse than having to search through all your source files because you changed a price from $199.95 to $199.99.

5. Create an internal only API

Unless necessary for security or administration.

6. Create an internal only UI

You’re supposed to be testing what the customer will see after all.

7. Make your test too complex

No matter how brilliant your test is, keep it simple. Complexity just breaks later. If you are finding it hard to write, it will be hard to maintain too.

8. Test more than one thing

Stick to what you need to test, if you try to do too much in one test it will just get more complex, and more fragile.

9. Leave the test system in a bad/unknown state

This means a broken or unusable site, database or UI.

How Fog Creek Got Started

November 20th, 2014 by Gareth Wilson

Starting out as a consulting company in 2000, Fog Creek was founded with the goal of creating the best place for developers to work. The video covers the early years of Fog Creek. Hear from our founders, Joel Spolsky and Michael Pryor, how they navigated the Dot-com crash and bootstrapped the company into a growing, product-based business.

Effective Code Reviews – 9 Tips from a Converted Skeptic

November 17th, 2014 by Gareth Wilson

I knew the theory. Code reviews help to:

  • Catch bugs
  • Ensure code is readable and maintainable
  • Spread knowledge of the code base throughout the team
  • Get new people up to speed with the ways of working
  • Expose everyone to different approaches

Or, they’re just a giant waste of time. At least, that was my first impression of code reviews.

I was the new guy, a recent grad, developing plugins for a software company in London.

Over time I had to submit blocks of identical or similar code. They would get reviewed by the same poor, put-upon guy (“he’s the best at it”, my manager told me. No good deed…). Yet each review would come back picking at something different. It seemed needlessly picky and arbitrary.

Worse still, reviews would take days, if not weeks. By the time I got my code back I could hardly remember writing it. It wasn’t the guy’s fault. He’d asked for a senior dev, but had gotten me. He was sick of dealing with the issues every inexperienced developer makes, and code reviews were his way of exorcising that frustration.

Add to this the time lost in syncing the different branches, the context-switching… I was not a fan, nor were the rest of the team and it showed.

Skip forward a few years though and I find myself nodding along whilst reading a tweet quoting Jeff Atwood:

“Peer code reviews are the single biggest thing you can do to improve your code.”

What I had come to appreciate in the intervening years is that it wasn’t that code reviews were bad. Code reviews done badly were. And boy, had we been doing them badly.

I had learned this the hard way. And it certainly didn’t happen overnight. Although on reflection, code reviews have saved me from more than a few embarrassing, build-breaking code changes! But after I had worked elsewhere, I gained experience of different and better ways of working. This gave me the opportunity to see first-hand the benefits of code reviews that I had dismissed before. So now I consider myself a converted skeptic.

So that you can avoid such pains, check out our video and then read on for tips that will skip you straight to effective code reviews.

9 Code Review Tips

For everyone:

  • Review the right things, let tools do the rest

    You don’t need to argue over code style and formatting issues. There are plenty of tools which can consistently highlight those things. Ensuring that the code is correct, understandable and maintainable is what’s important. Sure, style and formatting form part of that, but you should let the tool be the one to point out those things.

  • Everyone should code review

    Some people are better at it than others. The more experienced may well spot more bugs, and that’s important. But more important is maintaining a positive attitude to code review in general and that means avoiding any ‘Us vs. Them’ attitude, or making reviewing code burdensome for someone.

  • Review all code

    No code is too short or too simple. If you review everything then nothing gets missed. What’s more, it makes it part of the process, a habit and not an afterthought.

  • Adopt a positive attitude

    This is just as important for reviewers as for submitters. Code reviews are not the time to get all alpha and exert your coding prowess. Nor do you need to get defensive. Go into it with a positive attitude of constructive criticism and you can build trust around the process.

For reviewers:


  • Code review often and for short sessions

    The effectiveness of your reviews decreases after around an hour. So putting off reviews and doing them in one almighty session doesn’t help anybody. Set aside time throughout your day to coincide with breaks, so as not to disrupt your own flow and help form a habit. Your colleagues will thank you for it. Waiting can be frustrating and they can resolve issues quicker whilst the code is still fresh in their heads.

  • It’s OK to say “It’s all good”

    Don’t get picky, you don’t have to find an issue in every review.

  • Use a checklist

    Code review checklists ensure consistency – they make sure everyone is covering what’s important and common mistakes.

For submitters:

  • Keep the code short

    Beyond 200 lines, the effectiveness of a review drops significantly. By the time you’re at more than 400, it becomes almost pointless.

  • Provide context

    Link to any related tickets, or the spec. There are code review tools like Kiln that can help with that. Provide short, but useful commit messages and plenty of comments throughout your code. It’ll help the reviewer and you’ll get fewer issues coming back.


Register Now for ‘Code Reviews in Kiln’ Webinar

Join us for our next live ‘Code Reviews in Kiln’ webinar. This webinar will help first time or novice users learn the basics of Code Reviews in Kiln.

We’ll cover:

  • What are Code Reviews
  • Why use Code Reviews
  • When to use Code Reviews
  • What to look for during Code Reviews
  • Creating a Code Review
  • Commenting and Replying on a Code Review
  • Working with Existing Code Reviews
  • Code Review Workflow

Secure your spot, register now.

Scaling Customer Service by Fixing Things Twice

November 10th, 2014 by Gareth Wilson

As a bootstrapped company we’ve always had to work within a budget and avoid unnecessary costs without harming our mission. One area with the potential to suffer in the face of limited budgets is Customer Service. When a company is growing, customer service can decline, or it can grow to consume a considerable amount of your budget. We’ve been able to grow our customer base by more than 10 times while maintaining a high level of customer service and keeping our support costs manageable.

How? By Fixing Things Twice using the 5 Whys.

Check out the video and read more about how these techniques can help you scale customer service without the cost.

Fixing Things Twice

When a customer has a problem, don’t simply resolve their issue and move on – but rather take advantage of the issue to resolve its underlying cause. This is Fixing Things Twice.

We think that for each customer issue, we have to do two things:
1. Solve the customer’s problem right away
2. Find a way to stop that problem from happening again

How we solve the first depends on the specific problem at hand, but to resolve the second we use the 5 Whys method.

Resolving Root Causes with 5 Whys

The 5 Whys is a problem-solving method and form of root-cause analysis. It involves asking the question ‘why?’ five times in succession when faced with an issue, each time about the previous answer. Doing so enables you to get to the bottom of it, allowing you to fix the root cause and stop the problem from happening again.


This technique was recently popularized by Eric Ries in his book ‘The Lean Startup’. Yet it was developed decades earlier by Sakichi Toyoda at Toyota. Over the years its use has spread beyond the motor industry to software development and other areas such as finance and medicine. It’s a recommended technique used by the UK’s National Health Service, for example.

Let’s take a look at how it works using a hypothetical situation:

  • The initial problem – The machine won’t start
  • 1st Why? – There’s no power
  • 2nd Why? – The power supply is not working
  • 3rd Why? – The capacitor has broken
  • 4th Why? – The capacitor was old but had not been replaced
  • 5th Why? – There’s no schedule to check for ageing parts in the power supply units. This is the root cause.
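The chain above is simple enough to capture in a few lines. A minimal sketch, with the function name and record format invented for illustration:

```python
# Recording a 5 Whys session: each answer is the subject of the next
# "why?", and the final answer is treated as the root cause.
def five_whys(problem, answers):
    chain = [f"Problem: {problem}"]
    for n, answer in enumerate(answers, start=1):
        chain.append(f"Why #{n}: {answer}")
    return chain, answers[-1]

chain, root_cause = five_whys(
    "The machine won't start",
    [
        "There's no power",
        "The power supply is not working",
        "The capacitor has broken",
        "The capacitor was old but had not been replaced",
        "There's no schedule to check for ageing parts",
    ],
)
```

Keeping the full chain, not just the conclusion, matters: the intermediate answers show which process step to fix.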

This technique is especially useful when you’re able to focus on processes as causes, like in the example above. Whilst other factors like time, money and resources might play their part, they’re beyond our immediate control. 5 Whys quickly exposes the relationship between the various causes of a problem – yet it does not require any analysis tools. This means it can be adopted and applied throughout an organization. We’ve extended its use beyond Support to include System Administration too. Watch the video above to see an example.

What does Fixing Things Twice mean for Support?

Here are a couple of examples of using Fixing Things Twice, provided by Adam Wishneusky, Technical Support Engineer here at Fog Creek:

We had people asking a lot about our security practices. We fixed the problem once by giving the customer an answer. Then we fixed it twice by putting up public docs at

Another example from a few years ago is when we found that FogBugz wouldn’t let you create a new LDAP-authenticated user if one already existed in the database with the same LDAP UID, even though that user had been deleted. We showed customers how to manually fix the data to get them working, but we also pushed the dev team to fix the bug.

From this example you can see that Support must have access to the Development team. It’s often the only way the underlying issues will get fixed.

It takes commitment too – it’s easy to skip the second fix and it’s tempting to do so as it means spending more time on any one issue. But it’s a false economy to do so. When the issue crops up again and again, you’ll have to spend even more time on it.

If you stick to Fixing Things Twice then over time all the common problems get resolved. This frees up your Customer Service team to look into the unique issues that need more time. Resolving your most frequent issues overcomes the support overhead that typically comes with adding new customers.

The Only Three Types of Meeting You Should Hold

November 6th, 2014 by Gareth Wilson

A perennial problem within software development teams is communication. Making a software company fast, nimble and effective is a function of reducing the amount of communication that takes place to a minimum. This means finding the right balance. Too much and you can’t get any work done; too little and no-one knows what to do, or work is uncoordinated.

Meetings, in particular, can be a real time sink. So to keep the balance right we actively try to reduce the number we hold. There are only three types of meeting that work for us:

1) Mass Communication Meetings

These are meetings where you have a lot of information to communicate to a large group of people. After all, you don’t just want to send some epic email opus that no-one will read (Sorry Satya!). We have a regular ‘Town Hall’ meeting for example. In these we all come together, both office and remote staff, in the cafeteria and via video link. We hear about some pre-planned topics, suggested and organized in a Trello board. These work well because everyone hears the same thing and they only cover issues relevant to everybody.

2) Brain-storming Meetings

We use these when we just want to come up with a lot of ideas, fast. They aren’t great for discussing the ideas in detail, but can work for pure idea generation.

3) Stand-up Meetings

These are great for keeping teams on track, providing a brief update on what you did, what you’re working on, and flagging up any blockers to progress. They are especially useful in remote teams: by bringing people together you help to form that team bond. It’s important to keep them focussed though, so we’re strict and avoid over-runs and people going off-topic.

“If you have a meeting with 8 people in the room that lasts an hour, that’s a day of productivity that was lost” Joel Spolsky

And that’s it. For all others we find a more effective alternative. That might mean linking up with people on chat, ad-hoc conversations via Hangout, or extensive use of collaborative software, like FogBugz, via cases, discussion groups and the Wiki. We never use a meeting, for example, when we’re trying to solve a difficult problem. Getting people together in a room just means that no-one can think things through properly. Meetings might generate a lot of discussion, but they lead to few real conclusions.

When we do have a meeting, we also limit the number of people invited to just the ones that have to be there. Those with only some relevance can catch up with the minutes added to the Wiki at a time that suits them.

We’ve found that sticking to this really saves us time. What ways have you found to minimize communication overhead? Tweet them to @fogcreek and we’ll retweet the best ones.

Looking for more?

Visit the Archives or subscribe via RSS.