Archive for the ‘Fog Creek’ Category

Design Critiques at Fog Creek

December 18th, 2014 by Pirijan Ketheswaran

No matter how clearly defined the problem, designing an elegant, intuitive and hopefully enjoyable solution is rarely straightforward. Maybe you’ve felt the frustration of days spent circling the drain of a bad idea, or the disappointment that comes with polishing something to perfection that ended up being out of scope? We all have.

And for those reasons, and more, we’ve started experimenting with weekly design critiques at Fog Creek. These involve designers coming together with marketing and development to present what they’re working on in order to gather early feedback and ask for advice.


Why Do We Do Design Critiques?

Critiques done right open design up to an organization. They give non-designers a window into how we approach problems, the questions we ask, and how our processes have been shaped. For designers, the early feedback from domain experts can be invaluable.

Regular critiques are also a great way of keeping a design team connected even when they’re working on totally separate products.

When everybody has a voice and a clear view into how stuff is made, the result is more consistent, higher quality products.


How We Run a Design Critique

Each week, around the same time, someone in our #design team chat room will ask if anyone has work they’d like to present to other designers. Anyone outside the team is also more than welcome to attend if they’re interested in what’s being shown that week.

The design work shared at this stage is usually pretty rough, work in progress type stuff. We don’t look for anything resembling pixel perfection or polish at this point. Sessions are kept short and informal, critiques and feedback are just suggestions.

Designers presenting explain what they’re working on by:

  • Defining the problem(s) to be solved or the jobs to be done
  • Defining what success looks like (a simpler user flow, the ability to do something fun and new, etc.)
  • Sharing any insights from user research, or business and technical considerations, that have influenced the design

The other participants then:

  • Ask questions to clarify anything they don’t understand
  • Ask about the reasoning behind specific design/interaction decisions
  • Suggest other possible approaches the designer could take or explore

Those with functional area expertise also raise questions and provide feedback relating to their field. For example:

  • Designers ask about Visual Design and User Experience issues

    Such considerations include usability, consistency and accessibility issues, as well as specific questions around aesthetics and visual choices.

  • Developers provide insight on technical feasibility

    They often provide feedback on design changes and implementation considerations like development time, performance etc.

  • Marketers ensure it meets business goals and is fitting for the audience

    Feedback often relates to the value propositions and calls to action, as well as whether it will work from an SEO perspective and whether key product elements are represented.

So far, the feedback we’ve gotten on our Fog Creek design critiques has been really encouraging. Of course, when it comes to improving transparency and quality, there’s always more we can do.

How do you run your critiques? Let us know @FogCreek and we’ll retweet the best tips.

Building with Best Practice at Button – Interview with Chris Maddern

December 3rd, 2014 by Gareth Wilson

 

In this interview with Chris Maddern, Button Co-Founder, we discuss how they have taken a best-practice-led approach to building Button. We dive into the best practices they have implemented, including things like code reviews and building as if for open source. He tells us about the impact of this approach on velocity and morale, and what benefits they’ve seen.

 

Content and Timings

  • Introduction (0:00)
  • About Button (1:10)
  • Software Development Best Practices (4:10)
  • Code Reviews (5:50)
  • Building Like It Is Open Source (9:40)
  • Failure and Morale (11:30)
  • Moving Faster with Best Practices (12:42)
  • Benefits of a Best-Practice-Led Approach (15:53)

 

Transcript

Introduction

Derrick:
Today we have Chris Maddern, Co-Founder at Button and formerly engineering lead at Venmo, a payments startup which was acquired by Braintree in 2012. Button is a mobile acquisition and retention platform. Chris is going to talk to us about the best practices the engineering team at Button have put together. I’m your host Derrick, formerly a Support Engineer for Fog Creek’s FogBugz and Kiln Developer Tools products. So Chris, thanks very much for joining us today.

Chris:
So yeah, I’m Chris. Originally from London and moved here [New York] about four years ago. Prior to moving here I was involved in building a company called iCarCheck, which is essentially a Carfax product for the European market. So when I moved out here, I started really focussing down on mobile. Spent a year and a half with a company called Animoto, building out their mobile product. Moved to Venmo and spent a little under a year and a half there heading up mobile engineering at Venmo and then Co-founded Button about six months ago.

We’re the only way that I know of to monetize that actually makes your experience better

About Button

Derrick:
And what is Button?

Chris:
The way that I look at Button is, we’re building a framework for apps to work together. At a really high level I like to say that we’re the only way that I know of to monetize that actually makes your experience better. So the idea is, how do we help these apps to work together in a way that extends one app’s functionality into another app, while driving users, installs and commerce in the second app from the first app.

So like for example, if I make a reservation in Resy, it’s intuitive that I need to book an Uber to get to my reservation. So we provide Resy with the ability to directly productise offering an Uber ride inside of Resy and then when they get driven over to Uber obviously there’s economics in driving that ride.

Derrick:
That’s very cool there, you know it sounds like it’s offering an extra level of service. And you said that the other product that it’s integrating with is called Resy?

Chris:
Yeh, so Resy is one example, Resy is a great little app that lets you pay to reserve tables that otherwise you and I would find it kind of difficult to get tables at. So like a table at Rosemary’s, which doesn’t really take reservations. A table at , which is typically pretty packed. And so the idea is that we help them to give a value-add piece of functionality which encourages you to use them again, which also gives them a monetisation to use. And we’ve been super fortunate to be able to work with Uber so early on, and they’ve been very very supportive and helpful with us, in trying to bring these amazing productisations to market.

Derrick:
Yeh, that Uber integration is pretty new right for you?

Chris:
Yeh, so we launched this, this is kind of our second product to market, we launched it around four or five weeks ago, so we’re just getting to see the first data coming in to that.

Derrick:
The integration, which is sort of just putting all of the pieces together, there are not too many mysteries there, right?

Chris:
Yeh, so the kind of generic technology that we’ve been trying to build is how do you move a user from one application to another application in a totally attributed way, and be able to understand and build around that. And then in the future, potentially tie that back into our original loyalty card, so say when you move from one app to another app you get some kind of points incentive, or you get some other incentive around that. So yeh, in terms of what was actually new: we built an SDK around Google’s API, and probably the thing that I’m proudest of when I say we’ve built it is the ride picker that we drop into our partners’ apps. A really beautiful way of choosing the Uber ride that you want to take, sitting basically on top of the rails that we’ve built for everything else.

Software Development Best Practices at Button

Derrick:
Referring to your engineering blog post, there was a blog post from October about some best practices. What kind of best practices have you adopted at Button?

Chris:
Yeh, so I was super excited that you guys reached out about this. I think that one of the best things that you can do when you try and adopt best practices is actually to just try and define what it means to you. And to write it down, maybe share it publicly, maybe share it privately, or just like set it as your desktop wallpaper. But really commit to what that means to you. And so that blog post was originally an internal GitHub Wiki that I’d written basically for myself as the only person writing code at Button at the time. And then it translated to the blog post that you see today. So in terms of the best practices that we have right now, we try and start with really lofty goals that I’ve outlined in that blog post. Like 100% code test coverage, everything being code reviewed. Kind of stuff that you hear that’s very standard, but being dogmatic about it is rare. And starting with the goal of being extremely obsessive with it, with the acceptance that, you know, you’ll only get 80-85% there because sometimes you just need to ship something. You know honestly, it’s terrible to say this, but sometimes you just can’t be bothered, like sometimes you just can’t be bothered to spend that extra five hours to get that extra 10% code coverage or something. But you set these goals and if you make it 80% of the way there then you feel like you’re going to be better off. So one of the most significant of the ones we follow absolutely religiously is code review. So we code review everything, no matter how small the change is. Just to get two sets of eyes on it. I would ask pretty much anyone to code review my code who has any context on coding at all. Because it’s amazing what new people will see that you just don’t. Even if they don’t have much context on what it is that you’re trying to do. Some things are just obvious, but you just don’t see them.

I would say somewhere between 30% and 40% of new code that was being written at Venmo was actually being contributed to open-source

Code Reviews

Derrick:
Yeh, that’s one of the great benefits of code reviews. So it sounds like everything gets code reviewed for you there. Though, you know, maybe not tagging specific changesets. Would you say something like that gets code reviewed? Probably not?

Chris:
So you mean actually like creating the tag. So for that stuff we try and follow a pretty strict branching process. So anything that’s being marked as ready for a particular release will go down to a feature-release branch for that feature, and then that branch will be end-of-lifed with the last commit that increments the version number and a custom tag created. So we try and create some process around that, because it’s kind of amazing how many times you follow a really robust process to 99% of the way there, and then a tiny little thing at the last minute, like how we do code signing or how you implement a version number, causes an issue that’s very real. So we try and go that whole mile.

Derrick:
It sounds like you just do 1 on 1 code reviews, in the sense that there’s one other person looking at it. Do you ever find yourself in the situation where you’re using more than one person?

Chris:
Yeh, so this is a new and unique problem to us now, now that we’re actually growing. So originally there were two of us, and so if we were reviewing code, it would be the other person. We’re now at four and looking to grow more in the coming months too. And so we try and create sideways visibility wherever we can. So we have some concept of assigned reviewers, but by default everything communicates via groups that go out to all of our engineering teams, so people know what pull requests are open, and there’s a room for new requests so people can review things. And we try and encourage just an awareness of what’s going on and the code culture in each of the codebases. The rate of people who aren’t actually working on a codebase contributing to its code reviews is really low, and I think that’s simply because everyone is so heads-down. I would hope to see, and certainly whilst I was at Venmo we would massively encourage, pull requests which would have half a dozen people commenting on them. And what’s interesting is, your pull requests then become one of your biggest signals into your style guides and into your conventions. Because you’ll start discussing things and you’ll realise that there is no codified way of doing that. And it’s something people clearly feel strongly enough about to pull down the pull request, so you should start to think about how you codify that into your style guides.

Derrick:
It’s really nice to hear that you value code reviews, it’s something that a lot of people, you know, aren’t sure what to do about yet.

Chris:
The thing is just to start. Have someone look at your code, and then as soon as you take off the hat that says ‘I want to get this committed as soon as possible’ and put on the hat that says ‘I want to protect the codebase’, which is the hat that you should be wearing when code reviewing, then you just start seeing benefits. Even if it’s just you who can take off the hat and put on the other hat. I’ll routinely review my own pull requests before anyone else, to make sure there’s nothing that I’m going to be embarrassed about.

I would rather have a 70% completion rate from sprints with zero hot fixes, than 100% completion with even one hot fix

Building Like It Is Open Source

Derrick:
It sounds like your experience in the past has helped you to move really quickly here at Button. Is there something sort of specific from your experience at Venmo that has helped you here?

Chris:
Yeh, so at Venmo we built with open source in mind with pretty much anything that we would do. It’s something of an off-the-cuff estimate, but I would say somewhere between 30% and 40% of new code that was being written at Venmo was actually being contributed to open source, whether that be in one of our container libraries or one of our core SDKs. We tried to very much run a kind of eat-your-own-dogfood principle with our SDKs. So the Venmo app is built on the Venmo SDK that anyone can build on top of. So when you start thinking about open source, it guides many of the principles that you use. Because honestly then you have broad visibility of your code and your code quality, and open standards and documentation inside of code are a must-have, rather than a nice-to-have. And then the other thing, and this sounds kind of hyperbolic, but just an absolute focus on quality. You just can’t ship anything that’s not ridiculously high quality, because there are always quality problems that you’re not aware of, so you need to find all of the ones that you can and fix them.

Failure and Morale

Derrick:
Yeh, as you’re pretty new in this space, quality is important to your reputation.

Chris:
Yeh, and it becomes a lot more apparent when you have problems. Like, I remember looking into Crashlytics at Venmo, and it takes only a very, very small percentage of sessions crashing before that number becomes very depressing. And you have to stop everything and get into hot-fix mode.

Derrick:
Yeh, right and we all know the sort of snowball effect hot fix mode has.

Chris:
Like not just on software quality and velocity, also just on morale. It sucks to be hot fixing stuff.

Derrick:
Yeh, because you know, then you have monitoring 24/7, your cellphone’s going off and you’re not getting family time or whatever. So you’re right, it’s not just code but team morale which is important too.

Chris:
Yeh. One of the things that I think commonly gets mistaken for failure inside organisations that run Agile is moving things off at the end of a sprint. I of course tend to agree with that, but what I do say is, when you have to hot fix, that’s failure. I would rather have a 70% completion rate from sprints with zero hot fixes, than 100% completion with even one hot fix.

Moving Faster with Best Practices

Derrick:
Starting with best practices early on, did that sort of restrict how fast you can move at all?

Chris:
Yeh, so there’s an overhead. So for the first several weeks of trying to build Button, I didn’t really build anything. The truth is that once you’ve done that a few times, once you’ve set up CI and you’ve set up the tools of the trade, that stuff isn’t so expensive. You should definitely be doing it inside of new projects. The truth is, for every piece of overhead, or for every piece of friction that exists along the way, that makes you a little bit slower as you’re doing things, it more than gets made up for at the end. So, I mention this in the post, but this is something that I’ve noticed in software projects where it’s true that often the last 20% really does take 80% of the time. And I’ve found that best practice can reduce that 80% to something more like 40% or 50%. Because when you’re not building with best practices, everything you’ve moved across the Kanban board, I’ve found, is never truly done. It’s either not quite to spec, not quite right. Or there’s a quality issue: it’s not working 100% correctly, or there’s, you know, something you’ve missed in code coverage. The savings at the end allowed us to get our product to market faster than I think if we had just started coding, never done a test and never reviewed any pull requests, because it does pay off a lot.

Derrick:
Building with best practices and emphasising the time to do it, emphasising that the last 20% of the task is really 80% of the work. Did you find that you sort of had to sell that to the rest of the business?

Chris:
There are always moments, where it does seem to other people that you could just get on with it and stop worrying about this stuff. I got really fortunate, in that both of my co-founders, or all of my co-founders, kind of get it and are genuinely respectful of deferring to us on matters of engineering. So I think we have a really healthy balance between engineering and business. I’m really fortunate to have partnered with a couple of big business heavyweights and fortunately we haven’t become this big business run company where we build stuff to spec and don’t care about engineering. Our platform is our product and our SDKs are our product, and we invest very heavily in things that we want to maintain for years to come.

Derrick:
Right, and the reason you sort of have that perspective is not only to be around as a business, but because you also have that open source mentality. You know, you’re building for open source in a sense.

Chris:
Yeh, and while we haven’t gotten the chance to do a lot of that yet, the way that we’ve designed a lot of what we have built allows us to very soon start to open source quite a number of projects.

Benefits of a Best-Practice-Led Approach

Derrick:
Six to seven months in now for Button, it sounds like treating things as open source and putting in those preferred best practices is really working for you.

Chris:
Yeh, we’ve had lots of benefits. I’d say two key ones. So there’s a difference between building a product and taking a product to market. And so once you’ve built the product you have to take it to market. And so over the last couple of months we’ve been talking to a lot of partners, creating a lot of integrations, and I think that the way that we built really helped us. Firstly, when you step back and think about things first, your design is much more modular and intuitive. And secondly, it’s fully documented. So creating our integration guides, the documentation that we needed to be able to share with developers in order to do integrations, was significantly easier because we had a lot of that stuff built already. The other main way is that with any software project you change along the way. You know, you switch focus a bit, new shiny things come at you, old shiny things become less shiny. And along the way we’ve made several course corrections in what we really want to build and what we think is really important. And it turns out that we’ve built a very modular framework, so the core of what we created never stopped being useful. And I think that when you think about building open source products, and about the layers of maintainability, this approach lends itself to doing that. So I genuinely believe that if we hadn’t gone down this path, then some of the shifts in what we were thinking is important, or what we think is the highest priority, would have basically meant we would have had to start again. Instead, the way that we approached it gave us this very modular, layer-based approach, which meant that we could just think about changing the interface versus needing to re-do the whole thing. So yeah, massively beneficial.
Which is why I chose to share the document publicly. Because originally when you start, you’re like, I kind of have an idea; you think, if I live really strictly by these things then my life is probably going to be better. And in the end I was like ‘oh wow’, it really kind of was better.

Derrick:
Thank you, for your time today, I really appreciate it!

Chris:
Awesome, thanks guys, it has been a pleasure.

Derrick:
Likewise Chris, take care.

 

Looking for ways to review your code? Try Kiln Code Reviews.

9 Integration Testing Do’s and Don’ts

December 1st, 2014 by Andre Couturier

Integration tests check that your application works and presents properly to a customer. They seek to verify your performance, reliability and of course, functional requirements. Integration tests should be able to run against any of your developer, staging and production environments at any time.

Writing good tests that prove your solution works can be a challenge. Ensuring that these tests perform the intended actions and prove the required outcomes requires careful thought. You should consider what you are testing, and how to prove it works – both now and in the future. To help you create tests that work and are maintainable, here are 9 Do’s and 9 Don’ts to consider:

When Creating Integration Tests Do…


1. Consider the cost/benefit of each test

Should this be a unit test? How much time will it save to write this test over a manual test? Is it run often? If a test takes 30 seconds to run manually every few weeks, taking 12 hours to automate it may not be the best use of resources.

2. Use intention-revealing test names

You should be able to work out what a test does from its name, or at least get a good idea.
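As a small illustration (the `normalize_username` function and both test names are invented for this sketch), compare a vague name with an intention-revealing one:

```python
def normalize_username(name):
    # Toy function under test; its name and behaviour are made up here.
    return name.strip().lower()

# Vague: the name gives no clue which aspect of usernames is checked.
def test_username():
    assert normalize_username("  Alice ") == "alice"

# Intention-revealing: the name alone documents the expected behaviour.
def test_normalize_username_strips_whitespace_and_lowercases():
    assert normalize_username("  Alice ") == "alice"
```

When the second test fails, the name in the test report already tells you what broke, before you open the test body.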

3. Use your public API as much as possible

Otherwise it’s just more endpoints and calls to maintain when application changes are made.

4. Create a new API when one isn’t available

Rather than relying on one of the Don’ts below.

5. Use the same UI as your customers

Or you might miss visual issues that your customers won’t.

6. Use command line parameters for values that will change when tests are re-run

Some examples include items like site name, user name, password etc.
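One way to sketch this in Python, using the standard library’s `argparse` (the flag names and values here are illustrative, not from the article):

```python
import argparse

def parse_args(argv=None):
    """Read environment-specific settings from the command line."""
    parser = argparse.ArgumentParser(description="Integration test settings")
    parser.add_argument("--site-name", required=True,
                        help="base URL of the environment under test")
    parser.add_argument("--user-name", required=True,
                        help="account to run the tests as")
    parser.add_argument("--password", required=True,
                        help="password for that account")
    return parser.parse_args(argv)

# e.g. run-tests --site-name https://staging.example.com \
#                --user-name alice --password s3cret
args = parse_args(["--site-name", "https://staging.example.com",
                   "--user-name", "alice", "--password", "s3cret"])
print(f"Testing {args.site_name} as {args.user_name}")
```

A CI job can then pass one `--site-name` for staging and another for production, with no change to the test code itself.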

7. Test using all the same steps your customers will perform

The closer your tests are to the real thing, the more valuable they’ll become.

8. Return your system under test to the original state

Or at least as close to it as you can. If you create a lot of things, try to delete them all.
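A sketch of that clean-up discipline using Python’s `unittest`, with a made-up `FakeClient` standing in for whatever API your tests drive: everything a test creates is recorded, then deleted again in `tearDown`.

```python
import unittest

class FakeClient:
    """Stand-in for a real API client; exists only for this illustration."""
    def __init__(self):
        self.items = set()

    def create(self, name):
        self.items.add(name)

    def delete(self, name):
        self.items.discard(name)

class ProjectTests(unittest.TestCase):
    def setUp(self):
        self.client = FakeClient()
        self.created = []  # record everything we make, so we can undo it

    def create_item(self, name):
        self.client.create(name)
        self.created.append(name)

    def tearDown(self):
        # Delete in reverse order, in case later items depend on earlier ones.
        for name in reversed(self.created):
            self.client.delete(name)

    def test_created_item_is_listed(self):
        self.create_item("demo-project")
        self.assertIn("demo-project", self.client.items)
```

Because every test funnels its creations through `create_item`, `tearDown` can restore the original state even when a test fails partway through.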

9. Listen to your customers and support team

They will find ways to use your systems that you will never expect. Use this to your advantage in creating real world tests.

When Creating Integration Tests Don’t…


1. Write an integration test when a unit test will suffice

It’ll be extra effort for no benefit.

2. Use anything that a customer cannot use

Databases, web servers, system configurations are all off limits. If your customer can’t touch it, your tests have no business touching it either.

3. Access any part of the system directly

Shortcuts like this just reduce the quality of your tests.

4. Use constants in the body of your tests

If you must use constants, put them in a block at the top of your test file, or a configuration file. There is nothing worse than having to search through all your source files because you changed a price from $199.95 to $199.99.
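A minimal sketch of that constants block (the product name and price are made up for illustration):

```python
# --- Test configuration: change prices and names here, nowhere else ---
STANDARD_PRICE = "199.99"
PRODUCT_NAME = "FogBugz"
CURRENCY = "USD"

def price_label():
    # Stand-in for whatever the real test would read off the page.
    return f"{PRODUCT_NAME}: {STANDARD_PRICE} {CURRENCY}"

def test_price_label_shows_current_price():
    # The test references the constant, so a price change is a one-line edit.
    assert STANDARD_PRICE in price_label()
```

When the price changes, only the block at the top is edited; no test body needs to be touched.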

5. Create an internal only API

Unless necessary for security or administration.

6. Create an internal only UI

You’re supposed to be testing what the customer will see after all.

7. Make your test too complex

No matter how brilliant your test is, keep it simple. Complexity just breaks later. If you are finding it hard to write, it will be hard to maintain too.

8. Test more than one thing

Stick to what you need to test, if you try to do too much in one test it will just get more complex, and more fragile.
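A hypothetical illustration: instead of one sprawling test that signs up, logs in and asserts everything at once, each behaviour gets its own small test (the `sign_up`/`log_in` functions and the dictionary store are stand-ins for a real system):

```python
users = {}  # stand-in for the system under test

def sign_up(name):
    users[name] = {"logged_in": False}

def log_in(name):
    users[name]["logged_in"] = True

# Each test checks exactly one behaviour, so a failure
# points straight at what broke.
def test_sign_up_creates_account():
    users.clear()
    sign_up("alice")
    assert "alice" in users

def test_log_in_marks_user_as_logged_in():
    users.clear()
    sign_up("alice")
    log_in("alice")
    assert users["alice"]["logged_in"]
```

If login breaks, only the second test fails, and the report names the broken behaviour directly.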

9. Leave the test system in a bad/unknown state

This means a broken or unusable site, database or UI.

 

To learn more about Testing and QA at Fog Creek, watch the following video:

How Fog Creek Got Started

November 20th, 2014 by Gareth Wilson

Starting out as a consulting company in 2000, Fog Creek was founded with the goal of creating the best place for developers to work. The video covers the early years of Fog Creek. Hear from our founders, Joel Spolsky and Michael Pryor, about how they navigated the Dot-com crash and bootstrapped the company into a growing, product-based business.

Effective Code Reviews – 9 Tips from a Converted Skeptic

November 17th, 2014 by Gareth Wilson

I knew the theory. Code reviews help to:

  • Catch bugs
  • Ensure code is readable and maintainable
  • Spread knowledge of the code base throughout the team
  • Get new people up to speed with the ways of working
  • Expose everyone to different approaches

Or, they’re just a giant waste of time. At least, that was my first impression of code reviews.

I was the new guy, a recent grad, developing plugins for a software company in London.

Over time I had to submit blocks of identical or similar code. They would get reviewed by the same poor, put-upon guy (“he’s the best at it”, my manager told me. No good deed…). Yet each review would come back picking at something different. It seemed needlessly picky and arbitrary.

Worse still, reviews would take days, if not weeks. By the time I got my code back I could hardly remember writing it. It wasn’t the guy’s fault. He’d asked for a senior dev, but had gotten me. He was sick of dealing with the mistakes every inexperienced developer makes, and code reviews were his way of exorcising that frustration.

Add to this the time lost in syncing the different branches, the context-switching… I was not a fan, nor were the rest of the team and it showed.

Skip forward a few years though and I find myself nodding along whilst reading a tweet quoting Jeff Atwood:

“Peer code reviews are the single biggest thing you can do to improve your code.”

What I had come to appreciate in the intervening years is that it wasn’t that code reviews were bad. Code reviews done badly were. And boy, had we been doing them badly.

I had learned this the hard way. And it certainly didn’t happen overnight. Although on reflection, code reviews have saved me from more than a few embarrassing, build-breaking code changes! But after I had worked elsewhere, I gained experience of different and better ways of working. This gave me the opportunity to see first-hand the benefits of code reviews that I had dismissed before. So now I consider myself a converted skeptic.

So that you can avoid such pains, check out our video and then read on for tips that will skip you straight to effective code reviews.

9 Code Review Tips

For everyone:

  • Review the right things, let tools do the rest

    You don’t need to argue over code style and formatting issues. There are plenty of tools which can consistently highlight those things. Ensuring that the code is correct, understandable and maintainable is what’s important. Sure, style and formatting form part of that, but you should let the tool be the one to point out those things.

  • Everyone should code review

    Some people are better at it than others. The more experienced may well spot more bugs, and that’s important. But more important is maintaining a positive attitude to code review in general and that means avoiding any ‘Us vs. Them’ attitude, or making reviewing code burdensome for someone.

  • Review all code

    No code is too short or too simple. If you review everything then nothing gets missed. What’s more, it makes it part of the process, a habit and not an afterthought.

  • Adopt a positive attitude

    This is just as important for reviewers as for submitters. Code reviews are not the time to get all alpha and exert your coding prowess. Nor do you need to get defensive. Go into it with a positive attitude of constructive criticism and you can build trust around the process.

For reviewers:

code_review_in_comfort

  • Code review often and for short sessions

    The effectiveness of your reviews decreases after around an hour. So putting off reviews and doing them in one almighty session doesn’t help anybody. Set aside time throughout your day to coincide with breaks, so as not to disrupt your own flow and help form a habit. Your colleagues will thank you for it. Waiting can be frustrating and they can resolve issues quicker whilst the code is still fresh in their heads.

  • It’s OK to say “It’s all good”

    Don’t get picky, you don’t have to find an issue in every review.

  • Use a checklist

    Checklists ensure consistency – they make sure everyone is covering what’s important and common mistakes.

For submitters:

  • Keep the code short

    Beyond 200 lines, the effectiveness of a review drops significantly. By the time you’re at more than 400 lines, reviews become almost pointless.

  • Provide context

    Link to any related tickets, or the spec. There are code review tools like Kiln that can help with that. Provide short, but useful commit messages and plenty of comments throughout your code. It’ll help the reviewer and you’ll get fewer issues coming back.

 

Register Now for ‘Code Reviews in Kiln’ Webinar

Join us for our next live ‘Code Reviews in Kiln’ webinar. This webinar will help first time or novice users learn the basics of Code Reviews in Kiln.

We’ll cover:

  • What are Code Reviews
  • Why use Code Reviews
  • When to use Code Reviews
  • What to look for during Code Reviews
  • Creating a Code Review
  • Commenting and Replying on a Code Review
  • Working with Existing Code Reviews
  • Code Review Workflow

Secure your spot, register now.

Scaling Customer Service by Fixing Things Twice

November 10th, 2014 by Gareth Wilson

As a bootstrapped company we’ve always had to work within a budget and avoid unnecessary costs without harming our mission. One area that has the potential for suffering in the face of limited budgets is Customer Service. When a company is growing, customer service can suffer or it can grow to consume a considerable amount of your budget. We’ve been able to grow our customer base by more than 10 times, maintaining a high level of customer service while keeping our support costs manageable.

How? By Fixing Things Twice using the 5 Whys.

Check out the video and read more about how these techniques can help you scale customer service without the cost.

Fixing Things Twice

When a customer has a problem, don’t simply resolve their issue and move on. Instead, take advantage of the issue to resolve its underlying cause. This is Fixing Things Twice.

We think that for each customer issue, we have to do two things:
1. Solve the customer’s problem right away
2. Find a way to stop that problem from happening again

How we solve the first depends on the specific problem at hand, but to resolve the second we use the 5 Whys method.

Resolving Root Causes with 5 Whys

The 5 Whys is a problem-solving method and form of root-cause analysis. It involves recursively asking the question ‘why?’ five times when faced with an issue. Doing so enables you to get to the bottom of it, allowing you to fix and stop it from happening again.


This technique was recently popularized by Eric Ries in his book ‘The Lean Startup’. Yet it was originally developed at Toyota, where it is attributed to Sakichi Toyoda. Over the years its use has spread beyond the motor industry to software development and other areas such as finance and medicine. It’s a recommended technique used by the UK’s National Health Service, for example.

Let’s take a look at how it works using a hypothetical situation:

  • The initial problem – The machine won’t start
  • 1st Why? – There’s no power
  • 2nd Why? – The power supply is not working
  • 3rd Why? – The capacitor has broken
  • 4th Why? – The capacitor was old but had not been replaced
  • 5th Why? – There’s no schedule to check for ageing parts in the power supply units. This is the root cause.
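The chain above can be sketched in code. This is a hypothetical helper (not a real tool, just an illustration of the method) that walks a list of ‘why?’ answers and treats the last one as the root cause:

```python
# A minimal sketch of a 5 Whys session as data. The problem and
# answers are the hypothetical machine example from the text.
def five_whys(problem, answers):
    """Walk up to five 'why?' answers; the last one is the root cause."""
    chain = [problem] + list(answers[:5])
    return chain[-1]

root = five_whys(
    "The machine won't start",
    [
        "There's no power",
        "The power supply is not working",
        "The capacitor has broken",
        "The capacitor was old but had not been replaced",
        "There's no schedule to check for ageing parts",
    ],
)
print(root)  # the fifth answer is the root cause
```

The point of the exercise is that the fifth answer is usually a process failure you can fix, not a one-off fault.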

This technique is especially useful when you’re able to focus on processes as causes, like in the example above. Whilst other factors like time, money and resources might play their part, they’re beyond our immediate control. 5 Whys quickly exposes the relationship between the various causes of a problem – yet it does not require any analysis tools. This means it can be adopted and applied throughout an organization. We’ve extended its use beyond Support to include System Administration too. Watch the video above to see an example.

What does Fixing Things Twice mean for Support?

Here are a couple of examples of using Fixing Things Twice, provided by Adam Wishneusky, a Technical Support Engineer here at Fog Creek:

We had people asking a lot about our security practices. We fixed the problem once by answering each customer directly. Then we fixed it twice by putting up public docs at http://www.fogcreek.com/security/

Another example from a few years ago: we found that FogBugz wouldn’t let you create a new LDAP-authenticated user if one already existed in the database with the same LDAP UID, even though that user had been deleted. We showed customers how to manually fix the data to get them working, but we also pushed the dev team to fix the bug.

From this example you can see that Support must have access to the Development team. It’s often the only way the underlying issues will get fixed.

It takes commitment too – it’s easy to skip the second fix and it’s tempting to do so as it means spending more time on any one issue. But it’s a false economy to do so. When the issue crops up again and again, you’ll have to spend even more time on it.

If you stick to Fixing Things Twice then over time all the common problems get resolved. This frees up your Customer Service team to look into the unique issues that need more time. Resolving your most frequent issues overcomes the support overhead that typically comes with adding new customers.

The Only Three Types of Meeting You Should Hold

November 6th, 2014 by Gareth Wilson

A perennial problem within software development teams is communication. Making a software company fast, nimble and effective is partly a function of keeping communication to the minimum that still works. This means finding the right balance: too much and you can’t get any work done, too little and no-one knows what to do, or work is uncoordinated.

Meetings, in particular, can be a real time sink. So to keep the balance right we actively try to reduce the number we hold. There are only three types of meeting that work for us:

1) Mass Communication Meetings

These are meetings where you have a lot of information to communicate to a large group of people. After all, you don’t just want to send some epic email opus that no-one will read (Sorry Satya!). We have a regular ‘Town Hall’ meeting for example. In these we all come together, both office and remote staff, in the cafeteria and via video link. We hear about some pre-planned topics, suggested and organized in a Trello board. These work well because everyone hears the same thing and they only cover issues relevant to everybody.

2) Brain-storming Meetings

We use these when we just want to come up with a lot of ideas, fast. They aren’t great for discussing the ideas in detail, but can work for pure idea generation.

3) Stand-up Meetings

These are great for keeping teams on track, by providing a brief update on what you did, what you’re working on and flagging up any blockers to progress. These are especially useful in remote teams, by bringing people together you help to form that team bond. It’s important to keep them focussed though, so we’re strict and avoid over-runs and people going off-topic.

“If you have a meeting with 8 people in the room that lasts an hour, that’s a day of productivity that was lost” – Joel Spolsky

And that’s it. For everything else we find a more effective alternative. That might mean linking up with people on chat, ad-hoc conversations via Hangout, and extensive use of collaborative software, like FogBugz, via cases, discussion groups and the Wiki. We never use a meeting, for example, when we’re trying to solve a difficult problem. Getting people together in a room just means that no-one can think things through properly. Such meetings might generate a lot of discussion, but they lead to few real conclusions.

When we do have a meeting, we also limit the number of people invited to just the ones that have to be there. Those with only some relevance can catch up with the minutes added to the Wiki at a time that suits them.

We’ve found that sticking to this really saves us time. What ways have you found to minimize communication overhead? Tweet them to @fogcreek and we’ll retweet the best ones.

Improve Your Culture with these Team Lunch Tips from 20 Startups

November 4th, 2014 by Gareth Wilson

The importance of eating together has long been recognized in positive child development and strengthening family bonds. Eating together is a great equalizer and it can be a good way to help form better and more valuable relationships amongst teams of co-workers too.

Daily Team Lunches

We swear by daily team lunches here at Fog Creek. Having stumbled across their positive impact at Microsoft, Joel Spolsky, one of our co-founders, has said that whilst “there’s a lot of stuff that’s accidental about Fog Creek… lunch is not one of them… The importance of eating together with your co-workers is not negotiable, to me.”

 

I’m similarly enthusiastic having seen the benefits myself whilst working in Support at a previous company. In my role, I was the bearer of bad news to the Development team – sending and escalating the bugs raised by customers. I had little interaction with the team beyond raising bugs, so forming positive working relationships with its members could be tough. But this was no longer an issue once we started having team lunches. They were a great way for people from different functional areas of the company to come together. We’d chat about both work and non-work stuff, and it meant that we could get to know each other as people and not just colleagues.

Long lunch tables
At Fog Creek, we deliberately have rows of long tables in our cafeteria. Having round tables means that when looking for a place to sit, you have to pick a group of people. But with long ones you just go and sit at the end of the row. You end up speaking to different people every day, helping to avoid cliques. It’s good for new hires too – they don’t have to sit alone or force themselves upon an unfamiliar group.

Like StumbleUpon, AirBnB, Eventbrite and others, we have lunch catered. It’s served up at the same time every day so everyone knows when there will be people around to go eat with. For the foodies amongst you, we share photos of some of the tasty dishes on our Facebook page.


Others, like MemSQL and Softwire have hired in their own chefs. And of course there’s the likes of Facebook, with their own on-site Pizza place, Burger bar and Patisserie, and Fab, who have their Meatball Shop and Dinosaur Bar-B-Que.

It Doesn’t Need to be Expensive

It doesn’t need to be expensive though – you don’t have to provide the food, people can bring their own lunch. The important part is the set time and place to eat together. Make them optional, so that people don’t feel obligated and can get on with critical work if need be.

If space is a problem, then eat out. A group at Chartio, for example, eats together at a different place in San Francisco every day.

Can’t do it every day? No problem. Take Huddle: they have a team lunch once a week. FreeAgent do too, and they keep things interesting by picking a different cuisine from around the world each time.


Stay Productive

TaskRabbit, Softwire and Bit.ly have their ‘Lunch and Learn’ sessions. One team member presents on a particular topic of interest, whilst the rest munch away. Twilio use their team lunches for onboarding new hires, who demo a creation using their API to colleagues in their first week.

Small Groups or the Whole Team

It doesn’t have to be the whole team either. Warby Parker, for example, has a weekly “lunch roulette,” where two groups of team members go out and share a meal. HubSpot allow any employee “to take someone out for a meal that they feel they can learn from”.

Get Creative!

There are many creative ideas, too. Shoptiques provides lunch with its Book Club, LinkedIn gets in food trucks every Friday, and GoodbyeCrutches have themed lunches – “Jimmy Buffet Day, Smurf Day, and Pirate Day” being amongst their favorites.

No Excuses

You don’t even need to be in the same country! oDesk holds virtual team lunches where employees from the US, Russia, Brazil and India gather together and eat whilst on a Hangout.

So there you go, there’s no excuse to have another sad lunch, sat alone at your desk reading some random blog post…

How have you improved team culture at your work place? Tweet your tips to @fogcreek and we’ll re-tweet the best ones.

Sysadmin Hiring at Fog Creek

November 3rd, 2014 by Mendy Berkowitz

Here at Fog Creek we’ve spent a lot of time honing our recruiting processes. From writing the book on technical recruiting to dealing with masses of resumes, recruiting is woven into the fabric of our company.

Our task-oriented interviewing style works for technical and non-technical positions alike, but for all the ink we’ve spilled about recruiting and interviewing, you’ll notice it is mostly focused on developers.

Interviewing developers, hiring developers, helping developers create awesome software – one might get the wrong impression that Fog Creek is composed solely of developers. But we all know this isn’t true. We have a diversity of skill sets here at Fog Creek and while we mostly hire developers, some of the other recent job listings have been for sysadmins/devops, support engineers, designers, sales, product marketing managers, accountants and receptionists.

As a member of the sysadmin team, I wanted to add some more ink to how we approach hiring sysadmins here at Fog Creek. (I’m using the term System Administrator because that is what I was called when I started my career. At Fog Creek our team responsibilities are some amalgamation of what is colloquially called SRE or DevOps or Sysadmin.)

What are we looking for in a sysadmin?

Many of the principles of the recruiting process are the same, but the responsibilities of a developer and a sysadmin are somewhat different. Here are some of the things we are looking for when hiring sysadmins.

  • Ownership: If something is broken, fix it. We are a small company and ‘the buck stops here’. There are no other teams we can blame or foist problems on.
  • Can operate independently: We don’t believe in micro management. We need people who can take a high level task and self manage their way to completion with minimal oversight.
  • Find and solve problems: We have a term we’re rather fond of, “Fix it Twice”. Reacting to problems after they occur is an important skill, but we must also think of ways to fix things so the problem doesn’t happen again.
  • Create reasoned architectures: We aren’t biased towards any particular technology. We want to use the best tool for the job. This requires us to evaluate the pros and cons of the latest and greatest while being familiar with the old standbys.
  • Ultimately smart and gets things done.

Our Interview Process

Our interview scheme has three parts:

  • Sorting
  • Screening
  • Interviewing

Sorting

Sorting through resumes and candidates can be time consuming. I wish there were a hat to help us. We spend a couple of minutes per applicant, looking for the following markers:

  • Can they write well?
  • Did they read the job posting?
  • Have they worked on tough problems?
  • Did they introduce something at work?
  • Are they passionate about technology?
  • Did they work on something that requires in-depth knowledge of the subject?
  • Do they have experience with high pressure incidents or high risk decisions?

Most of these questions are subjective. This part of the process is just to separate out the blatantly unqualified or obviously disinterested. I can’t stress enough how important the cover letter can be in the overall process. A boilerplate “Dear Sir or Madam” letter won’t catch anyone’s eye. Referring to the job description, showing an interest in our company, showing you are just as invested in this process as we are – these are ways to stand out and make people interested in talking to you.

Screening

The next part of the process is screening. Before we invest in a full day of interviews, we conduct phone and code screens.

First, we’ll have a phone conversation with the applicant. As much as we would all love to just interact with machines, most sysadmins also have to interact with people.

  • Can they carry on a conversation?
  • Can they explain what they are working on and why it is (or isn’t) important?
  • Can they take a piece of technology or an interaction that they use every day and explain the technical guts that make it work?

After the phone screen we’ll send the applicant a “take home problem” that involves solving a real-world log analysis problem. The only real constraint we give is time. We provide the candidate with a relatively small log file, an explanation of its format, and its context. They have 24 hours from their chosen start time to come up with a solution and send us their answer and their code. It shouldn’t matter if they send the solution in 5 minutes or in 23 hours and 59 minutes. We look to see whether we can understand their code, whether it works, and whether it returns the correct result.
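The actual take-home problem isn’t public, so as a purely hypothetical illustration, a log-analysis task of this general shape might be answered with a short script like the following (the log format and the question – counting requests per status code – are invented here):

```python
# Hypothetical take-home sketch: count requests per HTTP status code
# in a tiny Apache-style access log. The format and question are
# invented; the real Fog Creek problem differs.
from collections import Counter

def status_counts(lines):
    """Return a Counter of status code -> number of requests."""
    counts = Counter()
    for line in lines:
        parts = line.split()
        # In this format the status code is the second-to-last field.
        if len(parts) >= 2 and parts[-2].isdigit():
            counts[parts[-2]] += 1
    return counts

log = [
    '10.0.0.1 - - [27/Oct/2014] "GET / HTTP/1.1" 200 512',
    '10.0.0.2 - - [27/Oct/2014] "GET /x HTTP/1.1" 404 128',
    '10.0.0.1 - - [27/Oct/2014] "GET / HTTP/1.1" 200 512',
]
print(status_counts(log))  # Counter({'200': 2, '404': 1})
```

What we’d be looking at in an answer like this is exactly what the text describes: is the code readable, does it run, and is the result correct.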

Interview

This is the most common and often the most difficult part of the process. You can find plenty of articles about interview questions, but we wanted ours to be as close as possible to the situations we find ourselves dealing with every day. In the end, you are hiring someone to be on your team. That means they will be doing the same types of tasks as you are. We tried to find generic examples of those tasks and observe how the candidate acts while performing them.

In general, a candidate will talk to five people:

  • The first interview focuses on architecture. The goal is to get them to whiteboard as much as possible. Can they plan out an architecture to solve a problem? Do they cover the components that are needed to support that architecture?
  • Second, we talk about troubleshooting. Can they figure out what is wrong (and fix it)? Can they figure out what is causing the system problem? Can they then write customer communication explaining the problem and what will be done to fix it in the future?
  • The third interview challenges them with learning through implementation. We are always interacting with new technologies. Given a pre-designed system, can they build it even if they aren’t familiar with all the components? How do they approach things they don’t know and learn about them?
  • In the fourth interview they’ll talk to a real live developer in its native habitat. We interact with our developers and support engineers every day. Can they interact with our developers? Can they have a constructive conversation with them about code and ideas?
  • Lastly you’ll have a chat with our founders.

There you have it. This is the process we’ve been using to hire sysadmins at Fog Creek. Think you can make it through this process? Want to work in a great company with awesome people? We’re hiring!

How to Get Started – and Stick with – Usability Testing

October 27th, 2014 by Gareth Wilson

As you’re working on a piece of software, you get to know it well. Perhaps too well. How to complete common actions becomes so obvious to you that you don’t need to think about them any more. This isn’t the case, though, when someone new comes to use your product. So getting your product into the hands of users to test is a key step in its development. But of course, you already know this. Almost everybody does. Yet few people get around to actually doing it, and those that do often stop after an initial release or phase.

Check out the video below featuring usability expert and author of “Don’t Make Me Think”, Steve Krug. Then read on for tips to help you get started, and stick with, usability testing.
 

Usability Testing doesn’t have to be complicated

Keep it simple
Usability testing doesn’t have to be complicated. Upon hearing the term ‘usability testing’, you might begin to imagine a testing lab, with cameras set up to film eyes, faces, clicks and the screen. But it doesn’t need to be like this. All you really need is to find someone, provide something to test and a place to do it.

Take anyone you can get
Test participants don’t need to be familiar with your product. They don’t even have to be from your target audience. So grab a colleague (in our video we roped-in our Office Manager), a friend or anyone else who is around.

You don’t need hordes of people either. In fact, there’s little benefit in testing with more than 5 people.
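The “5 people” rule of thumb comes from Jakob Nielsen’s model of usability testing, in which the proportion of problems found with n testers is 1 − (1 − p)^n, where p is the chance a single tester hits a given problem (about 0.31 in Nielsen’s data). A quick sketch shows why returns diminish so fast:

```python
# Proportion of usability problems found with n testers, per Nielsen's
# model: 1 - (1 - p)^n, with p ~= 0.31 (his empirical per-tester rate).
def problems_found(n, p=0.31):
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 10):
    print(n, round(problems_found(n), 2))
```

With p ≈ 0.31, five testers already surface roughly 85% of problems; doubling to ten testers buys you comparatively little, which is why frequent small tests beat rare big ones.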

Test early
You don’t need to have built something to begin user testing. Some sketches, linked screenshots or a simple prototype will do. The earlier you start the better, so you can avoid building unnecessary stuff later. A paper prototype can be enough to begin and start getting useful feedback.

Don’t get caught up with design preferences and feature requests
Your testers come with pre-conceived ideas of what’s expected of them. Most common is the idea that you’re after design feedback, in particular on colors. Layout and design issues are important, but things like color are rarely your major problem. Unless people have a strong reaction (using words like “puke”), it’s often safe to ignore such feedback. The same goes for feature requests. You will already have loads of your own. So unless their idea immediately makes sense to all, you can skip these too.

Start with a few theories
Generate a few theories about why something important is happening – such as why users aren’t converting, why they’re not using a particular feature, or how they might complete a particular task. Think of a scenario to test this, and then run that test. After each test, debrief – discuss amongst yourselves what you observed and why that might be.

Test once a month
Once you have carried out a test, make some changes, create more theories and set about testing them too. It doesn’t need to be too often – just one morning, once a month. This will provide a constant flow of feedback that will keep you on top of your major issues.

Once you’re into it, use tools to help
There are a bunch of useful tools that can help you with usability testing. Use Five Second Test to get quick feedback on UI, web pages and sketches; InVision for creating prototypes; and LookBack (mobile) and SilverbackApp (desktop) to get in-app user feedback.

If you try out these tips then there’s really no barrier to you starting and sticking with usability testing. Don’t forget to check out the video above to hear more on usability testing from Steve Krug and some of the Creekers.


Looking for more?

Visit the Archives or subscribe via RSS.