Nurturing a High Performing Engineering Team at Digital Creators

Daragh Byrne
Published in Digital Creators
7 min read · Jun 9, 2021


I’ve been lucky enough to have worked in a few high-performing software development teams over the years. There are a few key indicators: work is clearly defined, and tickets get worked through in days rather than weeks. Demos go well. Stakeholders are happy. Developers are not overworked to the point of excessive stress. Everyone is proud of the product, and most people on the team “just know” things are going well.

Recently, the primary team I’ve been working with over here at Digital Creators (five developers, two product designers and a product lead) has been humming. We’ve all been feeling pretty good about progress and quality, and the client feedback has been stellar. A question came up in one of our retros the other day — why are things going so well? Which is a nice question to be able to ask!

Spoiler alert — it didn’t happen overnight. We’ve gotten here by increments, identifying the highest-impact changes to our minimum viable process and applying them sprint by sprint. This is how we did it.

Inspect and adapt mindset

The team has been willing to question its own performance in an honest way. We’re all curious people who have a desire to do a good job (and we want to enjoy it!), so our retrospectives have been open, honest places where we can all bring our experience to the task of optimising our efforts. It’s important to note that we never skip a retrospective. It’s the first thing we do on the Monday of a new sprint, and has become non-negotiable, because we’ve all seen the value.

We have a “teach-don’t-blame” attitude towards one another, which allows everyone to help everyone else in an environment of trust and respect. When we see an opportunity to apply an improvement, we do so enthusiastically, with an empirical mindset which is just as open to failure as success.

Relentless refinement

So what, exactly, does this ticket mean? When the answer to this question isn’t clear, developers can get quite caught up in back-and-forth conversations and half-made decisions. At some point, somebody has to make the precise call on whether the button turns red or green when you click it. Or exactly how many flying monkeys should be released.

Our product lead and engineering lead devote a significant amount of time each sprint to working with our client partners to refine the product backlog. In general, they think about the backlog at three levels: Now, Next and Later.

  • Now means this sprint — even with quite a bit of up-front focus, there will still be questions once a ticket is in a developer’s hands. Within-sprint refinement is sometimes necessary, so we prioritise it, especially at the beginning of our sprints.
  • Next means next sprint — we ensure we have enough detail on next sprint’s tickets that we can make a reasonable commitment based on accurate-enough estimates.
  • Later refers to the product roadmap — what the mid-to-long term content of upcoming sprints looks like.

The investment in refinement pays dividends — the acceptance criteria on our tickets tend to be super-clear by the time a developer picks them up, and locked down even further by the time the work is handed over to QA.

If there’s one takeaway from this, it’s “quality starts with your user stories”.

Show don’t tell culture

Before raising a pull request, developers add evidence to the ticket that the code actually works: a screenshot and/or a video.

This has two huge advantages:

  • The act of preparing the evidence forces the developer to test thoroughly. I’ve lost track of the number of times I’ve spotted problems in my own code by doing this.
  • The people reviewing and testing the code have some idea what it’s actually supposed to do!

It may sound like a little extra work for the developer, but we’ve all adapted to the discipline required, and everyone on the team agrees it’s worth it.
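For teams that want to make this habit harder to skip, it can be encoded in a pull request template. A hypothetical sketch (GitHub, for example, picks up `.github/pull_request_template.md`; the wording here is illustrative, not our actual template):

```markdown
## What does this change do?
<!-- One or two sentences, plus a link to the ticket. -->

## Evidence it works
<!-- Paste or link the screenshot/video you attached to the ticket. -->

- [ ] Screenshot and/or video attached to the ticket
- [ ] Behaviour checked against the ticket's acceptance criteria
```

An empty “Evidence” section then stands out in review, which gently enforces the discipline without anyone having to police it.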

Review is a first-class activity

We value code review as far more than read, leave a snarky comment, click approve anyway. We have a code review column on our JIRA board so it’s easy to see which tickets are awaiting review, and when that column starts to fill up we prioritise reviewing code over writing new code.

Done right, code review has several undeniable advantages.

  1. It’s our first line of defence against future tech debt.
  2. It’s an opportunity for developers to learn about areas of the codebase they don’t know yet.
  3. We become aware of potentially conflicting changes or duplicated work before it eventuates.
  4. It’s an opportunity to learn from each other! We’re a team with a wide range of development experience, in terms of platforms and approaches. I’ve learned so much from my colleagues by watching how they approach reviews.

We’ve experimented with different review granularity. Ideally, we’re checking code for correctness (i.e. it meets the acceptance criteria for the ticket) — if a reviewer has the time to check the code out and run it, that’s highly encouraged. The evidence provided in the ticket acts as a guide to what should be tested.

In the event that the reviewer doesn’t know much about the particular part of the system, or there’s a large amount of context required to understand the requirements, they review for code hygiene and potential gotchas at a minimum.

We’re proactive in calling each other out when we’re waiting on reviews, and it’s often the first task of the day after standup. Of course, there are occasions when reviews happen after merge, but we try to keep these to a minimum while being flexible.

Design is tactical and strategic

I’ve often been in a standup and heard the blocker “I’m just waiting for the designs to be finished before I start working on that ticket”.

We solve this on our team by dividing design activities into strategic (relating to an upcoming sprint) and tactical (solving problems on tickets in this sprint). We have a team of two designers who alternate on a sprint-by-sprint basis. One is free to work uninterrupted on getting ahead of the developers by working on designs for next sprint’s tickets. The other is on hand to provide answers and updates for design issues that are caught “in-flight”.

We’ve got a pragmatic approach to design issues that are caught mid-sprint. If the design refinement is minimal, it’s fixed there and then. If the UX needs to be substantially adjusted, but it doesn’t affect the core functionality, we’ll often proceed to release the feature anyway, and put a task into the backlog to update the UX in the subsequent sprint.

Focus on task completion as a team

We do everything we can to unblock tickets at whatever stage they are. We have the philosophy that it’s better to have one ticket completed and ready for deployment than two tickets half done.

In fact, this is the real reason we have a relentless focus on refinement, and treat review as a first-class activity — we noticed that under-refined tickets and unreviewed code were both bottlenecks to task completion, so we go out of our way to emphasise both activities. Of course, your team might operate in a slightly different context and have different bottlenecks, but the principle stands: find your bottlenecks and work together to remove them.

Split stories and keep splitting

We make our user stories as small as they can be while still delivering value. There are good reasons to do this.

  • Small stories are often easier to estimate. There is less ambiguity, and the margin of error on the quantity of work is lower.
  • Small stories mean smaller code reviews.

There are many ways to split stories. Of course, it’s subjective, but the rough guideline of “this will please at least one user if we deliver it” can help.

Realistic commitment through estimation and velocity

One reason stakeholders can be disappointed when something isn’t done in a sprint is mismatched expectations. I’ve found over the years that building trust is a matter of delivering what you say you will, rather than delivering as much as possible — there is a subtle difference.

You need to have some way to measure the quantity of work a team is capable of before you commit them to delivering a particular set of user stories. We’ve been using relative estimation and velocity tracking to make sure we are comfortable with what we commit to. That means the stakeholders often have an accurate picture of what we are likely to deliver, which means less disappointment!

Estimating with story points has taken a little getting used to, but we’re in a rhythm with it now where we all think in terms of relative effort rather than hours. It’s super easy to say: that sounds like it’s almost twice as much work as that other thing.

We track velocity (number of story points that went into the “done” column) per sprint, and after a few sprints doing this we had a fairly good estimate of just what we were truly capable of as a collective.

We are also fairly transparent about found work. When somebody wants to add something mid-sprint, we either absorb it into the buffer we keep for tech debt and surprises, or we have a conversation about what gets dropped.
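None of this needs special tooling — the arithmetic fits in a few lines. A minimal sketch in Python, with hypothetical sprint numbers and a made-up buffer ratio (neither the figures nor the function names come from our actual process):

```python
# A minimal sketch of velocity-based commitment, with hypothetical numbers.
# "Velocity" here is simply the story points that reached "done" each sprint.

completed_points = [21, 18, 24, 20, 23]  # last five sprints (made up)

def velocity(history, window=3):
    """Average completed points over the most recent `window` sprints."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def commitment_budget(history, buffer_ratio=0.1, window=3):
    """Points to commit to next sprint, holding back a buffer
    for found work and tech debt."""
    return round(velocity(history, window) * (1 - buffer_ratio))

print(f"velocity: {velocity(completed_points):.1f} points/sprint")
print(f"commit up to: {commitment_budget(completed_points)} points")
```

Averaging over a short recent window rather than the whole history means the number adapts as the team changes; the buffer ratio is the knob for how much found work you expect.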

We treat estimates as just that: estimates. Building software is inherently complex, and we accept that we don’t always get it perfect. There are sprints when we don’t quite get there with our commitment, and that’s OK. There are sprints when we pull work in from the upcoming sprint, and that’s OK too. We find it evens out over the long term.

Contact points

We’ve constantly refined the amount and style of communication that happens within the team, who are currently mostly working from home.

  • Slack is for creating visibility, status updates and setting up deep dives, for the most part.
  • Technical and scope decisions go in durable writing. Durable means JIRA essentially — stick it on the ticket, or it didn’t happen.
  • Efficient standups are used to identify blockers more than anything else, and we keep them to fifteen minutes by aggressively taking discussions offline (solution-type discussions are basically banned at standup!). Being distributed across time zones means we don’t have full-team standups every day; our engineering team is located in the same city, so we hold an engineering-only standup on the days we skip the full-team one.

Summary

Becoming a high-performing engineering team takes a bit of effort, but I have always found it a worthwhile journey. In my experience, it’s best done incrementally. Follow the pattern of honest reflection, coupled with sprint-by-sprint adoption of better practices. You’re sure to make progress. And remember to have fun along the way!
