I guarantee that the first time you try vertical splitting with a team, someone will say “some stories can’t be split”. There is a kernel of truth in this (obviously there is some level below which a story cannot be split any further), which makes it an easy thing to say - but in practice a team splitting stories almost never reaches that limit. You have to push through the instinctive reluctance to split vertically until it starts to feel natural.
Lately I’ve been spending a lot of time thinking about slicing work. This article is about my struggle with it.
In the last couple of parts of this series I’ve been talking about how we’ve been coping with sprints that went wrong. This time I want to look at a sprint that appeared to go right.
Refactoring is something that you hear about almost constantly in software engineering these days. Agile practices centred around minimal documentation and avoiding big up-front design drive a need to constantly reduce any accumulated technical debt in order to maintain the “sustainable pace”. In this post I wanted to talk a bit about my views on the practice.
In the last post on this topic we talked about some general characteristics we would like to see in an options library, and what some of the existing libraries do.
Last time we talked about a sprint whose burndown was going wrong and the actions we took. One week later, our burndown still looked like this:
It’s Monday morning, the start of a new week and halfway through your sprint. Your burn-down looks like this:
In our retrospective this week I wanted to look at the “definition of done”, but during the opening one of my team brought up some issues with the sprint demos, so we talked about that instead!
Something I find myself doing often is writing small tools that need to take various command line arguments. Many years ago, when I first encountered this problem, I investigated the options (pun intended) and decided I didn’t like any of them - so I wrote my own, which was eventually open-sourced by Warwick Warp. At Pattern Analytics we again wrote a similar library. When I started at Allinea I discovered that their internal options library shared some of the concepts I had used previously. I find it interesting that different people hit the same problems and work through similar solutions - so I thought I’d write a series about it.
I do weekly one-on-one meetings with everyone on my team. I do this because, with hindsight, a regular opportunity to air frustrations and blockers, and to talk about where I wanted my role to go next, is something I’ve wanted in every job I’ve had. It’s perhaps arrogant of me to assume that what I wanted is what my team wants, but I do it anyway - the meetings are not for my benefit; they are for and about helping my team with whatever might be bothering them that week. I do it every week because the only way to get in front of a problem is to know about it as soon as possible. We still do sprint retrospectives as a team, because day-to-day we function as a team - but people are different, and I find the conversations I have one-to-one are markedly different from those we have during the retrospectives.
Many years ago I read an article whose key premise was that all code is in one way or another an API. The original article is lost in the depths of the Internet, and I can’t for the life of me find it (leave a comment if you do manage to find it, I’d be interested in re-reading it!). The article changed how I thought about the code I was writing, so I thought it would be nice to examine the idea here.
Early last month I started a new job as Head of Software at Biosite Systems. Biosite create and manufacture a variety of hardware and software for access control and resource management on construction sites in the UK. This is a significant change in direction for me: until now I’ve always been a software developer first and a manager second. In this role I’m a manager first and a software developer second - and so far the difference has been even more significant than I was expecting.
In his CppCon 2014 talk, Jon Kalb gave a quick summary of how to convert legacy code to exception-safe code. In this post I thought I’d go over his technique, because it is useful to know how to safely and incrementally convert exception-unsafe code to exception-safe code.
In 2014 Jon Kalb gave a three-hour presentation at CppCon on how to write exception-safe code. In the talk, Kalb sets out a series of guidelines on how to write exception-safe code. In this post I’m going to go over those guidelines, with a few small adaptations of my own. If you’re interested in using exceptions in your code I highly recommend the talk; you will find Kalb’s thoughts mirror many of my own on the subject.
Back in 2005, Raymond Chen of Microsoft wrote a piece lamenting that exceptions make a hard problem harder. In this article I want to revisit his thoughts and see how they fit in with what we know 11 years later.
Before I start looking at the drawbacks of exceptions, I want to go over some of the other benefits. To illustrate these benefits I’m going to use a simple function.
In C++ there is one situation where exceptions are the only possible way to report an error - when object constructors can fail. In exception-free code bases this leads to the additional requirement that constructors must always succeed. For certain design patterns, having constructors that always succeed can simplify code - but in this article I want to look at the knock-on effect this decision has on the design of the entire system, and how exceptions actually permit a simpler, more coherent design.
In this post I would like to talk about what I consider to be one of the primary benefits of using exceptions to handle errors. That is, I believe that exceptions improve the flow of the code.
One criticism levelled against exceptions is that they negatively impact performance. In this article I will attempt to address this misconception.
One of the criticisms levelled at C++ is that its large and complex feature set can be difficult to master. As someone who’s been programming primarily in C++ for over 15 years and is still discovering new and exciting aspects of the language, that’s certainly a viewpoint I can relate to.
This post is the TL;DR of my On Testing series.
So far in this series I’ve talked about automated testing. Automated testing is an essential part of what makes sprints viable - because in order for something to be releasable it needs to work, and in order to know something works we (usually) have to test it. Thus, everything I’ve written so far is very useful in Agile environments. In this article I want to touch on the place I think manual testing has in these environments.
Behaviour-driven development specifies that tests of any unit of software should be written in terms of the desired behaviour of that unit. This is good because, as we’ve already established, testing behaviour is what lets our tests stand the test of time.
In this part of my “On Testing” series I talk about what I believe the goals of automated testing are, and how they influence how you should write tests.
Carrying on the testing theme, in this article I’m going to describe “the test pyramid”. The test pyramid is a way of thinking about how tests relate to one another - and how they support building quality software.
Lately I’ve been thinking about testing, so in this series I want to talk about how I approach testing and some of my personal philosophy. I intend to focus on what I believe it takes to make tests work as part of a robust development process.