I keep coming back to this topic after going through several iterations of projects that fall behind or ship late while projecting the optics that everything is fine, until the last minute forces the team to switch gears or request release exceptions.
For the most part, when I was a developer on delivery teams, I didn’t try to “steer the ship” from the project management side. Instead, I tried to take on as many tickets as possible, even on adjacent technologies. If I were doing back-end Node.js development, I would jump into back-end Java development, front-end development, etc.
A couple of years ago, I moved from development into architecture. That lateral move shifted my expected output, the way I look at projects, and my involvement in the development lifecycle.
I feel there are similarities (or dysfunctions) shared across the projects I’ve seen, so I’m sharing them here in the hope that you’ll notice them too and perhaps steer your project into a more stable place.
Most companies prioritize their projects by revenue, impact, or whatever is meaningful to them. So if you’re working on one of the top 5 or 10 projects, you’re probably getting enough priority that when you hit a roadblock, you can effectively raise your hand and get the executive sponsor to clear it for you.
But what happens in spots 11–20? What happens if there’s a “core” team with a limited set of engineers? They might only be able to tackle the top 5 projects, and raising your hand might not get you the unblock you expect.
At that point, you’ll have to be resourceful and negotiate ways to push forward: contribute the development yourself, try to pull the requirement into your own domain, or determine whether the required feature can be deferred until later.
This happens more often than I care to admit, and it usually arises naturally when a new requirement comes in and has to be implemented in both a front-end and a core service.
I see that when this sort of coupling happens, poor release planning was involved, and the release dates don’t line up. One team is unable to thoroughly test and release; all they can do is wait.
Also, when there’s no feature flag system in place, an iterative release might not be possible. The feature has to exist upstream before the front-end becomes releasable, for example.
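To make the idea concrete, here is a minimal sketch of what a feature flag gate can look like. Everything here is hypothetical (the flag name, the in-memory store, and the render function are invented for illustration); a real setup would typically back the flags with config or a flag service, but the shape is the same: the code ships “dark” and is flipped on when the upstream dependency is ready.

```typescript
// Hypothetical feature-flag sketch; names and the in-memory store are
// invented for illustration. A real system would read flags from config,
// environment variables, or a flag service.
type FlagName = "newCheckoutFlow";

const flags: Record<FlagName, boolean> = {
  // Ships "dark": the front-end code can be released before the
  // upstream service exists, then flipped to true later.
  newCheckoutFlow: false,
};

function isEnabled(flag: FlagName): boolean {
  return flags[flag] ?? false;
}

function renderCheckout(): string {
  // The flag decides which path users see, decoupling release from launch.
  return isEnabled("newCheckoutFlow") ? "new checkout UI" : "legacy checkout UI";
}
```

With a gate like this, both teams can release on their own schedules, and “turning the feature on” becomes a flag flip rather than a coordinated deploy.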
I feel this also occurs naturally because we generally think about the end goal of a project.
We want to rebuild the whole user experience with our new design system
When working the tasks backward from that goal, the deliverables are not correctly planned: the stories are too large for a single unit of work, possibly spanning multiple sprints, and there’s a misunderstanding between stakeholders about how the features should work.
Estimating or sizing tasks is a great practice here, along with laying out user stories so that every sprint produces a releasable feature (hopefully with a demo at the end of each sprint).
Avoid writing stories that incorrectly address a single element of a system (say, the back-end server) in isolation; instead, describe an end-to-end feature, since every user story should be releasable.
When context is stripped away, people have no choice but to wait to be told what to do. @estherderby — Jessica Joy Kerr (@jessitron), October 6, 2020
For some reason, this is harder than it looks, especially when it comes to being “strict” about the template a user story is meant to follow:
As a < type of user >, I want < some goal > so that < some reason >.
But then, stories deviate from this in many different ways, like being prescriptive right from the start.
Add property x in the endpoint /resource
Or defining it step by step:
1. user inputs a phone number
2. search phone number in DB
3. display results for the user
Both examples remove context and don’t allow a person to choose the appropriate trade-offs in the system to achieve the desired goal. Someone might implement these requirements without saying anything, even though the result might not be the best solution for the expected goal (which is missing from both examples).
I’d echo what @missamytobey was saying: QA becomes part of the process. We put a huge emphasis on the idea that the testing suite (w/ peer overview) should be good enough determine release-readiness. @lizthegrey & I gave a talk on it: https://t.co/GQH2pYf3xZ— Danyel Fisher (@FisherDanyel) September 16, 2020
Individual units might be tested before being released, like the back-end system, messaging daemon, or even the front-end.
It happens that when you don’t create real user stories for an “epic,” you might end up unable to test an accurate representation of the feature before enabling it in production.
What I’ve seen happen is that everything is “ready,” but the testing still needs to be done in a holistic manner. If your company has a QA team, they might not understand the full feature until everything is ready to release. The team will then take some time (and probably some blame) to review, assess, and determine how to correctly go over the scenarios meant to be accomplished.
Or, due to timing, you might only test the “happy paths,” or worse, not try the feature at all and simply trust that you can deliver fixes on time. Except you’re already carrying a backlog of stories, you’re mid-sprint, and the team starts receiving an influx of bugs that can’t be addressed in time.
This has happened to me many times, because institutional knowledge takes time to acquire, and because we don’t ask one more question about why something is being done or required in a certain way.
I’ve often come to a solution review, or run into a problem with a user story, where someone tells me that if we build the feature as requested, it will probably break another business area: imagine customer support, analytics, etc. We have to take a few steps back and adjust the feature.
As I’ve come to understand how the company works holistically, I’ve also started to bring these sorts of questions to solution review meetings, planning, etc.
This is one of the most curious things I see unfold. When panic mode starts, people seem to go into “status review” mode: now the development team has a daily standup and an afternoon review, daily blocker meetings, or anything else you can imagine that would “unblock” developers or make development faster, while at the same time not allowing them to use their time to actually build the thing.
This is something I still don’t really understand. Do people not trust the system they’re using to build software? Do they not trust the people putting effort into all the different parts of software development (feature requests, cross-org collaboration, prioritization, etc.)?
I think there’s not much to say about this problem. There’s even a law written about it:
adding manpower to a late software project makes it later — Brooks’s law
I’m still unsure why people think they can get around this. There’s more to a project than just adding developers to burn through the backlog. At the same time, it might stem from the frustration of seeing the backlog grow, or not deplete fast enough, for the expected launch date.
It might not even be stated explicitly, as in “let’s just add more developers.” Instead, it might play out like:
X: "Please deliver on time."
Y: "We're not enough people to get through the backlog on time."
X: "How many more do you need?"
Y: "Maybe 3 more..."
X: "Done."
These sorts of problems will keep coming back, probably for the foreseeable future. And at the same time, it feels like they should be solvable by being good at the basics.
Habits die hard, and you can see people’s real mode of working when they enter panic mode. People retreat to their most instinctive actions or ingrained habits.
For example, I usually do pen-and-paper task management, but when tasks become overwhelming, I go back to working through the queue by whatever means necessary, even if that’s not the most efficient way, probably because I want to feel in control.
I feel that rationalizing these sorts of problems and writing about them will help me (and hopefully you) avoid them whenever we come across them.