Decision-making models
I've worked for long enough and read enough books on decision-making methods to try to draw some generic yet hopefully valuable conclusions from them. Many books on the topic are, in my humble opinion, far too wordy relative to the amount of novelty they provide, hence this attempt at a more condensed article on the subject, with links to further reading on truly worthwhile frameworks.
Almost all the models I've come across that are younger than myself (less than 40 or so years old) have an iterative structure of some kind, and since this resonates with my own experience of what works, iteration seems to be the best humanity has come up with yet. It is not without drawbacks in terms of overhead and, to some extent, initial unclarity, but for anything but the simplest of tasks it makes sense to assume that multiple iterations will be needed in some form.
The models I see most frequently used are feedback loops of some kind, with two to five steps; take your pick, but the loop is the core. The big contributor to the difference in the number of steps is simply how the steps are described: if one method has a "Decide and Act" step, it is obviously one step shorter than the method which separates this into two steps, "Decide" and "Act", but fundamentally they are equal.
For any iterative process, which step comes first obviously shouldn't matter as long as you get going with the iterations; I think the different starting steps of various methods are mostly there for pedagogic reasons, and after all, for any process described in written form, one step has to be described first. Depending on the problem at hand, some specific step might make sense as a starting point, but from a model point of view the order is arbitrary. A key point is to avoid falling into a specific pattern of actions depending on how you started; you need to get iterating. A minimal sketch of such a loop follows below.
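To make that concrete, here is a minimal sketch of a generic iterative loop in Python. The step names and the stop criterion are my own illustrative placeholders, not taken from any specific framework; the point is only that once the loop is running, merging or splitting steps, or rotating which one comes first, changes nothing fundamental.

```python
# Minimal sketch of a generic iterative decision loop; the step names
# are illustrative placeholders, not from any particular method.

def do(state):       # take the cheapest useful action
    return state + ["did something"]

def analyse(state):  # look at what the action produced
    return state + ["analysed the result"]

def think(state):    # be intentional about thinking
    return state + ["thought about it"]

def change(state):   # adjust course before the next lap
    return state + ["changed approach"]

def iterate(steps, done, state):
    """Run the steps in a loop until `done` says we can stop.
    Which step the list happens to start with is irrelevant once the
    loop is going, and merging "decide" and "act" into one function
    would change nothing fundamental."""
    while not done(state):
        for step in steps:
            state = step(state)
    return state

result = iterate([do, analyse, think, change],
                 done=lambda s: len(s) >= 8,   # stand-in stop criterion
                 state=[])
```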
One of the steps that is frequently described very vaguely is the "Do something" step. It is of course reasonable to assume that this step is highly task dependent, but some general things can still be said about it. First, there are many generic variations of "do": Design something, Mock something, Build something, Send a survey, Talk to other people, Look for previous examples, Have a meeting, Check the Internet and so on... My only general conclusion is: at any point in time, pick the easiest and fastest approach you haven't already tried, or at least try a previous one in a different way. This is valuable because it saves money and gets you to results faster. There are limits to this, of course; there might be a larger context where the expectation is that your contribution specifically is to "Build" something, and in that case build the cheapest and fastest thing you can first. But if you can learn more, faster, by talking to someone, and you are allowed to do that, then go for it. Too often have I seen enormous amounts of waste from teams acting on reflex, "we put this story in the backlog and then developed a rough prototype, because that is what we do", when they could have googled something, sent a link and asked "do you mean something like this?". For this reason I like talking to individuals, because they can often answer with a link to a resource or similar, which can save you huge amounts of time.
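The "cheapest untried probe first" heuristic is easy to state as code. Here is a small hypothetical sketch, assuming you can put rough cost estimates on your options; the probes and the hour figures below are made up for illustration.

```python
# Hypothetical sketch of "pick the easiest and fastest thing you
# haven't already tried". Probes and cost estimates are invented.

probes = {            # option -> rough cost in hours (illustrative)
    "check the internet for prior art": 1,
    "ask an expert for a link":         2,
    "send a survey":                    8,
    "build a rough prototype":         40,
}
tried = set()

def next_probe():
    """Return the cheapest option not yet tried, or None when exhausted."""
    remaining = {p: c for p, c in probes.items() if p not in tried}
    if not remaining:
        return None
    choice = min(remaining, key=remaining.get)
    tried.add(choice)
    return choice

print(next_probe())   # -> "check the internet for prior art"
print(next_probe())   # -> "ask an expert for a link"
```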
One of the saddest step definitions, and a common one, is the "think" step. It is usually described with a fancier word like "Interpret" or "Analyse", often combined with "Prioritize", and sometimes with "Measure" to hint at some degree of scientific method. In some methods thinking is divided into two different parts, thinking about the problem and thinking about the model, but more on that later. Humans are equipped with the most amazing intellectual capacity, yet we need to remind ourselves to think. There is a grave lesson to learn from this: if we are not mindful of what we are doing, humans are frequently found to skip the thinking part. How sad. The second sad part is that when thinking is described in a model (even an iterative one), there is a specific point in time when you are supposed to think, or at least think more than you do in the other steps. I don't know how your brain works, but sometimes it would be a blessing if there was an option not to think; mine just keeps going, albeit on entirely different things. So the lesson learned is clearly: be intentional about thinking, in case you had forgotten. And considering the cheap price of the occasional thinking, regardless of what model you use, don't worry about throwing in some thinking here and there in the other steps too. There is a known term, paralysis by analysis, but I have never seen it happen in practice; imagine a whole team sitting and thinking for a month. Never seen it. What I have seen is teams focusing on the same action over and over again without ever achieving anything, but that is more closely related to the paragraph above on what to do.
Often models incorporate a step on gathering knowledge or methods for analysis. Sometimes this is combined with the thinking part, but there are some specific things to note about it. First, your options for how to measure or analyse what you did actually depend on what action you took, so a bit of forward thinking is valuable here: when we choose what to do, it is great if it can generate new information on the task at hand, and even more interesting, new kinds of information, or at least new perspectives on the task. If you have done something that generated a lot of qualitative information (e.g. sent a survey), then try to generate quantitative information (e.g. usage statistics on a prototype, or an A/B test) next time. And in the best of both worlds, look for enough qualitative information to be able to use quantitative methods to analyse it. I've frequently found it valuable to get the right amount of data to work with: there is rarely much you can learn if you condense all your data to a single median value, nor will you reach great conclusions about a 1 TB dataset of log files. Getting things down to manageable datasets that you can wrap your head around, while avoiding oversimplification, is very important. If you want to share your data and reason about it together with someone else, this becomes even more important; it also requires tooling to easily navigate and turn the data around, since you are unlikely to have the same view on what granularity and which perspectives on the data are reasonable to work with, and where to begin.
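As a small illustration of why condensing everything to a single summary value hides too much, here is a sketch with invented response-time numbers: the overall median looks fine, while a grouped view reveals the problem.

```python
# Invented example: a single median hides structure that a grouped
# summary reveals. All numbers are made up for illustration.
from statistics import median

# response times in ms, tagged by which endpoint they came from
samples = [("search", 40), ("search", 55), ("search", 40),
           ("checkout", 900), ("checkout", 1100),
           ("search", 60), ("search", 45), ("search", 50)]

print(median(t for _, t in samples))   # 52.5 ms overall: looks healthy

by_endpoint = {}
for endpoint, t in samples:
    by_endpoint.setdefault(endpoint, []).append(t)

for endpoint, times in by_endpoint.items():
    print(endpoint, median(times))     # checkout: 1000 ms: not healthy
```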
For both the think and analyse steps, a key learning is that more is more: even the brightest mind, your own and my own included, will easily overlook something. If you put a few independent minds together, even for the shortest amount of time, the risk of overlooking the obvious becomes radically smaller. The independence part is key, and preferably the tracks of thinking should be isolated for some time and collaborative for some time. Collaboration that starts too early, or too much of it, will not produce good results, and if not watched carefully might even produce worse results than a single capable mind. Very valuable thoughts often come from the outliers in the thinking process; drowning those in consensus, or not having enough minds available to include any outliers, is a terrible mistake. What is less productive than overlooking the "I could have told you so" person and spending large resources on something a team member already knew? The difficulty, as always, is to tell the outlier from the uninformed, but a bit of digging into the reasons behind a stated opinion can often uncover the value and the thought processes behind it. A common mistake in the analyse phase is to focus too much on analysing one previous step or action, when you can often have many different and complementary sources of information to analyse early, and indeed you should always strive to. You can both send a survey and talk to people in a very short time, and then assess both results. If you experiment with an action, ask multiple teams to try something, but don't try the same thing on all teams; gathering learning in parallel like this greatly multiplies your learning capacity and chances of success.
Many models have a specific "change" step, and for an iterative model it is obvious why: you don't want to iterate without moving closer to your goal. For some, "change" is similar to "do some more", but in many models this step specifically tries to either improve on the previous action or at least evaluate a change of course. What is common is that there is little guidance on how much change is good. A classic example is steering an oil tanker, where there is quite a bit of delay between the action and any observable effect; how do you ensure that you steer enough, yet not too much? Methods that do deal with this frequently incorporate some idea of progressively larger changes until you get the right feedback. While this might sound thoughtful, it is also conservative; unless your changes carry high risk, like an operational system where mistakes may cause death or injury, consider more drastic approaches to change. A reasonable strategy is to consider what the smallest meaningful change is and what the largest is, and target the middle between the two; if there is too little effect, target midway between your previous choice and the maximum, and so on. I don't know what this is called in English, but in Swedish there is a term for it, "att gaffla in sig" (roughly, to fork your way in); in computing it is similar to a binary search in a sorted set.
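A minimal sketch of that bisection idea, assuming you can measure the effect of a change and know what effect level counts as enough; the function names, the toy effect model and the numbers are all illustrative.

```python
# Illustrative bisection over change magnitude ("att gaffla in sig").
# `observed_effect` stands in for whatever measurement your iteration
# produces; here it is a toy function chosen for demonstration.

def observed_effect(change):
    return 0.8 * change          # pretend effect is proportional to change

def find_change(lo, hi, enough, tolerance=0.01):
    """Bisect between the smallest and largest meaningful change until
    the observed effect is close enough to the target level."""
    while hi - lo > tolerance:
        mid = (lo + hi) / 2
        if observed_effect(mid) < enough:
            lo = mid             # too little effect: steer harder
        else:
            hi = mid             # enough (or too much): back off
    return (lo + hi) / 2

print(find_change(lo=0.0, hi=10.0, enough=4.0))  # ~5.0 with this toy model
```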
A less common, but still not rare, element of iterative models is a step that thinks about the process itself, a meta step. In some variations this is called high-level thinking; in some it is included in retrospectives. In some models the focus is on the thinking practice itself, with the expected outcome that if the meta process works well, the rest will more or less follow. The exceptional flexibility this offers is hard to overstate, and from my own experience it actually works: you rarely end up with bad outcomes if the process for how you think about a problem as a group is really solid. And the range of problems you can deal with becomes great. You do add some extra waste compared to having a well-tuned model for your specific problem, but unless you frequently solve very similar problems over and over again, the very general approaches to good thinking are well worth studying.
There are limits to how far outside the box you can walk, and to how much a good process can influence the outcomes. Once deep into a project, if it is important, it might be too late to change drastically or to start over; if there are no other options you must go on. Someone else should, at a much earlier stage, have decided not to put all the eggs in one basket and to have a plan B/C/D... but this is very rare in practice; instead I see a lot of quite big bets placed on the success of a single project team. I think it is quite common that organisations put more effort into a single attempt when the project is important, rather than putting less effort into multiple attempts, and this is a poor strategy, or at least a very high-risk one. My greatest learning is to have several different plans, ideally very different from each other, ideally mutually assisting each other towards the goal but not dependent on each other; this allows abandoning the options that are not working well. I once, together with many bright colleagues, led a "plan B" project that was started on a small scale, and while we did a lot very well, mostly better than the main plan and with a fraction of the budget, the main learning wasn't that our ways of leading or our methods of development were better. The main learning was how much the company benefited from exploiting the backup option and, a while later, stopping the main plan: it was eventually understood that even if the initial plan had been carried to the end for lack of options, it would likely not have been successful, or nowhere near as successful as the backup plan turned out to be. Considering the stakes for the company, the really good strategy would have been to start small with three or even more teams attacking the problem in different ways.
The best general frameworks or models I've encountered are:
Cynefin: https://en.wikipedia.org/wiki/Cynefin_framework
Six thinking hats: https://en.wikipedia.org/wiki/Six_Thinking_Hats