Friday, March 8, 2013

The A3 can help keep some human biases and fallacies at bay

I am describing here why, in my personal opinion, the A3 problem-solving method helps remove some biases and fallacies (such as the straw man or the slippery slope) that may otherwise arise in method-less problem solving.

A3 is best described in books such as “Understanding A3 Thinking” or “Toyota Kata”.

However, I’ll describe it briefly here: A3 is based on an A3 sheet of paper, used to take notes on the problem, the root cause analysis, and the possible countermeasures. The problem is best described as a measurable gap between the current condition and a target condition (or “desired new standard”).
A deep root cause analysis is an important part of the process, and one way of doing it is with Ishikawa diagrams and the 5 Whys. Countermeasures against (actionable) root causes are considered only after a wide root cause analysis is done.
It is basically a “yes, and...” conversation: “we have this possible cause...”, “yes, and the cause of this cause could be...” (etc.), or “yes, and there are also these other possible causes”, and so on.
It is important to analyze all the possible root causes first, and only then move to the countermeasures phase, because otherwise there is the risk of diving deep into one specific path of arguing about actions, consequences, and so on, considering too few hypotheses about overly specific and unlikely scenarios.
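This “wide before deep” exploration can be sketched as a breadth-first walk of a cause tree, where countermeasures are only discussed once all the leaves (the actionable root causes) have been collected. The problem and the causes below are invented for illustration; this is a minimal sketch, not part of the A3 method itself.

```python
from collections import deque

# A hypothetical cause tree for a "build setup takes too long" problem.
# All names and causes here are invented for illustration.
cause_tree = {
    "build setup takes too long": ["manual configuration", "flaky test environment"],
    "manual configuration": ["no scripted setup"],
    "flaky test environment": ["shared test database"],
    "no scripted setup": [],
    "shared test database": [],
}

def collect_root_causes(problem, tree):
    """Walk the whole cause tree breadth-first ("yes, and...")
    and return the leaves: the candidate actionable root causes."""
    roots, queue = [], deque([problem])
    while queue:
        cause = queue.popleft()
        children = tree.get(cause, [])
        if children:
            queue.extend(children)   # keep asking "why?"
        else:
            roots.append(cause)      # a leaf: candidate root cause
    return roots

# Only after the whole tree is explored do we discuss countermeasures.
print(collect_root_causes("build setup takes too long", cause_tree))
# → ['no scripted setup', 'shared test database']
```

The point of the breadth-first order is exactly the one above: no branch gets argued to death before its siblings have even been named.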

This may open the door to some fallacious arguments.
Example: in the hypothetical context of solving the problem of how to reduce crime, one argument related to a possible root cause concerns the circulation of cash: “untracked cash is one of the causes of the prosperity of many criminal activities”.
At that stage, if we avoid exploring other possible root causes (for example, adding something like “unemployed people are more easily recruited by gangs”) and instead follow every possible path of consequences of this first root cause, then we may get stuck on a single path of merely hypothetical consequences and objections, creating a sort of paralysis:
“so do you want every transaction of ours to be monitored, making us lose our privacy?”
This is a sort of straw man attack, because it is an objection to a hypothetical countermeasure to that root cause, which is not the same thing as the original argument; no matter whether the objections to this countermeasure could be justified, it is simply a separate matter.
This habit of going deep into objecting to merely hypothetical side effects (straw men) can be avoided if we separate a wide root cause analysis from the countermeasures analysis.

Another example: “our value stream map, and also our kanban board, show that the setup of the official build takes time X, which is greater than time Y; that difference is the gap that we want to reduce, according to the target condition”.
This claim says nothing about what to do, if anything can be done, to reduce this time X (e.g. automated builds, etc.).
It is completely neutral and objective at that stage. It simply invites us to consider that if we reduce this time with some countermeasure _that we are going to discuss later_, then we have some chance of reaching the target condition of a reduced cycle time.
The straw man that comes into the conversation could be: “I know those guys: they do a great job, and they strive, working overtime many days. I have seen that. Are you saying that they don’t know how to do the right thing?”.
Nobody is claiming anything about how well people do their job.
The fact that the process clearly separates root cause analysis from finding countermeasures helps keep those pointless, fallacious “depth-first” discussions at bay.
Only after all the root causes are considered comes the phase of finding “countermeasures” to address them. Countermeasures can also be given a weight based on a hypothetical cost/efficacy ratio, just to make sure everybody knows that not all countermeasures are equal. Moreover, some countermeasures may be unpleasant for anyone addicted to complexity. Yes, unnecessary complexity, messy situations, and big balls of mud can be addictive; according to systems thinkers, there may even be resistance to really addressing problems: “What will life be like when the messy situation is no longer a big worry? Sometimes ‘owning’ a big insoluble problem is almost addictive” (Rosalind Armson, Growing Wings on the Way).
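As a toy illustration of weighting countermeasures by a cost/efficacy ratio, here is a minimal sketch; the countermeasures and all the numbers are entirely made up, and a real A3 would use whatever estimates the team agrees on.

```python
# Hypothetical countermeasures for a slow build setup, each with an
# estimated efficacy (hours saved) and an estimated cost (effort-days).
# All names and numbers are invented for illustration.
countermeasures = [
    ("script the environment setup", 3.0, 2.0),
    ("dedicated test database per branch", 1.5, 5.0),
    ("document the manual steps better", 0.5, 0.5),
]

# Rank by efficacy/cost, just to make visible that not all
# countermeasures are equal; the ranking is an input to the
# conversation, not a verdict.
ranked = sorted(countermeasures, key=lambda c: c[1] / c[2], reverse=True)
for name, saved, cost in ranked:
    print(f"{name}: efficacy/cost = {saved / cost:.2f}")
```

Even a rough ranking like this keeps the discussion honest: everybody can see which estimate they are disagreeing with.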
Countermeasures usually mean trying to improve in some area, and that is one of the main reasons why we need to make sure that there is always respect for the people. The analysis is focused on understanding, not on judging. The facilitator/mentor is the person who will hopefully be able to create a learning environment safe enough that the sense of satisfaction about improving is greater than the disappointment of uncovering flaws in the process (because suggesting that the problem might be the people would be unfair, or, I’d say, even impolite).
The countermeasures found can then be put in place in order to validate their hypothetical impact in terms of getting closer to the target condition. Everybody makes sure that the experimentation will be “safe-to-fail”, and nobody expects that everything will magically work. How to measure the result, and when to go and see what happened, is part of the A3 “Check” phase.
In the Cynefin approach there is a “complex” domain, where cause and effect are not clear and there is the need to experiment by “probe, sense, and respond”. Another analogy is the variety found in nature and Fisher’s fundamental theorem: running more experiments is like creating more variety, and the theorem’s finding is that this variety is proportional to the rate of increase in fitness, in terms of genetic selection.
The A3 also establishes in advance what to do if the experimentation is successful (for example: if it succeeds, then we will train other departments in the new good practices discovered through the experimentation; if it is not a success, then we start again, considering other countermeasures). This is the “Act” phase.
Everything basically follows Deming’s “Plan, Do, Check, Act” cycle, which is close to the scientific method.
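The idea of deciding the measure, the success threshold, and both Act steps before running the experiment can be sketched as a small record; every field below (names, dates, thresholds) is a made-up illustration, not a prescribed A3 format.

```python
# A minimal, hypothetical A3 experiment record: the measure, the
# target, and both Act steps are written down *before* the Do phase,
# so the Check phase cannot be rewritten with hindsight.
experiment = {
    "hypothesis": "scripting the setup reduces build-setup time",
    "measure": "build-setup hours per release",
    "baseline_hours": 6.0,
    "target_hours": 2.0,
    "check_date": "2013-04-08",
    "if_success": "train other departments on the setup script",
    "if_failure": "restart, considering other countermeasures",
}

def act_step(measured_hours, exp):
    """Check phase: compare the measured result against the
    pre-declared target and return the pre-declared Act step."""
    success = measured_hours <= exp["target_hours"]
    return exp["if_success"] if success else exp["if_failure"]

print(act_step(1.5, experiment))
# → train other departments on the setup script
```

Because both branches of the Act phase were written down in the Plan phase, a failed experiment is just the other pre-agreed branch, not an embarrassment.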
More often than not, the solutions likely to be found via A3 are not very complex, but I think they are not meant to be. In my experience the solutions need to be simple, and I also found in books like “The Checklist Manifesto” and “Influencer” how simple solutions can create dramatic improvements, as long as they are carefully studied and experimented with. In “The Checklist Manifesto”, the title says it all. In “Influencer” you can see that many simple solutions may just be there; it is only a matter of discovering them and “adapting” them so that everyone uses them, as in the story of attacking Guinea worm disease (which in fact has almost been eradicated by now). In that story, it was simply discovered that the disease did not affect people who were used to filtering their water through their skirts.

So effective solutions can be simple, and, going back to team and process problems, you may discover solutions like:

- find the right checklists,
- make sure that information radiators are kept up to date,
- make sure that you ask some specific questions,
- make sure everybody knows that they are in charge of reminding anyone else, regardless of authority, of the good practices in the checklists,
- introduce some unofficial “shortcuts” that shorten the feedback cycle in the formal delivery process between development and testing,
- make sure that someone from your team attends the other teams’ daily Scrum meetings if you have dependencies,

and so on...
So what’s the point of using the A3 method if the solutions are so simple? Couldn’t we just figure out such “solutions” without having to set up a specific method?
Well, at least in my experience, the point is also pointing at the “invisible elephant in the room”: mental traps like “habit” or “history”, getting rid of the blaming culture, and getting properly out of our comfort zone without falling directly into “panic”.
I don’t think that top-driven changes without any bottom-level involvement can go very far, because they risk being too much based on wishful thinking, and being neither systemic nor resilient (a definition of resilience that I like is “the capacity of a system to absorb disturbance and re-organize while undergoing change so as to still retain essentially the same function, structure, identity and feedback”).
I think that a necessary condition for making a change systemic is the rule “I need to know why, as well as what, in order to decide an appropriate how”, because “Knowing the purpose of an activity creates opportunities to do it well” (from “Growing Wings on the Way”, chapter 13).
With A3, the “what” is clear and well defined from the start, in terms of the target condition (and also the “why”, in terms of a higher-order target or, better, a “vision”; see the picture from Toyota Kata), and the aim of the process is precisely to discover the “how”. After people have learned the “how”, they still remember the “what” and the “why”, because they all worked together on the discovery process starting from them.

Let’s also take a look at the hindsight bias, and at why A3 can keep this bias at bay.
We, the people involved, need to work together in the first place, so it is less likely that someone could come up afterwards with “I already knew it; if I had been there, I could have told you”. Everybody who can help should already be there in the first place. This helps reduce the abuse of the hindsight bias.
You cannot know how honest any “I already knew it” claim is.
As Tetlock showed in “Expert Political Judgment”, studying “hedgehogs” versus “foxes”, many people who act as pundits appear to be experts just because they are able to explain everything with hindsight; and when they are invited to speak before something happens, and then fail, they are still likely to resort to self-sealing / “no true Scotsman” explanations of their hindsight claims.
So basically, in the kaizen process guided by A3, we are invited to make hypotheses and experiments, to establish in advance a clear way of measuring progress, and to check the results without fearing bad hypotheses, knowing that failure is a crucial (and safe) learning part of the experimentation.

I have experimented with A3 in the past, but unfortunately I am afraid that I have not always been in a position to protect these points in the process. “With hindsight”, I’d say that some simple rules, like the Scrum “pig” vs “chicken” one, must always be strongly protected, in a physical way. There is also the need for a “sufficiently powerful guiding coalition” (the lack of one is error 2 of the 8 defined by John Kotter), which ensures that the importance of these tools is never underestimated.

Books: Toyota Kata, Understanding A3 Thinking, Leading Change, The Checklist Manifesto, Influencer, Expert Political Judgment

Just for fun, regarding the hindsight bias, you may take a look at this part of the “Captain Hindsight” episode of “South Park”.

(note: I am not responsible for the content of external resources, which, moreover, may have changed since I finished writing this post)

Reasoning, or arguing, is about supporting conclusions, explaining a position, and discovering truths through logic and rationality.
I think that having a conversation is more than arguing.
Conversations may question already established truths and their supporting arguments, and conversation skills are about the techniques used to help uncover those facts.

The need for

I want to discuss an argument from this little book, available online:

“He saw revolutionary potential in the most obscure things; he even claimed he had “never seen a revolution as profound” as object oriented programming—a niche field that was the focus of his work before he returned to Apple”.

To me, the argument is really bad if it is meant (as it probably is) to speak negatively about the guy.

I’d use a parallel reasoning argument to make it clear:

“he saw revolutionary potential in the most obscure things; he even claimed he had “never seen a revolution as profound” as film making - a niche field that was the focus of his work from 1895 to 1900”

For something to have revolutionary potential, it almost has to be obscure; otherwise there would be no need for a “revolution” to make it spread. So, in that sense, the argument is begging the question.

The argument would be a little better if the story were that film making was a niche field in that era and has remained a niche field ever since: “such film stuff was obviously destined to remain a ‘niche field’”.

But it did not turn out that way: since the movie industry became a commercial, artistic, and cultural revolution, the argument is unsound.

I think one way to make it better is to argue that the guy is smart: “he saw the revolutionary potential of movie making when it was still a niche field!”

Back to the original argument: “the smart guy saw the revolutionary potential of object orientation when it was still a niche field!” I also want to explain why the parallel argument fits:

object orientation actually was a “niche field” in the period the author talks about, in the sense that the mainstream programming languages of the time (such as BASIC, Pascal, and C) were not object oriented.
Some years later, the mainstream programming languages became object oriented (Java, C#, C++, Objective-C, Ruby), so after all he was not so wrong in considering this discipline a potential revolution.

Basically, the original argument is not a good one.

What do you think?
(p.s. a friend of mine answered that “object oriented programming” is actually a “niche field” in the sense of “well done object oriented programming”, which reminded me of the “no true Scotsman” fallacy, but that’s enough for now...)
