Why Your Group Is Falling Apart
And why a 1950s scientist can explain it
Your group or team feels like it’s coming apart, but no one can agree on why. The leader is overwhelmed. People are getting touchy, defensive, or quietly disengaged. What used to work doesn’t work anymore. Every attempt to fix things either escalates conflict or creates a new one somewhere else.
Because you understand that humans bring their own ‘stuff’ to the table, you’re trying to identify which personality or unmet emotional need is causing the problem in what you assume is an otherwise healthy system.
Unfortunately, your efforts are failing because the system is the problem. Specifically, you have what’s called a complexity mismatch. It shows up as personality clashes, instability, or ideological drift, but the root cause is almost always a mismatch between the complexity of the system and the chaos it’s trying to operate in.
Back in 1956, British psychiatrist W. Ross Ashby described a rule of control, bluntly and in mathematical terms. In An Introduction to Cybernetics, he outlined what became known as the Law of Requisite Variety.
Translated into plain language, it goes like this:
If the chaos you’re facing has more variety than your system can respond to, you will lose.
You can be as competent, moral, and intentional as possible, and still lose, because the math doesn’t work.
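The math is simple to sketch. Ashby’s result says that outcome variety can never drop below the variety of disturbances divided by the variety of responses the regulator commands. A minimal sketch, with the counting model assumed purely for illustration:

```python
# Toy illustration of Ashby's Law of Requisite Variety.
# Each distinct disturbance needs a distinct counter-response to hold
# the outcome steady. A regulator with fewer responses than there are
# disturbances cannot collapse every disturbance into one outcome.

def surviving_outcomes(num_disturbances: int, num_responses: int) -> int:
    """Minimum number of distinct outcomes that survive regulation.

    Ashby's bound: outcome variety >= disturbances / responses.
    With the best possible pairing, each response can neutralize at most
    one disturbance per outcome value, so the floor is the ceiling of
    disturbances divided by responses.
    """
    return -(-num_disturbances // num_responses)  # ceiling division

# 6 kinds of disturbance, but the group only knows 2 ways to respond:
print(surviving_outcomes(6, 2))  # 3 distinct outcomes survive
# Match the variety, and the outcome can be held to a single value:
print(surviving_outcomes(6, 6))  # 1
```

However competent the regulator, the floor is set by the ratio; that is what “the math doesn’t work” means.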
You might think you’re the exception. Maybe your group doesn’t feel chaotic, or you think it would be fine if you could just figure out which person is screwing it all up. But the reality of a human system is that it has multiple humans in it, and humans bring vast amounts of complexity.
Needs
Trauma
Ego
Biases and preferences
Incentives
Narratives
Beliefs
All of this adds up to orientation. The more humans you have involved, the more possible states your system has to regulate.
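To put a hedged number on that growth: assume, purely for illustration, that each member can occupy only three internal states. The joint state space the system must regulate still grows exponentially with headcount.

```python
# Illustrative assumption: each member has just 3 internal states
# (say: regulated, stressed, checked-out). Real humans have far more.

def system_states(members: int, states_per_member: int = 3) -> int:
    """Number of joint states a group's system must be able to handle."""
    return states_per_member ** members

for n in (2, 5, 10):
    print(n, system_states(n))
# 2 -> 9, 5 -> 243, 10 -> 59049
```

Ten people under even this cartoonishly simplified model produce tens of thousands of joint states; variety compounds, it doesn’t add.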
Ashby’s work was later expanded by Boisot and McKelvey into the Law of Requisite Complexity, which states that to remain effective, a system must match the complexity of the environment it’s operating in.
This explains why your once-cohesive team now feels unstable, reactive, or outright cannibalistic.
There’s another idea operating at the same time: In any system where constraint on behavior is implicit or assumed rather than structurally defined, pressure will select for defection.
In other words, values that aren’t structurally enforced within the system itself will degrade under pressure. It’s a variation on the idea that “you default to your level of training,” but from a systems perspective.
Here’s an example of that in the real world. Let’s say you work in a place that gets a communal fridge. There are no written rules and no consequences, but everyone agrees on some common values. Basic things like “don’t eat other people’s food” and “clean up after yourself.” Even in the absence of enforcement, it all works out fine…until it doesn’t.
Pressure enters the system in the form of a big project where people are stuck at work longer than they expected. Frank is hungry and irritable, but he can’t afford to order in, so he just grabs a sandwich out of the fridge that he is well aware belongs to someone else. He rationalizes this action by thinking the person won’t mind, and he’ll replace it tomorrow anyway.
This action, however, changes the system. People start ordering in more, putting huge labels or rude notes on their food, and checking the fridge repeatedly to make sure no one stole their lunch. Suspicion abounds over who took the sandwich, because Frank won’t fess up. Trust in the group is gone.
Did the group lose its values? Not at all. The system, however, never enforced those values, because the system didn’t have any. And if the system doesn’t have baked-in operating values, it doesn’t matter if 99% of your members do, because there is no constraint on the 1% who don’t. In short, if there is nothing to stop the one person in your group who chooses bad behavior, not only will they not stop (why would they?), but eventually the system will become self-selecting. People will do one of three things:
Opt out of the system and leave, if their own ethics are to remain intact
Learn to be silent about their ethics and ‘look the other way’
Set aside their ethics and join in on the bad behavior
Under pressure, your group will default to whatever behavior is easiest, fastest, and least costly in the moment. If you haven’t already designed the system to protect ethical behavior and constrain unethical behavior, you get exactly what your system is designed to allow.
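The fridge dynamic above can be sketched as a toy simulation. Every number and the trust-erosion rule here are assumptions for illustration, not a validated model: each member tolerates pressure up to a personal threshold, each defection erodes everyone else’s tolerance, and a structural penalty raises the effective threshold for defecting at all.

```python
# Toy model: implicit constraints degrade under pressure.
# A member defects when pressure exceeds their personal threshold plus
# any structural penalty; each defection lowers everyone's threshold
# (trust erodes), so without enforcement, defection cascades.

def rounds_until_collapse(thresholds, pressure, penalty=0, trust_loss=1):
    """Return the round when everyone has defected, or None if the
    group holds out for 100 rounds."""
    thresholds = list(thresholds)
    for rnd in range(1, 100):
        defectors = [t for t in thresholds if pressure > t + penalty]
        if len(defectors) == len(thresholds):
            return rnd  # total collapse: everyone is defecting
        # every defection erodes the whole group's tolerance
        thresholds = [t - trust_loss * len(defectors) for t in thresholds]
    return None

# No enforcement: one weak link (threshold 2) drags the group down.
print(rounds_until_collapse([2, 5, 7, 9], pressure=3))
# A structural penalty stops the cascade before it starts.
print(rounds_until_collapse([2, 5, 7, 9], pressure=3, penalty=5))
```

The point of the sketch is the asymmetry: the constraint has to live in the system, because once the first defection lands, individual virtue is racing a feedback loop.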
A Quick Reality Check
Answer these honestly:
How many distinct ways does your group know how to respond to conflict?
Do disagreements always escalate, shut down, or get ignored?
Does your group rely on a few emotionally stable people to absorb everyone else’s instability?
When pressure increases, does decision quality improve or collapse?
If your answer to any of these is, “It depends on the situation,” that is critical data. It means your system isn’t actually regulating.
You cannot communicate your way out of a variety mismatch. You cannot “nice” your way out of it. And you cannot train harder without fixing the system that governs the training.
When a group’s internal structure is too simple for the reality they’re operating in, they fail. And when that happens, the group will eventually turn inward, eating itself with conflict.
The Hidden Assumption Your Group Is Making
A lot of groups pride themselves on operating as a decentralized system. You’re expected to act without permission, use initiative, and exercise ethical, principled judgment within the constraints you’ve been given.
In military doctrine, this is called Mission Command; you decentralize authority to survive complexity. Put simply, it works like this:
The leader tells you what needs to be accomplished.
They tell you why it needs to be accomplished.
They tell you the limitations and boundaries.
They get out of the way and let you execute, while supporting you even if you did it differently than they would have, as long as you did in fact achieve the goal and didn’t cross any of the constraints.
This might sound like exactly how you and your group go about things, and you might be wondering why it isn’t working. Even if you didn’t know what it was called, Mission Command is what you’re after: you want to empower your people to act, not micromanage them. So what’s the problem? Why does it seem like such a cluster?
The problem is that Mission Command assumes that the following conditions already exist:
The people in the group are self-disciplined.
They possess ethical self-regulation.
They have a shared orientation to reality.
There is structural trust that can survive pressure.
When those assumptions are correct and those things exist, Mission Command is highly adaptable, and your group is both effective and resilient.
When those factors are NOT present, however, acting as though they are produces a whole other set of circumstances.
Drama disguised as initiative
Power struggles disguised as empowerment
Emotional volatility disguised as authenticity
Narrative drift disguised as flexibility
This is a control failure caused by human variance inside a decentralized system.
Ashby explains why.
When authority is distributed, the humans themselves become part of the disturbance. If you don’t regulate that, Mission Command collapses into chaos.
You increased external variety, but failed to increase internal regulatory variety.
So instead of absorbing complexity, the system eats itself. Feedback within the group will feel personal to the members. People will start looking for comfort instead of growth, and as a leader you’ll burn out trying to hold it all together.
Decentralized command IS the goal, regardless of whether you call it Mission Command or something else. But if you decentralize without a stabilizing system in place first, command and culture will fail, ego and careerism will take over (even in a volunteer system), and you’ll create a zero-defect culture: one so averse to failure that it can no longer learn. It becomes more important to look like you succeeded than to actually achieve the objectives.
People often conclude that “Mission Command doesn’t work,” when in reality they never had it. Without the prerequisite conditions, there is no Mission Command. Many groups claim to have adopted it, but what they’re really running is a lax, informal hierarchy with some empowerment language on top. It lacks the needed infrastructure.
This is exactly why Grey Cell Protocols exists.
GCP stabilizes orientation before decentralization turns authority into a liability, and serves as a pre-condition for Mission Command.
You see, Mission Command works beautifully when all of the pieces are present. If they’re not, GCP exists to create them. An effective leader will apply GCP to create infrastructure so that Mission Command emerges.
Now that you see what’s going wrong, let’s break this down and show you how to fix the problem.