
Phanish Puranam


How to Rapidly Test New Organization Designs

Instead of blindly adopting industry best practice, companies can pilot new organizational designs

Published: Wednesday, April 19, 2023 - 12:01

It’s no secret that there are no universally applicable organization designs. What works in one context may not work in another because each organization has a different history, culture, and cast of characters. And yet there is a thriving segment of the management consulting business that specializes in implementing “best practices”—or sometimes “flavor of the month” organization designs—in companies that vary widely in terms of age, industry, and background.

One might hope that research and theory would help predict which designs work best in a particular context. However, having studied the topic for two decades, I believe this hope is unlikely to become reality anytime soon. Put simply, organizational contexts are dauntingly complex and vary in ways that we can’t fully observe.

This makes it hard to definitively recommend design interventions based on theory alone; context matters enormously. It is nothing short of foolhardy to adopt a new organization design or practice with no evidence that it will work in your organizational context.

The gold standard: Randomized controlled trials

Field experiments, also known as randomized controlled trials (RCTs), are the gold standard for determining whether a design will work in a specific context. Experiments involve randomly assigning some units (e.g., people, teams, projects, or departments) to a treatment condition (i.e., the new policy you’re thinking of implementing) and others to the control group (where things stay as they were, without the new policy). We then check whether outcomes are statistically different between the two conditions.

Randomization is crucial. Imagine you implement, without randomizing, a new training policy and find that employees who applied for the training and attended it saw their performance improve. You have no way of knowing whether this is because your training was effective or because the people who applied for it are motivated, high performers whose evaluations were going to rise anyway. You might point out that if you make the training mandatory for all employees, you could just see if everyone’s performance improves. But the problem is that you can’t rule out other factors such as industry cycles and demand spikes that may have affected all employees.

Randomization ensures you avoid these problems by creating counterfactuals—that is, an understanding of what would have happened without the intervention. This is possible because randomized treatment and control groups are statistical twins: They are similar enough to be treated as identical, so the control group can serve as the counterfactual. We cannot establish causation without counterfactuals, and randomization is the best way to establish counterfactuals (unless you have a time machine).
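The "statistical twins" claim is easy to demonstrate with a small simulation. In this sketch (all numbers are made up for illustration), 40 employees with varying baseline performance are randomly split into two groups; because the assignment ignores every attribute of every employee, the two groups end up with nearly identical baseline averages, which is what lets the control group serve as the counterfactual:

```python
import random
import statistics

# Hypothetical baseline performance scores for 40 employees
# (mean 70, spread 10 -- purely illustrative numbers)
random.seed(42)
scores = [random.gauss(70, 10) for _ in range(40)]

# Randomly split the employees into treatment and control groups
shuffled = random.sample(scores, len(scores))
treatment, control = shuffled[:20], shuffled[20:]

# The groups are "statistical twins": their baseline means
# differ only by chance, not by any systematic factor
print(f"Treatment baseline mean: {statistics.mean(treatment):.1f}")
print(f"Control baseline mean:   {statistics.mean(control):.1f}")
```

Any later difference in outcomes between the two groups can then be attributed to the intervention rather than to pre-existing differences.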

But even when an experiment may be desperately needed, it can be logistically challenging to conduct. Consider the following situation: Your company is debating whether it should adopt agile structures to manage its project teams. Despite the general enthusiasm, we know there are good reasons to be cautious; agile structures are not a universally superior design.

Ideally, you would randomly assign half of the teams in your company to the new agile structure, keep the rest the same, and test for statistically and economically significant differences in performance at the end of a few months. In practice, the cost, the risk to business continuity, and the political challenges of pushing through randomization can make this daunting.

Does this mean companies are trapped forever in the limbo of adopting an industry best practice without any proof it will work and just hoping for the best?

An alternative: Gamification meets randomization

Here is an alternative protocol that I believe can beat blind implementation of current “best practices.”

Step 1

Find a team task that can be done in a few hours but that is a reasonable approximation of what your project teams do. This is tricky but by no means impossible. Business school case studies embody exactly this principle: With a few pages of text and a few hours of discussion, students are thrown into a simulation of a problem that might have unfolded over weeks or months in real life.

Sometimes, you might have small sample sizes—e.g., not enough teams to draw any statistically meaningful conclusions. But the beauty of the gamified approach is that you can select a task in Step 1 that involves a few people, not entire teams. This scales up your sample size. All organization designs ultimately specify how people interact. With ingenuity and drawing on theory, we can find ways to put just the interactions that matter under the microscope.

Two things are crucial about this gamified task: First, it should be a reasonably valid approximation of what project teams in fact do. Second, there should be a clear metric of successful performance on this task.

Step 2

Organize a daylong hackathon. The purpose is to get all the teams in your company to participate at the same time on the case study that you came up with in Step 1.

Step 3

Assign half the teams participating in the hackathon to the new agile structure. Keep the remaining teams in their standard structures with the same team leaders and role allocations. It is crucial this is done in a randomized manner—roll dice or flip a coin if you have to.
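The coin flip can just as easily be a few lines of code. This is a minimal sketch, assuming 12 hypothetical teams (the names are placeholders): shuffle the list, then split it down the middle, so that no one's judgment about which teams "should" try the agile structure can creep into the assignment:

```python
import random

# Hypothetical roster of teams participating in the hackathon
teams = [f"Team {i}" for i in range(1, 13)]

# Shuffle, then split down the middle: the digital
# equivalent of flipping a coin for each team
random.shuffle(teams)
half = len(teams) // 2
agile, traditional = teams[:half], teams[half:]

print("Agile structure:      ", agile)
print("Traditional structure:", traditional)
```

Publishing the assignment procedure in advance (or running it in front of everyone) also helps defuse the political objections mentioned earlier, since no one chose who got which structure.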

Step 4

Compare how the teams in the agile structure vs. those in the traditional structure performed.
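The comparison itself need not be elaborate. Below is a hedged sketch using invented hackathon scores and a simple permutation test: it asks how often a random relabelling of the same scores would produce a gap at least as large as the one observed, which is a natural fit for the small samples a one-day hackathon yields:

```python
import random
import statistics

# Hypothetical hackathon scores per team (numbers are made up)
agile_scores = [78, 85, 81, 90, 74, 88]
traditional_scores = [72, 80, 69, 77, 83, 71]

observed = statistics.mean(agile_scores) - statistics.mean(traditional_scores)

# Permutation test: if structure made no difference, labels are
# interchangeable, so reshuffle them and see how often chance alone
# produces a gap at least as large as the observed one
pooled = agile_scores + traditional_scores
n = len(agile_scores)
random.seed(0)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(f"Observed difference: {observed:.1f} points, p ≈ {p_value:.3f}")
```

A small p-value suggests the gap between structures is unlikely to be chance; with samples this small, though, treat the result as a directional signal rather than proof.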

That’s it! In one day, you can combine a team-building event with a pilot test of the new design you are thinking of implementing.

Why this is a good idea

This approach creates a “toy” version of the work (e.g., projects) you are trying to improve, and organizational design variants (agile vs. traditional teams) that can be piloted cheaply and fast with randomization. Think of it as equivalent to what aircraft companies do when building a new model airplane: They first test prototypes in a wind tunnel. The wind tunnel is not the same as real-world conditions, but it gives useful signals that can save a lot of money and grief.

Discussing the results at the end of the hackathon (they can be computed in hours) can lead to rich insights about the intended organization design change. It creates a broad, evidence-based understanding of the trade-offs, as well as buy-in for the change. It’s also worth highlighting that the entire protocol for gamified randomized controlled trials can be run online (or even within a metaverse application), both within and across teams. In fact, it could be used to answer questions about whether distributed working within teams will be effective for your company.

In sum: The low success rates of organizational redesign projects suggest that companies have nothing to lose, and perhaps a lot to gain, by trying out gamified randomized controlled trials. Start playing!

First published March 14, 2023, on INSEAD.


About The Author


Phanish Puranam

Phanish Puranam is the Roland Berger chaired professor of strategy and organization design at INSEAD. He is also the academic director of INSEAD’s Ph.D. program.



Comments

Playing games with people's jobs.

"Put simply, organizational contexts are dauntingly complex and vary in ways that we can’t fully observe."

So how are you going to control these contexts well enough to perform a fruitful RCT on a realistically sized and realistically diverse team? 

A business is a process: not a population. Use process statistics to improve processes. 

If you think that a change for your business is a good idea, articulate what the improvement is supposed to be, implement the change, and use a process behaviour chart to determine what outcome, if any, was likely attributable to your action on the system. Right or wrong, you learn something no matter what. Rinse, and repeat. This is what continuous improvement is. 

Or, you could do what this author proposes, and turn your employees into test subjects for an experiment that will likely tell you very little, if anything. Unless your team is simply massive, there's no way to truly randomize the allocation of talent and adaptability, which vary widely from person to person. Also, you are not comparing a new process to an old process in this RCT, but rather one random team doing what they are comfortable with against another random team doing something foreign to them.

I'm not a software guy, but I can't help but think that this sort of experiment will leave you with a big, messy, confusing data-analysis effort and more questions than answers at the end of what will probably be a frustrating and demoralizing experience for your team members.