What psychological safety actually means

The term has been in circulation long enough to have developed a reputation for vagueness. It gets conflated with "nice culture" or "being supportive" or "avoiding conflict." None of these are quite right.

The most useful definition comes from Harvard Business School professor Amy Edmondson, who has spent decades studying it: psychological safety is the belief that you won't be punished or humiliated for speaking up with ideas, questions, concerns, or mistakes. It's not about comfort. It's not about avoiding hard conversations. It's about whether people feel safe enough to take interpersonal risks (to say "I don't know," "I think we're doing this wrong," "I made an error") without fear of consequences that will damage them professionally or socially.

Edmondson's research, and Google's famous Project Aristotle study which followed, showed that psychological safety was the single strongest predictor of team effectiveness, more predictive than individual talent, experience, or any specific technical practice. Teams with high psychological safety learned faster, made fewer serious errors, and performed better on almost every measure.

For engineering teams specifically, the implications run deep.

Why it's particularly important in engineering

Engineering teams face a specific set of conditions that make psychological safety either a significant amplifier of performance or, when absent, a serious drag on it.

The complexity and interdependence of software systems

Modern software systems are too complex for any one person to understand completely. Good engineering requires constant knowledge-sharing: asking questions, flagging when something doesn't make sense, raising concerns about an approach before it becomes a production incident. Teams where engineers feel safe saying "I don't understand this part of the codebase" or "I'm not sure this approach will scale" surface problems before they become expensive. Teams where engineers feel they're expected to already know (where not knowing is read as a sign of weakness) hide problems until they're much harder to fix.

The inevitability of failure

Outages happen. Bugs ship. Architectural decisions that looked right at the time turn out to be wrong. This is not an indictment of any engineering team; it's the nature of building complex systems under uncertainty. What varies enormously across teams is how they respond when things go wrong. Teams with high psychological safety run blameless post-mortems, share detailed incident reports, and use failures as learning opportunities. Teams with low psychological safety run retrospectives where everyone is careful about what they say, incidents get attributed to individuals rather than systems, and the same types of problems recur.

The innovation premium

The most significant technical advances in most companies come from engineers who felt confident enough to say "what if we tried something completely different?" That confidence requires psychological safety: the willingness to put forward an idea that might be wrong, in front of colleagues who are smart enough to identify its flaws. Engineering cultures that punish bad ideas (through mockery, dismissiveness, or the more insidious mechanism of simply never acknowledging them) get fewer good ideas. The maths is unavoidable.

The post-raise environment: companies that have just raised a significant round often see an unexpected drop in engineering team performance. The team is under more pressure, expectations are higher, the stakes feel more visible. In this environment, engineers who were willing to say "I'm not sure this approach is right" before the raise become less willing, precisely when the quality of their judgment matters most. Building psychological safety before the pressure peaks is one of the highest-leverage things a CTO can do in the months after a funding round.

How to spot low psychological safety in an engineering team

Low psychological safety often doesn't present as obvious conflict or unhappiness. The signs are usually subtler:

  • Meetings where the most senior person's view is never challenged. If the CTO or engineering lead proposes an approach and the response is silence or immediate agreement, that's rarely a sign of genuine alignment. It's usually a sign that people don't feel safe disagreeing.
  • Post-mortems that don't identify systemic causes. If the response to every incident is "we need to be more careful" rather than "we need to change this process or system," the team isn't doing genuine root-cause analysis; they're producing socially safe narratives.
  • Engineers who go quiet after a mistake. A team member who makes an error and then becomes visibly withdrawn, less likely to ask questions, or less participatory in technical discussions is showing the classic pattern of psychological threat response: retreat and protect.
  • Questions asked only in 1:1s, never in group settings. If engineers bring up concerns or questions only when they're alone with their manager (never in team meetings or code review threads), they're managing social risk. The information is flowing, but only through narrow, inefficient channels.
  • Attrition concentrated in your strongest performers. Senior engineers with strong alternatives leave when they don't feel safe. Mid-level engineers without good options stay, but become disengaged. If your best engineers are leaving and citing "culture," psychological safety is often the underlying factor, even if they don't use that term.

How to actually build psychological safety in an engineering team

The honest answer is: slowly, consistently, and primarily through the behaviour of leaders. Psychological safety is not created by workshops, values statements, or mandatory "speak-up" training. It's created by the pattern of what happens when people do speak up. Every interaction is a data point that either reinforces or undermines the sense that it's safe to take interpersonal risks.

Model intellectual humility at the top

The single most powerful signal in any engineering team's psychological safety climate comes from how the most senior people in the room handle not-knowing and being wrong. A CTO who says "I don't know, what do others think?" in a leadership meeting, or who says "I was wrong about that, let me think about why" in a post-mortem, is teaching the entire team what's acceptable. A CTO who always has an answer and never admits error is teaching the opposite lesson, regardless of how many times they say "I want to hear your ideas."

In my experience, this is the hardest shift for technical leaders to make, because expertise and certainty are often how they've been rewarded throughout their careers. But the most effective engineering leaders I've seen (across 25 years of this work) are consistently those who are most comfortable saying "I'm not sure."

Respond to mistakes with curiosity, not judgment

Every incident, bug, or missed deadline is a test of the psychological safety climate. The pattern of response matters enormously. A response that starts with "what happened?" and works towards understanding the system conditions that enabled the failure (without naming individuals as causes) teaches engineers that honesty in failure is safe. A response that starts with "who was responsible for this?" teaches them that it isn't.

This doesn't mean avoiding accountability. It means understanding the difference between accountability (a person understands and owns their contribution to an outcome) and blame (a person is punished for being associated with a bad outcome). The first produces learning and improvement. The second produces caution and concealment.

Make disagreement structurally possible

Psychological safety isn't only built by how you respond to people who speak up; it's also built by how you design the conditions for speaking up. Some practical mechanisms that help:

  • Pre-mortems. Before a significant project begins, hold a meeting where the team imagines the project has failed and works backwards to identify why. This structure makes it safe to raise concerns because the exercise explicitly invites pessimism; voicing a worry isn't read as disloyalty or negativity.
  • Anonymous feedback channels. Not as a substitute for open conversation, but as training wheels that let engineers who don't yet trust the culture surface information that wouldn't otherwise get through.
  • Decisions documented with reasoning. Architecture Decision Records and similar artefacts create a norm of "we explain why we made decisions", which also creates a norm of "decisions can be revisited if the reasoning no longer holds." A minimal example of the format appears just after this list.
  • Explicit invitation of dissent. Not "does anyone have concerns?" (which invites silence) but "who has a concern about this approach that I haven't heard yet?" (which names the desired outcome and makes raising concerns the expected behaviour, not the exceptional one).
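
For teams that haven't used Architecture Decision Records before, the format matters far less than the habit. A minimal record, loosely following the widely used structure of title, status, context, decision, and consequences (the wording below is illustrative, not a standard), might look something like this:

    ADR-0007: [short title of the decision]
    Status: Proposed / Accepted / Superseded
    Context: the problem, the constraints, and the options considered, in a few sentences
    Decision: what was chosen, stated plainly
    Consequences: what gets easier, what gets harder, and what would prompt the team to revisit this

What each record says is less important than the norm it establishes: anyone on the team can read why a decision was made, and anyone can challenge it when the context changes.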

Build it deliberately when joining a new team

When I join a company as a fractional or interim CTO, one of my earliest priorities is assessing and improving the psychological safety climate. This doesn't happen by announcing "I'm going to build a psychologically safe team." It happens through the accumulation of small signals: asking questions in team settings that I could answer myself (to show that not-knowing is fine), responding to the first mistake I observe with curiosity rather than criticism (to set the tone for post-mortems), and explicitly naming and thanking people who raise concerns or disagree in meetings (to reinforce that this behaviour is valued, not merely tolerated).

It typically takes 60–90 days before the team's behaviour starts to shift visibly. Engineers who were quiet in meetings start to contribute. Post-mortems start to produce genuine insight rather than procedurally safe summaries. People start asking questions in public that they previously only asked in private. None of this is dramatic. All of it matters.

The leadership decision this requires

Building psychological safety requires a genuine shift in what you prioritise as a leader. It means valuing the quality of information that reaches you (honest, complete, early) over the maintenance of an image of certainty and control. It means accepting that the short-term discomfort of hearing "I think this is the wrong approach" is worth far more than the short-term comfort of apparent agreement.

The companies where I've seen engineering teams consistently outperform their peers aren't the ones with the best technical stack or the most sophisticated deployment pipeline. They're the ones where the most junior engineer in the room genuinely believes they can speak up, ask a question, or disagree, and that doing so will make things better, not worse.

That's not a culture that happens by accident. It's a leadership choice, made repeatedly, in small interactions, over a long period of time. And it's one of the highest-leverage investments any engineering leader can make.


If you're leading an engineering team and you're not thinking about psychological safety as a primary driver of performance, I'd encourage you to start. The research is unambiguous. The practical mechanisms for building it are learnable. And the engineering leaders who get this right tend to build teams that everyone else is trying to hire from.
