The Shared Account Is a Moral Problem, Not a Technical One
Everyone in security knows that shared accounts are bad. The argument is technical: if five people share a login, you cannot attribute actions to individuals, you cannot build a meaningful audit trail, you cannot do forensics after an incident, you cannot enforce least privilege at the individual level. When something goes wrong in an environment with shared accounts, you know that one of five people did it. You do not know which one.
This framing is correct and it is insufficient. Shared accounts persist not because organizations don't understand the technical argument against them. They persist because the technical argument doesn't address why shared accounts feel reasonable to the people using them. The real problem is not technical. It is moral, in a specific and interesting sense.
Why shared accounts feel right
Consider the oncology nursing unit that shares a single login for the medication management system. There are twelve nurses on the floor. The system requires a password that's changed every ninety days. The password is on a sticky note in the break room.
This is a security failure of the most obvious kind. It is also, from the nurses' perspective, a completely reasonable adaptation to a system that was designed without adequate consideration for how their work actually functions. In an emergency, a nurse cannot stop to remember a password. When a patient's medication needs to be administered immediately, the barrier of individual authentication is not a security feature — it is a patient safety risk. The shared credential exists because the alternative, as the nurses experience it, is worse.
The security team sees a shared account. The nursing unit sees a workaround that allows them to deliver safe patient care. Both of them are right. The failure is in the system design that made those two things incompatible.
"The shared account is not evidence of bad security culture. It is evidence that the security requirements and the operational requirements were never reconciled."
The attribution problem as a moral problem
When something goes wrong in a shared account environment, the organization faces a choice that has moral dimensions. They know an action was taken by one of five people. They may have some circumstantial evidence about which one. Do they investigate all five? Do they discipline all five in the absence of evidence? Do they let it go because they can't prove who did it?
Each of these options is morally uncomfortable in a different way. Investigating all five treats people who did nothing wrong as suspects. Disciplining all five for an action one of them took is collective punishment. Letting it go means the person who caused harm faces no accountability, and the other four know that they could do the same thing without consequence.
The technical problem — you can't do forensics — is also a justice problem. The people who use a shared account are not just accepting a security risk. They are accepting an environment in which individual accountability has been structurally undermined. They are accepting that if something goes wrong, they may bear consequences for something they didn't do, or watch someone else escape consequences for something they did.
Who creates shared accounts and why
Shared accounts are almost never created with the intent to undermine accountability. They are created because the alternative — individual accounts for everyone who needs access — requires provisioning processes that are too slow, or approval workflows that are too onerous, or licenses that are too expensive, or technical integrations that don't support individual authentication.
The nurse manager who put the password on the sticky note made a decision that the operational need for immediate medication access outweighed the security requirement for individual attribution. The IT administrator who created the shared admin account made a decision that the cost of provisioning individual accounts for a dozen contractors outweighed the audit trail benefit. The developer who hardcoded the shared service account credentials made a decision that getting the integration working was more important than getting it working securely.
Each of these decisions is individually defensible. Collectively, they produce an environment where accountability is systematically impossible. Not because anyone intended that outcome. Because nobody was responsible for the overall system, only for their individual decisions within it.
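The hardcoded-credential case is the one with the most mechanical fix: inject the credential per environment instead of sharing it through the source tree. A minimal sketch in Python (the variable names and the `RuntimeError` wrapper are illustrative, not any particular system's convention):

```python
import os

# Anti-pattern: a shared credential baked into the code. Everyone who can
# read or deploy this file now "is" the service account, and the audit
# trail can never say which person actually acted.
#
# DB_USER = "svc_integration"      # shared by every developer and every job
# DB_PASSWORD = "s3cret"

def load_db_credentials() -> tuple[str, str]:
    """Read credentials injected per deployment from the environment.

    A secrets manager would serve the same purpose; the point is only
    that the credential is provisioned per environment and per identity,
    not committed to the repository.
    """
    try:
        return os.environ["DB_USER"], os.environ["DB_PASSWORD"]
    except KeyError as missing:
        # Fail fast and loudly rather than fall back to a shared default.
        raise RuntimeError(f"credential not provisioned: {missing}") from None
```

Failing fast matters here: a silent fallback to a baked-in default is exactly how the shared credential creeps back in.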
What this means for how we talk about it
The standard remediation advice for shared accounts is: eliminate them. Provision individual accounts. Enforce individual authentication. Require MFA. Audit the audit trail.
This advice is correct and it fails consistently because it treats a social and organizational problem as if it were a configuration problem. The nurses don't need a different password policy. They need a medication management system that was designed for how nursing work actually functions — with rapid authentication options, with role-based access that survives shift changes, with emergency access that doesn't compromise routine accountability.
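The "emergency access that doesn't compromise routine accountability" requirement is usually called break-glass access: access is never blocked in an emergency, but the cost of invoking it is a flagged, individually attributed audit entry that someone must review afterward. A minimal sketch of the shape of that design, with all names hypothetical:

```python
import datetime
from dataclasses import dataclass, field

@dataclass
class AccessEvent:
    user: str        # individual identity, even under emergency access
    action: str
    emergency: bool  # True => flagged for mandatory after-the-fact review
    at: datetime.datetime = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc)
    )

class MedicationSystem:
    """Illustrative break-glass model: the emergency path grants access
    immediately, but every action is still attributed to one person."""

    def __init__(self) -> None:
        self.audit_log: list[AccessEvent] = []

    def administer(self, user: str, drug: str, *,
                   authenticated: bool = False,
                   break_glass: bool = False) -> None:
        if not authenticated and not break_glass:
            raise PermissionError("authenticate, or declare an emergency")
        # Break-glass never blocks; it trades immediacy for a flagged,
        # reviewable entry rather than for anonymity.
        self.audit_log.append(
            AccessEvent(user=user, action=f"administer:{drug}",
                        emergency=break_glass and not authenticated))

    def pending_reviews(self) -> list[AccessEvent]:
        return [e for e in self.audit_log if e.emergency]
```

The design choice is the moral argument in miniature: the system absorbs the operational pressure (no authentication barrier in an emergency) instead of pushing users toward a shared credential, and accountability is preserved by review rather than prevented by a sticky note.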
The moral argument against shared accounts is not that they are lazy or negligent. It is that they create a structural injustice: an environment where individual accountability is impossible, where the people who use the shared account have accepted a risk they didn't fully choose and can't individually opt out of. That argument lands differently than the technical one. It places the obligation for change not on the users but on the system designers.
The shared account is not a user problem. It is a system design problem that has been mislabeled as a user problem. Getting that label right is the beginning of actually fixing it.