Database
Introduction
Database replication is a critical part of the fault-tolerant Passwork architecture. It keeps data synchronized across nodes. When the Primary node fails, the remaining nodes vote to elect a new Primary, which the application servers switch to automatically.
Voting mechanism
How it works
- Every node has a vote — each node can participate in electing the Primary.
- Quorum — electing a new Primary requires a majority (>50%) of votes.
- Automatic election — when the current Primary fails, nodes vote automatically.
- Data synchronization — the new Primary must have up-to-date data.
Why the node count matters
Minimum nodes: 3
To stay fault-tolerant, use an odd number of nodes (3, 5, 7) in the replica set.
Why 2 nodes are not enough
- With 2 nodes: if one fails, the remaining node cannot reach quorum (50% is insufficient; you need >50%).
- With 3 nodes: if one fails, the remaining 2 nodes form a majority (66%) and can elect a new Primary.
Why an odd number is preferred
With an even number of nodes (for example, 4), you risk a split-brain scenario:
- If the network splits into two parts with 2 nodes each, neither side can reach majority (>50% required).
- Both parts switch to read-only mode, and the system becomes unavailable.
Example issue with 4 nodes:
┌─────────────────────────────────────────────────────────────────────────┐
│ NETWORK SPLIT │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Node #1 │ │ Node #2 │ │ Node #3 │ │ Node #4 │ │
│ │ │ │ │ │ │ │ │ │
│ │ Vote: 1 │ │ Vote: 1 │ │ Vote: 1 │ │ Vote: 1 │ │
│ └──────┬───────┘ └──────┬───────┘ └──────┬───────┘ └──────┬───────┘ │
│ │ │ │ │ │
│ └─────────────────┘ └─────────────────┘ │
│ │ │ │
│ Part 1: 2 nodes (50%) Part 2: 2 nodes (50%) │
│ — Cannot elect Primary — Cannot elect Primary │
│ — Read-only mode — Read-only mode │
│ — Passwork unavailable — Passwork unavailable │
└─────────────────────────────────────────────────────────────────────────┘
Configuration comparison:
| Nodes | One node fails | Worst-case network split | Recommendation |
|---|---|---|---|
| 2 | Read-only | Read-only | Not recommended |
| 3 | Works | Works (2 of 3) | Minimum recommended |
| 4 | Works | Read-only (2 and 2) | Not recommended |
| 5 | Works | Works (3 of 5) | Recommended |
| 6 | Works | Read-only (3 and 3) | Not recommended |
| 7 | Works | Works (4 of 7) | Recommended |
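The pattern in this table is plain majority arithmetic. A minimal Python sketch (no MongoDB required) that reproduces the "Worst-case network split" column:

```python
def has_quorum(votes_reachable: int, total_votes: int) -> bool:
    """A partition can elect a Primary only with a strict majority (>50%) of votes."""
    return votes_reachable > total_votes / 2

# Worst-case network split for each replica set size from the table above:
# the larger side of the most even possible split.
for total in (2, 3, 4, 5, 6, 7):
    larger_side = (total + 1) // 2
    outcome = "works" if has_quorum(larger_side, total) else "read-only"
    print(f"{total} nodes: {outcome} ({larger_side} of {total} in the larger part)")
```

Odd sizes always leave one side of a split with a strict majority, which is why only odd counts appear in the "Recommended" rows.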
Voting diagram
┌─────────────────────────────────────────────────────────────────┐
│ REPLICA SET │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Node #1 │ │ Node #2 │ │ Node #3 │ │
│ │ (Primary) │ │ (Secondary) │ │ (Secondary) │ │
│ │ │ │ │ │ │ │
│ │ Vote: 1 │ │ Vote: 1 │ │ Vote: 1 │ │
│ └──────┬───────┘ └──────┬───────┘ └──────┬───────┘ │
│ │ │ │ │
│ └─────────────────┼─────────────────┘ │
│ │ │
│ Voting between nodes │
│ (Quorum: 2 of 3 = majority) │
└─────────────────────────────────────────────────────────────────┘
Operating scenarios
Normal operation (3 nodes):
- Primary handles all reads and writes.
- Secondary nodes synchronize with the Primary.
- All nodes participate in voting.
One node fails (2 nodes remain):
- Remaining 2 nodes form a majority (66%).
- A new Primary is elected automatically.
- The system continues to work for reads and writes.
Two nodes fail (1 node remains):
- The remaining node cannot reach quorum (1 of 3 = 33%, below the required >50%).
- The replica set switches to read-only mode.
- Passwork becomes unavailable for writes.
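These states can be observed directly by polling the replica set status. A sketch using the PyMongo driver for illustration (hostnames match the connection string used later in this guide; `replSetGetStatus` is a standard MongoDB admin command):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://node1:27017,node2:27017,node3:27017/?replicaSet=rs0")

status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    # stateStr is "PRIMARY", "SECONDARY", or an unhealthy state for failed nodes.
    health = "up" if member["health"] == 1 else "down"
    print(member["name"], member["stateStr"], health)
```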
Read-only mode and Passwork availability
When read-only mode occurs
A replica set goes into read-only mode when:
- No quorum — half or more of the nodes are unavailable, so no majority remains.
- Network partition — no part of the cluster holds more than 50% of the nodes.
Impact on Passwork
When the database is in read-only mode, Passwork is fully unavailable. Any action in Passwork (sign-in, viewing data, creating or updating items) requires writes to the database to record activity history. With writes blocked, these operations cannot be completed.
What users see:
- Connection errors when trying to reach the database
- Log messages such as "read-only mode" or "no primary available"
- Error messages in the UI when attempting to use the system
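From the application side, the failure surfaces as a driver error rather than a specific log line. A minimal PyMongo sketch of what a blocked write looks like (the `activity_log` collection name is illustrative; the `pw` database comes from the connection string shown in the next section):

```python
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

client = MongoClient(
    "mongodb://node1:27017,node2:27017,node3:27017/pw?replicaSet=rs0",
    serverSelectionTimeoutMS=5000,  # fail fast instead of waiting the default 30 s
)

try:
    # Every write needs a Primary; without quorum there is none to accept it.
    client.pw.activity_log.insert_one({"event": "sign-in"})
except ServerSelectionTimeoutError as exc:
    # Raised when the driver cannot find a Primary within the timeout.
    print("No primary available:", exc)
```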
MongoDB Replica Set
Architecture
A MongoDB replica set consists of multiple nodes: one Primary and one or more Secondaries.
Node types:
- Primary — handles all write and read operations.
- Secondary — syncs from the Primary and can optionally serve reads.
- Arbiter (optional) — participates in elections but does not store data.
How it works
- Writes are performed only on the Primary node.
- Oplog — the Primary records every write operation in its operation log (oplog); see the sketch after this list.
- Synchronization — Secondary nodes read the oplog from the Primary and apply the same operations to their own data.
- Voting — when the Primary fails, nodes vote to elect a new Primary.
- Automatic failover — a new Primary is chosen automatically from nodes with up-to-date data.
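The oplog is an ordinary capped collection (`local.oplog.rs`), so the replication feed can be inspected directly. A PyMongo sketch for illustration:

```python
import pymongo
from pymongo import MongoClient

client = MongoClient("mongodb://node1:27017,node2:27017,node3:27017/?replicaSet=rs0")

# $natural order is insertion order; the last entry is the newest replicated write.
latest = client.local["oplog.rs"].find().sort("$natural", pymongo.DESCENDING).limit(1)
for entry in latest:
    # "ts" is the operation timestamp, "op" the type (i/u/d/n), "ns" the namespace.
    print(entry["ts"], entry.get("op"), entry.get("ns"))
```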
Connection string
All Passwork application servers connect to the replica set using a single connection string:
mongodb://node1:27017,node2:27017,node3:27017/pw?replicaSet=rs0
The MongoDB driver automatically:
- Detects the current Primary node
- After failover, routes queries to the new Primary elected by the replica set
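With PyMongo, for example, the discovered topology is visible on the client object (`MongoClient.primary` and `MongoClient.secondaries` are standard driver properties; hostnames are taken from the string above):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://node1:27017,node2:27017,node3:27017/pw?replicaSet=rs0")

# A cheap command forces the initial server discovery.
client.admin.command("ping")
print("Primary:", client.primary)          # e.g. ('node2', 27017)
print("Secondaries:", client.secondaries)  # the remaining healthy members
```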
Requirements for node placement
Importance of independent sites
For maximum fault tolerance, use three independent physical sites (data centers).
Why this matters:
- Protection from disasters — if one site fails, others keep running.
- Independent infrastructure — each site has its own power, cooling, and network.
- Geographic distribution — nodes can be in different locations.
Recommended placement architecture
┌─────────────────────────────────────────────────────────────────────┐
│ RECOMMENDED ARCHITECTURE │
│ │
│ ┌──────────────────┐ ┌──────────────────┐ ┌──────────────────┐ │
│ │ DC #1 │ │ DC #2 │ │ DC #3 │ │
│ │ │ │ │ │ │ │
│ │ ┌────────────┐ │ │ ┌────────────┐ │ │ ┌────────────┐ │ │
│ │ │ MongoDB │ │ │ │ MongoDB │ │ │ │ MongoDB │ │ │
│ │ │ Node #1 │ │ │ │ Node #2 │ │ │ │ Node #3 │ │ │
│ │ └────────────┘ │ │ └────────────┘ │ │ └────────────┘ │ │
│ │ │ │ │ │ │ │
│ │ Independent │ │ Independent │ │ Independent │ │
│ │ infrastructure │ │ infrastructure │ │ infrastructure │ │
│ └────────┬─────────┘ └────────┬─────────┘ └───────┬──────────┘ │
│ │ │ │ │
│ │ │ │ │
│ └─────────────────────┼────────────────────┘ │
│ │ │
│ High-speed network │
│ (for data replication) │
└─────────────────────────────────────────────────────────────────────┘
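A set spanning three sites is declared once, when it is initiated. A hedged sketch using PyMongo (the `rs0` name matches the connection strings in this guide; the hostnames and `dc` tags are illustrative):

```python
from pymongo import MongoClient

# Before the replica set exists, connect directly to one future member.
client = MongoClient("mongodb://node1:27017/?directConnection=true")

config = {
    "_id": "rs0",
    "members": [
        # One member per independent data center; tags are optional labels.
        {"_id": 0, "host": "node1:27017", "tags": {"dc": "dc1"}},
        {"_id": 1, "host": "node2:27017", "tags": {"dc": "dc2"}},
        {"_id": 2, "host": "node3:27017", "tags": {"dc": "dc3"}},
    ],
}
client.admin.command("replSetInitiate", config)
```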
Network requirements
Database nodes need:
- High-speed connections for replication
- Low latency for fast synchronization
- Stable links with minimal packet loss
- Sufficient bandwidth for replication traffic
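Whether the links meet these requirements shows up in practice as replication lag. A monitoring sketch with PyMongo (the alerting threshold is an assumption; `optimeDate` is part of the standard `replSetGetStatus` output):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://node1:27017,node2:27017,node3:27017/?replicaSet=rs0")
status = client.admin.command("replSetGetStatus")

# optimeDate is the wall-clock time of the last operation a member has applied.
primary = next(m for m in status["members"] if m["stateStr"] == "PRIMARY")
for member in status["members"]:
    if member["stateStr"] == "SECONDARY":
        lag = (primary["optimeDate"] - member["optimeDate"]).total_seconds()
        flag = "  <-- check the link" if lag > 10 else ""  # illustrative threshold
        print(f'{member["name"]}: {lag:.1f} s behind the Primary{flag}')
```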
Minimum requirements
Minimum: 3 nodes across 3 independent sites
- Each node on a separate physical site (data center)
- High-speed network links between sites
- Independent infrastructure per site
Alternative (not recommended):
- 3 nodes in one data center but on different servers/racks
- Less protection from disasters, but still tolerant to a single node failure
Connecting application servers
Single connection string
All Passwork application servers connect through one connection string.
MongoDB drivers discover the Primary automatically when you list all nodes:
mongodb://db-mongo-1,db-mongo-2,db-mongo-3/?replicaSet=rs0
Automatic Primary detection
- The driver detects the current Primary during connection.
- It monitors node health.
- After an election, it switches traffic to the new Primary automatically.
Recommendations
- Use one shared connection string on all application servers.
- List all nodes; do not point to a single host.
- Set reasonable timeouts for connections and operations (see the sketch after this list).
- Monitor the replica set to track which node is Primary and verify elections.
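Putting the list above together, a hedged PyMongo configuration sketch (the timeout values are illustrative starting points, not tuned defaults):

```python
from pymongo import MongoClient

client = MongoClient(
    # One shared string listing every node, as recommended above.
    "mongodb://db-mongo-1,db-mongo-2,db-mongo-3/?replicaSet=rs0",
    serverSelectionTimeoutMS=5000,  # how long to wait for a (new) Primary
    connectTimeoutMS=3000,          # per-node TCP connection timeout
    socketTimeoutMS=10000,          # per-operation socket timeout
    retryWrites=True,               # retry a write once after a clean failover
)

# Logging the current Primary periodically makes elections visible in monitoring.
print("Current primary:", client.primary)
```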