Implementing Raft Election Mechanism in Go – Part 2
This article, the second in a series on implementing the Raft consensus algorithm in Go, covers the election mechanism: server states, terms, election timers, the RequestVote and AppendEntries RPC handlers, and behavior under network partitions, with complete Go code for the consensus module and its interactions.
Code Structure – The Raft implementation isolates the consensus logic in a ConsensusModule type defined in raft.go. The module contains fields such as id int and peerIds []int, along with a pointer to a Server that handles RPC communication.
Raft Server States – A server can be a follower, candidate, or leader. The article explains the state machine, term concept, and how leaders send heartbeats while followers reset election timers upon receiving messages.
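To make the states and module fields concrete, here is a minimal sketch assembled from the snippets in this article. The Server stub is an assumption included only so the sketch compiles; the article's real Server does the actual RPC plumbing.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Server stands in for the article's RPC server type (assumption, so the
// sketch compiles on its own).
type Server struct{}

// CMState is the role a ConsensusModule currently plays.
type CMState int

const (
	Follower CMState = iota
	Candidate
	Leader
)

func (s CMState) String() string {
	switch s {
	case Follower:
		return "Follower"
	case Candidate:
		return "Candidate"
	default:
		return "Leader"
	}
}

// ConsensusModule bundles the per-node Raft state used by the election code.
type ConsensusModule struct {
	mu      sync.Mutex
	id      int     // this server's ID
	peerIds []int   // IDs of the other servers in the cluster
	server  *Server // RPC plumbing for talking to peers

	currentTerm        int
	votedFor           int // -1 means "no vote cast this term"
	state              CMState
	electionResetEvent time.Time // last time we heard from a leader or candidate
}

func main() {
	cm := &ConsensusModule{id: 1, peerIds: []int{2, 3}, votedFor: -1}
	fmt.Println(cm.state) // the zero value of CMState is Follower
}
```

Making Follower the zero value of CMState means a freshly constructed module starts out as a follower, which is exactly what Raft requires.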
Election Timer – Each follower runs runElectionTimer, which picks a random timeout (150–300 ms) and checks periodically (every 10 ms) whether it should act. If the timeout elapses without the timer being reset by a message from a leader or candidate, the server becomes a candidate and starts an election.
```go
func (cm *ConsensusModule) runElectionTimer() {
	timeoutDuration := cm.electionTimeout()
	cm.mu.Lock()
	termStarted := cm.currentTerm
	cm.mu.Unlock()
	// ... loop with ticker, check state, start election if needed ...
}
```

Becoming a Candidate – The startElection function increments the term, votes for itself, resets the election timer, and sends RequestVote RPCs to all peers concurrently. Votes are counted atomically, and a majority triggers startLeader.
```go
func (cm *ConsensusModule) startElection() {
	cm.state = Candidate
	cm.currentTerm++
	savedCurrentTerm := cm.currentTerm
	cm.electionResetEvent = time.Now()
	cm.votedFor = cm.id

	var votesReceived int32 = 1 // we always vote for ourselves
	for _, peerId := range cm.peerIds {
		go func(peerId int) {
			args := RequestVoteArgs{Term: savedCurrentTerm, CandidateId: cm.id}
			var reply RequestVoteReply
			if err := cm.server.Call(peerId, "ConsensusModule.RequestVote", args, &reply); err == nil {
				cm.mu.Lock()
				defer cm.mu.Unlock()
				if cm.state != Candidate {
					return // election already won or abandoned
				}
				if reply.Term > savedCurrentTerm {
					// A peer has a newer term; revert to follower.
					cm.becomeFollower(reply.Term)
					return
				}
				if reply.Term == savedCurrentTerm && reply.VoteGranted {
					votes := int(atomic.AddInt32(&votesReceived, 1))
					if votes*2 > len(cm.peerIds)+1 {
						cm.startLeader() // won the election with a majority
					}
				}
			}
		}(peerId)
	}

	// Run another timer in case this election is inconclusive.
	go cm.runElectionTimer()
}
```

Leader Role – When a server becomes leader, startLeader launches a goroutine that sends periodic AppendEntries (heartbeat) RPCs every 50 ms to all peers via leaderSendHeartbeats.
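The leaderSendHeartbeats helper itself is not shown in the excerpt. A minimal, self-contained sketch follows; the Server stub with its counting Call method and the AppendEntriesArgs fields are assumptions for illustration (the Call signature mirrors the one used in startElection above), and the wg.Wait at the end exists only to make this standalone demo deterministic:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// AppendEntriesArgs carries a heartbeat; log fields are omitted in this part.
type AppendEntriesArgs struct {
	Term     int
	LeaderId int
}
type AppendEntriesReply struct {
	Term int
}

// Server is a stub transport that just counts outgoing RPCs (assumption,
// standing in for the article's real RPC server).
type Server struct {
	calls int32
}

func (s *Server) Call(peerId int, method string, args interface{}, reply interface{}) error {
	atomic.AddInt32(&s.calls, 1)
	return nil
}

func (s *Server) Calls() int32 { return atomic.LoadInt32(&s.calls) }

type ConsensusModule struct {
	mu          sync.Mutex
	id          int
	peerIds     []int
	server      *Server
	currentTerm int
}

// leaderSendHeartbeats sends one round of empty AppendEntries RPCs to every
// peer in parallel, tagged with the term the leader saw when the round began.
func (cm *ConsensusModule) leaderSendHeartbeats() {
	cm.mu.Lock()
	savedCurrentTerm := cm.currentTerm
	cm.mu.Unlock()

	var wg sync.WaitGroup
	for _, peerId := range cm.peerIds {
		wg.Add(1)
		go func(peerId int) {
			defer wg.Done()
			args := AppendEntriesArgs{Term: savedCurrentTerm, LeaderId: cm.id}
			var reply AppendEntriesReply
			if err := cm.server.Call(peerId, "ConsensusModule.AppendEntries", args, &reply); err == nil {
				// A reply carrying a higher term would demote us to
				// follower; that handling is omitted in this sketch.
				_ = reply
			}
		}(peerId)
	}
	wg.Wait() // demo only: the real sender does not block on replies
}

func main() {
	srv := &Server{}
	cm := &ConsensusModule{id: 1, peerIds: []int{2, 3}, server: srv, currentTerm: 5}
	cm.leaderSendHeartbeats()
	fmt.Println(srv.Calls()) // one heartbeat per peer
}
```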
```go
func (cm *ConsensusModule) startLeader() {
	cm.state = Leader

	go func() {
		ticker := time.NewTicker(50 * time.Millisecond)
		defer ticker.Stop()

		// Send heartbeats as long as this server is still the leader.
		for {
			cm.leaderSendHeartbeats()
			<-ticker.C

			cm.mu.Lock()
			if cm.state != Leader {
				cm.mu.Unlock()
				return
			}
			cm.mu.Unlock()
		}
	}()
}
```

RPC Handlers – The article shows the implementations of RequestVote and AppendEntries, which update terms, grant votes, reset election timers, and ensure at most one leader per term.
```go
func (cm *ConsensusModule) RequestVote(args RequestVoteArgs, reply *RequestVoteReply) error {
	cm.mu.Lock()
	defer cm.mu.Unlock()

	// A request from a higher term means our term is stale: revert to follower.
	if args.Term > cm.currentTerm {
		cm.becomeFollower(args.Term)
	}

	// Grant the vote if the terms match and we haven't yet voted for a
	// different candidate in this term.
	if cm.currentTerm == args.Term &&
		(cm.votedFor == -1 || cm.votedFor == args.CandidateId) {
		reply.VoteGranted = true
		cm.votedFor = args.CandidateId
		cm.electionResetEvent = time.Now()
	} else {
		reply.VoteGranted = false
	}
	reply.Term = cm.currentTerm
	return nil
}
```

State Management and Goroutines – Followers, candidates, and leaders each run specific goroutines (the election timer, RPC senders, the heartbeat sender). The article discusses how old goroutines observe state or term changes and exit, preventing leaks.
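Two pieces referenced in the handlers above but not shown in the excerpt are becomeFollower and the AppendEntries handler. A heartbeat-only sketch follows (log entries are deferred to the next part); the stand-in types are assumptions included so it compiles on its own:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type CMState int

const (
	Follower CMState = iota
	Candidate
	Leader
)

// Stand-in module with just the fields the two methods below touch.
type ConsensusModule struct {
	mu                 sync.Mutex
	currentTerm        int
	votedFor           int
	state              CMState
	electionResetEvent time.Time
}

type AppendEntriesArgs struct {
	Term     int
	LeaderId int
}
type AppendEntriesReply struct {
	Term    int
	Success bool
}

// becomeFollower drops to follower state for the given (newer) term.
// Callers hold cm.mu. In the full implementation this also restarts the
// election timer goroutine for the new term.
func (cm *ConsensusModule) becomeFollower(term int) {
	cm.state = Follower
	cm.currentTerm = term
	cm.votedFor = -1
	cm.electionResetEvent = time.Now()
}

// AppendEntries is the heartbeat RPC handler (log replication omitted).
func (cm *ConsensusModule) AppendEntries(args AppendEntriesArgs, reply *AppendEntriesReply) error {
	cm.mu.Lock()
	defer cm.mu.Unlock()

	// A newer term always demotes us to follower.
	if args.Term > cm.currentTerm {
		cm.becomeFollower(args.Term)
	}

	reply.Success = false
	if args.Term == cm.currentTerm {
		// At most one leader exists per term, so this RPC came from the
		// current leader: make sure we follow it and reset the timer.
		if cm.state != Follower {
			cm.becomeFollower(args.Term)
		}
		cm.electionResetEvent = time.Now()
		reply.Success = true
	}
	reply.Term = cm.currentTerm
	return nil
}

func main() {
	cm := &ConsensusModule{state: Candidate, currentTerm: 3, votedFor: 1}
	var reply AppendEntriesReply
	cm.AppendEntries(AppendEntriesArgs{Term: 5, LeaderId: 2}, &reply)
	fmt.Println(reply.Success, cm.state == Follower, cm.currentTerm) // true true 5
}
```

Note how a heartbeat from the current term knocks even a candidate or stale leader back to follower, which is how the protocol converges on a single leader per term.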
Network Partition Example – A detailed scenario with three servers (A, B, C) illustrates how partitions cause repeated elections, term jumps, and temporary loss of leadership, demonstrating Raft’s safety guarantees.
The next article will extend the implementation to handle client commands and log replication. References to the Raft paper and source code are provided.
360 Tech Engineering
Official tech channel of 360, building the most professional technology aggregation platform for the brand.