
# Raft Basic Test Scenario

This directory contains a basic test setup for a 4-node Raft cluster.

## Scenario Description

- **Cluster Setup:**
  - **Initial Cluster:** Node 1 and Node 2 form the initial cluster.
  - **Dynamic Members:** Node 3 and Node 4 are started as standalone nodes and can be added to the cluster dynamically using the `join` command (see CLI Commands below).
- **Configuration** (a hypothetical config sketch follows this list):
  - **Log Compaction:** Disabled (`LogCompactionEnabled = false`). The binary log will grow indefinitely.
  - **Memory:** Only keys and metadata are cached in memory. Values are stored on disk (default engine behavior).
- **Data Directory:** `./data/node{1..4}`
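
The example's actual configuration types live in each node's `main.go`. Purely as an illustration of how the settings above fit together, here is a minimal Go sketch; the `NodeConfig` struct, its field names, and the addresses are assumptions, not this project's real API:

```go
package main

import "fmt"

// NodeConfig groups the settings listed above for a single node. The struct
// and its field names are illustrative assumptions only; they do not
// necessarily match the types used by this example's main.go files.
type NodeConfig struct {
	NodeID               string   // e.g. "node1"
	BindAddr             string   // e.g. "127.0.0.1:9001" (address is an assumption)
	DataDir              string   // e.g. "./data/node1"
	LogCompactionEnabled bool     // false in this scenario: the binary log grows indefinitely
	CacheValuesInMemory  bool     // false: only keys and metadata are cached, values stay on disk
	InitialPeers         []string // Node 1 and Node 2 form the initial cluster; Node 3/4 start standalone
}

func main() {
	// Example settings for Node 1 in this scenario (peer addresses are assumptions).
	node1 := NodeConfig{
		NodeID:               "node1",
		BindAddr:             "127.0.0.1:9001",
		DataDir:              "./data/node1",
		LogCompactionEnabled: false,
		CacheValuesInMemory:  false,
		InitialPeers:         []string{"127.0.0.1:9001", "127.0.0.1:9002"},
	}
	fmt.Printf("%+v\n", node1)
}
```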

## Setup and Usage

### 1. Start Nodes

Open 4 separate terminals and run the following commands:

**Terminal 1 (Node 1):**

```bash
cd example/basic/node1
go run main.go
```

**Terminal 2 (Node 2):**

```bash
cd example/basic/node2
go run main.go
```

**Terminal 3 (Node 3):**

```bash
cd example/basic/node3
go run main.go
```

**Terminal 4 (Node 4):**

```bash
cd example/basic/node4
go run main.go
```
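
If juggling four terminals is inconvenient, the same four processes can be launched from one helper program run from the repository root. The sketch below is not part of the example; it only shells out to `go run main.go` in each node directory:

```go
// launch.go — convenience sketch (not part of the example) that starts all
// four nodes from a single terminal by running "go run main.go" in each
// node directory. Run it from the repository root.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	dirs := []string{
		"example/basic/node1",
		"example/basic/node2",
		"example/basic/node3",
		"example/basic/node4",
	}
	var procs []*exec.Cmd
	for _, dir := range dirs {
		cmd := exec.Command("go", "run", "main.go")
		cmd.Dir = dir
		cmd.Stdout = os.Stdout // combined output from all nodes in this terminal
		cmd.Stderr = os.Stderr
		if err := cmd.Start(); err != nil {
			fmt.Fprintf(os.Stderr, "failed to start %s: %v\n", dir, err)
			os.Exit(1)
		}
		procs = append(procs, cmd)
	}
	// Wait for all nodes; Ctrl+C in this terminal stops everything.
	for _, cmd := range procs {
		_ = cmd.Wait()
	}
}
```

Because each node reads its interactive CLI from stdin, the four-terminal setup above remains the practical choice for entering commands; this launcher is mainly useful for watching combined log output, and whether a node keeps running with its stdin detached depends on the example's implementation.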

### 2. CLI Commands

Each node provides an interactive CLI with the following commands:

| Command | Description | Example |
| --- | --- | --- |
| `set <key> <val>` | Set a key-value pair. Requests are forwarded to the Leader. | `set user:1 bob` |
| `get <key>` | Get a value by key (Linearizable Read). | `get user:1` |
| `del <key>` | Delete a key. | `del user:1` |
| `search <query> [limit] [offset]` | Search keys using SQL-like syntax. | `search key like "user:*" 10 0` |
| `demodata <count> <pattern>` | Generate demo data. The pattern supports `*` replacement. | `demodata 100 user.name.u*` |
| `stats` | Show current node status, term, and indices. | `stats` |
| `binlog` | Show the last CommitIndex in the Raft log. | `binlog` |
| `db` | Show the last CommitIndex applied to the DB. | `db` |
| `join <nodeID> <addr>` | (Leader only) Add a new node to the cluster. | `join node3 127.0.0.1:9003` |
| `leave <nodeID>` | (Leader only) Remove a node from the cluster. | `leave node3` |
| `help` | Show this help message. | `help` |
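
For readers curious how an interactive prompt like this is typically wired, the following self-contained Go sketch shows the general shape of such a command loop. It is not the example's implementation; the handlers only print what a real node would do (forward writes to the Leader, serve linearizable reads):

```go
// clidemo.go — illustrative sketch of an interactive command loop with the
// shape described above. NOT the example's actual code.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	fmt.Print("> ")
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) == 0 {
			fmt.Print("> ")
			continue
		}
		switch fields[0] {
		case "set":
			// A real node would forward this write to the Raft Leader.
			if len(fields) < 3 {
				fmt.Println("usage: set <key> <val>")
			} else {
				fmt.Printf("would replicate %s=%s via the Leader\n", fields[1], fields[2])
			}
		case "get":
			// A real node would perform a linearizable read here.
			if len(fields) < 2 {
				fmt.Println("usage: get <key>")
			} else {
				fmt.Printf("would read %s with a linearizable read\n", fields[1])
			}
		case "help":
			fmt.Println("commands: set, get, del, search, demodata, stats, binlog, db, join, leave, help")
		default:
			fmt.Printf("unknown command %q (see help)\n", fields[0])
		}
		fmt.Print("> ")
	}
}
```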

### 3. Test Workflow

1. **Verify Initial Cluster:** Check `stats` on Node 1 and Node 2. One should be Leader, the other Follower.
2. **Generate Data:** On Node 1 (or any node), run `demodata 100 user.*` (see the pattern-expansion sketch after this list).
3. **Read Data:** Verify data availability on Node 2 using `get user.1` or `search key like "user.*"`.
4. **Expand Cluster:**
   - Determine the Leader (e.g., Node 1).
   - On the Leader, run: `join node3 127.0.0.1:9003`.
   - On the Leader, run: `join node4 127.0.0.1:9004`.
5. **Verify Replication:** Check that Node 3 and Node 4 have the data using `search` or `get`.
6. **Delete Data:** Run `del user.1`. Verify it is gone on all nodes.
7. **Inspect Logs:** Use `binlog` and `db` to confirm the commit indices match across the cluster.
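
Step 2's `demodata 100 user.*` presumably replaces `*` with a running index, yielding keys `user.1` through `user.100` (which is why step 3 reads `user.1`). The tiny sketch below illustrates that assumed expansion; the real example may format keys differently:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// expand replaces the first "*" in pattern with the index i. This mirrors the
// assumed behaviour of demodata's "*" replacement, not the example's code.
func expand(pattern string, i int) string {
	return strings.Replace(pattern, "*", strconv.Itoa(i), 1)
}

func main() {
	// demodata 100 user.* would then generate user.1 .. user.100.
	for i := 1; i <= 3; i++ {
		fmt.Println(expand("user.*", i)) // user.1, user.2, user.3
	}
}
```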