A lightweight, fast Zig service that monitors finality across multiple Lean Ethereum beacon nodes with consensus validation. Inspired by checkpointz, leanpoint provides reliable checkpoint sync monitoring for the Lean Ethereum ecosystem.
- Overview
- Features
- Quick Start
- API Endpoints
- Configuration
- Integration with lean-quickstart
- Supported Beacon API Formats
- Consensus Algorithm
- Monitoring with Prometheus
- Troubleshooting
- Advanced Usage
- Architecture Comparison
- Contributing
Leanpoint polls multiple beacon nodes, requires 50%+ consensus before serving finality data, and exposes a simple HTTP API with Prometheus metrics. It's designed for:
- Devnet Monitoring: Track finality across local test networks
- Production Deployments: Provide reliable checkpoint sync data
- Multi-Client Testing: Monitor zeam, ream, qlean, lantern, lighthouse, grandine, and more
- Consensus Validation: Ensure finality agreement across diverse implementations
- ✅ Multi-upstream support with 50%+ consensus requirement (like checkpointz)
- ✅ Parallel polling of all beacon nodes for low latency
- ✅ Per-upstream health tracking with error counts and timestamps
- ✅ Prometheus metrics for comprehensive monitoring
- ✅ Health check endpoint for load balancers and orchestration
- ✅ Lightweight binary (~8MB) with minimal resource usage
- ✅ Easy integration with lean-quickstart devnets
- ✅ Standard Beacon API format support
zig build

Using the helper script with lean-quickstart:
python3 convert-validator-config.py \
../lean-quickstart/local-devnet/genesis/validator-config.yaml \
upstreams.json

Or create manually:
cp upstreams.example.json upstreams.json
# Edit as needed

Run leanpoint:

./zig-out/bin/leanpoint --upstreams-config upstreams.json

Check the endpoints:

curl http://localhost:5555/status
curl http://localhost:5555/metrics
curl http://localhost:5555/healthz

GET /status returns the current finality checkpoint with metadata:
{
"justified_slot": 12345,
"finalized_slot": 12344,
"last_updated_ms": 1705852800000,
"last_success_ms": 1705852800000,
"stale": false,
"error_count": 0,
"last_error": null
}

Fields:

- justified_slot: Latest justified slot number (consensus from upstreams)
- finalized_slot: Latest finalized slot number (consensus from upstreams)
- last_updated_ms: Timestamp of the last update attempt (milliseconds since epoch)
- last_success_ms: Timestamp of the last successful consensus (milliseconds since epoch)
- stale: Boolean indicating whether the data is stale (exceeds the staleness threshold)
- error_count: Total number of errors encountered
- last_error: Most recent error message (null if no errors)
HTTP Status:
- 200 OK: Data available (may be stale if stale: true)
- 500 Internal Server Error: Server error
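For scripting against /status, a minimal Python sketch (the URL is an example; the field names are as documented above):

```python
import json
import time
import urllib.request

# Fetch the current checkpoint from leanpoint (example URL).
with urllib.request.urlopen("http://localhost:5555/status", timeout=5) as resp:
    status = json.load(resp)

age_s = (time.time() * 1000 - status["last_success_ms"]) / 1000
if status["stale"] or status["last_error"] is not None:
    print(f"stale or errored checkpoint data (last error: {status['last_error']})")
else:
    print(f"finalized slot {status['finalized_slot']}, "
          f"justified slot {status['justified_slot']}, {age_s:.1f}s since last consensus")
```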
GET /metrics returns Prometheus metrics:
# HELP leanpoint_justified_slot Latest justified slot.
# TYPE leanpoint_justified_slot gauge
leanpoint_justified_slot 12345
# HELP leanpoint_finalized_slot Latest finalized slot.
# TYPE leanpoint_finalized_slot gauge
leanpoint_finalized_slot 12344
# HELP leanpoint_last_success_timestamp_ms Last successful poll time (ms since epoch).
# TYPE leanpoint_last_success_timestamp_ms gauge
leanpoint_last_success_timestamp_ms 1705852800000
# HELP leanpoint_last_updated_timestamp_ms Last update time (ms since epoch).
# TYPE leanpoint_last_updated_timestamp_ms gauge
leanpoint_last_updated_timestamp_ms 1705852800000
# HELP leanpoint_last_latency_ms Last poll latency in milliseconds.
# TYPE leanpoint_last_latency_ms gauge
leanpoint_last_latency_ms 45
# HELP leanpoint_error_total Total poll errors.
# TYPE leanpoint_error_total counter
leanpoint_error_total 0
Metrics:
- leanpoint_justified_slot: Latest justified slot (gauge)
- leanpoint_finalized_slot: Latest finalized slot (gauge)
- leanpoint_last_success_timestamp_ms: Last successful consensus time (gauge)
- leanpoint_last_updated_timestamp_ms: Last update attempt time (gauge)
- leanpoint_last_latency_ms: Poll latency in milliseconds (gauge)
- leanpoint_error_total: Total errors (counter)
GET /healthz is a health check for load balancers:
- Returns 200 OK when data is fresh
- Returns 503 Service Unavailable when stale

Health Criteria:

- Data must not be stale (within the --stale-ms threshold)
- At least one successful poll must have occurred
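A minimal readiness gate in Python for scripts that should only proceed when leanpoint reports fresh data (the URL is an example):

```python
import sys
import urllib.request
from urllib.error import HTTPError, URLError

# Exit 0 when leanpoint reports fresh data, non-zero otherwise (example URL).
try:
    with urllib.request.urlopen("http://localhost:5555/healthz", timeout=5):
        sys.exit(0)
except HTTPError as e:    # 503 Service Unavailable when data is stale
    print(f"leanpoint unhealthy: HTTP {e.code}", file=sys.stderr)
    sys.exit(1)
except URLError as e:     # connection refused, timeout, DNS failure, ...
    print(f"leanpoint unreachable: {e.reason}", file=sys.stderr)
    sys.exit(1)
```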
If --static-dir is set, other paths serve files from that directory.
| Option | Default | Description |
|---|---|---|
| Bind address | 0.0.0.0 | HTTP server bind address |
| Port | 5555 | HTTP server port |
| Lean URL | http://127.0.0.1:5052 | Single upstream URL (legacy) |
| Lean path | /status | Beacon API endpoint path |
| Poll interval | 10000 ms | Time between upstream polls |
| Request timeout | 5000 ms | HTTP request timeout |
| Stale threshold | 30000 ms | Data freshness threshold |
For monitoring a single beacon node:
leanpoint \
--bind 0.0.0.0 \
--port 5555 \
--lean-url http://127.0.0.1:5052 \
--lean-path /status \
--poll-ms 10000 \
--timeout-ms 5000 \
--stale-ms 30000 \
--static-dir ./web

Monitor multiple beacon nodes with consensus validation:
leanpoint \
--upstreams-config ./upstreams.json \
--poll-ms 10000 \
--timeout-ms 5000 \
--stale-ms 30000

How it works:
- Polls all upstreams in parallel every 10 seconds
- Collects justified/finalized slot pairs from each
- Only serves data when 50%+ of upstreams agree
- Tracks per-upstream health (errors, latency, last success)
Example upstreams.json:
{
"upstreams": [
{
"name": "zeam_0",
"url": "http://localhost:5052",
"path": "/status"
},
{
"name": "ream_0",
"url": "http://localhost:5053",
"path": "/status"
},
{
"name": "qlean_0",
"url": "http://localhost:5054",
"path": "/status"
},
{
"name": "lighthouse_0",
"url": "http://localhost:5055",
"path": "/eth/v1/beacon/states/finalized/finality_checkpoints"
}
]
}

All CLI options can be set via environment variables:
LEANPOINT_BIND_ADDR=0.0.0.0
LEANPOINT_BIND_PORT=5555
LEANPOINT_LEAN_URL=http://127.0.0.1:5052
LEANPOINT_LEAN_PATH=/status
LEANPOINT_POLL_MS=10000
LEANPOINT_TIMEOUT_MS=5000
LEANPOINT_STALE_MS=30000
LEANPOINT_STATIC_DIR=/path/to/static
LEANPOINT_UPSTREAMS_CONFIG=/path/to/upstreams.json

Usage:
leanpoint [options]
Options:
--bind <addr> Bind address (default 0.0.0.0)
--port <port> Bind port (default 5555)
--lean-url <url> LeanEthereum base URL (legacy single upstream)
--lean-path <path> LeanEthereum path (default /status)
--upstreams-config <file> JSON config file with multiple upstreams
--poll-ms <ms> Poll interval in milliseconds
--timeout-ms <ms> Request timeout in milliseconds
--stale-ms <ms> Stale threshold in milliseconds
--static-dir <dir> Optional static frontend directory
--help Show this help
Perfect integration with lean-quickstart devnets:
cd /path/to/lean-quickstart
NETWORK_DIR=local-devnet ./spin-node.sh --node all --generateGenesis

cd /path/to/leanpoint
python3 convert-validator-config.py \
../lean-quickstart/local-devnet/genesis/validator-config.yaml \
upstreams.json

This automatically creates a configuration for all nodes in your devnet:
{
"upstreams": [
{
"name": "zeam_0",
"url": "http://127.0.0.1:5052",
"path": "/status"
},
{
"name": "ream_0",
"url": "http://127.0.0.1:5053",
"path": "/status"
},
{
"name": "qlean_0",
"url": "http://127.0.0.1:5054",
"path": "/status"
}
]
}

Start leanpoint:

./zig-out/bin/leanpoint --upstreams-config upstreams.json

Monitor:

# Terminal 1: Follow leanpoint output
./zig-out/bin/leanpoint --upstreams-config upstreams.json
# Terminal 2: Poll status
watch -n 2 'curl -s http://localhost:5555/status | jq'
# Terminal 3: Monitor metrics
curl -s http://localhost:5555/metrics | grep leanpoint_

The convert-validator-config.py helper script automatically:
- Reads validator-config.yaml from lean-quickstart
- Extracts validator names and network information
- Generates appropriate HTTP endpoints for each validator
- Creates upstreams.json in the correct format
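The exact YAML layout depends on your lean-quickstart version, but the transformation is roughly the following. This is a hypothetical sketch, not the shipped script; the validators key, the 127.0.0.1 URLs, and the base_port default of 5052 are assumptions taken from the examples in this README:

```python
import json
import sys

import yaml  # pip install pyyaml

def convert(config_path, output_path, base_port=5052):
    """Hypothetical sketch: one upstream per validator, ports assigned from base_port."""
    with open(config_path) as f:
        config = yaml.safe_load(f)

    upstreams = [
        {"name": name, "url": f"http://127.0.0.1:{base_port + i}", "path": "/status"}
        for i, name in enumerate(config["validators"])  # assumed key; the real script may differ
    ]
    with open(output_path, "w") as f:
        json.dump({"upstreams": upstreams}, f, indent=2)

if __name__ == "__main__":
    convert(sys.argv[1], sys.argv[2])
```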
Usage:
# With default paths
python3 convert-validator-config.py
# With custom paths
python3 convert-validator-config.py \
/path/to/validator-config.yaml \
/path/to/output.json
# Adjust base port if needed (default: 5052)
# Edit the script to change base_port parameter

Leanpoint automatically handles multiple API response formats:
Used by zeam, ream, qlean, and other Lean Ethereum clients.
Endpoint: /status
Response:
{
"justified_slot": 123,
"finalized_slot": 120
}

Configuration:
{
"name": "zeam_0",
"url": "http://localhost:5052",
"path": "/status"
}

Used by lighthouse, grandine, lodestar, teku, nimbus, prysm.
Endpoint: /eth/v1/beacon/states/finalized/finality_checkpoints
Response:
{
"data": {
"justified": {"slot": "123"},
"finalized": {"slot": "120"}
}
}

Configuration:
{
"name": "lighthouse_0",
"url": "http://localhost:5052",
"path": "/eth/v1/beacon/states/finalized/finality_checkpoints"
}

Alternative format with nested structure.
Response:
{
"data": {
"justified_slot": 123,
"finalized_slot": 120
}
}

Leanpoint requires more than 50% of upstreams to agree before serving finality data.
- Poll Phase: All upstreams are polled concurrently
- Collection Phase: Justified/finalized slot pairs are collected from each successful response
- Counting Phase: Each unique slot pair is counted
- Consensus Phase: Only pairs with >50% votes are accepted
- Serving Phase: Consensus data is served to clients
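A rough sketch of the counting rule in Python, for illustration only (the actual logic lives in src/upstreams.zig; the response shapes correspond to the formats listed above, and the denominator here is simplified to the set of successful responses):

```python
from collections import Counter

def normalize(body):
    """Map any supported response shape to a (justified_slot, finalized_slot) pair."""
    data = body.get("data", body)
    if "justified_slot" in data:  # Lean /status and the nested variant
        return int(data["justified_slot"]), int(data["finalized_slot"])
    # Standard Beacon API finality_checkpoints
    return int(data["justified"]["slot"]), int(data["finalized"]["slot"])

def consensus(responses):
    """Return the pair agreed on by a strict majority (>50%) of responses, else None."""
    if not responses:
        return None
    votes = Counter(normalize(r) for r in responses)
    pair, count = votes.most_common(1)[0]
    return pair if count * 2 > len(responses) else None

# Two of three upstreams agree -> (100, 99); a 2-of-4 split would yield None.
print(consensus([
    {"justified_slot": 100, "finalized_slot": 99},
    {"justified_slot": 100, "finalized_slot": 99},
    {"data": {"justified": {"slot": "101"}, "finalized": {"slot": "100"}}},
]))
```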
| Scenario | Agreement | Result | Example |
|---|---|---|---|
| 3 upstreams, all agree | 3/3 = 100% | ✅ Serve data | All at (100, 99) |
| 3 upstreams, 2 agree | 2/3 = 67% | ✅ Serve data | Two at (100, 99), one at (101, 100) |
| 4 upstreams, 2 agree | 2/4 = 50% | ❌ No consensus | Two at (100, 99), two at (101, 100) |
| 5 upstreams, 3 agree | 3/5 = 60% | ✅ Serve data | Three at (100, 99), rest differ |
| 3 upstreams, all differ | 1/3 = 33% | ❌ No consensus | All on different slots |
- Byzantine Fault Tolerance: Single node failures don't affect service
- Fork Detection: Disagreement indicates nodes may be on different forks
- Data Integrity: Only serve finality data that multiple implementations agree on
- Network Health: Consensus failures indicate potential network issues
Add to prometheus.yml:
scrape_configs:
- job_name: 'leanpoint'
scrape_interval: 10s
static_configs:
- targets: ['localhost:5555']
    metrics_path: '/metrics'

Current finalized slot:
leanpoint_finalized_slot
Finality progress (slots per minute):
rate(leanpoint_finalized_slot[5m]) * 60
Error rate:
rate(leanpoint_error_total[5m])
Time since last successful consensus (seconds):
(time() * 1000 - leanpoint_last_success_timestamp_ms) / 1000
Data staleness alert (>60 seconds):
(time() * 1000 - leanpoint_last_success_timestamp_ms) > 60000
Poll latency:
leanpoint_last_latency_ms
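Outside of Prometheus, the same staleness check can be scripted directly against /metrics. A minimal sketch in Python, using the metric names exported above (the URL and the 60-second threshold are examples):

```python
import time
import urllib.request

# Scrape /metrics and flag staleness with the same threshold as the PromQL alert above.
STALE_AFTER_MS = 60_000

with urllib.request.urlopen("http://localhost:5555/metrics", timeout=5) as resp:
    text = resp.read().decode()

metrics = {}
for line in text.splitlines():
    if line.startswith("leanpoint_") and " " in line:
        name, value = line.split(" ", 1)
        metrics[name] = float(value)

age_ms = time.time() * 1000 - metrics["leanpoint_last_success_timestamp_ms"]
print(f"finalized slot: {metrics['leanpoint_finalized_slot']:.0f}, "
      f"{age_ms / 1000:.1f}s since last consensus"
      + (" (STALE)" if age_ms > STALE_AFTER_MS else ""))
```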
Create a dashboard with panels for:
- Finalized Slot Timeline: Line graph of leanpoint_finalized_slot
- Finality Progress: Gauge showing rate(leanpoint_finalized_slot[5m]) * 60
- Error Count: Counter of leanpoint_error_total
- Staleness Indicator: Alert when data exceeds the staleness threshold
- Poll Latency: Line graph of leanpoint_last_latency_ms
Symptom:
{
"last_error": "no consensus reached among upstreams"
}

Causes:
- Nodes are not synced or on different forks
- Network connectivity issues
- Insufficient number of upstreams responding
- Nodes returning different data formats
Solutions:
# Check individual node status
curl http://localhost:5052/status
curl http://localhost:5053/status
curl http://localhost:5054/status
# Verify nodes are synced
# Check node logs for sync status
# Test network connectivity
for port in 5052 5053 5054; do
curl -v http://localhost:$port/status
done
# Verify response formats match expected
curl -s http://localhost:5052/status | jq

Symptom:
{
"last_error": "poll error: ConnectionRefused"
}

Solutions:
# Verify correct ports in upstreams.json
cat upstreams.json | jq '.upstreams[].url'
# Check nodes are running
ps aux | grep -E "zeam|ream|qlean"
# Verify beacon API is exposed
curl -v http://localhost:5052/status
# Check node startup logs for API endpoint

Symptom:
- /healthz returns 503 Service Unavailable
- /status shows "stale": true
Solutions:
# Increase stale threshold
leanpoint --upstreams-config upstreams.json --stale-ms 60000
# Decrease poll interval
leanpoint --upstreams-config upstreams.json --poll-ms 5000
# Check if nodes are actually progressing
watch -n 1 'curl -s http://localhost:5052/status'
# Verify nodes are not stuck
curl http://localhost:5052/status
sleep 15
curl http://localhost:5052/status
# Slots should increase

Symptom:
{
"last_error": "poll error: Timeout"
}

Solutions:
# Increase request timeout
leanpoint --upstreams-config upstreams.json --timeout-ms 10000
# Check network latency
time curl http://localhost:5052/status
# Verify node is responsive
curl -w "@curl-format.txt" http://localhost:5052/status

Symptom:
{
"last_error": "poll error: UnexpectedResponse"
}

Solutions:
# Check actual response format
curl -s http://localhost:5052/status | jq
# For Lean clients, use /status
# For Standard Beacon API, use /eth/v1/beacon/states/finalized/finality_checkpoints
# Update path in upstreams.json accordingly

Build the Docker image:
docker build -t leanpoint:latest .

Run the container:
# Create configuration first
cp upstreams.example.json upstreams.json
# Edit upstreams.json with your beacon node endpoints
# Run container
docker run -d \
--name leanpoint \
--restart unless-stopped \
-p 5555:5555 \
-v $(pwd)/upstreams.json:/etc/leanpoint/upstreams.json:ro \
leanpoint:latest \
leanpoint --upstreams-config /etc/leanpoint/upstreams.json

Monitor:
# View logs
docker logs -f leanpoint
# Check status
curl http://localhost:5555/status
curl http://localhost:5555/metrics

Stop and cleanup:
docker stop leanpoint
docker rm leanpoint

Multi-architecture builds:
# Build for specific platform
docker build --platform linux/amd64 -t leanpoint:amd64 .
docker build --platform linux/arm64 -t leanpoint:arm64 .
# Or use buildx for multi-arch
docker buildx build --platform linux/amd64,linux/arm64 -t leanpoint:latest .

/etc/systemd/system/leanpoint.service:
[Unit]
Description=Leanpoint Checkpoint Status Service
After=network.target
[Service]
Type=simple
User=leanpoint
WorkingDirectory=/opt/leanpoint
ExecStart=/opt/leanpoint/leanpoint --upstreams-config /opt/leanpoint/upstreams.json
Restart=always
RestartSec=10
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target

Setup:
# Install
sudo cp zig-out/bin/leanpoint /opt/leanpoint/
sudo cp upstreams.json /opt/leanpoint/
sudo cp leanpoint.service /etc/systemd/system/
# Create user
sudo useradd -r -s /bin/false leanpoint
sudo chown -R leanpoint:leanpoint /opt/leanpoint
# Enable and start
sudo systemctl daemon-reload
sudo systemctl enable leanpoint
sudo systemctl start leanpoint
# Check status
sudo systemctl status leanpoint
sudo journalctl -u leanpoint -f

Kubernetes deployment.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
name: leanpoint-config
data:
upstreams.json: |
{
"upstreams": [
{"name": "zeam_0", "url": "http://zeam-0:5052", "path": "/status"},
{"name": "ream_0", "url": "http://ream-0:5053", "path": "/status"}
]
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: leanpoint
spec:
replicas: 1
selector:
matchLabels:
app: leanpoint
template:
metadata:
labels:
app: leanpoint
spec:
containers:
- name: leanpoint
image: leanpoint:latest
ports:
- containerPort: 5555
name: http
volumeMounts:
- name: config
mountPath: /etc/leanpoint
livenessProbe:
httpGet:
path: /healthz
port: 5555
initialDelaySeconds: 10
periodSeconds: 30
readinessProbe:
httpGet:
path: /healthz
port: 5555
initialDelaySeconds: 5
periodSeconds: 10
volumes:
- name: config
configMap:
name: leanpoint-config
---
apiVersion: v1
kind: Service
metadata:
name: leanpoint
spec:
selector:
app: leanpoint
ports:
- port: 5555
targetPort: 5555
      name: http

Nginx reverse proxy with SSL and CORS:
# Rate limiting zone (limit_req_zone must be declared in the http context,
# e.g. in an /etc/nginx/conf.d/*.conf include)
limit_req_zone $binary_remote_addr zone=leanpoint:10m rate=10r/s;

server {
    listen 443 ssl http2;
    server_name checkpoint.example.com;

    ssl_certificate /etc/letsencrypt/live/checkpoint.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/checkpoint.example.com/privkey.pem;
location / {
limit_req zone=leanpoint burst=20;
proxy_pass http://localhost:5555;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# CORS headers
add_header Access-Control-Allow-Origin *;
add_header Access-Control-Allow-Methods "GET, OPTIONS";
add_header Access-Control-Allow-Headers "Content-Type";
}
}

docker-compose.yml:
version: '3.8'
services:
leanpoint:
image: leanpoint:latest
ports:
- "5555:5555"
volumes:
- ./upstreams.json:/etc/leanpoint/upstreams.json
restart: always
prometheus:
image: prom/prometheus:latest
ports:
- "9090:9090"
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
restart: always
grafana:
image: grafana/grafana:latest
ports:
- "3000:3000"
volumes:
- grafana_data:/var/lib/grafana
environment:
- GF_SECURITY_ADMIN_PASSWORD=admin
restart: always
volumes:
prometheus_data:
  grafana_data:

| Feature | Leanpoint | Checkpointz |
|---|---|---|
| Language | Zig | Go |
| Binary Size | ~8MB | ~50MB |
| Config Format | JSON | YAML |
| Consensus | 50%+ | 50%+ |
| Finality Status | ✅ | ✅ |
| Block Serving | ❌ (future) | ✅ |
| State Serving | ❌ (future) | ✅ |
| Historical Epochs | ❌ (future) | ✅ |
| Caching | Minimal | Extensive |
| Web UI | Optional | Built-in |
| Target | Lean Ethereum | Standard Ethereum |
| Resource Usage | Very Low | Low |
| Startup Time | Instant | Fast |
Leanpoint:
- Minimalist approach focused on finality monitoring
- Optimized for Lean Ethereum ecosystem
- Single binary with no external dependencies
- Configuration via simple JSON
- Ideal for devnets and lightweight deployments
Checkpointz:
- Full-featured checkpoint sync provider
- Serves complete blocks and states
- Integrated web UI with client guides
- Sophisticated caching strategies
- Production-ready for public endpoints
Choose Leanpoint for:

- ✅ Monitoring Lean Ethereum devnets
- ✅ Lightweight production finality monitoring
- ✅ Multi-client consensus validation
- ✅ Low-resource environments
- ✅ Simple deployment requirements
Choose Checkpointz for:

- ✅ Full checkpoint sync provider
- ✅ Serving beacon chain blocks and states
- ✅ Public-facing checkpoint endpoints
- ✅ Standard Ethereum networks
- ✅ Need for integrated web UI
leanpoint/
├── src/
│ ├── main.zig # Entry point with single/multi mode
│ ├── config.zig # Configuration loader
│ ├── upstreams.zig # Upstream manager with consensus
│ ├── upstreams_config.zig # JSON config parser
│ ├── lean_api.zig # Beacon API client
│ ├── metrics.zig # Prometheus metrics
│ ├── server.zig # HTTP server
│ └── state.zig # Application state
├── zig-out/
│ └── bin/
│ └── leanpoint # Compiled binary
├── upstreams.example.json # Example configuration
├── convert-validator-config.py # Helper script
├── build.zig # Build configuration
├── build.zig.zon # Package manifest
├── .gitignore # Git ignore rules
└── README.md # This file
Contributions welcome! Please:
- Fork the repository
- Create a feature branch (git checkout -b feat/amazing-feature)
- Make your changes with clear commit messages
- Add tests if applicable
- Ensure code compiles: zig build
- Run tests: zig build test
- Format code: zig fmt src/
- Submit a pull request
The repository includes GitHub Actions CI that automatically:
- Builds the project
- Runs tests
- Checks code formatting
- Builds Docker image
All pull requests must pass CI checks before merging.
# Clone repository
git clone https://github.com/your-org/leanpoint.git
cd leanpoint
# Build
zig build
# Run tests
zig build test
# Format code
zig fmt src/
# Run locally
./zig-out/bin/leanpoint --help

When reporting issues, please include:
- Leanpoint version (output of ./zig-out/bin/leanpoint --help)
- Zig version (zig version)
- Operating system and architecture
- Configuration (upstreams.json)
- Error messages or unexpected behavior
- Steps to reproduce
[Specify your license here]
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: This README
- Inspired by checkpointz by ethPandaOps
- Built for the Lean Ethereum ecosystem
- Written in Zig
Built with ⚡ by the Lean Ethereum community