DTS-Go is a robust, scalable distributed task scheduler system built with Go. It provides a flexible and efficient way to schedule, manage, and execute tasks across distributed systems.
- Job creation, retrieval, updating, listing, and deletion
- Task scheduling with cron expressions
- Resource allocation and management
- Distributed execution of tasks
- Scalable architecture using Kafka and Cassandra
- gRPC and HTTP API support
- CLI tool for easy interaction with the system
- Docker Compose setup for easy development and deployment
DTS-Go consists of several microservices:
- Job Service: Manages job CRUD operations
- Scheduler Service: Handles task scheduling and resource allocation
- Execution Service: Executes scheduled tasks
The system uses Apache Kafka for message queuing and Apache Cassandra for persistent storage.
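As a rough illustration of this flow (not the project's actual producer code), a service could publish a job message to Kafka along these lines; the `segmentio/kafka-go` client, broker address, and payload shape are assumptions made for the sketch:

```go
package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	// Hypothetical sketch: publish a scheduled-job message to the "jobs" topic
	// (the topic name mirrors KAFKA_TASK_TOPIC from the .env example below).
	w := &kafka.Writer{
		Addr:     kafka.TCP("localhost:9092"),
		Topic:    "jobs",
		Balancer: &kafka.LeastBytes{},
	}
	defer w.Close()

	msg := kafka.Message{
		Key:   []byte("job-id"),
		Value: []byte(`{"name": "My Job", "cron_expression": "*/5 * * * *"}`),
	}
	if err := w.WriteMessages(context.Background(), msg); err != nil {
		log.Fatalf("failed to publish job message: %v", err)
	}
}
```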
- Go 1.22+
- Docker and Docker Compose
- Apache Kafka
- Apache Cassandra
- buf (for protocol buffer management)
- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/dts-go.git
  cd dts-go
  ```

- Set up the environment variables in a `.env` file:

  ```env
  CASSANDRA_DATA_PATH=./cassandra/data
  CASSANDRA_CLUSTER_NAME=Turquoise
  CASSANDRA_KEYSPACE=task_scheduler
  KAFKA_BROKER_ID=1
  KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
  DOCKER_HOST_IP=127.0.0.1
  JOB_SERVICE_GRPC_PORT=50054
  JOB_SERVICE_HTTP_PORT=8080
  SCHEDULER_SERVICE_GRPC_PORT=50052
  SCHEDULER_SERVICE_HTTP_PORT=8081
  EXECUTION_SERVICE_GRPC_PORT=50053
  EXECUTION_SERVICE_HTTP_PORT=8082
  CASSANDRA_DATA_RETENTION_DAYS=30
  KAFKA_TASK_TOPIC=jobs
  KAFKA_EXECUTION_TOPIC=job-executions
  ```

- Start the services using Docker Compose:

  ```bash
  docker compose -f docker-compose.dev.yml up -d
  ```

- Install buf:

  ```bash
  go install github.com/bufbuild/buf/cmd/buf@latest
  ```

- Generate the protocol buffer code:

  ```bash
  buf generate
  ```
The project uses Apache Cassandra as its database. The initial schema and subsequent migrations are managed using CQL files in the `migrations/` directory:

- `000_init_schema.up.cql`: Initial schema creation
- `000_init_schema.down.cql`: Reverses the initial schema creation
- `001_add_next_run_to_jobs.up.cql`: Adds the `next_run` column to the `jobs` table
- `001_add_next_run_to_jobs.down.cql`: Removes the `next_run` column from the `jobs` table
- `002_add_last_run_to_jobs.up.cql`: Adds the `last_run` column to the `jobs` table
- `002_add_last_run_to_jobs.down.cql`: Removes the `last_run` column from the `jobs` table
Migrations are automatically applied when starting the services using Docker Compose. For manual migration management, we use golang-migrate. To run migrations manually:
- Install golang-migrate:

  ```bash
  go install -tags 'cassandra' github.com/golang-migrate/migrate/v4/cmd/migrate@latest
  ```

- Run migrations:

  ```bash
  make migrate-up
  ```

- Revert migrations:

  ```bash
  make migrate-down
  ```

- Create a new migration:

  ```bash
  make migrate-create name=add_new_column
  ```
This will create new migration files in the migrations directory.
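If you prefer to apply migrations from Go rather than through the Makefile targets, golang-migrate can also be used as a library. A minimal sketch, assuming a local Cassandra node on port 9042 and the default `task_scheduler` keyspace:

```go
package main

import (
	"log"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/cassandra" // Cassandra driver
	_ "github.com/golang-migrate/migrate/v4/source/file"        // file:// migration source
)

func main() {
	// Point the migrator at the migrations directory and the Cassandra keyspace.
	m, err := migrate.New(
		"file://migrations",
		"cassandra://localhost:9042/task_scheduler",
	)
	if err != nil {
		log.Fatalf("failed to initialize migrations: %v", err)
	}

	// Apply all pending "up" migrations; ErrNoChange just means we are already up to date.
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatalf("failed to apply migrations: %v", err)
	}
	log.Println("migrations applied")
}
```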
The CLI tool provides an easy way to interact with the DTS-Go system. Here are some example commands:
- Create a job:

  ```bash
  go run cmd/cli/main.go job create --name "My Job" --description "Description" --cron "*/5 * * * *" --metadata '{"key": "value"}'
  ```

- Get a job:

  ```bash
  go run cmd/cli/main.go job get --id <job_id>
  ```

- List jobs:

  ```bash
  go run cmd/cli/main.go job list --page-size 10 --status "active"
  ```

- Update a job:

  ```bash
  go run cmd/cli/main.go job update --id <job_id> --name "Updated Job" --status "paused"
  ```

- Delete a job:

  ```bash
  go run cmd/cli/main.go job delete --id <job_id>
  ```
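The `--cron` flag in the create command above takes a standard five-field cron expression. As a rough sketch of how a scheduler can derive a job's `next_run` from such an expression (using the `robfig/cron/v3` parser purely for illustration; the project's own scheduling code may differ):

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/robfig/cron/v3"
)

func main() {
	// Parse the same "every five minutes" expression used in the CLI example above.
	sched, err := cron.ParseStandard("*/5 * * * *")
	if err != nil {
		log.Fatalf("invalid cron expression: %v", err)
	}

	// The next firing time is what a jobs table's next_run column would hold.
	next := sched.Next(time.Now())
	fmt.Println("next run at:", next.Format(time.RFC3339))
}
```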
The system exposes both gRPC and HTTP APIs. You can use tools like `grpcurl` for gRPC or `curl` for HTTP to interact with them.

Example HTTP request to create a job:

```bash
curl -X POST http://localhost:8080/v1/jobs \
  -H "Content-Type: application/json" \
  -d '{"name": "My Job", "description": "Description", "cron_expression": "*/5 * * * *", "metadata": {"key": "value"}}'
```
- `cmd/`: Contains the main applications
  - `job-service/`: Job service implementation
  - `scheduler-service/`: Scheduler service implementation
  - `execution-service/`: Execution service implementation
  - `cli/`: Command-line interface tool
- `internal/`: Internal packages
  - `job/`: Job-related logic
  - `scheduler/`: Scheduler-related logic
- `pkg/`: Shared packages
  - `config/`: Configuration management
  - `database/`: Database clients and utilities
  - `models/`: Shared data models
  - `queue/`: Message queue clients and utilities
  - `services/`: gRPC service implementations
- `api/`: Protocol buffer definitions
- Create a new directory under `cmd/`
- Implement the service logic (a minimal entry-point skeleton is sketched below)
- Add the service to `docker-compose.yml`
- Update the `Makefile` if necessary
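As a starting point, the entry point of a new service might look roughly like this; the port variable, package layout, and registration call are placeholders rather than the project's actual code:

```go
package main

import (
	"log"
	"net"
	"os"

	"google.golang.org/grpc"
)

func main() {
	// Hypothetical new service: read its gRPC port from the environment,
	// falling back to a default for local development.
	port := os.Getenv("NEW_SERVICE_GRPC_PORT")
	if port == "" {
		port = "50055"
	}

	lis, err := net.Listen("tcp", ":"+port)
	if err != nil {
		log.Fatalf("failed to listen on port %s: %v", port, err)
	}

	srv := grpc.NewServer()
	// Register the generated gRPC service implementation here, e.g.:
	// pb.RegisterNewServiceServer(srv, &newServiceServer{})

	log.Printf("new service listening on :%s", port)
	if err := srv.Serve(lis); err != nil {
		log.Fatalf("gRPC server stopped: %v", err)
	}
}
```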
After modifying the `.proto` files in the `proto/` directory, regenerate the Go code:

```bash
buf generate
```

This command updates the generated Go code based on the changes in the `.proto` files.
The system can be configured using environment variables. Here are the available options:
- `KAFKA_BROKERS`: Comma-separated list of Kafka brokers (default: `localhost:9092`)
- `CASSANDRA_HOSTS`: Comma-separated list of Cassandra hosts (default: `localhost`)
- `CASSANDRA_KEYSPACE`: Cassandra keyspace name (default: `task_scheduler`)
- `JOB_SERVICE_GRPC_PORT`: Job service gRPC port (default: `50054`)
- `JOB_SERVICE_HTTP_PORT`: Job service HTTP port (default: `8080`)
- `SCHEDULER_SERVICE_GRPC_PORT`: Scheduler service gRPC port (default: `50052`)
- `SCHEDULER_SERVICE_HTTP_PORT`: Scheduler service HTTP port (default: `8081`)
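As an illustration of how these variables might be resolved with their defaults (a hypothetical helper, not necessarily how `pkg/config` is implemented):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// getEnv returns the value of an environment variable, or a fallback if it is unset.
func getEnv(key, fallback string) string {
	if v, ok := os.LookupEnv(key); ok {
		return v
	}
	return fallback
}

func main() {
	brokers := strings.Split(getEnv("KAFKA_BROKERS", "localhost:9092"), ",")
	hosts := strings.Split(getEnv("CASSANDRA_HOSTS", "localhost"), ",")
	keyspace := getEnv("CASSANDRA_KEYSPACE", "task_scheduler")

	fmt.Println("Kafka brokers:", brokers)
	fmt.Println("Cassandra hosts:", hosts, "keyspace:", keyspace)
}
```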
Job Service:

- Create Job: `POST /v1/jobs`
- Get Job: `GET /v1/jobs/{id}`
- List Jobs: `GET /v1/jobs`
- Update Job: `PUT /v1/jobs/{id}`
- Delete Job: `DELETE /v1/jobs/{id}`

Scheduler Service:

- Schedule Job: `POST /v1/scheduler/jobs`
- Cancel Job: `DELETE /v1/scheduler/jobs/{job_id}`
- Get Scheduled Job: `GET /v1/scheduler/jobs/{job_id}`
- List Scheduled Jobs: `GET /v1/scheduler/jobs`
For detailed API documentation, please refer to the proto files in the `api/proto/` directory.
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.