# Goblero

Pure Go, simple, embedded, persistent job queue, backed by BadgerDB
**DO NOT USE IN PRODUCTION.** This library is still in alpha / work in progress.
Intro article: Introducing Goblero: a Go Embedded Job Queue
- Pure Go library, no cgo
- Simple, embedded, persistent job queue
- Provides in-process job processing to any Go app
- Jobs and status changes are persisted to disk after each operation, so pending jobs can resume processing after an app restart or crash
- Supports multiple "processors"; each processor/worker handles one job at a time and is assigned a new one as soon as it finishes
- The storage engine used is BadgerDB
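The worker model in the bullets above can be sketched with plain channels and goroutines. This is a minimal, self-contained illustration of the dispatch idea, not Goblero's actual internals; the `processAll` function and its parameters are made up for the example:

```go
package main

import (
	"fmt"
	"sync"
)

// processAll illustrates the model: nWorkers goroutines each take one job
// at a time from a shared queue and grab the next as soon as they finish.
// It returns how many jobs were processed.
func processAll(jobNames []string, nWorkers int) int {
	jobs := make(chan string)
	var wg sync.WaitGroup
	var mu sync.Mutex
	processed := 0

	for w := 0; w < nWorkers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range jobs { // one job at a time per worker
				mu.Lock()
				processed++
				mu.Unlock()
			}
		}()
	}

	for _, name := range jobNames {
		jobs <- name
	}
	close(jobs)
	wg.Wait()
	return processed
}

func main() {
	n := processAll([]string{"a", "b", "c", "d", "e"}, 3)
	fmt.Println("processed", n) // prints "processed 5"
}
```

In Goblero the queue is additionally persisted to BadgerDB, so unlike this in-memory sketch, pending jobs survive a restart.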
P.S.: Why is the library named Goblero? "Go" for the Go programming language, obviously, and "badger" in French is "blaireau", but "blero" is easier to pronounce :)
The full API is documented on godoc.org. There is also a demo repo goblero-demo
## Get package

```shell
go get -u github.com/didil/goblero/pkg/blero
```
## API

```go
// Create a new Blero backend
bl := blero.New("db/")

// Start Blero
bl.Start()

// Defer stopping Blero
defer bl.Stop()

// Register a processor
bl.RegisterProcessorFunc(func(j *blero.Job) error {
	// Do some processing; access the job name with j.Name and the job data with j.Data
	return nil
})

// Enqueue a job
bl.EnqueueJob("MyJob", []byte("My Job Data"))
```
## Benchmarks

Core i5 laptop / 8GB RAM / SSD:

```
make bench
BenchmarkEnqueue/EnqueueJob-4    50000     159942 ns/op   (~ 6250 ops/s)
BenchmarkEnqueue/dequeueJob-4     5000    2767260 ns/op   (~ 361 ops/s)
```
## TODO

- Restart interrupted jobs after an app restart or crash
- Sweep completed jobs from the "complete" queue
- Retry options for failed jobs
- Allow batch enqueuing
- Add support for Go contexts
- Test in real conditions under high load
- Expose Prometheus metrics in an HTTP handler
- Optimize performance / locking
All contributions (PRs, feedback, bug reports, ideas, etc.) are welcome!