Releasing BadgerDB v1.6.0

It’s been almost a year since BadgerDB v1.5.0 was released. While both the project and the community surrounding it have changed a lot, not many new releases have seen the light of day. Happily, this changes today, as we release BadgerDB v1.6.0 and announce our plans for BadgerDB v2, coming out next week!

BadgerDB v1.6.0

The API of BadgerDB has changed slowly over the last year and, while we tried to keep backward incompatible changes to a minimum, some features were deprecated and replaced by improved solutions supported by the community.

So please note that v1.6.0 is not fully backward compatible with v1.5.0. The changes you’ll need to make to migrate from v1.5 to v1.6 are pretty minimal and only at the API level. From a data storage point of view, both versions are completely compatible: a database created with v1.5 can be read and managed without any hiccups with v1.6.

New Features

Let’s look at the most important features that have been added since v1.5. For a detailed list of the changes to the Badger API, you can read the CHANGELOG.

Faster writes with WriteBatch

Most users write to Badger in a serial loop, one transaction per iteration. But Badger is built for high write throughput with many transactions happening concurrently, so writing to Badger one transaction at a time is not always the most performant approach. That’s why we introduced a new API to help you batch writes to Badger: WriteBatch.

WriteBatch automatically batches up multiple serial writes into one transaction, sends it off to be executed asynchronously, and continues picking up more writes.

In our benchmarks, the new API allowed us to write keys almost 7 times faster than with a single write per transaction. Your own numbers might vary, but we expect this new API to be of great help to those looking to squeeze extra performance out of Badger.
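As a rough sketch of how this looks in practice (the writeAll helper and its inputs are hypothetical, and the exact Set signature may vary slightly between versions):

import badger "github.com/dgraph-io/badger"

// writeAll persists an in-memory map of keys to values using WriteBatch.
func writeAll(db *badger.DB, pairs map[string][]byte) error {
    wb := db.NewWriteBatch()
    defer wb.Cancel() // discards any pending writes if we return early

    for k, v := range pairs {
        // Set queues the write; WriteBatch groups queued writes into
        // transactions and commits them asynchronously.
        if err := wb.Set([]byte(k), v); err != nil {
            return err
        }
    }
    // Flush waits until every queued write has been committed.
    return wb.Flush()
}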

Write at 1.6 Gbps with StreamWriter

WriteBatch will help in pretty much every situation where blind writes need to be done as quickly as possible. But for some more specific use cases, we have come up with an even faster solution: StreamWriter.

StreamWriter was designed as the complementary API to Stream, and can be used to bootstrap new instances of Badger or to stream a whole database over the network. In Dgraph, if a server loses data or falls behind, we send a snapshot over using this API.

StreamWriter can write multiple streams, as long as each stream contains sorted keys and the streams are non-overlapping. The output of the badger.Stream framework provides that guarantee. StreamWriter works by avoiding LSM tree compactions, instead building SSTables and placing them directly into the various levels of the tree. This can write almost as fast as the disk allows. Our experiments show writes at a rate of 200 MBps, writing over 1 billion keys in around 3m20s.

This code demonstrates how to copy a dataset onto a new database with Stream and StreamWriter.

import (
    "context"

    badger "github.com/dgraph-io/badger"
    "github.com/pkg/errors"
)

// streamCopy copies every key-value pair from orig into dest using the
// Stream and StreamWriter APIs.
func streamCopy(dest, orig *badger.DB) error {
    sw := dest.NewStreamWriter()
    if err := sw.Prepare(); err != nil {
        return errors.Wrap(err, "could not prepare stream writer")
    }

    // Stream reads orig and hands sorted, non-overlapping batches to Send.
    s := orig.NewStream()
    s.Send = sw.Write

    if err := s.Orchestrate(context.Background()); err != nil {
        return errors.Wrap(err, "could not orchestrate writes")
    }

    if err := sw.Flush(); err != nil {
        return errors.Wrap(err, "could not flush stream writer")
    }

    return nil
}

StreamWriter can be used without Stream itself, but make sure you read the documentation and understand the constraints on how Write should be called.

Be notified of changes in the DB with Subscribe

Once you’ve opened a BadgerDB instance, you can easily set up a subscription so that, every time a matching key changes, a function of your choice is called. You can choose which specific key prefixes you’d like to observe, or simply pass an empty prefix to subscribe to all key changes.
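A minimal sketch of what this might look like (the watch helper is hypothetical, and the exact callback signature may differ slightly between versions; here we assume the callback receives a *badger.KVList and that Subscribe blocks until the context is cancelled):

import (
    "context"
    "log"

    badger "github.com/dgraph-io/badger"
)

// watch logs every change under prefix until ctx is cancelled.
func watch(ctx context.Context, db *badger.DB, prefix []byte) error {
    // Subscribe blocks until ctx is done, invoking the callback for every
    // batch of updates whose keys match the given prefix.
    return db.Subscribe(ctx, func(kvs *badger.KVList) {
        for _, kv := range kvs.Kv {
            log.Printf("key %q changed", kv.Key)
        }
    }, prefix)
}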

For a complete example of the functionality, check out this example in the documentation.

New Options API

In the search for more intuitive APIs, we decided to provide a new way of creating the badger.Options passed to badger.Open and other functions.

We made DefaultOptions a function that receives, as a parameter, the path where the database is stored. Once you have those default options, you can tweak specific settings using the new builder API.

So for instance, the code needed to open a Badger DB in read-only mode in previous versions was:

opts := badger.DefaultOptions
opts.Dir = "/tmp/db"
opts.ValueDir = "/tmp/db"
opts.ReadOnly = true
db, err := badger.Open(opts)

With the new builder API, the code becomes less verbose.

db, err := badger.Open(badger.DefaultOptions("/tmp/db").WithReadOnly(true))

New Entry API

Similarly to Options, we’ve removed some methods from badger.Txn and replaced them with a new builder API on badger.Entry. The methods SetWithDiscard, SetWithMeta, and SetWithTTL have been removed. The new badger.Entry methods WithDiscard, WithMeta, and WithTTL provide the same functionality and can be used with badger.Txn.SetEntry.

The benefit of this approach is that it allows all the permutations of SetWith* without having to introduce a separate API for each combination.

For instance, in previous versions, the code to set a key with a TTL of 1 second was the following:

    err := db.Update(func(txn *badger.Txn) error {
        return txn.SetWithTTL(key, value, 1*time.Second)
    })

With the builder API on Entry, you would now write the following:

    err := db.Update(func(txn *badger.Txn) error {
        return txn.SetEntry(badger.NewEntry(key, value).WithTTL(1 * time.Second))
    })

This change also makes it possible to set both TTL and Meta, which wasn’t the case before, as shown in issues #774 and #332.
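For example, combining both on a single entry (a sketch, assuming key and value are byte slices defined elsewhere) now looks like this:

    err := db.Update(func(txn *badger.Txn) error {
        e := badger.NewEntry(key, value).
            WithMeta(byte(1)).       // user-defined meta byte
            WithTTL(1 * time.Second) // entry expires after one second
        return txn.SetEntry(e)
    })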

Thanks!

BadgerDB is an open-source project and, as such, we depend on your contributions. So we wanted to take a minute to thank all of our contributors. Whether you contributed code or documentation, reported issues, or simply starred the project on GitHub, you made the project better, and we thank you all for that!