Running Stack Overflow on Dgraph
We have been taught, conditioned, trained to use SQL all our lives as engineers. It was there in school, there when we went to college. It was in use at the companies we joined. It was such a common interview question that it no longer is. We don’t have just one, but an array of SQL databases to choose from. MySQL was released 22 years ago, in 1995 (the youngest engineer at Dgraph was born that same year).
Build a Realtime Recommendation Engine: Part 2
This is part 2 of a two-part series on recommendations using Dgraph. Check out part 1 here. In the last post, we looked at how many applications and web apps no longer present static data, but rather generate interesting recommendations for users. There’s a whole field of theory and practice in recommendation engines that we touched on, covering content-based (based on properties of objects) and collaborative (based on similar users) filtering techniques, drawing on a chapter from the Stanford MOOC Mining Massive Datasets.
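Collaborative filtering, as described above, boils down to measuring how similar two users' tastes are. One common measure (covered in the Mining Massive Datasets material) is Jaccard similarity over the sets of items each user has liked. This is a minimal illustrative sketch, not Dgraph's actual implementation; the function and data names are made up for the example:

```go
package main

import "fmt"

// jaccard returns |a ∩ b| / |a ∪ b| for two sets of item IDs.
// A higher score means the two users' tastes overlap more.
func jaccard(a, b map[string]bool) float64 {
	inter := 0
	for item := range a {
		if b[item] {
			inter++
		}
	}
	union := len(a) + len(b) - inter
	if union == 0 {
		return 0
	}
	return float64(inter) / float64(union)
}

func main() {
	// Hypothetical users and the movies they liked.
	alice := map[string]bool{"matrix": true, "inception": true, "up": true}
	bob := map[string]bool{"matrix": true, "inception": true, "frozen": true}
	// 2 shared items out of 4 distinct items.
	fmt.Printf("similarity: %.2f\n", jaccard(alice, bob))
}
```

A collaborative recommender would compute this score between the current user and every other user, then suggest items liked by the most similar ones.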
Build a Realtime Recommendation Engine: Part 1
Preface In today’s world, user experience is paramount. It’s no longer about basic CRUD, just serving user data; it’s about mining the data to generate interesting predictions and suggest actions to the user. That’s the field of recommendations. They’re everywhere. In fact, they happen so frequently that you don’t even notice them. You wake up and open Facebook, which shows you a feed of articles that it has chosen for you based on your viewing history.
go get github.com/dgraph-io/dgraph/...
Thank you, Go community, for all the love you showered on Badger. Within 8 hours of announcing Badger, the blog post made it to the first page of Hacker News. And within three days, the GitHub repo received 1250 stars, having crossed 1500 by the time of this post. We have already merged contributions and received feedback that we still need to act on. All this goes to show how much people enjoy Go-native libraries.
Introducing Badger: A fast key-value store written purely in Go
We have built an efficient, persistent, log-structured merge (LSM) tree based key-value store, purely in Go. It is based on the WiscKey paper from USENIX FAST 2016. The design is highly SSD-optimized and separates keys from values to minimize I/O amplification, leveraging both the sequential and the random performance of SSDs. We call it Badger. Based on our benchmarks, Badger is at least 3.5x faster than RocksDB when doing random reads.
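The key idea from WiscKey that the excerpt describes is separating keys from values: small keys (plus pointers) live in the LSM tree, while large values are appended to a sequential value log. The sketch below illustrates that layout only; it is not Badger's actual code, and the types and names (`store`, `entry`, the in-memory map standing in for the LSM tree, a byte slice standing in for the on-disk log) are simplifications for the example:

```go
package main

import "fmt"

// entry is the pointer stored alongside a key: where the value
// starts in the value log and how long it is.
type entry struct {
	offset int
	length int
}

// store separates keys from values, WiscKey-style: the small keys
// live in an index (an LSM tree in Badger, a map here), while the
// potentially large values are appended to a sequential value log.
type store struct {
	index map[string]entry
	vlog  []byte // append-only value log; sequential writes suit SSDs
}

func newStore() *store {
	return &store{index: make(map[string]entry)}
}

// Set appends the value to the log and records only a small pointer
// in the index, which keeps the index compact and reduces the data
// rewritten during compaction (the source of write amplification).
func (s *store) Set(key string, value []byte) {
	s.index[key] = entry{offset: len(s.vlog), length: len(value)}
	s.vlog = append(s.vlog, value...)
}

// Get looks up the pointer in the index, then performs one random
// read into the value log to fetch the value itself.
func (s *store) Get(key string) ([]byte, bool) {
	e, ok := s.index[key]
	if !ok {
		return nil, false
	}
	return s.vlog[e.offset : e.offset+e.length], true
}

func main() {
	s := newStore()
	s.Set("name", []byte("Badger"))
	if v, ok := s.Get("name"); ok {
		fmt.Println(string(v))
	}
}
```

Because only pointers pass through the LSM tree, compactions move far fewer bytes, which is the I/O amplification saving the post refers to.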