  • Scale the shit out of this!

    Starting with v0.8, we have aimed to focus purely on the stability and performance of Dgraph. Our feature set is at this point good enough for most users, so we’ve decided to freeze it until we reach v1.0. As part of ensuring stability and performance, we tried to load the entire Stack Overflow dataset onto Dgraph: around 2 billion edges. With full-text search, reverse edges, and other indices, this jumps to between 6 and 8 billion edges, which poses unique challenges.

  • Building a Stack Overflow Clone with Dgraph, and React

    I have recently built a Stack Overflow clone with Dgraph and React. I was delightfully surprised by the pleasant developer experience and the performance of my application. In this post, I would like to tell the story of how I built Graphoverflow and share the best practices I learned for using Dgraph to build a modern web application.

  • Orchestrating signal and wait in Go

    One of the common use cases in Go is to start a few goroutines to do some work. These goroutines block, listening on a channel and waiting for more work to arrive. At some point, you want to signal these goroutines to stop accepting work and exit, so you can cleanly shut down the program.
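    The pattern described above can be sketched in a few lines. This is a minimal illustration, not the code from the post: workers range over a shared channel, closing the channel is the stop signal, and a `sync.WaitGroup` lets `main` wait for a clean exit.

    ```go
    package main

    import (
    	"fmt"
    	"sync"
    )

    func main() {
    	work := make(chan int)
    	var wg sync.WaitGroup

    	// Start a few worker goroutines that block on the work channel.
    	for i := 0; i < 3; i++ {
    		wg.Add(1)
    		go func(id int) {
    			defer wg.Done()
    			// range exits cleanly once the channel is closed and drained.
    			for job := range work {
    				fmt.Printf("worker %d processed job %d\n", id, job)
    			}
    		}(i)
    	}

    	// Send some work.
    	for j := 1; j <= 5; j++ {
    		work <- j
    	}

    	// Closing the channel signals every worker to stop accepting work.
    	close(work)
    	wg.Wait() // block until all workers have exited
    	fmt.Println("all workers stopped")
    }
    ```

    The key design choice is that no worker is told to stop individually: `close(work)` broadcasts the shutdown to every goroutine ranging over the channel at once.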

  • Running Stack Overflow on Dgraph

    We have been taught, conditioned, trained to use SQL all our lives as engineers. It was there in school, there when we went to college. It was being used at the company that we joined. It was such a common interview question that it no longer is one. We don’t have just one, but an array of SQL databases to choose from. MySQL was released 22 years ago, in 1995 (the youngest engineer at Dgraph was born the same year).

  • Build a Realtime Recommendation Engine: Part 2

    This is part 2 of a two-part series on recommendations using Dgraph. Check out part 1 here. In the last post, we looked at how many applications and web apps no longer present static data, but instead generate interesting recommendations for their users. There’s a whole field of theory and practice in recommendation engines, which we touched on: content-based filtering (based on the properties of objects) and collaborative filtering (based on similar users), drawing on a chapter from the Stanford MOOC Mining Massive Datasets.
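    The series builds its recommendations with Dgraph queries; as a minimal, self-contained sketch of the core idea behind collaborative filtering, here is the Jaccard similarity between two users’ liked-item sets (the user and item names are hypothetical, chosen just for illustration).

    ```go
    package main

    import "fmt"

    // jaccard measures how similar two users are by comparing their
    // liked-item sets: |A ∩ B| / |A ∪ B|.
    func jaccard(a, b map[string]bool) float64 {
    	inter := 0
    	union := map[string]bool{}
    	for item := range a {
    		union[item] = true
    		if b[item] {
    			inter++
    		}
    	}
    	for item := range b {
    		union[item] = true
    	}
    	if len(union) == 0 {
    		return 0
    	}
    	return float64(inter) / float64(len(union))
    }

    func main() {
    	alice := map[string]bool{"go": true, "graphs": true, "databases": true}
    	bob := map[string]bool{"go": true, "databases": true, "rust": true}
    	// 2 shared items out of 4 distinct items → 0.50
    	fmt.Printf("%.2f\n", jaccard(alice, bob))
    }
    ```

    In a collaborative filter, a score like this would be computed against many users, and items liked by the most similar users would be recommended.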
