Making It Up: Generative Adversarial Networks

In 2014, Goodfellow et al. at the Université de Montréal published Generative Adversarial Nets (GANs), a paper that turned the machine learning world on its side. Until then, neural networks had mostly been used for discriminative tasks such as classification and regression, but these researchers developed a way to generate synthetic data that looks at least somewhat real. Nvidia researchers later built on this work to generate convincing images of fake celebrities. The major caveat was (and mostly still is) that the methodology has largely been applied to generating images with convolutional neural networks, though researchers keep finding more expansive uses. In this post I will be using MNIST, a popular data set of tens of thousands of handwritten digits such as the following:

MNIST sample digits

Image credit to GitHub user cazala, from the repo https://github.com/cazala/mnist

I will first present the common architecture of a GAN as well as the theory behind its operation, followed by an example implementation that generates instances of these handwritten digits.
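To give a flavor of that architecture up front, here is a minimal sketch of the usual two-network setup: a generator that maps random noise to a digit-shaped image, and a discriminator that scores images as real or fake, trained against each other. Keras is my choice of library here, the layer sizes and hyperparameters are arbitrary assumptions, and the full post develops the actual implementation.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 100  # size of the random noise vector the generator consumes

# Generator: noise vector -> flattened 28x28 image with pixels in [-1, 1].
generator = keras.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(512, activation="relu"),
    layers.Dense(28 * 28, activation="tanh"),
])

# Discriminator: flattened image -> probability that the image is real.
discriminator = keras.Sequential([
    layers.Input(shape=(28 * 28,)),
    layers.Dense(512, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Standard Keras freezing trick: the discriminator keeps the trainable
# state it had when *it* was compiled, while the combined model sees it
# as frozen, so only the generator's weights move when `gan` trains.
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

# One adversarial training step on a batch of MNIST digits.
(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = (x_train.reshape(-1, 28 * 28).astype("float32") - 127.5) / 127.5

batch_size = 64
real = x_train[np.random.randint(0, len(x_train), batch_size)]
noise = np.random.normal(size=(batch_size, latent_dim)).astype("float32")
fake = generator.predict(noise, verbose=0)

# The discriminator learns to label real digits 1 and generated digits 0...
discriminator.train_on_batch(real, np.ones((batch_size, 1)))
discriminator.train_on_batch(fake, np.zeros((batch_size, 1)))
# ...while the generator learns to make the discriminator say 1 anyway.
gan.train_on_batch(noise, np.ones((batch_size, 1)))
```

In practice this step runs in a loop, and the two losses pull against each other: that adversarial tug-of-war is the theory the post walks through.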

Read more…





Blocking gevent's Hub Part 1: Understanding Blocking

In the beginning we talked about gevent's hub and how greenlets switch in and out of it to implement IO. Following that, we showed how locks in gevent are implemented in much the same way: by "parking" a waiting greenlet, switching to the hub to let other greenlets run or do IO, and eventually switching back to the parked greenlet.
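For instance, here is a small self-contained sketch (my own example, not code from those posts) of two greenlets contending for a gevent lock; the second one parks at acquire time while the first is switched out at the hub:

```python
import gevent
from gevent.lock import Semaphore

lock = Semaphore(1)

def worker(name):
    with lock:              # greenlet B parks here while A holds the lock
        print(name, "acquired the lock")
        gevent.sleep(0.1)   # switch to the hub; other greenlets can run
    print(name, "released the lock")  # releasing unparks the waiter

gevent.joinall([gevent.spawn(worker, "A"), gevent.spawn(worker, "B")])
```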

That's a lot of switching. What does it mean if that switching doesn't happen? What should a programmer know about switching and its opposite, blocking? (There's also part 2.)
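As a taste of what "blocking" means here, consider this contrast (an illustrative sketch, not an excerpt from the post): gevent.sleep switches to the hub, while a plain time.sleep never does, so it stalls every greenlet in the process for its full duration.

```python
import time
import gevent

def cooperative():
    gevent.sleep(1)  # yields to the hub; timers and other greenlets proceed
    print("cooperative greenlet finished")

def blocker():
    time.sleep(1)    # never switches: the hub, and every greenlet, stalls
    print("blocking greenlet finished")

gevent.joinall([gevent.spawn(cooperative), gevent.spawn(blocker)])
```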

Read more…