Lately at NextThought we've been much more focused on using application-level metrics to proactively monitor and understand the run-time characteristics of our applications. Much of the open-source stack we build on is already instrumented with the great perfmetrics library, so when it was time to expand the metrics we collected, perfmetrics was the obvious choice. However, we quickly ran into a problem: how should we test that the metrics we generated were actually emitted as the StatsD metrics we expected?
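To see why this is tricky, recall that StatsD metrics are fire-and-forget UDP datagrams in a simple text format (e.g. `name:1|c` for a counter increment). One illustrative way to check what actually goes over the wire, sketched here with only the standard library rather than perfmetrics itself (the metric name `greetings` is a made-up example), is to bind a local UDP socket and capture the packet a client sends:

```python
import socket

# Bind a throwaway UDP "server" on a free local port to play the
# role of the StatsD daemon.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
host, port = server.getsockname()

# A StatsD client emits plain-text datagrams; here we send one by
# hand to stand in for an instrumented application.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"greetings:1|c", (host, port))

# Capture the datagram and assert it is the metric we expected.
packet, _ = server.recvfrom(1024)
assert packet == b"greetings:1|c"

client.close()
server.close()
```

Capturing raw packets like this works, but it is awkward to do in every unit test, which is what motivated a more convenient testing approach.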