# Measuring the frequency of the synchronous grid of Continental Europe

The last C3 featured an interesting talk about the project netzsin.us. It's about monitoring the synchronous grid of Continental Europe directly at home by measuring the exact AC frequency. They developed and built open-source hardware consisting of a Raspberry Pi and a microcontroller. Their data is freely available for non-commercial applications.

As an owner of a Raspberry Pi, I wanted to measure the frequency myself. However, to keep things simple, I didn't want to use a microcontroller. Even though the Raspberry Pi is unable to reliably read PWM signals, I gave it a shot. Knowing that the I/O pins were not an option, I tried a cheap USB sound card with a microphone input and recorded the hum directly, just like standard audio.

The only remaining hardware requirements were a 230 V to 9 V transformer, a voltage divider, and a phone connector.

In my first test on my desktop PC I got an ugly signal. Surprisingly, the displayed frequency was quite accurate.

In a test recording I heard a truly familiar sound. Even though I had listened to the hum earlier in life, this was the first time I enjoyed it.

Despite that, I noticed a periodic interruption approximately every second, lasting for about 1 ms. It seemed like parts of the signal were set to zero. I haven't tested it in detail, but I assume this effect has a great influence on frequency filters, which is why I didn't want to use them. As I could not find the source of this interruption, I ignored the problem.

So, how do I get the local frequency of the signal? Coming from the machine learning community, I love building models and approximating things. Why not fit a function like $g(x) = a \cdot \sin( b \cdot x + c )$ to a 10-second stretch of the data? This would allow me to read the frequency directly as $\frac{b}{2\pi}$.
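In code, the model and the frequency read-out are one-liners. This is a minimal sketch using NumPy; the function names are my own, not from the original script:

```python
import numpy as np

def g(x, a, b, c):
    """The model: a sine with amplitude a, angular frequency b, and phase c."""
    return a * np.sin(b * x + c)

def frequency_hz(b):
    """Read the frequency in Hz off the fitted angular-frequency parameter b."""
    return b / (2 * np.pi)
```

Note that $b$ is an *angular* frequency in rad/s, which is why the division by $2\pi$ is needed to get Hz.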

Assuming that harmonics and other disturbances are normally distributed, I can use standard least-squares regression. Hence, my loss function is:

$L = \sum\limits_{i=1}^{n} \left(y_i - g(x_i)\right)^2$

If we find the set of parameters $\{ a, b, c \}$ which minimizes $L$, we have a solution. Minimizing a function can be done by setting the gradient with respect to the parameters to zero.

$\mathbf{\nabla}L(a,b,c) = \mathbf{0}$

Which leads to the gradients

$\frac{\partial L}{\partial a} = -2 \sum\limits_{i=1}^{n} \delta_i \cdot \sin(b \cdot x_i + c)$

$\frac{\partial L}{\partial b} = -2 \sum\limits_{i=1}^{n} \delta_i \cdot a \cdot x_i \cdot \cos(b \cdot x_i + c)$

$\frac{\partial L}{\partial c} = -2 \sum\limits_{i=1}^{n} \delta_i \cdot a \cdot \cos(b \cdot x_i + c)$

with $\delta_i = y_i - a \cdot \sin( b \cdot x_i + c)$. As systems of nonlinear equations are quite nasty, I solved them by gradient descent. I initialized the parameter $b$ to correspond to 50 Hz (i.e. $b = 2\pi \cdot 50$) and $a$ to 0.5, which was the amplitude of my signal. It only took about 100 iterations for the parameters to converge.
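The descent itself can be sketched in a few lines of NumPy. This is my own reconstruction, not the original script: I scale the gradients by $1/n$ so the step size does not depend on the window length (with that scaling, more iterations are needed than the roughly 100 mentioned above), and I check it against a synthetic tone whose frequency is known:

```python
import numpy as np

def fit_sine(x, y, a0=0.5, f0=50.0, lr=0.2, iters=5000):
    """Fit y ≈ a·sin(b·x + c) by gradient descent on the mean squared error.

    The gradients match the formulas in the text, scaled by 1/n so the
    learning rate does not depend on the number of samples.
    """
    n = len(x)
    a, b, c = a0, 2 * np.pi * f0, 0.0
    for _ in range(iters):
        phase = b * x + c
        delta = y - a * np.sin(phase)                       # residuals δ_i
        grad_a = -2.0 / n * np.sum(delta * np.sin(phase))
        grad_b = -2.0 / n * np.sum(delta * a * x * np.cos(phase))
        grad_c = -2.0 / n * np.sum(delta * a * np.cos(phase))
        a -= lr * grad_a
        b -= lr * grad_b
        c -= lr * grad_c
    return a, b, c

# Synthetic check: one second of a 50.05 Hz tone at 1 kHz sampling.
x = np.linspace(0.0, 1.0, 1000, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 50.05 * x + 0.1)
a, b, c = fit_sine(x, y)
freq = b / (2 * np.pi)   # should land close to 50.05 Hz
```

Starting $b$ near the nominal 50 Hz matters: the loss surface has many local minima roughly 1/T Hz apart, and the descent only finds the right one if it starts in its basin.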

I trained a model for each second of my data using a moving data window 10 seconds wide. Hence, I still get a measurement every second.
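The windowing itself is straightforward. A small sketch (my own helper, assuming the recording is one long NumPy array of samples):

```python
import numpy as np

def windows(samples, rate, width_s=10, step_s=1):
    """Yield (start_time, chunk) pairs: width_s-second chunks advanced by step_s.

    With width_s=10 and step_s=1, consecutive chunks overlap by 9 seconds,
    so a fit on each chunk gives one frequency measurement per second.
    """
    width = int(width_s * rate)
    step = int(step_s * rate)
    for start in range(0, len(samples) - width + 1, step):
        yield start / rate, samples[start:start + width]
```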

The results are quite satisfying.

The good thing is that my measurements are comparable to those obtained with the microcontroller hardware of netzsin.us, especially since we use completely different measurement techniques.
Interestingly, the measurements of the University of Erlangen are quite far off from ours.

What remains? Porting my approach to the Raspberry-Pi.

Update: Corrected the last plot; it had the wrong year. I also added more math.

Update 2: Smoothed the measurements with a moving average. It looks like the clocks of our PCs are a few seconds off.
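For the smoothing, a centered moving average is enough. A minimal sketch (my own choice of window width; the original width is not stated):

```python
import numpy as np

def moving_average(values, width=5):
    """Smooth a 1-D series with a centered moving average of the given width.

    mode='same' keeps the output the same length as the input; the first and
    last few points average over fewer samples and are therefore biased.
    """
    kernel = np.ones(width) / width
    return np.convolve(values, kernel, mode="same")
```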