Can a Python + Qt combination produce a real-time spectral analysis tool?

Question:

I want to develop a tool that does the following things.

  1. take in a live voice recording
  2. produce a real time spectrogram
  3. show the time-domain signal
  4. output few values extracted from the spectral analysis

All of these have to be kept updated in a window as the voice is recorded.

I have worked with NumPy, but I'm completely new to Qt and other GUI toolkits. What's the best way to proceed in this situation? My peers recommended Qt after I explained the task to them. If someone knows of a better tool to use with Python for this task, please let me know. Also, please help me with the technical details of how to capture the live stream and process it in Python so that it can be shown in a GUI window. One link that gave me some hope is http://www.swharden.com/blog/2010-03-05-realtime-fft-graph-of-audio-wav-file-or-microphone-input-with-python-scipy-and-wckgraph/ , but it is a bit difficult to comprehend. Maybe a less intensive solution would help me get started.

Asked By: gopalkoduri


Answers:

On Linux, this is definitely feasible. Other platforms should work too, but I can really only answer for Linux. Python isn't necessarily your sharpest tool for real-time DSP, but on a suitably modern machine, and with suitably modest goals, you will be fine.

First, you need an interface to the Linux audio drivers. ALSA is pretty universal. There are several different Python wrappers for the ALSA libraries, see Python In Music for a list of libs and applications using them.
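As a minimal sketch of that capture step, assuming the pyalsaaudio wrapper (one of the libraries on that list) is installed, reading raw 16-bit mono samples from the default device could look roughly like this:

    import alsaaudio
    import numpy as np

    # Open the default ALSA capture device: 16-bit signed mono at 44.1 kHz.
    pcm = alsaaudio.PCM(alsaaudio.PCM_CAPTURE)
    pcm.setchannels(1)
    pcm.setrate(44100)
    pcm.setformat(alsaaudio.PCM_FORMAT_S16_LE)
    pcm.setperiodsize(1024)

    length, data = pcm.read()            # blocks until one period is available
    samples = np.frombuffer(data, dtype=np.int16)
    print(len(samples), "samples captured")

In a real application you would call read() repeatedly from a worker thread or timer and hand each block of samples to the analysis code.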

Then you do your spectral analysis. SciPy and NumPy have all that.
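For instance, the per-block values mentioned in point (4), such as the dominant frequency or the spectral centroid, fall out of a single NumPy FFT. A sketch (the function name is mine, and np.fft.rfftfreq assumes a reasonably recent NumPy):

    import numpy as np

    def analyse_block(samples, rate):
        """Return (dominant frequency, spectral centroid) for one block of samples."""
        windowed = samples * np.hanning(len(samples))   # reduce spectral leakage
        spectrum = np.abs(np.fft.rfft(windowed))        # magnitude spectrum
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
        dominant = freqs[spectrum.argmax()]
        centroid = (freqs * spectrum).sum() / spectrum.sum()
        return dominant, centroid

Each new audio block feeds into a function like this, and the returned values update the labels in the GUI.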

Then you draw into your Qt window. My expertise is in GTK but you probably want to create a QtCanvas (tutorial), which is an object-oriented drawing area that’s designed for this kind of use.
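If you go the Qt route, a plain QWidget with a custom paintEvent is one simple alternative to QtCanvas. A rough PyQt4 sketch of a bar-style spectrum display (the class and method names here are made up for illustration):

    from PyQt4 import QtGui, QtCore

    class SpectrumWidget(QtGui.QWidget):
        """Minimal widget that redraws the most recent magnitude spectrum as bars."""

        def __init__(self, parent=None):
            super(SpectrumWidget, self).__init__(parent)
            self.magnitudes = []                  # values normalised to 0..1

        def set_spectrum(self, magnitudes):
            self.magnitudes = list(magnitudes)
            self.update()                         # schedule a repaint

        def paintEvent(self, event):
            painter = QtGui.QPainter(self)
            painter.fillRect(self.rect(), QtCore.Qt.black)
            if not self.magnitudes:
                return
            bar_w = self.width() / float(len(self.magnitudes))
            for i, m in enumerate(self.magnitudes):
                h = int(m * self.height())
                painter.fillRect(int(i * bar_w), self.height() - h,
                                 max(int(bar_w), 1), h, QtCore.Qt.green)

Calling set_spectrum() with fresh FFT magnitudes on every audio block keeps the display updated.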

Or you could just use SciPy, which can probably be convinced to do all of this! AudioLab in particular looks like it might be a big help.

Answered By: Bill Gribble

In Qt 4.6, the QAudioInput API was added. This provides a cross-platform abstraction for getting an audio input signal, and therefore would be of use in achieving point (1).
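A rough sketch of pull-mode capture from Python, assuming the QtMultimedia bindings for Qt 4.6+ are available in your PyQt/PySide build (the slot name is mine, and this needs a running Qt event loop):

    from PyQt4.QtMultimedia import QAudioFormat, QAudioInput

    # Describe the stream we want: 16-bit signed mono PCM at 8 kHz.
    fmt = QAudioFormat()
    fmt.setFrequency(8000)
    fmt.setChannels(1)
    fmt.setSampleSize(16)
    fmt.setCodec("audio/pcm")
    fmt.setByteOrder(QAudioFormat.LittleEndian)
    fmt.setSampleType(QAudioFormat.SignedInt)

    audio_input = QAudioInput(fmt)
    io_device = audio_input.start()      # pull mode: returns a QIODevice

    def on_ready_read():
        data = io_device.readAll()       # raw PCM bytes
        # ... convert to a NumPy array and update the plots here ...

    # requires QApplication.exec_() to be running for the signal to fire
    io_device.readyRead.connect(on_ready_read)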

As for (2) and (3), the Spectrum Analyzer demo which ships with Qt may be of interest.

[Screenshot of the Spectrum Analyzer demo running on Symbian: http://labs.trolltech.com/blogs/wp-content/uploads/2010/05/spectrum.png]

The implementation is in C++ rather than in Python, but it may be of use as a reference. Basically what you need for (2) is to calculate the Fast Fourier Transform of the input signal. You’ll probably want to use a library which provides an FFT implementation rather than writing your own – that’s the approach I took when writing the demo 🙂

As for (3), this is conceptually pretty simple, but requires a bit of thought in order to get a smoothly scrolling waveform. Take a look at the tiling approach used in the Waveform class in the demo for some tips.
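If tiling feels like overkill at first, a much simpler (if less efficient) starting point is a fixed-length rolling buffer that you redraw on every block. A sketch:

    import numpy as np

    RATE = 8000
    display = np.zeros(2 * RATE, dtype=np.int16)   # keep the last two seconds on screen

    def push_block(block):
        """Shift the displayed window left and append the newest samples on the right."""
        global display
        n = len(block)
        display = np.roll(display, -n)
        display[-n:] = block
        # trigger a repaint of the waveform widget from `display` here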

I think by (4) you mean: reduce the large number of points in the FFT output to a small number of values. This is what the demo does in order to plot a bar chart for the spectrum. Again, refer to the demo code to see how the binning of frequency amplitudes is implemented.
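As a sketch of the idea (the demo's actual binning scheme may differ), grouping the FFT magnitudes into a handful of equal-width bands and averaging each band gives you the bar heights:

    import numpy as np

    def bin_spectrum(samples, num_bars=16):
        """Reduce one block of audio to num_bars averaged magnitude values."""
        magnitudes = np.abs(np.fft.rfft(samples))
        edges = np.linspace(0, len(magnitudes), num_bars + 1).astype(int)
        return np.array([magnitudes[a:b].mean()
                         for a, b in zip(edges[:-1], edges[1:])])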

Answered By: Gareth Stockwell

Another example of a real-time audio spectrum analyzer, using PyAudio, SciPy, and Chaco in a single script, can be found in the list of Chaco examples. (It worked out of the box on my Ubuntu Precise install.)
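For reference, the core capture-and-analyse loop of such a script (leaving the Chaco plotting aside) can be sketched with PyAudio and NumPy alone:

    import numpy as np
    import pyaudio

    RATE, CHUNK = 44100, 1024

    p = pyaudio.PyAudio()
    stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                    input=True, frames_per_buffer=CHUNK)
    try:
        for _ in range(200):                              # a few seconds of audio
            block = np.frombuffer(stream.read(CHUNK), dtype=np.int16)
            spectrum = np.abs(np.fft.rfft(block))
            peak_hz = spectrum.argmax() * RATE / float(CHUNK)
            print("strongest component near %.0f Hz" % peak_hz)
    finally:
        stream.stop_stream()
        stream.close()
        p.terminate()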


Answered By: alexei