How to implement high speed, consistent sampling?

Question:

The sort of application to have in mind is an oscilloscope or high speed data logger. I have a function which retrieves the required information; I just need to work out how to call it over and over again, very quickly and with high precision.

There are limitations to time.sleep(); I don’t think that is the way to go.
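For context, the usual way to reduce time.sleep() drift is to sleep until an absolute deadline rather than for a fixed duration; a minimal sketch of that pattern is below (sample() is a placeholder for the real acquisition call), though the sleep granularity itself is still at the mercy of the OS timer resolution:

import time

def sample():
    pass                      # stand-in for the real data-acquisition call

def run(interval, duration):
    # Sleep until an absolute deadline each cycle so the error of one
    # sleep doesn't accumulate into the next.
    next_time = time.perf_counter()
    end_time = next_time + duration
    while next_time < end_time:
        sample()
        next_time += interval
        delay = next_time - time.perf_counter()
        if delay > 0:
            time.sleep(delay)

run(0.01, 1.0)                # 10 ms interval for one second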

I have looked into the built-in event scheduler (sched), but I don’t think it’s precise enough and it doesn’t quite meet my needs.

The requirements for this are:

  • High speed sampling. 10ms is the fastest interval that will be asked of it.
  • High accuracy intervals. At 10ms, a 10% error is acceptable (±1ms).
  • Fairly low CPU usage. Some load is acceptable at 10ms, but it should be less than ~5% for 100ms intervals and beyond. I know this is subjective; I guess what I’m saying is that hogging the CPU is unacceptable.
  • Ideally, the timer will be initialised with an interval time, and then started when required. The required function should then be called at the correct interval over and over again until the timer is stopped.
  • It will only ever run on a Windows machine, though that is not a hard requirement.

Are there any existing libraries that fulfil these requirements? I don’t want to re-invent the wheel, but if I have to I will probably use the Windows multimedia timer (winmm.dll). Any comments/suggestions with that?

Asked By: Gareth Webber


Answers:

Edit: After writing the code below, I’d be inclined to implement a similar test for the Python event scheduler; I don’t see why you think it would be insufficiently accurate.
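For illustration, a minimal sketch of what an equivalent test with the standard-library sched module could look like (not tested here); it mirrors the pygame version below and schedules each event against an absolute deadline so the interval doesn’t drift:

import sched
import time

INTERVAL = 0.01            # 10 ms
MAX_TIMER_COUNT = 1000

scheduler = sched.scheduler(time.perf_counter, time.sleep)
initial_time = time.perf_counter()
last_time = initial_time
timer_count = 0

def on_timer_event(deadline):
    global last_time, timer_count

    new_time = time.perf_counter()
    print(new_time - last_time)
    last_time = new_time

    timer_count += 1
    if timer_count > MAX_TIMER_COUNT:
        # Total elapsed time; should be close to MAX_TIMER_COUNT * INTERVAL
        print(last_time - initial_time)
    else:
        # Re-arm against the absolute deadline so errors don't accumulate
        scheduler.enterabs(deadline + INTERVAL, 1, on_timer_event,
                           (deadline + INTERVAL,))

scheduler.enterabs(initial_time + INTERVAL, 1, on_timer_event,
                   (initial_time + INTERVAL,))
scheduler.run()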

Something like the following seems to work pretty well under Linux for me (and I have no reason to think it won’t work on Windows). Every 10ms, on_timer_event() is called, which prints the time since the last call based on the real-time clock; this shows the approximate accuracy of the timers. Finally, the total time is printed to show there is no drift.

There seems to be one issue with the code below, with events occasionally appearing at spurious (and short) intervals. I’ve no idea why this is, but no doubt with some playing you can make it reliable. I think this sort of approach is the way to go.

import pygame
import time

pygame.init()
TIMER_EVENT = pygame.USEREVENT+1

pygame.time.set_timer(TIMER_EVENT, 10)

timer_count = 0
MAX_TIMER_COUNT = 1000

def on_timer_event():
    global last_time
    global timer_count

    new_time = time.time()

    print(new_time - last_time)
    last_time = new_time

    timer_count += 1

    if timer_count > MAX_TIMER_COUNT:
        print(last_time - initial_time)
        pygame.event.post(pygame.event.Event(pygame.QUIT, {}))

initial_time = time.time()
last_time = initial_time
while True:
    event = pygame.event.wait()
    if event.type == TIMER_EVENT:
        on_timer_event()

    elif event.type == pygame.QUIT:
        break
Answered By: Henry Gomersall

I know I’m late to the game answering my own question, but hopefully it will help someone.

I wrote a wrapper to the Windows Multimedia Timer purely as a test. It seems to work well, but the code isn’t fully tested and hasn’t been optimized.

mmtimer.py:

from ctypes import *
from ctypes.wintypes import UINT
from ctypes.wintypes import DWORD

# Callback prototype for timeSetEvent: (uTimerID, uMsg, dwUser, dw1, dw2)
timeproc = WINFUNCTYPE(None, c_uint, c_uint, DWORD, DWORD, DWORD)
timeSetEvent = windll.winmm.timeSetEvent
timeKillEvent = windll.winmm.timeKillEvent


class mmtimer:
    def Tick(self):
        self.tickFunc()

        if not self.periodic:
            self.stop()

    def CallBack(self, uID, uMsg, dwUser, dw1, dw2):
        # Runs on the multimedia timer's own thread for every interval
        if self.running:
            self.Tick()

    def __init__(self, interval, tickFunc, stopFunc=None, resolution=0, periodic=True):
        self.interval = UINT(interval)
        self.resolution = UINT(resolution)
        self.tickFunc = tickFunc
        self.stopFunc = stopFunc
        self.periodic = periodic
        self.id = None
        self.running = False
        self.calbckfn = timeproc(self.CallBack)

    def start(self, instant=False):
        if not self.running:
            self.running = True
            if instant:
                self.Tick()

            # timeSetEvent(delay, resolution, callback, user data, periodic/one-shot flag)
            self.id = timeSetEvent(self.interval, self.resolution,
                                   self.calbckfn, c_ulong(0),
                                   c_uint(self.periodic))

    def stop(self):
        if self.running:
            timeKillEvent(self.id)
            self.running = False

            if self.stopFunc:
                self.stopFunc()

Periodic test code:

from mmtimer import mmtimer
import time

def tick():
    print("{0:.2f}".format(time.clock() * 1000))

t1 = mmtimer(10, tick)
time.clock()
t1.start(True)
time.sleep(0.1)
t1.stop()

Output in milliseconds:

0.00
10.40
20.15
29.91
39.68
50.43
60.19
69.96
79.72
90.46
100.23

One-shot test code:

from mmtimer import mmtimer
import time

def tick():
    print("{0:.2f}".format(time.clock() * 1000))

t1 = mmtimer(150, tick, periodic=False)
time.clock()
t1.start()

Output in milliseconds:

150.17

As you can see from the results, it’s pretty accurate. However, the measurements are only made with time.clock(), so take them with a pinch of salt.
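Note that time.clock() was deprecated in Python 3.3 and removed in 3.8; on current interpreters the tick function can measure against time.perf_counter() instead, for example:

import time

start = time.perf_counter()

def tick():
    # time.clock() was removed in Python 3.8; perf_counter() is the
    # usual high-resolution replacement for this kind of measurement.
    print("{0:.2f}".format((time.perf_counter() - start) * 1000))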

During a prolonged test with a 10ms periodic timer, CPU usage is around 3% or less on my old dual-core 3GHz machine. The machine seems to use about that much when it’s idle anyway, so I’d say the additional CPU usage is minimal.

Answered By: Gareth Webber

timed-count was designed for exactly this. It doesn’t suffer from temporal drift, so it can be used to repeatedly capture data streams and synchronise them afterwards.

There’s a relevant high speed example here.
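For reference, a minimal usage sketch; the timed_count(interval) generator shown here is assumed from the package’s description rather than taken from its documentation, so check the linked example for the exact API:

from timed_count import timed_count

# Assumed API: timed_count(interval) yields on a fixed schedule with no
# cumulative drift. Replace print(count) with the data-acquisition call.
for i, count in enumerate(timed_count(0.01)):   # every 10 ms
    print(count)
    if i >= 99:                                 # stop after ~1 second
        break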

Answered By: 101