Multi-threaded applications and asynchronous I/O
libusb is a thread-safe library, but extra considerations apply to applications that interact with libusb from multiple threads.
The underlying issue that must be addressed is that all libusb I/O revolves around monitoring file descriptors through the poll()/select() system calls. This is directly exposed at the asynchronous interface, but it is important to note that the synchronous interface is implemented on top of the asynchronous interface, therefore the same considerations apply.
The issue is that if two or more threads are concurrently calling poll() or select() on libusb’s file descriptors then only one of those threads will be woken up when an event arrives. The others will be completely oblivious that anything has happened.
Consider the following pseudo-code, which submits an asynchronous transfer then waits for its completion. This style is one way you could implement a synchronous interface on top of the asynchronous interface (and libusb does something similar, albeit more advanced due to the complications explained on this page).
void cb(struct libusb_transfer *transfer)
{
    int *completed = transfer->user_data;
    *completed = 1;
}

void myfunc()
{
    struct libusb_transfer *transfer;
    unsigned char buffer[LIBUSB_CONTROL_SETUP_SIZE] __attribute__ ((aligned (2)));
    int completed = 0;

    transfer = libusb_alloc_transfer(0);
    libusb_fill_control_setup(buffer,
        LIBUSB_REQUEST_TYPE_VENDOR | LIBUSB_ENDPOINT_OUT, 0x04, 0x01, 0, 0);
    libusb_fill_control_transfer(transfer, dev, buffer, cb, &completed, 1000);
    libusb_submit_transfer(transfer);

    while (!completed) {
        poll(libusb file descriptors, 120*1000);
        if (poll indicates activity)
            libusb_handle_events_timeout(ctx, &zero_tv);
    }
    printf("completed!");
    // other code here
}
Here we are serializing completion of an asynchronous event against a condition - the condition being completion of a specific transfer. The poll() loop has a long timeout to minimize CPU usage during situations when nothing is happening (it could reasonably be unlimited).
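For concreteness, here is one way the "poll(libusb file descriptors, ...)" placeholder above could be expanded. This is a rough sketch only: libusb_get_pollfds() is a real libusb function, but the helper name, the fixed-size array and the minimal error handling are illustrative, not part of the libusb API.

#include <poll.h>
#include <stdlib.h>
#include <libusb.h>

// Poll libusb's file descriptors once, with the given timeout in
// milliseconds. Returns >0 if there was activity, 0 on timeout, <0 on error.
// poll_libusb_fds() is an illustrative helper, not part of the libusb API.
static int poll_libusb_fds(libusb_context *ctx, int timeout_ms)
{
    const struct libusb_pollfd **fds = libusb_get_pollfds(ctx);
    struct pollfd pollfds[16]; // fixed size for brevity only
    nfds_t nfds = 0;

    if (!fds)
        return -1;

    // translate libusb's descriptor list into a pollfd array
    for (int i = 0; fds[i] != NULL && nfds < 16; i++) {
        pollfds[nfds].fd = fds[i]->fd;
        pollfds[nfds].events = fds[i]->events;
        pollfds[nfds].revents = 0;
        nfds++;
    }
    free((void *) fds); // newer libusb releases also provide libusb_free_pollfds()

    return poll(pollfds, nfds, timeout_ms);
}

With such a helper, the polling lines in the example above would read something like:

    if (poll_libusb_fds(ctx, 120*1000) > 0)
        libusb_handle_events_timeout(ctx, &zero_tv);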
If this is the only thread that is polling libusb’s file descriptors, there is no problem: there is no danger that another thread will swallow up the event that we are interested in. On the other hand, if there is another thread polling the same descriptors, there is a chance that it will receive the event that we were interested in. In this situation, myfunc()
will only realise that the transfer has completed on the next iteration of the loop, up to 120 seconds later. Clearly a two-minute delay is undesirable, and don’t even think about using short timeouts to circumvent this issue!
The solution here is to ensure that no two threads are ever polling the file descriptors at the same time. A naive implementation of this would impact the capabilities of the library, so libusb offers the scheme documented below to ensure no loss of functionality.
Before we go any further, it is worth mentioning that all libusb-wrapped event handling procedures fully adhere to the scheme documented below. This includes libusb_handle_events() and its variants, and all the synchronous I/O functions - libusb hides this headache from you.
libusb_handle_events() from multiple threads
Even when only using libusb_handle_events() and synchronous I/O functions, you can still have a race condition. You might be tempted to solve the above with libusb_handle_events() like so:
libusb_submit_transfer(transfer);

while (!completed) {
    libusb_handle_events(ctx);
}
printf("completed!");
This however has a race between the checking of completed and libusb_handle_events() acquiring the events lock, so another thread could have completed the transfer, resulting in this thread hanging until either a timeout or another event occurs. See also commit 6696512aade99bb15d6792af90ae329af270eba6 which fixes this in the synchronous API implementation of libusb.
Fixing this race requires checking the variable completed only after taking the event lock, which defeats the concept of just calling libusb_handle_events() without worrying about locking. This is why libusb-1.0.9 introduces the new libusb_handle_events_timeout_completed() and libusb_handle_events_completed() functions, which handle the completion check for you after they have acquired the lock:
libusb_submit_transfer(transfer);

while (!completed) {
    libusb_handle_events_completed(ctx, &completed);
}
printf("completed!");
This nicely fixes the race in our example. Note that if all you want to do is submit a single transfer and wait for its completion, then using one of the synchronous I/O functions is much easier.
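For reference, a synchronous version of the vendor control transfer from the earlier example might look like the following sketch; it reuses the dev handle and request values from above, with only minimal error handling.

// Synchronous equivalent: libusb performs the submission, event handling
// and completion check internally.
int r = libusb_control_transfer(dev,
    LIBUSB_REQUEST_TYPE_VENDOR | LIBUSB_ENDPOINT_OUT, // bmRequestType
    0x04,      // bRequest (the same vendor request as above)
    0x01,      // wValue
    0,         // wIndex
    NULL, 0,   // no data stage
    1000);     // timeout in milliseconds
if (r < 0)
    fprintf(stderr, "control transfer failed: %s\n", libusb_error_name(r));
else
    printf("completed!\n");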
The events lock
The problem arises when we consider the fact that libusb exposes file descriptors to allow you to integrate asynchronous USB I/O into existing main loops, effectively allowing you to do some work behind libusb’s back. If you do take libusb’s file descriptors and pass them to poll()/select() yourself, you need to be aware of the associated issues.
The first concept to be introduced is the events lock. The events lock is used to serialize threads that want to handle events, such that only one thread is handling events at any one time.
You must take the events lock before polling libusb file descriptors, using libusb_lock_events(). You must release the lock as soon as you have aborted your poll()/select() loop, using libusb_unlock_events().
Letting other threads do the work for you
Although the events lock is a critical part of the solution, it is not enough on its own. You might wonder if the following is sufficient…
libusb_lock_events(ctx);

while (!completed) {
    poll(libusb file descriptors, 120*1000);
    if (poll indicates activity)
        libusb_handle_events_timeout(ctx, &zero_tv);
}
libusb_unlock_events(ctx);
…and the answer is that it is not. This is because the transfer in the code shown above may take a long time (say 30 seconds) to complete, and the lock is not released until the transfer is completed.
Another thread with similar code that wants to do event handling may be working with a transfer that completes after a few milliseconds. Despite having such a quick completion time, the other thread cannot check the status of its transfer until the code above has finished (30 seconds later) due to contention on the lock.
To solve this, libusb offers you a mechanism to determine when another thread is handling events. It also offers a mechanism to block your thread until the event handling thread has completed an event (and this mechanism does not involve polling of file descriptors).
After determining that another thread is currently handling events, you obtain the event waiters lock using libusb_lock_event_waiters(). You then re-check that some other thread is still handling events, and if so, you call libusb_wait_for_event().
libusb_wait_for_event() puts your application to sleep until an event occurs, or until a thread releases the events lock. When either of these things happen, your thread is woken up, and should re-check the condition it was waiting on. It should also re-check that another thread is handling events, and if not, it should start handling events itself.
This looks like the following, as pseudo-code:
retry:
if (libusb_try_lock_events(ctx) == 0) {
    // we obtained the event lock: do our own event handling
    while (!completed) {
        if (!libusb_event_handling_ok(ctx)) {
            libusb_unlock_events(ctx);
            goto retry;
        }
        poll(libusb file descriptors, 120*1000);
        if (poll indicates activity)
            libusb_handle_events_locked(ctx, 0);
    }
    libusb_unlock_events(ctx);
} else {
    // another thread is doing event handling. wait for it to signal us that
    // an event has completed
    libusb_lock_event_waiters(ctx);

    while (!completed) {
        // now that we have the event waiters lock, double check that another
        // thread is still handling events for us. (it may have ceased handling
        // events in the time it took us to reach this point)
        if (!libusb_event_handler_active(ctx)) {
            // whoever was handling events is no longer doing so, try again
            libusb_unlock_event_waiters(ctx);
            goto retry;
        }

        libusb_wait_for_event(ctx, NULL);
    }
    libusb_unlock_event_waiters(ctx);
}
printf("completed!\n");
A naive look at the above code may suggest that this can only support one event waiter (hence a total of 2 competing threads, the other doing event handling), because the event waiter seems to have taken the event waiters lock while waiting for an event. However, the system does support multiple event waiters, because libusb_wait_for_event() actually drops the lock while waiting, and reacquires it before continuing.
We have now implemented code which can dynamically handle situations where nobody is handling events (so we should do it ourselves), and it can also handle situations where another thread is doing event handling (so we can piggyback onto them). It is also equipped to handle a combination of the two, for example, another thread is doing event handling, but for whatever reason it stops doing so before our condition is met, so we take over the event handling.
Four functions were introduced in the above pseudo-code. Their importance should be apparent from the code shown above.
libusb_try_lock_events() is a non-blocking function which attempts to acquire the events lock but returns a failure code if it is contended.
libusb_event_handling_ok() checks that libusb is still happy for your thread to be performing event handling. Sometimes, libusb needs to interrupt the event handler, and this is how you can check if you have been interrupted. If this function returns 0, the correct behaviour is for you to give up the event handling lock, and then to repeat the cycle. The following libusb_try_lock_events() will fail, so you will become an events waiter. For more information on this, read The full story below.
libusb_handle_events_locked() is a variant of libusb_handle_events_timeout() that you can call while holding the events lock. libusb_handle_events_timeout() itself implements similar logic to the above, so be sure not to call it when you are “working behind libusb’s back”, as is the case here.
libusb_event_handler_active() determines if someone is currently holding the events lock.
You might be wondering why there is no function to wake up all threads blocked on libusb_wait_for_event(). This is because libusb can do this internally: it will wake up all such threads when someone calls libusb_unlock_events() or when a transfer completes (at the point after its callback has returned).
The full story
The above explanation should be enough to get you going, but if you’re really thinking through the issues then you may be left with some more questions regarding libusb’s internals. If you’re curious, read on, and if not, skip to the next section to avoid confusing yourself!
The immediate question that may spring to mind is: what if one thread modifies the set of file descriptors that need to be polled while another thread is doing event handling?
There are 2 situations in which this may happen.
libusb_open() will add another file descriptor to the poll set, therefore it is desirable to interrupt the event handler so that it restarts, picking up the new descriptor.
libusb_close() will remove a file descriptor from the poll set. There are all kinds of race conditions that could arise here, so it is important that nobody is doing event handling at this time.
libusb handles these issues internally, so application developers do not have to stop their event handlers while opening/closing devices. Here’s how it works, focusing on the libusb_close() situation first:
During initialization, libusb opens an internal pipe, and it adds the read end of this pipe to the set of file descriptors to be polled.
During libusb_close(), libusb writes some dummy data on this event pipe. This immediately interrupts the event handler. libusb also records internally that it is trying to interrupt event handlers for this high-priority event.
At this point, some of the functions described above start behaving differently:
libusb_event_handling_ok() starts returning 0, indicating that it is NOT OK for event handling to continue.
libusb_try_lock_events() starts returning 1, indicating that another thread holds the event handling lock, even if the lock is uncontended.
libusb_event_handler_active() starts returning 1, indicating that another thread is doing event handling, even if that is not true.
The above changes in behaviour result in the event handler stopping and giving up the events lock very quickly, giving the high-priority libusb_close() operation a “free ride” to acquire the events lock. All threads that are competing to do event handling become event waiters.
With the events lock held inside libusb_close(), libusb can safely remove a file descriptor from the poll set, in the knowledge that nobody is polling those descriptors or trying to access the poll set.
After obtaining the events lock, the close operation completes very quickly (usually a matter of milliseconds) and then immediately releases the events lock.
At the same time, the behaviour of libusb_event_handling_ok() and friends reverts to the original, documented behaviour.
The release of the events lock causes the threads that are waiting for events to be woken up and to start competing to become event handlers again. One of them will succeed; it will then re-obtain the list of poll descriptors, and USB I/O will then continue as normal.
libusb_open() is similar, and is actually a simpler case. Upon a call to libusb_open():
The device is opened and a file descriptor is added to the poll set.
libusb sends some dummy data on the event pipe, and records that it is trying to modify the poll descriptor set.
The event handler is interrupted, and the same behaviour change as for libusb_close() takes effect, causing all event handling threads to become event waiters.
The libusb_open() implementation takes its free ride to the events lock.
Happy that it has successfully paused the events handler, libusb_open() releases the events lock.
The event waiter threads are all woken up and compete to become event handlers again. The one that succeeds will obtain the list of poll descriptors again, which will include the addition of the new device.
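If your application caches the result of libusb_get_pollfds() rather than re-fetching it on every iteration, libusb can also notify you when the descriptor set changes via libusb_set_pollfd_notifiers(). A minimal sketch, in which the callback names and bodies are purely illustrative:

// Called by libusb whenever a descriptor is added to or removed from the
// poll set (for example during libusb_open()/libusb_close()).
static void my_fd_added(int fd, short events, void *user_data)
{
    // add fd, with the given poll events, to the application's cached set
}

static void my_fd_removed(int fd, void *user_data)
{
    // remove fd from the application's cached set
}

// somewhere during initialization:
libusb_set_pollfd_notifiers(ctx, my_fd_added, my_fd_removed, NULL);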
Closing remarks
The above may seem a little complicated, but hopefully I have made it clear why such complications are necessary. Also, do not forget that this only applies to applications that take libusb’s file descriptors and integrate them into their own polling loops.
You may decide that it is OK for your multi-threaded application to ignore some of the rules and locks detailed above, because you don’t think that two threads can ever be polling the descriptors at the same time. If that is the case, then that’s good news for you because you don’t have to worry. But be careful here; remember that the synchronous I/O functions do event handling internally. If you have one thread doing event handling in a loop (without implementing the rules and locking semantics documented above) and another trying to send a synchronous USB transfer, you will end up with two threads monitoring the same descriptors, and the above-described undesirable behaviour occurring. The solution is for your polling thread to play by the rules; the synchronous I/O functions do so, and this will result in them getting along in perfect harmony.
If you do have a dedicated thread doing event handling, it is perfectly legal for it to take the event handling lock for long periods of time. Any synchronous I/O functions you call from other threads will transparently fall back to the “event waiters” mechanism detailed above. The only consideration that your event handling thread must apply is the one related to libusb_event_handling_ok() : you must call this before every poll(), and give up the events lock if instructed.
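To make that last point concrete, here is a rough sketch of such a dedicated event handling thread, reusing the illustrative poll_libusb_fds() helper from earlier; do_exit is an application-defined shutdown flag and is not part of libusb.

static volatile int do_exit = 0; // application-defined shutdown flag

static void *event_thread_func(void *arg)
{
    libusb_context *ctx = arg;
    struct timeval zero_tv = { 0, 0 };

    libusb_lock_events(ctx);
    while (!do_exit) {
        if (!libusb_event_handling_ok(ctx)) {
            // libusb wants to interrupt us (e.g. a libusb_open() or
            // libusb_close() in another thread): give up the events lock so
            // that operation can take its "free ride", then re-acquire it.
            // A production implementation might instead fall back to the
            // try-lock/event-waiters pattern shown earlier.
            libusb_unlock_events(ctx);
            libusb_lock_events(ctx);
            continue;
        }
        if (poll_libusb_fds(ctx, 120*1000) > 0)
            libusb_handle_events_locked(ctx, &zero_tv);
    }
    libusb_unlock_events(ctx);
    return NULL;
}

While this thread holds the events lock, synchronous I/O calls made from other threads will simply wait via the event waiters mechanism described above.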