---
title: Three more ways to think about asynchronous loops
date: 2026-04-19
tags: [code, async, python]
description: I created a coroutine-only, unix-only, stateless alternative to asyncio.
---

At the core of async programming is the idea that you move all I/O to a central main loop that might look something like this:

```python
def main():
    while True:
        events = perform_blocking_io()
        handle_events(events)
```

While the general structure is always the same, many of the details can be done differently. I have written about [my thoughts on async programming](https://blog.ce9e.org/posts/2023-01-29-python-async-loops/), specifically Python's asyncio, before. Recently I sat down and did my own [toy implementation](https://github.com/xi/xiio). As always, this was a valuable experience and I learned a lot. In this post, I want to share three distinguishing features that I found helpful for categorizing async loops.

## Callbacks vs. Suspended Coroutines

In my previous post I concentrated on the difference between callbacks and suspended coroutines (async/await). The main benefit of suspended coroutines is that error handling and resource cleanup just work. Consider this example:

```python
async def process_file(path):
    with open(path) as fh:
        content = await read(fh, 1024)
        ...
```

If there is any error during the blocking I/O phase, it gets re-raised at the suspension point, and the file gets closed. Proper cleanup is much harder to do with callbacks.

The downside is that many languages lack syntax for suspending execution. If a language does support both, it is not that hard to convert one approach into the other: you can call a callback when a coroutine is resumed, and you can resume a suspended coroutine from a callback. Still, idiomatic code for a specific async loop will favor one approach or the other.

## Number of Primitives

I built my toy implementation around the [selectors](https://docs.python.org/3/library/selectors.html) module.
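The basic usage looks roughly like this (a minimal sketch using a pipe, not code from my implementation):

```python
import os
import selectors

# Wait until a file descriptor becomes readable, then read from it.
r, w = os.pipe()
sel = selectors.DefaultSelector()
sel.register(r, selectors.EVENT_READ)

os.write(w, b"hello")  # make the read end of the pipe ready

# select() blocks until at least one registered fd is ready
# (or the timeout expires) and returns (key, mask) pairs.
for key, mask in sel.select(timeout=1):
    data = os.read(key.fd, 1024)
    print(data)  # b'hello'
```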
The idea is that all blocking I/O can be expressed as waiting for a *file descriptor* to be ready. After that, you can do different things with that file descriptor, for example reading a file or receiving from a socket.

Not every action can be expressed this way though. The workaround in that case is to perform the action in a separate thread and then write to a self-pipe. This has some overhead, but it keeps the loop itself simple.

However, halfway through development I learned that selectors are not really a thing on Windows. While Unix has one syscall to wait for resources and then many different syscalls to use them, Windows has a system called [IOCP](https://learn.microsoft.com/en-us/windows/win32/fileio/i-o-completion-ports) that does not have this single function for waiting. Instead, it has many different functions that each wait for a resource and use it in the same step.

The proper cross-platform abstraction is probably to expose a large set of primitives. This might also allow for better abstractions for those cases on Unix that cannot directly be expressed as file descriptors. In my case, I was quite proud of my small API surface and didn't want to rewrite everything from scratch, so I just declared my implementation Unix-only.

## Stateful vs. Stateless Loop

The big decision I made for my implementation was that I wanted to have a *stateless* loop. Consider this JavaScript snippet:

```js
setTimeout(myCallback, 10);
```

This only works because there is a global main loop that is used implicitly. It is *stateful* because it needs to store which callbacks are registered and under which conditions they should be executed. You can execute multiple functions concurrently by registering them all on the main loop.
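To make the statefulness concrete, here is a toy timer loop (a hypothetical sketch, not how any real loop is implemented): the registry of pending callbacks *is* the loop's state.

```python
import heapq
import itertools
import time

_timers = []  # heap of (deadline, tiebreak, callback) -- the loop's state
_counter = itertools.count()  # tiebreak so equal deadlines never compare callbacks

def call_later(delay, callback):
    # Registering a callback mutates the loop's global state.
    heapq.heappush(_timers, (time.monotonic() + delay, next(_counter), callback))

def run():
    # The loop drains its state, sleeping until each deadline.
    while _timers:
        deadline, _, callback = heapq.heappop(_timers)
        time.sleep(max(0, deadline - time.monotonic()))
        callback()

results = []
call_later(0.02, lambda: results.append("second"))
call_later(0.01, lambda: results.append("first"))
run()
print(results)  # ['first', 'second']
```

Both registered functions make progress "concurrently" only because the shared loop knows about both of them.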
The same structure is also used in asyncio:

```python
loop = asyncio.get_running_loop()
loop.call_later(10, my_callback)
```

With a stateless loop, on the other hand, each callback must explicitly return the next callback as well as the I/O instructions that need to be performed:

```python
def main():
    callback, io = initialize()
    while True:
        result = perform_blocking_io(io)
        callback, io = callback(result)
```

An interesting aspect of this approach is that the main loop is only responsible for performing I/O and then returning control to the application code. Concurrency is implemented separately by specific functions like `gather()`. In my mind, this is obviously the much cleaner approach. On the other hand, the stateful approach also has some benefits: it provides a natural place to handle additional global state like signal handlers or a central self-pipe. So I understand why other async loops prefer it.

## Conclusion

I feel like asyncio has taken the messy approach every step of the way: it supports both callbacks and suspended coroutines, has a large set of primitives to abstract over platform differences, and uses a stateful loop. My toy implementation is a radical counterexample: it is coroutine-only, unix-only, and stateless, with a small core and modular abstractions on top.

This experiment taught me a lot about the design of async loops. But it was ultimately about the lessons, not the product. Those insights now inform how I use asyncio in a more mindful way.