Improving Performance with Asynchronous Programming in Python
The way that computer processors have improved historically has been pretty direct and easy to measure. Clock speed, measured in cycles per second, or Hertz (Hz), has been increasing over the years, ever since Gordon Moore's famous observation in 1965. Over the past few decades, the clock speed of your personal hardware steadily increased every time you upgraded. A newly purchased system would be dramatically faster than the old one, and the way you knew it was the clock speed. It's all about the GHz!
It seems that we are nearing the end of that era. The breakdown of Dennard scaling and the associated slowdown in Moore's law are fascinating topics with dramatic implications for computing and computational finance. In short, clock speeds are no longer increasing at the rate they once did, and you should not expect the clock speed of your next system to be dramatically higher than that of your current one.
That doesn't mean that processors aren't improving. They are! But these days the improvements come mostly from higher core counts, which allow for more parallel processing. The code we write today will not simply run faster, but we have the opportunity to run more of it in parallel, and that presents challenges for us in the developer community.
Parallel processing is nothing new, of course. Traditionally, programs intended to run in parallel have been written using multi-threaded programming.
However, traditional multi-threaded programs are not without limitations, which can have a big impact on performance. Some of these are:
- Potential for livelock, deadlock, and race conditions
- Programs may not be deterministic
- Significant overhead associated with creating and destroying threads and with context switching
- Large memory overhead
- Increased overall code-complexity and difficulty in debugging
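The first two limitations are easy to demonstrate. The sketch below is a deliberately simplified, hypothetical example (the counter and thread count are illustrative, not from any real system): several threads increment a shared counter without a lock, and because the read-modify-write is not atomic, updates can be lost.

```python
import threading

# A shared counter incremented without a lock. The read-modify-write
# in `counter += 1` is not atomic, so concurrent threads can interleave
# and lose updates -- a classic race condition.
counter = 0

def increment(n: int) -> None:
    global counter
    for _ in range(n):
        counter += 1  # not atomic: load, add, store

threads = [threading.Thread(target=increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The final value can be less than 400_000 because of lost updates,
# and it can vary from run to run -- the program is non-deterministic.
print(counter)
```

Whether updates are actually lost on any given run depends on the interpreter's thread-switching behavior, which is exactly the point: the outcome is timing-dependent rather than deterministic.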
That brings me to Asyncio.
Asyncio was introduced to the Python ecosystem in Python 3.4 and has since seen rapid adoption, because it mitigates many of the limitations of classically multi-threaded programs.
The great thing about Asyncio is that it gives us the power to quickly and easily write single-threaded concurrent programs using "coroutines". Coroutines are like lightweight threads: they allow tasks to run concurrently, but within the context of a single thread.
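In modern Python, a coroutine is defined with `async def` and handed to an event loop to run. A minimal sketch (the `greet` function and its delay are illustrative):

```python
import asyncio

# A coroutine is defined with `async def`; calling it returns a
# coroutine object that does nothing until an event loop runs it.
async def greet(name: str) -> str:
    # `await` suspends this coroutine and yields control back to the
    # event loop, which is free to run other coroutines in the meantime.
    await asyncio.sleep(0.1)
    return f"Hello, {name}"

# asyncio.run() creates an event loop, runs the coroutine to
# completion, and closes the loop.
result = asyncio.run(greet("world"))
print(result)  # Hello, world
```

Note that `asyncio.run()` was added in Python 3.7; on older versions you would drive the loop directly with `loop.run_until_complete()`.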
One example might be if you need to make a number of long-running requests to a server. You can create a coroutine for each request and hand them to what's called an "event loop". When the loop runs, the first coroutine is started and the request is sent. Then, instead of waiting for a response and blocking the interpreter, the event loop begins the next coroutine, perhaps firing another request or doing some unrelated work. In some cases the result of one coroutine will be needed within another; there, the dependent coroutine simply awaits the first, and the event loop manages the work so that the code reads much like ordinary single-threaded code. The outcome is that you get improved performance without having to write (and debug) multi-threaded code.
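The server-request scenario above can be sketched as follows. Here `fetch` is a hypothetical stand-in for a real network call, with `asyncio.sleep()` simulating one second of latency, so the example is self-contained:

```python
import asyncio
import time

# Hypothetical stand-in for a long-running server request:
# asyncio.sleep() simulates one second of network latency.
async def fetch(request_id: int) -> str:
    await asyncio.sleep(1.0)
    return f"response {request_id}"

async def main() -> list:
    # gather() schedules all the coroutines on the event loop at once;
    # while one "request" is waiting, the loop starts the others.
    return await asyncio.gather(*(fetch(i) for i in range(5)))

start = time.perf_counter()
responses = asyncio.run(main())
elapsed = time.perf_counter() - start

# Five 1-second requests complete in roughly 1 second of wall time,
# not 5, because the waits overlap on a single thread.
print(responses, f"{elapsed:.1f}s")
```

`asyncio.gather()` also handles the dependency case mentioned above: it returns the results in order, so one coroutine can await the combined results of several others.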
When using the F3 Analytics Library, many of our clients prefer to interact with it via the F3 Python Toolkit. This uses the Asyncio paradigm and enables complex calculations, such as portfolio VaR and scenario analysis, to leverage the full resources of a computational grid without the need to write multi-threaded code.
I mentioned the drawbacks of traditional multi-threading above; let's discuss how Asyncio mitigates each of them.
- Livelock, deadlock, and race conditions – because the code runs in a single thread, these are largely eliminated
- Programs are deterministic!
- Coroutines are much lighter than traditional threads in terms of creation and switching overhead
- Coroutines use much less memory than traditional threads
- While the code complexity may be slightly higher than for single-threaded code, I would argue it's simpler and more intuitive than traditionally multi-threaded code
The Asyncio style of programming is a great way to improve performance and achieve scalability through concurrency, while maintaining a simple, elegant code base.
If you would like to learn more about how FINCAD uses Asyncio within F3, don't hesitate to contact us.