The Other Reason for Node.js Multithreading
Good for more than CPU-bound workloads
Read just about any article on Node.js multithreading and you will see a statement approximately like this: “Node.js multithreading is for handling synchronous workloads that would otherwise block the event loop, such as AI inference, which is CPU-bound. As for I/O-intensive workloads, expect little improvement; async I/O is best for that.”
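For readers who want to see that conventional use case in code, here is a minimal sketch of it; the file name, the Fibonacci workload, and the numbers are my own illustrative choices, standing in for whatever CPU-bound work you actually have:

```js
// offload.js — the well-known use case: move CPU-bound work off the main thread.
const { Worker, isMainThread, parentPort, workerData } = require("node:worker_threads");

// Deliberately slow, CPU-bound function standing in for something like inference.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

if (isMainThread) {
  const worker = new Worker(__filename, { workerData: 40 });
  worker.on("message", (result) => console.log("fib(40) =", result));
  worker.on("error", (err) => console.error(err));

  // The event loop stays free to serve other work while the worker grinds away.
  const tick = setInterval(() => console.log("event loop still responsive"), 250);
  worker.on("exit", () => clearInterval(tick));
} else {
  parentPort.postMessage(fib(workerData));
}
```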
The problem with statements like this is that they imply there’s no other reason to use threads. That isn’t true: there is another reason. It’s also true, however, that tools for taking advantage of it are scarce, if not absent altogether. Look at most thread pools out there and you’ll see they are designed for the well-known use case above; they cannot serve the other, less well-known purpose. Have you guessed it yet? Here’s a hint: while threads won’t make us faster, they will help us do more. The answer is scale.
We can use threads to scale out predictably and programmatically, provided our thread pool supports concurrency. That is, the pool handles not only synchronous workloads, where per-thread concurrency is 1, but also asynchronous workloads, where concurrency is greater than 1. What’s more, these concurrent tasks can leverage memory shared both within and across threads.
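To make that concrete, here is a sketch, under my own assumptions about the message shape and the task (a timed delay standing in for real async I/O), of a single worker that runs several asynchronous tasks at once and publishes its in-flight count through a SharedArrayBuffer visible to the main thread:

```js
// concurrent-worker.js — a sketch, not a production pool: one worker runs many
// asynchronous tasks concurrently and exposes its in-flight count via shared memory.
const { Worker, isMainThread, parentPort, workerData } = require("node:worker_threads");

if (isMainThread) {
  // One shared Int32 slot: how many tasks the worker currently has in flight.
  const shared = new SharedArrayBuffer(4);
  const inFlight = new Int32Array(shared);

  const worker = new Worker(__filename, { workerData: shared });
  worker.on("message", (id) => console.log("task done:", id));

  // Send several I/O-style tasks to the SAME worker; it awaits them concurrently
  // rather than one at a time, so they overlap.
  for (let id = 0; id < 5; id++) worker.postMessage({ id, delayMs: 100 + id * 50 });

  // The main thread can observe the worker's load without exchanging messages.
  const probe = setInterval(
    () => console.log("in flight:", Atomics.load(inFlight, 0)),
    50
  );
  setTimeout(() => { clearInterval(probe); worker.terminate(); }, 800);
} else {
  const inFlight = new Int32Array(workerData);
  const { setTimeout: sleep } = require("node:timers/promises");

  parentPort.on("message", async ({ id, delayMs }) => {
    Atomics.add(inFlight, 0, 1);
    await sleep(delayMs); // stand-in for real asynchronous I/O (DB call, HTTP request, ...)
    Atomics.sub(inFlight, 0, 1);
    parentPort.postMessage(id);
  });
}
```

Because the count lives in shared memory, the main thread can poll a worker’s load without a message round trip, which becomes useful when deciding where to send the next task.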
But how do we allocate work to this pool? When is a thread considered too busy? Good questions. There happens to be a microservices framework that uses a concurrent thread pool; unfortunately, its source is proprietary at the moment, but the goal is to give code that is incidental to the application’s true purpose, such as this thread pool, back to the community. In the meantime, it might make sense to drum up support for this untapped capability.
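Absent that framework’s source, here is one plausible answer to those two questions, purely my own sketch and not its actual strategy: track per-worker in-flight counts and route each task to the least-loaded worker, treating a worker as “too busy” once it reaches a configurable concurrency cap. The file name ./task-worker.js and the cap of 8 are hypothetical.

```js
// least-busy-dispatch.js — a hypothetical dispatcher for a concurrent thread pool.
// Assumes ./task-worker.js posts exactly one message back per completed task.
const { Worker } = require("node:worker_threads");
const os = require("node:os");

const MAX_CONCURRENCY = 8; // per-worker cap; beyond this a thread is "too busy" (assumed)

const pool = [];
for (let i = 0; i < os.cpus().length; i++) {
  const worker = new Worker("./task-worker.js");
  const entry = { worker, inFlight: 0 };
  worker.on("message", () => { entry.inFlight--; }); // one reply == one finished task
  pool.push(entry);
}

function dispatch(task) {
  // Route to the worker with the fewest tasks in flight.
  const target = pool.reduce((a, b) => (a.inFlight <= b.inFlight ? a : b));
  if (target.inFlight >= MAX_CONCURRENCY) {
    return false; // every thread is saturated; caller should queue, retry, or shed load
  }
  target.inFlight++;
  target.worker.postMessage(task);
  return true;
}

// Usage: dispatch({ id: 1, payload: "..." });
```

Combined with the shared counters from the previous sketch, the same decision could instead be made from load figures the workers publish themselves, without any bookkeeping on the main thread.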