This is a very easy and cool introduction to the world of structured concurrency in 3 different programming languages: JavaScript, Python & Go

Introduction To Structured Concurrency In JavaScript, Python And Go

[PRIMER]: The 2 Models Of Concurrency

Threads are usually misunderstood. They are not all bad. They have simply been dealt a bad hand with pre-emptive scheduling and all its attendant consequences, e.g. locks and busy waits. Once threads are implemented using immutable variables, atomic message queues and cooperative scheduling, they behave much better. Sadly, there's little we can do about the overhead of state management for threads.

  1. Thread-based model of concurrency (e.g. well, Threads)
  2. Event-based model of concurrency (e.g. Actors + mailbox, Event Loop)

In 1995, a talented and creative computer scientist named John Ousterhout challenged the status quo (for good reason) with a paper titled: Why Threads Are A Bad Idea. This paper was ground-breaking when it was published because it highlighted the real issues with threads as they were perceived and experienced at the time by software engineers. Threads were hard to get right, especially around locks, and performance suffered when a lot of them were spawned in a very short period of time instead of being reused from a pool.

However, what John Ousterhout was really highlighting in his paper were the issues with the way threads were implemented at that time (in 1995 and before). He was complaining mostly about pthreads (POSIX threads). You see, pthreads had a one-to-one mapping with kernel threads and were often difficult to manage because they shared mutable state and relied heavily on locks.

He was also one of the first people to advocate for the use of the Event Loop in servers. The Event Loop was already being used for GUI threads by many commercial and non-commercial Operating Systems.

Yet, today, Go (the programming language) has done something remarkable with the thread model of concurrency along the lines of goroutines (a.k.a. threads with little context-switching overhead), channels (a.k.a. atomic message queues on steroids) and wait groups (a.k.a. well-abstracted thread joins). This clearly shows that the problem had been the implementation of threads all along and not the concept.

What is clear from what Go has done is that it has united the thread-based model of concurrency and the event-based model of concurrency under a single scheduling concept: cooperative scheduling, which side-steps most locking, waiting and other performance and synchronization issues.

[PRIMER]: Thread States

Threads can exist in one of 6 states at any time.

  1. NEW/IDLE (The thread has been created but not started - threading.Thread(target=funct) in Python, go funct() in Go, setTimeout(funct, 0) in JavaScript before its timer fires)
  2. READY/RUNNABLE (The thread is in the ready queue of the Operating System and has been started)
  3. BLOCKED (The thread is waiting for a mutex/semaphore lock to be released before it continues execution)
  4. TIMED_WAIT/SLEEP_WAIT (The thread is waiting for a specified period of time to pass before it continues execution)
  5. INDEFINITE_WAIT/BUSY_WAIT (The thread is probably starved of resources and is waiting indefinitely before it continues execution)
  6. RUNNING (The thread is located in the processor actively executing the task for which it was created)

Now, if you look closely at the list of possible states for a thread above, you'll find that threads are waiting most of the time. Plus, whenever a thread is waiting, it is also blocking (i.e. blocking further execution of code). This is the bad situation threads can get into, and the solution is to eliminate most of the reasons for threads to block.

[HISTORY]: NodeJS Callback Hell Era (Early days of "Non-Blocking I/O")

In the days when John Ousterhout wrote his paper, threads had 3 main issues:

  1. Shared mutable state
  2. Pre-emptive Scheduling
  3. Excessive Waiting/Blocking when too many locks and Thread.sleep(1000)-style APIs are used

In 2009, a cool programmer named Ryan Dahl set out to solve the blocking issue of threads by side-stepping threads entirely. He used the Event Loop, which had been in existence for a long time. The Event Loop was a very nifty idea/concept and had become very mainstream by 2005, even in the face of another paper (Why Events Are A Bad Idea) criticizing it 2 years earlier. The Event Loop solved the excessive blocking issues associated with threads by using something known as cooperative scheduling.

[HISTORY]: From Callback To Promises

If you were an active JavaScript engineer between the years 2010 and 2014/2015, you were familiar with what has come to be known as callback hell. Callback hell is the result of deeply nested callbacks, and it made code difficult to read and debug.
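A minimal sketch of the callback-hell shape makes the problem concrete. The step functions below are hypothetical stand-ins for real asynchronous I/O (database reads, HTTP calls); they invoke their callbacks synchronously only so the sketch is self-contained and runnable.

```javascript
// Hypothetical error-first callback APIs, Node.js style.
function getUser(id, cb) { cb(null, { id, name: 'ada' }); }
function getOrders(user, cb) { cb(null, [{ total: 40 }, { total: 2 }]); }
function sumTotals(orders, cb) { cb(null, orders.reduce((s, o) => s + o.total, 0)); }

let grandTotal;
// Each step nests inside the previous one, and every level must
// repeat the same error-handling branch -- the "pyramid of doom".
getUser(1, (err, user) => {
  if (err) throw err;
  getOrders(user, (err, orders) => {
    if (err) throw err;
    sumTotals(orders, (err, total) => {
      if (err) throw err;
      grandTotal = total; // → 42
    });
  });
});
```

Three steps already produce three levels of indentation and three copies of the error branch; real code with five or six dependent steps drifts steadily to the right.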

Concurrent APIs In JavaScript

  • setTimeout(...)
  • setInterval(...)
  • setImmediate(...) (non-standard; supported in Node.js but needs a polyfill in browsers)
  • process.nextTick(...) or queueMicrotask(...)
  • requestIdleCallback(...)

Concurrent APIs In Python

  • import asyncio, asyncio.run(...)
  • from threading import Thread, Thread(target=..., args=...)

Concurrent APIs In Go

  • go ...

Macrotasks and Microtasks

The Event Loop processes all tasks (i.e. functions passed to concurrent APIs and Promise APIs) in a specific order:

  • All synchronous tasks in the order they were written in the JavaScript source file
  • All asynchronous microtasks in the order they were queued (i.e. the order they were written in the JavaScript source file)
  • Any currently queued asynchronous macrotask

After all of the above 3 steps are done, the Event Loop ticks (i.e. it ends the current set of operations and resumes the loop from the start). On each turn of the loop, it picks one queued macrotask (e.g. a setTimeout(...) or setInterval(...) callback), runs it, and then drains any pending microtasks in order. In browsers, when a rendering opportunity arrives, the loop then runs requestAnimationFrame(...) callbacks and renders, and only if the frame finishes with idle time to spare does it run low-priority callbacks queued with requestIdleCallback(...). Then the loop continues on and on.

It is important to remember that microtasks do not start until all synchronous JavaScript has completely executed. So, if there's a long-running CPU-bound task still in progress, it will delay microtasks from running.

All JavaScript environments (i.e. Node.js and browsers) have a macrotask queue and a microtask queue. The microtask queue is used by web features such as MutationObserver, JavaScript language features such as (new Promise((res) => { })).then, queueMicrotask and the obsolete Object.observe, as well as Node.js features such as process.nextTick. Each go-around of the macrotask queue yields back to the event loop once all queued tasks have been processed, even if a macrotask queued more macrotasks. The microtask queue, however, continues executing queued microtasks until it is exhausted, including microtasks queued while it is being drained.
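The ordering rules above can be observed directly. In this sketch each task pushes a label into an array instead of logging, so the sequence can be read back afterwards; it runs as-is under Node.js or in a browser console.

```javascript
const order = [];

order.push('sync 1');

// Macrotask: queued on the timer (macrotask) queue.
setTimeout(() => order.push('macrotask'), 0);

// Microtasks: both land on the same FIFO microtask queue.
Promise.resolve().then(() => order.push('microtask 1'));
queueMicrotask(() => order.push('microtask 2'));

order.push('sync 2');

// Once all synchronous code finishes, the event loop drains the
// microtask queue, then runs the timer callback. Final order:
// sync 1, sync 2, microtask 1, microtask 2, macrotask
```

Note that while the synchronous portion of the script is still running, `order` contains only `'sync 1'` and `'sync 2'` — exactly the "microtasks do not start until all synchronous JavaScript has executed" rule stated above.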

setTimeout(...) and setInterval(...) Are Terrible APIs For Cooperative Scheduling

The setTimeout(...) and setInterval(...) APIs are among the most used APIs for scheduling tasks to run at a certain time in the future. However, they are inefficient and non-cooperative with the event loop. This is because setTimeout(...) does not always run at the specified timeout: the delay is a minimum, not a guarantee, nested timers are clamped (to roughly 4 ms in browsers), and a busy event loop can push the callback far past its scheduled time.
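A small measurement (assuming a Node.js or browser environment) shows that the delay passed to setTimeout(...) is a floor, not a schedule:

```javascript
// Resolves with how long a timer actually waited, in milliseconds.
function measureTimeout(delayMs) {
  return new Promise((resolve) => {
    const start = Date.now();
    setTimeout(() => resolve(Date.now() - start), delayMs);
  });
}

const measured = measureTimeout(50);
measured.then((elapsed) => {
  // `elapsed` is at least ~50 in practice, and often noticeably more
  // when other tasks keep the event loop busy before the timer fires.
});
```

Because the callback can only run when the event loop gets back around to the timer queue, any long synchronous task queued before it stretches the real delay well beyond the nominal one.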

requestIdleCallback(...) and queueMicrotask(...) Are Wonderful APIs For Cooperative Scheduling

The requestIdleCallback(...) and queueMicrotask(...) APIs are among the least used APIs for scheduling tasks on the event loop, yet they cooperate well with it: queueMicrotask(...) runs its callback at the next microtask checkpoint, and requestIdleCallback(...) runs its callback only when the browser has idle time left in a frame, handing the callback a deadline so the work can yield instead of blocking rendering.
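The cooperative pattern looks like this sketch: work through a large array only while the idle deadline says there is time left, then yield and resume on the next idle period. requestIdleCallback is a browser-only API, so the fallback shim below (an assumption, not part of any spec) lets the sketch also run under Node.js.

```javascript
// Use the real API in browsers; fake an always-generous deadline elsewhere.
const ric = (typeof requestIdleCallback === 'function')
  ? (cb) => requestIdleCallback(cb)
  : (cb) => setTimeout(() => cb({ timeRemaining: () => 16 }), 0);

// Processes `items` with `worker` in idle-time chunks; resolves with
// the number of items processed once the whole array is done.
function processInIdleTime(items, worker) {
  return new Promise((resolve) => {
    let i = 0;
    function chunk(deadline) {
      // Keep working only while there is idle time left in this frame.
      while (i < items.length && deadline.timeRemaining() > 0) {
        worker(items[i++]);
      }
      if (i < items.length) ric(chunk); // yield, continue on next idle period
      else resolve(i);
    }
    ric(chunk);
  });
}

const squares = [];
const done = processInIdleTime([1, 2, 3, 4, 5], (n) => squares.push(n * n));
```

The key design point is the `deadline.timeRemaining()` check: instead of hogging the event loop until the array is exhausted, the task voluntarily gives control back whenever the frame budget runs out.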

Promises Are Half Decent Yet async/await Makes Them Weird

Promises are a very nice way to structure asynchronous task resolution and are used primarily for that purpose.

But, they do come at a cost when they are excessively created.


Promises have 3 issues:

  1. Dissociation (a messed-up stack trace whenever errors occur)
  2. Bifurcation (the control flow for error handling and the happy path diverge repeatedly for each concurrent task)
  3. Disjointed Flow (nothing is available to .join() the concurrent task represented by the promise to the current lexical scope or main thread)

Actually, No. 3 leads back to No. 1 and round and round the problem goes.

The first 2 issues can be solved using some skillful programming, like I have done here using a tiny library called runn. runn is an abstraction over promises that makes working with promises much better without the need to write endless boilerplate, while completely removing the need to think about the difference between synchronous and asynchronous functions.

The 3rd and last issue (i.e. Disjointed Flow) is not solved by runn but by using async/await.
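A minimal sketch of that "join" behaviour, using a hypothetical fetchUser stand-in for any promise-returning task: await pulls the concurrent tasks back into the current lexical scope, so a single try/catch covers both the happy path and the error path instead of bifurcating into .then/.catch branches.

```javascript
// Hypothetical async task; any promise-returning function works here.
async function fetchUser(id) {
  if (id <= 0) throw new Error('bad id');
  return { id, name: `user-${id}` };
}

async function main() {
  try {
    // `await` "joins" the concurrent tasks back into this scope:
    // results land in local variables, errors land in the catch below.
    const [a, b] = await Promise.all([fetchUser(1), fetchUser(2)]);
    return [a.name, b.name];
  } catch (err) {
    return ['fallback'];
  }
}

const result = main(); // resolves with ['user-1', 'user-2']
```

Had either fetchUser call rejected, control would flow into the same catch block as a synchronous throw would, which is exactly the unification that bare promise chains lack.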

Have you seen Bob Nystrom's article about the colour of functions?

Why Bother With Structured Concurrency ?

Have you ever noticed that when you use a debugger (or a series of console.log(...) statements in JavaScript), a race condition (e.g. with a group of threads in Java/C#, or with concurrency APIs like setTimeout() in JavaScript) seems to resolve itself (i.e. goes away)? That happens because the debugger acts as a synchronization mechanism: it adjusts the timing, and therefore the order, in which instructions in threads (or within a setTimeout() callback in JavaScript) execute. This is called a thread contention flicker, and it makes debugging quite hard. A busy-wait loop (spin lock) can be the source of most thread contention issues.

Threading in Python does have a measure of ... when using thread.join() APIs.
