Ayesha Sahar

Originally published at ayeshasahar.hashnode.dev

Everything you need to know about Parallelism, Threading, and Multi-threading in Python

Introduction

I guess everyone who opened this article has been coding for quite a while now. You've probably already come across use cases where you wanted to speed up specific operations in some parts of your code, right? Well, you're at the right place!

Today, we'll talk about how to perform multiple tasks at the same time to speed up your code.

P.S: I'll be explaining while implementing in Python :)


Parallelism

In data science, we deal with large amounts of data and extract useful insights from it. Most of the time, our operations on the data are easily "parallelizable". This means that different processing agents can each run the operation on a separate piece of the data at the same time, and the partial results can be combined at the end to get the complete result.

If that doesn't quite make sense yet, let's look at a real-world example of parallelizability to understand it.

For example, say it's your friend's wedding. She wants you to bake a 3-tier cake, but you don't have a lot of time. On your own, you would have to bake each tier one by one. But if two other friends help you, each of you can bake one tier. In the latter case, each of you is working in parallel on a part of the whole task, which reduces the total time required to complete it.

In Python, this kind of parallel processing can be achieved through multiprocessing and threading.


Threading

A thread is a lightweight unit of execution within a process. Now, you might be wondering: what is a process? I'll explain it here since you will be seeing this word a lot!

A process is basically an instance of a computer program being executed. Every single process has its own memory space, which it uses to store the instructions being run, as well as any data it needs to store and access in order to execute.

Let's move back to our topic, which is threading. Threading is a method of adding concurrency to your programs. For example, if your Python code uses multiple threads and you look at the processes running on your OS, you will only see a single entry for your script, even though it is running multiple threads.
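If you want to check this yourself, here's a minimal sketch (standard library only; the function name is just for illustration) where every thread reports the same process ID but a different thread ID:

import os
import threading

def show_ids():
    # All threads print the same process ID but different thread IDs,
    # because they all live inside one process
    print(f"process id: {os.getpid()}, thread id: {threading.get_ident()}")

threads = [threading.Thread(target=show_ids) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()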

In fact, a Python process cannot run threads in parallel. But it can run them concurrently through context switching during I/O-bound operations.

This limitation is enforced by the GIL. The GIL, or Python Global Interpreter Lock, prevents threads within the same process from executing at the same time. It exists because Python's interpreter is not thread-safe, and it is enforced every time a thread attempts to access Python objects. At any given time, only one thread can hold the lock and run Python code.

Example:

Consider this single-threaded version of a simple task:

from time import sleep, perf_counter

def Task():
    print("Starting this task.......")
    sleep(1)
    print("Task Completed!!!")


start_time = perf_counter()

Task()
Task()

end_time = perf_counter()

print(f'It took {end_time - start_time: 0.2f} second(s) for the task to complete!')

This is the output:

#Starting this task.......
#Task Completed!!!
#Starting this task.......
#Task Completed!!!
#It took  2.02 second(s) for the task to complete!

Explanation:

As expected, the program took about two seconds to complete. If we called the Task() function 10 times, it would take about 10 seconds to complete.

Let's break down what exactly happened! First, the Task() function executes and sleeps for one second. Then it executes a second time and sleeps for another second. Finally, the program execution is completed.

When our Task() function calls the sleep() function, the CPU is idle. In other words, the CPU doesn’t do anything. This is not efficient in terms of resource utilization.

Our program has one process with a single thread, which is our main thread. Since it has only one thread, it’s called a single-threaded program.


Multi-threading

By formal definition, multithreading refers to the ability of a processor to execute multiple threads concurrently, where all the threads run within the same process. It is quite useful for I/O-bound work, such as reading files or talking to a network or database, since each thread can wait on its own I/O while the others keep running.

But using it for CPU-bound work can actually slow down performance, because the GIL ensures that only one thread can execute at a time, and extra overhead is incurred in managing multiple threads.

Let's take a look at an example to understand what it is and how we can create a multithreaded application!

Example

In order to create a multi-threaded program, you need to use the Python threading module.

from time import sleep, perf_counter
from threading import Thread


def Task():
    print("Starting this task.......")
    sleep(1)
    print("Task Completed!!!")


start_time = perf_counter()

# create two new threads
t1 = Thread(target=Task)
t2 = Thread(target=Task)

# start the threads
t1.start()
t2.start()

# wait for the threads to complete
t1.join()
t2.join()

end_time = perf_counter()

print(f'It took {end_time - start_time: 0.2f} second(s) for the task to complete!')

Output:

Starting this task.......
Starting this task.......
Task Completed!!!
Task Completed!!!
It took  1.00 second(s) for the task to complete!

Explanation:

When we execute the program, there will be three threads: the main thread created by the Python interpreter, plus the two threads created by the program.

As the output shows, the program took one second instead of two to complete.
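Keep in mind that this speedup comes from the threads overlapping their sleep() calls, which stand in for I/O waiting. For CPU-bound work, the GIL gets in the way. As a rough, illustrative sketch (the count_down function and the loop size are made up, and timings will vary by machine), two threads doing pure-Python computation take about as long as doing the work twice sequentially:

from time import perf_counter
from threading import Thread

def count_down(n=10_000_000):
    # A pure-Python, CPU-bound loop: the GIL prevents two threads
    # from running this bytecode truly in parallel
    while n > 0:
        n -= 1

start = perf_counter()
t1 = Thread(target=count_down)
t2 = Thread(target=count_down)
t1.start()
t2.start()
t1.join()
t2.join()
print(f'Two CPU-bound threads took {perf_counter() - start:0.2f} second(s)')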


A brief intro to Multiprocessing

Multiprocessing refers to the ability of a system to run multiple processes in parallel across multiple processors (or cores), where each process can run one or more threads.

It is useful for CPU-bound work, such as computationally heavy tasks, since it benefits from having multiple processors, pretty similar to how multicore computers work faster than computers with a single core. Multiprocessing leads to higher CPU utilization because multiple CPU cores are being used by the program, which is kinda expected.
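As a rough sketch of what this looks like in code (using the standard-library multiprocessing module; the pool size and the square function are placeholders for illustration), you can spread CPU-bound work across a pool of worker processes like this:

from multiprocessing import Pool

def square(n):
    # Each worker process computes its chunk independently,
    # on its own CPU core if one is available
    return n * n

if __name__ == '__main__':          # required guard when new processes are spawned
    with Pool(processes=4) as pool:
        results = pool.map(square, range(10))
    print(results)                  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]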


How are Python Multi-threading and Multiprocessing Related? 👀

Multithreading and multiprocessing both allow our Python code to run concurrently. But only one of them, multiprocessing, will allow your code to be truly parallel. That said, if your code is I/O-heavy (like making HTTP requests), then multithreading will still probably speed up your code.
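For instance, here's a sketch of overlapping several network requests with a thread pool from the standard-library concurrent.futures module (the URLs are just placeholders, and the actual speedup depends on your network):

from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Placeholder URLs, purely for illustration
urls = [
    'https://example.com',
    'https://example.org',
    'https://example.net',
]

def fetch(url):
    # The thread spends most of its time waiting on the network,
    # so other threads can run while it blocks
    with urlopen(url, timeout=10) as response:
        return url, len(response.read())

with ThreadPoolExecutor(max_workers=3) as executor:
    for url, size in executor.map(fetch, urls):
        print(f'{url}: {size} bytes')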


Why Multithreading Is Not Always "The Solution"

Everything has some pros and cons. Just like that, multithreading also has some disadvantages that you really shouldn't ignore. For example:

👉 You really DON'T want to use it for basic tasks because there is overhead associated with managing threads.

👉 It increases the complexity of the program, and a more complex program is harder to debug!

👉 Multithreaded applications are not easy to write, so only experienced programmers should attempt them. This is definitely NOT for beginners.

👉 Managing concurrency among threads is very difficult. It also has the potential to introduce new bugs, or as we might call them, "features😂", into an application.


Why Parallel Computing or Parallelism Is Not Always "The Solution"

Adding parallelism to a program is not always useful. Here are some pitfalls that you NEED to be aware of:

Livelock:

It occurs when threads keep running in a loop but don't make any progress. Common causes include:

  • poor design

  • improper use of mutex locks

Starvation:

It occurs when a thread is denied access to a particular resource for long periods of time, so the overall program slows down. This situation might be an unintended side effect of a poorly designed thread-scheduling algorithm.

Race Condition:

As mentioned before, threads within a process share a memory space, so they can access shared variables. A race condition occurs when multiple threads try to change the same variable simultaneously. The thread scheduler can arbitrarily swap between threads, so we have no way of knowing the order in which the threads will try to change the data. This can result in incorrect behavior in either of the threads, especially if a thread decides to do something based on the value of the variable. To prevent this, a mutual exclusion (or mutex) lock can be placed around the piece of code that modifies the variable. If this is done, then only one thread can write to the variable at a time.
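Here's a small sketch of that idea (the counter, the loop size, and the thread count are arbitrary): two threads incrementing a shared counter can lose updates without a lock, while wrapping the increment in a threading.Lock keeps the result correct. (On recent CPython versions the unlocked version may still happen to give the right answer, but it is not guaranteed.)

from threading import Thread, Lock

counter = 0
lock = Lock()

def safe_increment(n=100_000):
    global counter
    for _ in range(n):
        with lock:          # only one thread at a time can do the
            counter += 1    # read-modify-write on the shared counter

threads = [Thread(target=safe_increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 200000 with the lock; without it, updates can be lost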

Deadlock:

Overuse of anything is not good. Just like that, overusing mutex locks has a downside too: it can introduce deadlocks into the program! A deadlock is basically a state where one thread is waiting for another thread to release a lock, but that other thread needs a resource held by the first thread in order to finish. Both threads come to a standstill and the program stops making progress. We can think of deadlock as an extreme case of starvation. To avoid this situation, don't introduce too many locks that are interdependent.
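As a sketch of the classic two-lock deadlock and its usual fix (the lock and worker names here are made up): if one thread grabbed lock_a and then waited for lock_b while another grabbed lock_b and waited for lock_a, both would stall forever. The fix is to make every thread acquire the locks in the same order:

from threading import Thread, Lock

lock_a = Lock()
lock_b = Lock()

def worker_1():
    with lock_a:            # both workers take lock_a first,
        with lock_b:        # then lock_b: no circular waiting
            print('worker 1 holds both locks')

def worker_2():
    with lock_a:            # same acquisition order as worker_1
        with lock_b:
            print('worker 2 holds both locks')

t1 = Thread(target=worker_1)
t2 = Thread(target=worker_2)
t1.start()
t2.start()
t1.join()
t2.join()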


So, What Exactly Should You Use?🤔

  • Multithreading is your best friend because of its low overhead if your code has a lot of I/O or network usage

  • Multithreading should be used if you have a GUI so your UI thread doesn't get locked up

  • Use multiprocessing if your code is CPU bound. (But only if your machine has multiple cores)

Anyway, just use whichever you think would work best for your code!


Conclusion

These OS-related concepts are pretty advanced but extremely important if you wanna be a pro developer. Take your time to go through them. No need to fuss if you don't understand them on the first go. Even pro programmers have difficulty implementing them.

So, take your time, use multiple resources if needed, and practice!


Let's connect!

Twitter

Github
