Mutex and Techniques for Resolving Resource Contentions Caused by Race Condition.

Issues

In programming, we sometimes face situations where we need to prevent a request from accessing a variable, file, or data structure that another request is holding. Allowing concurrent access to a shared resource can easily lead to subtle issues or even unexpected errors.

In a load-balanced environment—where we try to create multiple identical server instances to increase fault tolerance and handle multiple requests simultaneously—preventing resource contention becomes more difficult because each server operates independently, making it very challenging to coordinate them. Not to mention the performance and complexity issues when trying to establish a common communication channel.

So, is there a way to manage shared resources? This is where a term called Mutex comes into play. So what is Mutex? In what cases is it applied?

What is Race Condition?

A race condition occurs when two or more requests can access shared data and try to change it at the same time. Because the thread scheduler can switch between threads at any moment, you cannot predict the order in which threads will access the shared data. The result of the change therefore depends on the scheduling algorithm; in other words, the threads are "racing" to access and modify the data.
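The lost-update pattern can be sketched in a few lines of Node.js. This is a minimal illustration (not code from any library, and the names are made up): two concurrent tasks read a shared counter, yield at an async gap standing in for I/O, and then write back a stale value.

```javascript
// Two concurrent "requests" read the counter, hit an async gap, then write back.
// Both read the same initial value, so one increment is lost.
let counter = 0;

async function increment() {
  const current = counter;                    // 1. read the shared value
  await new Promise((r) => setImmediate(r));  // 2. async gap: control yields
  counter = current + 1;                      // 3. write back a stale result
}

async function demo() {
  counter = 0;
  await Promise.all([increment(), increment()]);
  return counter; // 1 instead of 2: one update was overwritten
}
```

Both calls read `counter` as 0 before either writes, so the final value is 1, not 2: exactly the "racing" behavior described above.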

For this reason, race conditions can cause unexpected errors, and the contention needs to be resolved. At a minimum, we must determine which request has the right to manipulate the data, and make the others either back off or wait for the current request to complete. That is why Mutex was created.

What is Mutex?

Mutex, short for Mutual Exclusion, is a mechanism designed to prevent race conditions. Its goal is that a thread should never access a resource while another executing thread is holding it.

The shared resource is a data object that two or more threads are trying to modify simultaneously. The Mutex algorithm ensures that if a process is preparing to modify a data object, no other process/thread is allowed to access or modify it until it completes and releases the object for other processes to continue.

When is Mutex Used?

How Mutex is implemented depends on the programming language or the tools being used.

Node.js has no explicit concept of a Mutex. You might have heard that Node.js is single-threaded, so where would resource contention come from? In reality, only one thread executes your JS code, but I/O tasks are mostly carried out by parallel threads, known as the Worker Pool, provided by libuv, so resource contention can still occur here.

Some libraries provide a Mutex implementation, such as async-mutex. Essentially, it marks a task as holding the resource for mutual exclusion and uses Promises to make the others wait until the resource is released (resolved). Everything works, and perhaps the biggest concern at this point is performance, because each request must wait for the previous one to release the resource. But let's pause for a moment: that only covers the case of a single server. What happens with multiple servers, in a distributed environment?
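The Promise-based waiting idea can be sketched as a tiny promise-chain mutex. This is a simplified illustration of the concept, not async-mutex's actual source (the library also offers semaphores, timeouts, and cancellation):

```javascript
// Each caller chains its task onto the tail of a promise queue, so tasks
// run strictly one after another: whoever is in the chain "holds" the lock.
class Mutex {
  constructor() {
    this._tail = Promise.resolve(); // the end of the waiting chain
  }
  runExclusive(task) {
    const result = this._tail.then(() => task());
    // Advance the tail; swallow errors so one failure doesn't poison the chain.
    this._tail = result.catch(() => {});
    return result;
  }
}

// Usage: a read-modify-write increment with an async gap is safe under the mutex,
// because the second task cannot start until the first one's promise settles.
const mutex = new Mutex();
let counter = 0;

async function safeIncrement() {
  await mutex.runExclusive(async () => {
    const current = counter;
    await new Promise((r) => setImmediate(r)); // async gap, now protected
    counter = current + 1;
  });
}
```

Two concurrent `safeIncrement()` calls now yield 2, since the second read only happens after the first write has completed.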

A distributed environment is one where multiple identical server instances are "replicated," and shared resources may reside on different servers. In this case, in-process libraries like async-mutex can no longer resolve the mutex in the conventional way, as they were not designed for the distributed case.

In fact, if the application only needs one server, then there is no need to worry about this distributed case. But who knows, one day it might grow significantly, and replicating them for load balancing is inevitable. Whether sooner or later, it remains an issue that needs to be addressed.

There are several ways to handle contention in distributed cases, each with its own advantages and disadvantages for specific scenarios. The simplest is to run a single server dedicated to processing the tasks that touch the shared resource. This model can be built with a message queue or a stream: every request that needs the resource is queued and handled sequentially. Handled this way, there are no conflicts anymore.
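The single-consumer idea can be sketched in-process: one drain loop pulls jobs off a FIFO queue, so tasks touching the shared resource never overlap. In a real deployment the queue would be an external broker or stream and the consumer a dedicated server; the names below are illustrative.

```javascript
// A FIFO of pending jobs; a single consumer loop processes them one at a time.
const queue = [];
let running = false;

function enqueue(task) {
  // Wrap the task so the caller still gets its result (or error) back.
  return new Promise((resolve, reject) => {
    queue.push({ task, resolve, reject });
    drain();
  });
}

async function drain() {
  if (running) return; // only one consumer ever runs
  running = true;
  while (queue.length > 0) {
    const { task, resolve, reject } = queue.shift();
    try {
      resolve(await task()); // strictly sequential: next task waits for this one
    } catch (err) {
      reject(err);
    }
  }
  running = false;
}
```

Since a single loop executes every job in arrival order, mutual exclusion falls out of the design with no explicit lock at all.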

However, it is not always possible to carve out the processing like that, and then we need another solution: for example, leveraging the speed of Redis as a coordination channel. Essentially, this method works by creating a key stored in Redis; whichever processing thread grabs the key first gains the right to access the resource, and after finishing, it releases the key for the next one.
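That Redis-style lock can be sketched as follows. To keep the sketch runnable, an in-memory stub stands in for Redis; a real implementation would use a Redis client's SET command with the NX and PX options, and a Lua script for the owner-checked delete. All names here are illustrative.

```javascript
// In-memory stub mimicking the Redis "SET key value NX PX ttl" semantics.
const store = new Map(); // key -> { value, expiresAt }

const fakeRedis = {
  // Returns 'OK' only if the key is absent (or expired), like SET ... NX PX.
  setNxPx(key, value, ttlMs) {
    const entry = store.get(key);
    if (entry && entry.expiresAt > Date.now()) return null;
    store.set(key, { value, expiresAt: Date.now() + ttlMs });
    return 'OK';
  },
  // Delete only if we still own the lock (real Redis needs a Lua script
  // to make this check-and-delete atomic).
  delIfOwner(key, value) {
    const entry = store.get(key);
    if (entry && entry.value === value) store.delete(key);
  },
};

async function withLock(key, ttlMs, task) {
  const ownerId = Math.random().toString(36).slice(2); // unique owner token
  // Spin until the lock is acquired; a real client would back off and time out.
  while (fakeRedis.setNxPx(key, ownerId, ttlMs) !== 'OK') {
    await new Promise((r) => setTimeout(r, 10));
  }
  try {
    return await task();
  } finally {
    fakeRedis.delIfOwner(key, ownerId); // release for the next waiter
  }
}
```

The TTL matters: if the holder crashes before releasing, the key expires and other processes are not blocked forever, which is exactly the fault-tolerance concern raised below.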

You can implement your own Mutex algorithm or use existing libraries. warlock or live-mutex are examples. While warlock uses Redis to create a connection between services, live-mutex implements its own connection system based on a client-server model. Generally, these libraries can meet demands to some extent. In an information system, the concerns of "reliability," "fault tolerance," and recovery from failures are always paramount.

Redis also documents Distributed Locks, implemented by an algorithm called Redlock that spans multiple Redis servers and aims at the two properties of "Safety" and "Liveness" for high reliability and fault tolerance in distributed environments.

Additionally, mutual exclusion also shows up in resolving data conflicts in services such as databases, where locks come into play. A lock grants exclusive access to a table or a row and prevents other queries from reading or modifying the data. This locking mechanism can be used to resolve race conditions, but keeping resources locked for extended periods is not very effective: it reduces performance and can even lead to deadlocks. At that point, it is necessary to balance the trade-offs or find a more appropriate solution.

Conclusion

In programming, especially in multithreaded data processing, it is essential to resolve resource contention issues. Contention can occur everywhere, whether in a single server or in a distributed environment. Mutex is one of the solutions to prevent this issue. Depending on the programming language and tools used, there are different implementations of the Mutex algorithm. You can create your own Mutex algorithm or use available libraries to save time while still achieving effectiveness.
