Transforming the Blog into "Web is on the edge"


The Problem

Hello readers, it's been a week since my last update. As before, I've completed the plan I set out, and today I'm here to show you what it is.

When people say "Web is on the edge," they mean that your website or application is hosted simultaneously on multiple servers in various locations around the world. When someone requests your website/application, the request is served by the geographically nearest server. These distributed servers not only serve static content but also execute custom code, enabling dynamic web applications.

Since the creation of this blog, I have had in mind a traditional deployment approach: hosting everything on a single server and paying to maintain it. Over the years, I have spent money on several different providers, my favorite being Digital Ocean (DO). Along the way, I sometimes had to move everything to a new home through trial and error: if a provider was good, I stayed; if not, I looked for a new one. That's why I chose Docker to support deployment. With Docker, all I had to do to move was install it, "pull" the code, run a single command, and all the services would start. Then I would update the domain's DNS to the new IP address, and that was it.

In recent months, I have changed my working environment. Through many conversations with the CTO, I have been exposed to many new ideas: things I thought I knew but had actually overlooked because I didn't realize the true benefits they bring. Among them are "Edge" and "Serverless".

You might remember that I wrote about transferring my domain to Cloudflare (CF) to mitigate DDoS attacks. Last week, or technically four weeks ago, I completed the migration of the API to Serverless. Now all of my services are "on the edge of the internet". I don't know exactly where they run, but one thing is certain: I can confidently shut down my servers on DO.

I will recount the process of migrating my API and other services to Serverless, as this topic may inspire some of my future articles. I hope you, the readers, will find it interesting.

Preparation and Evaluation Before Migration

Of course, before migrating, we need to evaluate what we have and what needs to be done next.

A quick overview of the previous system architecture: it consisted of multiple services, namely api, blog, admin, markdown, background, image, and docker-sdk. Specifically, api, blog, and admin were the API server, the blog interface, and the administration interface, respectively. markdown was the service that parsed Markdown into HTML, which I wrote about in the article I had to separate a markdown service from my API. background ran some statistical jobs on a schedule. image was a server that stored images on local disk, and docker-sdk served CI/CD.

Previously, I had already moved from DigitalOcean to Cloudflare Pages. Migrating the API to Cloudflare Workers (Worker), a form of Serverless, took more time than expected for the following reasons:

  • First, the current API was written in Go. Worker does not support Go directly, but it does support WebAssembly. I could have gone through the process of compiling Go to Wasm, but that route looked difficult and time-consuming to learn. Therefore, I chose the faster option of rewriting the entire API in JavaScript.
  • Second, the previous database was Redisearch. Worker did not yet support direct TCP connections to a Redis server. I could have switched to Upstash, a platform offering a free Redis server that Worker can talk to, but it lacked module support. This meant I couldn't enable the redisearch and redisjson modules, so that path was a dead end. That's why I decided to switch the database to PostgreSQL.

You might remember why I used Redisearch; if not, you can refer back to the article What is Redisearch? 2coffee.dev uses redisearch as a database!. Redisearch makes full-text search more powerful. However, after further research, I found that PostgreSQL also supports full-text search. I checked and found that it fully meets my search requirements, so there was no reason to dismiss the idea.
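To make that concrete, here is a minimal sketch of the kind of query that can stand in for Redisearch's full-text search. The posts table and its title/content columns are assumptions of mine, and client can be any PostgreSQL client exposing a query method (such as the pg package mentioned below):

// Match rows whose title or content contains the searched words.
// to_tsvector normalizes the text; websearch_to_tsquery parses the
// user's keyword string ("simple" skips language-specific stemming).
const searchPosts = (client, keyword) =>
  client.query(
    `SELECT id, title
       FROM posts
      WHERE to_tsvector('simple', title || ' ' || content)
            @@ websearch_to_tsquery('simple', $1)`,
    [keyword]
  );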

Supabase is a platform that offers a free PostgreSQL server with certain limitations. After researching, I found that it includes all the features of a real PostgreSQL server, and none of the limits were significant enough to worry about. With the storage question resolved, I used the pg package to connect to this database.
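As a sketch of what that setup looks like in a Worker (the DATABASE_URL variable name is an assumption, and running pg inside a Worker relies on Cloudflare's Node.js compatibility being enabled):

import pg from "pg";

export default {
  async fetch(request, env) {
    // Workers don't keep state between requests, so a small pool
    // is created per request and closed when we're done.
    const pool = new pg.Pool({ connectionString: env.DATABASE_URL });
    const { rows } = await pool.query("SELECT now() AS now");
    await pool.end();
    return new Response(JSON.stringify(rows[0]));
  },
};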

Cloudflare provides the R2 service, which is similar to Amazon S3 object storage: a store for all types of data such as images, videos, and files. Notably, R2 offers 1 million free requests, so I can take advantage of it to store images for the image service instead of keeping them on the server's local disk as before.

Cloudflare Worker can do many things, including Cron Triggers, which run a job at a preset time, much like the cron job facilities in many programming languages. With Cron, I can replace the background service, which originally used cron to run some statistical jobs.

The remaining services, markdown and docker-sdk, could be removed entirely. Since the API switched to JavaScript, I can use the showdown package to convert Markdown to HTML directly inside the API (see the sketch below). The previous CI/CD setup was also not suitable for Worker, so docker-sdk went away as well.
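showdown's API is pleasantly small; converting a post looks roughly like this:

import showdown from "showdown";

const converter = new showdown.Converter();

// Turn the stored Markdown body into HTML for the response
const html = converter.makeHtml("# Hello\n\nWritten in **Markdown**.");
// -> roughly: <h1 id="hello">Hello</h1> <p>Written in <strong>Markdown</strong>.</p>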

Now, my stack only consists of api, page, admin, image, and background.

The Execution Steps

First, I needed to rewrite the api service in JavaScript. Cloudflare Worker uses the same V8 engine as Chrome, so plain JS code runs well. Why JS and not Node.js? Node is also built on V8, but it adds many APIs of its own (file system, networking, and so on) that Worker cannot run. In other words, while Worker may not be able to run many Node-specific npm packages, it can still run pure-JS npm packages or packages that support browser usage.

If you read the Worker documentation, you may notice that the starting code snippet looks like this:

export default {
  async fetch(request) {
    const data = {
      hello: "world",
    };

    const json = JSON.stringify(data, null, 2);

    return new Response(json, {
      headers: {
        "content-type": "application/json;charset=UTF-8",
      },
    });
  },
};

The code above returns the JSON object { "hello": "world" }, which is a far cry from a traditional express.js server. That's why we need an express-like library for Worker to make writing CRUD operations easier.

hono.dev is a small, simple, and fast library for "Edge" web applications. Hono's design is closer to koa.js than to express.js, and it supports Routing, Middleware, and Adapters.
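For illustration, here is a minimal Hono app with a couple of routes (the route paths are made up):

import { Hono } from "hono";

const app = new Hono();

// Routing reads much like koa/express
app.get("/posts", (c) => c.json({ posts: [] }));

app.post("/posts", async (c) => {
  const body = await c.req.json();
  return c.json(body, 201);
});

// The exported app doubles as the Worker's fetch handler
export default app;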

While rewriting the api, I tried to keep the responses as close to the old ones as possible to minimize changes in page and admin. Even so, some small changes were unavoidable and required modifying both, which added time.

R2 also posed a challenge during this process. Although it is similar to S3, I had only heard about S3 and had never actually worked with it. After a few days of research, I understood its concepts and created another Worker to save and retrieve images from R2. Put simply, R2 is a pure storage system exposing APIs to add, edit, and delete files; all we need to do is call the appropriate APIs to store and fetch the files we want.
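A minimal sketch of such a Worker, assuming an R2 bucket bound to the Worker as IMAGES (the binding name is my own):

export default {
  async fetch(request, env) {
    // Use the URL path (minus the leading slash) as the object key
    const key = new URL(request.url).pathname.slice(1);

    if (request.method === "PUT") {
      // Store the uploaded body in R2 under that key
      await env.IMAGES.put(key, request.body);
      return new Response(`Stored ${key}`);
    }

    // Otherwise read the object back out of the bucket
    const object = await env.IMAGES.get(key);
    if (object === null) {
      return new Response("Not found", { status: 404 });
    }
    return new Response(object.body);
  },
};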

To run statistical jobs as before, I created another Worker and used Cron.
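A Worker with a Cron Trigger exposes a scheduled handler instead of (or alongside) fetch, while the schedule itself lives in the Worker's configuration. A sketch, where runStatsJob is a stand-in for the real statistics code:

export default {
  // Invoked by Cloudflare on the schedule configured for this Worker,
  // e.g. "0 0 * * *" for once a day at midnight
  async scheduled(event, env, ctx) {
    // waitUntil keeps the Worker alive until the job finishes
    ctx.waitUntil(runStatsJob(env));
  },
};

async function runStatsJob(env) {
  // ...query the database and write the aggregated statistics
}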

Finally, I migrated the data from Redisearch to PostgreSQL. Since the data was neither complex nor large, I only needed to write a script that read from Redisearch and wrote to Postgres. Even so, this step took a lot of time, because I had to retry several times before the data came across accurately and worked fully with the new API.
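The script itself was nothing fancy; roughly something like the following, assuming node-redis with the RediSearch commands and a hypothetical idx:posts index and posts table:

import { createClient } from "redis";
import pg from "pg";

const redis = createClient({ url: process.env.REDIS_URL });
const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });

await redis.connect();

// Pull documents out of the (hypothetical) RediSearch index
const { documents } = await redis.ft.search("idx:posts", "*", {
  LIMIT: { from: 0, size: 1000 },
});

// Insert each document into the new posts table
for (const doc of documents) {
  await pool.query(
    "INSERT INTO posts (id, title, content) VALUES ($1, $2, $3)",
    [doc.id, doc.value.title, doc.value.content]
  );
}

await redis.quit();
await pool.end();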

After all that, deploying everything to Worker was relatively quick and easy; it took me about 10 minutes. All I had to do was run npm run deploy -e prod for each project.

Conclusion

Technology is always evolving, offering new solutions to old problems. Had I stuck with the traditional approach, I would probably still be paying $6 per month to host everything on DO. Now I have brought it all down to "Zero Cost".

