Hello readers, I've been quiet for over a week. As usual, that means I've just finished another plan, and today I'm here to show you what it is.
When people say the "web is on the edge", they mean that your website or application is hosted simultaneously on multiple servers in different locations around the world. When someone requests your website or application, they are routed to the geographically nearest server. These distributed servers not only serve static content but can also execute custom code, making dynamic web applications possible.
Since building this blog, I've taken the traditional deployment approach: put everything on one server and pay to keep it running. Over the years I've spent money on multiple providers, my favorite being DigitalOcean (DO). Mixed in were the moves to a new home, owing to my trial-and-error habit: if a provider is good, stay; if not, find a new one. That's why I chose Docker to support deployment: to move, all you need to do is install Docker, "pull" the code, and run one command so all the services start up, then go to domain management, update the new IP address, and it's done.
In recent months, after changing my working environment, many conversations with the CTO opened me up to new things: things I thought I knew but had "overlooked" because I didn't realize the real benefits they bring. Among them are "Edge" and "Serverless".
Readers may remember that I moved and transferred this domain to Cloudflare (CF) to fend off DDoS attacks. Last week, or rather over the past four weeks, I completed the migration of the API to Serverless. So now all my services are "on the edge of the Internet". I don't know exactly where they are stored, but one thing is certain: I can confidently turn off the server on DO.
I will recount the process of migrating my API and some other services to Serverless, which may also provide material for future articles. I hope readers find it interesting and useful.
Of course, before the conversion, we need to evaluate what we have and what needs to be done next.
A quick overview of the previous architecture: it included many services, such as api, blog, admin, markdown, background, image, and docker-sdk... Among them, api, blog, and admin are the API server, the blog front end, and the system administration page. markdown is a service that parses Markdown into HTML, which I wrote about in the article "I had to take a service markdown out of my API". background runs some statistical jobs, image is a service for storing images, and docker-sdk serves CI/CD...
Previously, I moved the blog from DigitalOcean to Cloudflare Pages. Moving the API to Cloudflare Workers (Worker), a form of Serverless, took me longer than expected:
You may wonder why I used Redisearch. If so, look back at the article "What is Redisearch? 2coffee.dev is using Redisearch as a database!". I chose Redisearch to make full-text search more powerful. However, after some research, I found that PostgreSQL also supports full-text search. I checked, found that it fully meets my search requirements, and so there was no longer any reason to dismiss that option.
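For reference, the kind of query PostgreSQL's built-in full-text search enables looks roughly like this. This is only a sketch: the posts table and its columns are assumptions, and in the API the query would be executed through the pg module.

```javascript
// Sketch of a PostgreSQL full-text search query replacing Redisearch.
// The "posts" table and its columns are hypothetical examples.
const searchQuery = `
  SELECT id, title
  FROM posts
  WHERE to_tsvector('english', title || ' ' || content)
        @@ plainto_tsquery('english', $1)
  ORDER BY ts_rank(
    to_tsvector('english', title || ' ' || content),
    plainto_tsquery('english', $1)
  ) DESC
  LIMIT 20;
`;

// With the pg module (assumption, not executed here):
//   const { rows } = await pool.query(searchQuery, ["serverless"]);
```

In practice you would store a precomputed tsvector column with a GIN index rather than calling to_tsvector on every row at query time, but the shape of the query stays the same.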
Supabase is a platform that offers a free PostgreSQL server with some limitations. From my research, it provides all the functionality of a real Postgres server, and at my scale the limitations are nothing to "worry" about. With that, the storage issue was solved, and I used the pg module to connect to this database.
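Connecting from the API is then plain pg usage. A sketch of the connection settings follows; every value below is a placeholder, and the actual pool creation is only indicated in a comment.

```javascript
// Placeholder connection settings for a Supabase-hosted Postgres database.
// None of these values are real; Supabase supplies the actual host and
// credentials in its dashboard.
const dbConfig = {
  host: "db.example-project.supabase.co", // assumed host shape
  port: 5432,
  database: "postgres",
  user: "postgres",
  password: "<secret>",
  ssl: true,
};

// With the pg module (assumption, not executed here):
//   const pool = new pg.Pool(dbConfig);
//   const { rows } = await pool.query("SELECT 1");
```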
Cloudflare provides the R2 service, an Object Storage similar to Amazon S3. It stores all kinds of data: images, videos, files... Notably, R2 includes one million free requests per month, so it can serve as the image store for the image service, instead of saving files locally on the server as before.
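In a Worker, an R2 bucket shows up as a binding declared in wrangler.toml. A sketch, where the binding and bucket names are placeholders:

```toml
# wrangler.toml (sketch): expose an R2 bucket to the Worker as env.IMAGES.
# "IMAGES" and "blog-images" are placeholder names.
[[r2_buckets]]
binding = "IMAGES"
bucket_name = "blog-images"
```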
Cloudflare Workers can do a lot, including Cron triggers, which let you run a job at a predetermined time, much like the cron-style schedulers available in many programming languages. Using Cron, I could replace the background service, which was originally set up to run some statistical jobs.
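A sketch of such a Worker: the scheduled handler signature is the real Workers API, while the job itself and the cron expression are placeholders.

```javascript
// Sketch of a Worker replacing the old background service with Cron.
// "aggregateDailyStats" is a hypothetical job; the cron expression would be
// declared in wrangler.toml, e.g. [triggers] crons = ["0 0 * * *"].
const worker = {
  async scheduled(event, env, ctx) {
    // event.cron contains the cron expression that fired this run.
    ctx.waitUntil(aggregateDailyStats(env));
  },
};

async function aggregateDailyStats(env) {
  // e.g. count yesterday's page views and write the totals to Postgres
  return "done";
}

// In a real project this object would be the module's default export.
```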
markdown and docker-sdk are two services that could be removed entirely. Since the API switched to JavaScript, I can use the showdown package directly to convert Markdown to HTML. The previous CI/CD setup is also no longer suitable for Worker, so docker-sdk was removed as well.
So now, my stack only consists of the api, page, admin, image, and background.
First, I needed to rewrite the api service in JavaScript. Cloudflare Workers runs on V8, the same engine as Chrome, so it runs JS code very well. Why JS instead of Node.js? Node also uses V8, but it adds many APIs of its own on top, and Workers does not implement them all. In short, Workers may not run many npm packages written for Node, but it can run pure-JS packages or packages that support running in a web browser.
If you read the Worker documentation, you may notice that the starter code looks like this:
```javascript
export default {
  async fetch(request) {
    const data = {
      hello: "world",
    };
    const json = JSON.stringify(data, null, 2);
    return new Response(json, {
      headers: {
        "content-type": "application/json;charset=UTF-8",
      },
    });
  },
};
```
The code above returns the JSON { "hello": "world" }. This is very different from a traditional server built with express.js, which is why we need an express-like library for Worker to make building CRUD operations easier.
hono.dev is a small, simple, and fast library for "edge" web applications. Hono's design is closer to koa.js than to express.js, and it supports routing, middleware, and adapters.
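That koa-like design is essentially the "onion" middleware chain. The following is not Hono's actual code, just a tiny self-contained sketch of the pattern such libraries are built around:

```javascript
// A tiny sketch of the koa-style "onion" middleware pattern. Illustrative
// only; Hono's real implementation and API differ.
function compose(middleware) {
  return (ctx) => {
    const dispatch = (i) => {
      if (i >= middleware.length) return;
      middleware[i](ctx, () => dispatch(i + 1)); // each layer decides when to call the next
    };
    dispatch(0);
    return ctx;
  };
}

const handler = compose([
  (ctx, next) => {
    ctx.log = ["before"]; // runs before the route handler
    next();
    ctx.log.push("after"); // ...and resumes after it returns
  },
  (ctx) => {
    ctx.body = { hello: "world" }; // the route handler itself
  },
]);

const result = handler({});
```

Middleware wraps the handler from both sides, which is what makes things like logging, auth, and timing so natural in this style.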
While rewriting the api, I tried to keep the responses as close to the originals as possible, to limit changes in page and admin. Even so, a few small modifications required changes on both page and admin, which stretched the timeline.
R2 was also a challenge in this process. Although it is similar to S3, I had only heard of it and had never actually worked with it. After a few days of research I understood the "concept" and created another Worker to store and retrieve images from R2. Put simply, R2 is pure storage: it provides APIs to add, edit, and delete files, and our job is to call the right APIs to store and retrieve the files we want.
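As a sketch, the read path of such an image Worker might look like the following. "IMAGES" is an assumed binding name, and the Worker portion is shown as comments because it only runs inside the Workers runtime:

```javascript
// Map a request URL to an R2 object key for the image Worker.
function keyFromUrl(url) {
  // "https://img.example.com/2023/cover.png" -> "2023/cover.png"
  return new URL(url).pathname.slice(1);
}

// Inside the Worker (sketch; env.IMAGES is an R2 bucket binding):
// export default {
//   async fetch(request, env) {
//     const object = await env.IMAGES.get(keyFromUrl(request.url));
//     if (object === null) return new Response("Not found", { status: 404 });
//     return new Response(object.body, { headers: { etag: object.httpEtag } });
//   },
// };

const key = keyFromUrl("https://img.example.com/2023/cover.png");
```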
To run statistical jobs as before, I created another Worker and used Cron.
Finally, migrating the data from Redisearch to PostgreSQL. Since the data is neither too complex nor too large, I just needed to write a small script to read it from Redisearch and write it into Postgres. Even so, this step took quite a while, because many retries were needed before the data came across accurately and fully compatible with the new API.
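The core of such a script is just a transform from a Redisearch document (a flat hash, so every value is a string) into a parameterized Postgres INSERT. A sketch, with hypothetical field names and the actual client calls only indicated in comments:

```javascript
// Turn a document read from Redisearch (a flat field/value hash, where every
// value is a string) into a parameterized INSERT for Postgres.
// The "posts" table and its columns are hypothetical.
function toInsert(doc) {
  return {
    text: "INSERT INTO posts (id, title, content) VALUES ($1, $2, $3)",
    values: [Number(doc.id), doc.title, doc.content],
  };
}

// In the real script (assumption, not executed here):
//   const doc = await redis.hGetAll(`post:${id}`); // read one document
//   await pool.query(toInsert(doc));               // write it to Postgres

const insert = toInsert({ id: "42", title: "Hello", content: "..." });
```

Parameterized values ($1, $2, $3) also take care of escaping, which matters when the content includes Markdown and HTML.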
After all that, deploying everything to Workers was relatively quick and easy; it took me only about ten minutes. All I had to do was run a command like npm run deploy -e prod for each project, and it was done.
Technology keeps evolving and offering new solutions to today's problems. If I had stuck to the old ways, I would still be paying $6 per month to host everything on DO. Now I have trimmed it all down to "zero cost".
Hello, my name is Hoai - a developer who tells stories through writing ✍️ and creating products 🚀. With many years of programming experience, I have contributed to various products that bring value to users at my workplace as well as to myself. My hobbies include reading, writing, and researching... I created this blog with the mission of delivering quality articles to the readers of 2coffee.dev. Follow me on LinkedIn, Facebook, Instagram, and Telegram.