Using Cloudflare Tunnel to Expose Ollama on the Internet

Daily short news for you
  • How I wish I had discovered this repository earlier. github/opensource.guide guides everyone through everything about Open Source: from how to contribute code and how to start your own open-source project, to the knowledge anyone stepping into this field should have 🤓

    Notably, this content comes directly from GitHub.

    » Read more
  • Just the other day, I mentioned dokploy.com, and today I came across coolify.io - another open-source project that can replace Heroku/Netlify/Vercel.

    From what I've read, Coolify is built around Docker-based deployment, which lets it run most applications.

    Coolify offers an interface and features that make application deployment simpler and easier.

    Could this be the trend for application deployment in the future? 🤔

    » Read more
  • One of the things I really like about command lines is their 'pipeline' nature. You can imagine each command as a pipe; when connected together, they create a flow of data. The output of one pipe becomes the input of another... and so on.

    In terms of application, there are many examples; you can refer to the article Practical Data Processing Using Commands on MTTQVN Statement File. By combining commands, we turn them into powerful data analysis tools.

    Recently, I combined the wrangler command with jq to make it easier to view logs from a Worker. wrangler is Cloudflare's command-line interface (CLI), which integrates many features. One of them lets us view logs from a Worker with this command:

    $ wrangler tail --config /path/to/wrangler.toml --format json

    However, the logs from the above command contain a lot of extraneous information that floods the screen, while we only want to see a few important fields. So, what should we do?

    Let’s combine it with jq. jq is a very powerful JSON processing command. It makes working with JSON data in the terminal much easier. Therefore, to filter information from the logs, it’s quite simple:

    $ wrangler tail --config /path/to/wrangler.toml --format json | jq '{method: .event.request.method, url: .event.request.url, logs }'

    The above command returns structured JSON logs with only three fields: method, url, and logs 🔥 (see the sketch just after this news list for one more refinement).

    » Read more
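
As promised just above, here is one more refinement of that pipeline. It is a rough sketch that assumes each JSON event from wrangler tail carries an outcome field (with values like "ok" or "exception" - check your own output first) and keeps only the failed requests:

$ wrangler tail --config /path/to/wrangler.toml --format json | jq 'select(.outcome != "ok") | {outcome, method: .event.request.method, url: .event.request.url, logs}'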

Problem

Hello readers of 2coffee.dev. Tet is just around the corner; have you prepared anything for yourself and your family yet? It seems to me that as the year ends, everyone gets busier. Since the beginning of the month, traffic to the blog has dropped significantly. Sometimes it makes me anxious because I don't know where my readers have gone. Maybe they are taking an early Tet break, maybe chatbots have become too good, or maybe the content just isn't engaging enough anymore. 😥

I must admit that these last few weeks I have been in the mindset of a busy person, without much time to write regularly. It could be the nature of the job, combined with many issues to handle, that leaves me no mental space to relax. But it's okay; today I successfully configured Cloudflare Tunnel together with Ollama to expose an API endpoint on the Internet - something I couldn't do a few weeks ago. I figured many people would need this, so I decided to write an article about it right away.

At first, I intended to write a short post in the Threads section, but then I realized it had been too long since I wrote a lengthy article, so I changed my mind. Can you believe it? A long article can be condensed into just a few short lines. Conversely, a short article can easily be made "flowery" enough to turn into a lengthy piece that many might dread. So why should one strive to write longer?

Wow! If I didn't say it, no one would ever know the reason. Writing is a way for me to relieve stress. By writing, I can connect with my readers, share, chat, or weave in stories and lessons I have learned. In other words, writing serves both as a form of relaxation and as a means to interact with everyone.

Since launching the short article section Threads, I never expected so many people would be interested in it. Oh, but to say I didn't expect it would be an exaggeration, because I did a lot of research before implementing this feature. "Coding" a feature isn't hard; the challenge lies in how to operate it. Threads must ensure that the posting frequency isn't interrupted; if I wrote only infrequently, would anyone even come back to check for updates? This inadvertently creates pressure to both gather and summarize interesting, prominent news for readers. Many days I got too busy and forgot to write, and sure enough, the next day I had to publish a make-up post to keep my credibility intact. 😆

I know that many people enjoy reading, and I am one of those who loves writing. Sometimes reading isn't always in the mindset of being "chased by a deadline," on the way to find a solution, or learning something new... I believe that for many people, reading is similar to writing: it is for relaxation. Relaxing while gaining knowledge and experience is indeed a two-for-one deal, isn't it? 😁

I've talked too much already; let's get to the main point. Today I successfully configured Cloudflare Tunnel along with Ollama to expose an API endpoint on the Internet. From there, anyone can access it, no longer confined to the local server (localhost). After reviewing Ollama's documentation, it turned out to be simpler than I thought!

Cloudflare Tunnel & Ollama

If you don't know about Cloudflare Tunnel, please refer back to the article Adding a "Tunnel Locally" Tool - Bringing Local Servers to the Internet. This is a tool that maps local servers to the Internet, effectively turning your computer into a server that anyone can reach via an IP address or domain name.

Ollama is a tool that lets us run some large language models (LLMs) on our own computers with just a single command. It simplifies installing and using models. Its standout feature is an OpenAI-compatible API.
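
To illustrate that compatibility, here is a minimal sketch, assuming the llama3.2 model used later in this article is already available: Ollama listens on port 11434 and serves OpenAI-style routes such as /v1/chat/completions.

$ curl http://localhost:11434/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "llama3.2",
  "messages": [{"role": "user", "content": "Hello!"}]
}'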

In a previous article, I mentioned creating a Tunnel through a six-step process - a bit lengthy, right? In fact, Cloudflare Tunnel has a much quicker startup process, requiring only the installation of cloudflared and then using a single command:

$ cloudflared tunnel --url http://localhost:11434  
...  
Your quick Tunnel has been created! Visit it at (it may take some time to be reachable):  
https://tennis-coordination-korea-wv.trycloudflare.com  
....  

Immediately, you will see a random address that cloudflared has generated. It maps to the address http://localhost:11434 on your computer. When accessed from another machine at https://tennis-coordination-korea-wv.trycloudflare.com, we see the same result as accessing http://localhost:11434 on the local machine.

The above is just an example of mapping an arbitrary port on your machine to the Internet. For Ollama and many other tools, you also need to configure the hostname in the request headers, because Ollama rejects requests whose Host header doesn't match the address it listens on. The Ollama documentation instructs:

$ cloudflared tunnel --url http://localhost:11434 --http-host-header="localhost:11434"  

After that, try calling the API using the new URL. Note that the llama3.2 model must already be running in Ollama.
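
If the model is not on your machine yet, a single command fetches and starts it (llama3.2 here simply matches the example below):

$ ollama run llama3.2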

$ curl https://tennis-coordination-korea-wv.trycloudflare.com/api/generate -d '{  
  "model": "llama3.2",  
  "prompt": "Why is the sky blue?"  
}'  
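
One note about this endpoint: by default, /api/generate streams the answer back as newline-delimited JSON chunks. If you would rather receive a single JSON response, the same call with streaming disabled looks like this:

$ curl https://tennis-coordination-korea-wv.trycloudflare.com/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'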

Wonderful! At this point, everything is done, and you have an API endpoint pointing to Ollama on the local server that anyone can access. However, if you have a domain in Cloudflare and want to maintain a fixed address like api-ollama.2coffee.dev, you need to configure it according to the six steps.

Keeping a Fixed Domain

It's very simple; after completing step 4 in the article Adding a "Tunnel Locally" Tool - Bringing Local Servers to the Internet, modify the contents of the config.yml file as follows:

tunnel: <tunnel-uuid>  
credentials-file: path/to/.cloudflared/<tunnel-uuid>.json  

ingress:  
  - hostname: api-ollama.2coffee.dev  
    service: http://localhost:11434  
    originRequest:  
      httpHostHeader: "localhost:11434"  
  - service: http_status:404  
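
Before running the tunnel, it may be worth letting cloudflared sanity-check these ingress rules (point it at your config file if it is not in the default location):

$ cloudflared tunnel ingress validate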

Then run:

$ cloudflared tunnel run <tunnel-uuid>  
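
To keep the tunnel alive across reboots, cloudflared can also be installed as a system service instead of running in the foreground (a sketch; the exact behavior depends on your operating system):

$ cloudflared service install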

Although this method can give you an API address much like OpenAI's, it has many limitations, such as being constrained by your machine's hardware and the model in use. By default, Ollama handles only one query at a time, so continuous or simultaneous requests will not be efficient.
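
As a small aside, newer Ollama versions can serve a limited number of requests in parallel through an environment variable - hedged here, since availability and defaults depend on your Ollama version and hardware:

$ OLLAMA_NUM_PARALLEL=2 ollama serve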
