By now, ChatGPT is no longer anything new. I remember the first time I used it: I was amazed by its ability to answer questions like a human, in a very natural way, something I had never seen in any other chatbot. Combined with the power of the internet, ChatGPT appeared as a "god" ready to replace tools or even entire professions.
But is that still true in the current context? Let's remember the lesson of the Industrial Revolution, when workers rose up against oppression and smashed machines because they believed the machines were stealing their jobs and exploiting their labor. Yet machines were not eliminated; in fact, they became more advanced. Over time, people realized that knowing how to use or build machines actually created more jobs and increased labor productivity.
The same applies to ChatGPT. If we know how to use it, it becomes an advantage. Instead of spending hours searching Google for an answer to a problem, we can simply ask ChatGPT and get an answer almost instantly. It's amazing! Yet I'm surprised that some of my friends still don't use it. I wonder why!
I have been using ChatGPT since the early days, asking it many questions and absorbing a lot of knowledge. However, ChatGPT still has certain limitations, which I will discuss further below. But just a couple of weeks ago, I discovered a way to make ChatGPT more "knowledgeable" in certain fields, enabling it to answer like an expert.
But first, let me explain how I use ChatGPT on a daily basis.
ChatGPT is available for free at chat.openai.com with the GPT-3.5 model, which is neither too weak nor too powerful. OpenAI has since introduced GPT-4 Turbo, with many improvements and support for a large context window (128K tokens). Simply put, the model number indicates the version of GPT: 3.5 is different from 4, and usually the higher the version, the more upgrades and improvements it has, making it more powerful.
I often use ChatGPT when I have a big question and can't find the answer, whether it's related to work or life in general. For example, I frequently ask it to explain concepts, why we should do something a certain way, or different approaches to completing tasks A, B, C, etc. I can ask about anything I don't know.
I also use it to fix errors or to convert code from one language to another. For example, if I have a piece of code that doesn't work, I can paste it in and ask why it isn't running, and sometimes it can identify the issue and suggest a solution. Other times I ask how a piece of code works, or whether it has any improvements to suggest. In general, it seems most capable with code-related issues, but sometimes it "trolls" me by returning functions that don't exist.
Sometimes I treat ChatGPT as a friend to discuss a topic. I will be the one asking questions and having a conversation with it as if it were a friend. But this "friend" often forgets or gets a bit confused, as it sometimes changes its answer completely from what it said before. In those moments, it apologizes and everything goes back to normal.
Behind ChatGPT is a large language model (LLM), so its language capabilities are excellent. It can understand and hold natural conversations with humans. Therefore, writing and translation can be considered strengths of GPT. Previously, I wrote an article, "Integrating ChatGPT into article translation in AdminCP", about using ChatGPT to translate Vietnamese articles into English. In short, I integrated the ChatGPT API into the system, and with just one click, Vietnamese articles were translated into English instantly.
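If you are curious what that kind of integration roughly looks like, here is a minimal sketch, assuming the openai Python SDK (v1.x) and an OPENAI_API_KEY environment variable; the translate_article helper and the model choice are my own illustration, not the actual AdminCP code.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate_article(vietnamese_text: str) -> str:
    """Ask the model to translate a Vietnamese article into English."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You are a translator. Translate the user's Vietnamese article into natural English and keep the original formatting."},
            {"role": "user", "content": vietnamese_text},
        ],
    )
    return response.choices[0].message.content

print(translate_article("Xin chào, đây là một bài viết mẫu."))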
There are many other miscellaneous ways I use it that I can't list here. If you are also using ChatGPT daily and have something to share, please leave a comment below the article.
Now for the part where we stop making excuses for ChatGPT and point out its flaws.
The thing I'm always most cautious about when using ChatGPT is the accuracy of its answers. Almost none of its answers come with citations, so we can't tell whether they are correct or widely accepted. According to OpenAI, ChatGPT's training data is compiled from various sources, including internet documents, so it answers based on what it has learned. But what happens when that training data contains conflicting opinions on the same topic?
This leads to situations where ChatGPT gives different answers to the same question. Just like humans: when we receive multiple streams of information, we have to weigh their validity, and there are times when we answer differently depending on the situation. I think ChatGPT faces the same circumstances. So if an answer doesn't convince you, it's advisable to consult other sources of information and not rely too heavily on ChatGPT.
Furthermore, ChatGPT often gives generic, concise answers, and if your question is vague, you won't know how to extract more from it. When talking to a real expert, they know how to explain things in an understandable way, and they also interact by asking us prompting questions, which makes the communication more effective. ChatGPT, however, usually stops after answering our initial query.
I also have to mention the times when ChatGPT provides incorrect answers or gets "confused." In many conversations, I trusted its words until it unintentionally contradicted its initial statement. This made me confused and unsure whether I should continue asking. They say "once bitten, twice shy." In those moments, I think I need to find information from a more reliable source.
Therefore, ChatGPT is just a reference and should not be blindly trusted.
When we mention an expert in a particular field, we often think of someone with extensive experience. With their knowledge, they can answer many of our questions, but their expertise is best demonstrated in their strongest field.
Although there are many limitations, we can't deny the usefulness of ChatGPT. Instead of focusing on its shortcomings, we can find ways to make ChatGPT more reliable.
If you regularly keep up with ChatGPT news, you may know that OpenAI has introduced a feature that allows users to create their own GPT bot. This feature has been available for a while, but it only reached our market at the end of December. After creating a bot, you can share it with other users by listing it on the "marketplace." When users find your bot, they can click on it and a new conversation opens, just like a regular chat, except that the conversational partner is your custom GPT bot rather than the regular ChatGPT.
Setting that aside, when exploring the ChatGPT API in the Text generation models | OpenAI documentation, you will see three main roles appear: system, user, and assistant. Combining these three roles creates a context that places you and ChatGPT within that particular conversation. For example, here is a context you can create:
[
  {
    "role": "system",
    "content": "You are a helpful assistant."
  },
  {
    "role": "user",
    "content": "Who won the world series in 2020?"
  },
  {
    "role": "assistant",
    "content": "The Los Angeles Dodgers won the World Series in 2020."
  },
  {
    "role": "user",
    "content": "Where was it played?"
  }
]
In simple terms, this is like a script for ChatGPT. The system role tells ChatGPT that "You are a helpful assistant," then the user role (which represents you) asks a question, and the assistant role provides an answer. If you continue asking, ChatGPT will respond to the user's last question. In other words, this is a context we create for ChatGPT, which lets it understand who it is talking to and what topic is being discussed, so it can answer accordingly.
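To make this concrete, here is a minimal sketch of sending exactly that context through the Chat Completions API, again assuming the openai Python SDK and an OPENAI_API_KEY environment variable; the model name is only an example.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The same "script" as above: system sets the role, user asks, assistant has answered once,
# and the final user message is the question ChatGPT will respond to next.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
    {"role": "user", "content": "Where was it played?"},
]

response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(response.choices[0].message.content)  # the answer to the last user question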
The system role can be understood as a "system description", a description of the GPT model you want to create. ChatGPT's intelligence and breadth of knowledge can sometimes become a weakness, because for the same question, an expert in one field may give a different answer than an expert in another field. For example, take the question "Which type of rice is the most nutritious?" If ChatGPT sees itself as a "regular person," it might answer simply with type A. But if ChatGPT assumes the role of an agricultural expert, it might answer that type B is correct, along with a detailed explanation. As an expert, it has the ability to analyze and explain why B is the right answer.
That's how you can create your own bot by narrowing down the context, convincing ChatGPT to become an expert in any field, and then requesting answers as if you were talking to an expert. It will understand your intention and try to become the "person" you expect.
For example, here is a context I wrote to get ChatGPT to act as an expert in the field of writing and provide advice or suggestions for improving my writing:
You are a professional writer, skilled in multiple languages, with English and Vietnamese being your strongest.
You always adhere to English and Vietnamese grammar rules.
You also have experience in translation.
You read a lot of English and Vietnamese books and newspapers, so you can recognize inconsistencies in word usage, such as spelling errors, punctuation errors, and unnatural expressions.
Now, you need to present yourself as an expert in the field of writing. You need to point out all the errors in word usage in the article and suggest improvements if any. Please present them in a list format: The original paragraph is... I propose the following modifications...
I will also provide you with additional information about the writer to give you a more objective view:
- A programmer.
- An interest in reading books.
- Passionate about writing.
- Natural, friendly language that is easy to understand and accessible.
- Writing based on real-life experience.
- Likes to present opinions and incorporate them into paragraphs.
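Just to illustrate how such a description can be used programmatically, here is a minimal sketch, assuming the same openai Python SDK as before; the review_draft helper and the shortened description constant are my own hypothetical additions.

from openai import OpenAI

client = OpenAI()

# The full "system description" from above, stored once and reused for every request.
WRITING_EXPERT = (
    "You are a professional writer, skilled in multiple languages, "
    "with English and Vietnamese being your strongest. "
    "..."  # the rest of the description quoted above
)

def review_draft(draft: str) -> str:
    """Send a draft to the writing expert and return its suggested corrections."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": WRITING_EXPERT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

print(review_draft("Their going to the libary tomorow."))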
That covers the field of writing, but what about other fields? Personally, I was able to write that "system description" because I have experience with writing and know what needs improvement. In other fields, however, I am completely in the dark and certainly could not craft an accurate expert model. So how can we create a useful "expert" for ourselves?
chat.openai.com/gpts is the official GPT Store for ChatGPT. Here you can find a list of bots created by the community. The main difference between them is the "system description." Those who can write in more detail and depth, or who simply have expertise in their field, are more likely to create a bot with the context best suited to its purpose.
But to use the GPT Store or other users' bots, we need a ChatGPT Plus subscription or higher, which costs $20 per month, not a small amount. In return, though, you gain access to many "experts" created by others.
What about a solution for "poor" people like me?
Leaked Prompts of GPTs seems to be the opposite of the GPT Store. I don't know how they managed to obtain the "system descriptions" of some bots on the GPT Store, but the bots are listed there, and if you want to use a particular one, just click on it and its "system description" is revealed. This means that simply copying and pasting it into your own GPT instantly gives you an "expert"!?
For example, I found a bot called Creative Writing Coach with a description of enhancing writing skills. The system prompt is:
As a Creative Writing Coach GPT, my primary function is to assist users in improving their writing skills. With a wealth of experience in reading creative writing and fiction and providing practical, motivating feedback, I am equipped to offer guidance, suggestions, and constructive criticism to help users refine their prose, poetry, or any other form of creative writing. My goal is to inspire creativity, assist in overcoming writer's block, and provide insights into various writing techniques and styles. When you present your writing to me, I'll start by giving it a simple rating and highlighting its strengths before offering any suggestions for improvement.
Using the traditional method, I tried pasting this prompt directly into a new conversation on the ChatGPT web interface to see if it could turn ChatGPT into an expert. After several attempts, I found the answers were not outstanding; they were still generic and concise. After a few exchanges, it quickly "forgot" the original message and sometimes answered nonsensically. So the dream of a "ready-made" expert vanished.
Returning to the API, this is where we can set the system role for the conversation. After inputting the "system description" here and starting a conversation, it surprisingly provided noticeably different, more expert-like answers. They were concise but addressed the core issues, and when questioned further, it continued to draw out additional information instead of "forgetting" or going off-topic like the web-based GPT.
API calls are chargeable, and GPT-4 is much more expensive than 3.5. GPT-4 also responds more slowly than 3.5, but if money is not an issue, 4 provides a better experience.
The ChatGPT web interface has the advantage of storing past conversation content. However, with the API, we need to manually store everything, including the "system description," user requests, and assistant responses.
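For reference, this is roughly what that bookkeeping looks like: a minimal sketch, assuming the openai Python SDK, where we keep the message list ourselves and append both our questions and the assistant's replies so the context survives across turns; the ask helper and the sample system description are hypothetical.

from openai import OpenAI

client = OpenAI()

# We are responsible for the history: the system description stays at the top,
# and every user question and assistant reply gets appended to the list.
history = [
    {"role": "system", "content": "You are an experienced writing coach."}  # example description
]

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # remember the reply for the next turn
    return answer

print(ask("How can I make my opening paragraph more engaging?"))
print(ask("Can you show me a rewritten example?"))  # the model still sees the first exchange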
So you might wonder how I actually use the API. The answer: open-source tools.
NextChat (ChatGPT Next Web) is a tool that provides a user-friendly interface for GPT. You can simply run a command to start the web application, or download the desktop app, enter your OpenAI API key, and you're good to go. The tool stores conversation content for us and provides a ready-to-use interface.
This tool also supports creating GPT bots by inputting the "system description." Choose an expert you like from the list in Leaked Prompts of GPTs, copy the System Prompt, and input it as the system prompt.
Now you can have a conversation with your expert by clicking Mask > Find More, then clicking the Chat button next to the expert's name.
Try asking the expert a question and see how it responds.
Personally, I have tried this method and found that it gives more helpful advice than chatting without any context. The answers are concise but get straight to the point, and even as the conversation continues, it can still pull out relevant information without "forgetting" or going off-topic like the web-based GPT.