For regular users of Large Language Models (LLMs), there are some best practices to keep in mind. First, provide clear and specific prompts to get accurate, relevant responses. Experiment with temperature settings to adjust the randomness of the generated text. Be aware that LLMs can produce incorrect or biased information, so verify outputs that matter. Remember that LLMs lack true understanding; use them as tools for assistance rather than relying solely on their responses. Finally, be mindful of ethical considerations and responsible use when engaging with LLMs for various purposes.
In ChatGPT you can create a series of separate chats. When examining a specific topic it is good practice to restrict your input to that topic and stay on it. The benefits of this are as follows:
The entire conversation is available to the system, allowing you to iterate, correct, or delve deeper.
In the case of a long, detailed chat you can ask the system for a summary, a word count, or for possible omissions and/or suggestions.
Beyond that, most AI language models can perform a range of summarization and analysis tasks on a conversation; these are just a few of the many techniques that can be used to analyze, summarize, and improve a chat.
Please bear in mind that, depending on the model used, there may be a limit on the response length. Regardless of any word/token constraint, it is good practice to ask for a brief summary before requesting a full 250-, 500-, or multi-thousand-word answer. This allows you to fine-tune your prompt for the desired result. If a response is cut off at the limit, you can ask the system to continue from the last few words, or simply type Continue.
When a prompt meets your own subjective requirements you can reuse it, substituting any variable. With this in mind it is worth taking the time to build a range of prompts tailored to your own interests. Variables are often enclosed in braces, e.g. {whatever}, to make them easier for humans to spot.
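The reusable-prompt idea above can be sketched in a few lines of Python, using the built-in str.format() to fill {variables}; the template wording, variable names, and sample values here are purely illustrative assumptions.

```python
# A minimal sketch of a reusable prompt template. The {content_type},
# {word_limit}, {audience}, and {text} variables are illustrative
# placeholders, not a fixed convention.
REVIEW_PROMPT = (
    "Act as an experienced editor. Summarise the following {content_type} "
    "in {word_limit} words for an audience of {audience}:\n\n{text}"
)

def build_prompt(content_type, word_limit, audience, text):
    """Fill the template's {variables} with concrete values."""
    return REVIEW_PROMPT.format(
        content_type=content_type,
        word_limit=word_limit,
        audience=audience,
        text=text,
    )

prompt = build_prompt("blog post", 150, "beginners", "LLMs are ...")
print(prompt)
```

Once a template like this works well for one topic, swapping the variable values gives you the same tailored prompt for any other topic.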
In this context it is useful to ‘Set the Scene’; this can be described as the pre-prompt stage. By setting the scene beforehand, you give the chat model a clear framework and highlight the key attributes you wish to emphasize in your prompt. This approach helps guide the AI’s response and ensures that the generated text aligns with the desired criteria and expectations.
You can adapt the weightings and attributes based on your specific preferences and requirements for each prompt.
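One way to picture scene-setting programmatically is as a fixed message that precedes every user prompt. The sketch below assumes the widely used chat-message format (a list of role/content pairs); the persona, attributes, and weightings are illustrative examples, not prescribed values.

```python
# A sketch of "setting the scene" before the real prompt, assuming the
# common chat-message format. The scene content and its weightings are
# illustrative assumptions that you would adapt to your own interests.
scene = {
    "role": "system",
    "content": (
        "You are a travel writer. Emphasise: local food (weight 0.5), "
        "history (0.3), practical tips (0.2). Keep answers under 200 words."
    ),
}

def make_messages(user_prompt):
    """Prepend the scene-setting message to every user prompt."""
    return [scene, {"role": "user", "content": user_prompt}]

messages = make_messages("Describe a weekend in Lisbon.")
print(len(messages), messages[0]["role"])
```

Changing the weights or attributes in the scene message shifts the emphasis of every subsequent answer without rewriting each individual prompt.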
You may find that the system neglects an element or aspect of your prompt, goes off on a tangent, or tends to repeat itself. It is important to pause and correct the answer. Direct language is the best way to offer a correction, for example:
No, (then explain what is wrong) and ask it to regenerate.
Alternatively, you could ‘use the tech’: open a second tab or window and ask the system to write an optimum prompt that will achieve what you have asked.
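This 'ask the model for a better prompt' trick is sometimes called meta-prompting, and it boils down to one wrapper template. The wording of the meta-prompt below is an illustrative assumption; you would paste its output into your second tab or window.

```python
# A sketch of meta-prompting: asking the model itself to draft an
# optimised prompt for a task. META_PROMPT's wording is illustrative.
META_PROMPT = (
    "I want an LLM to do the following task:\n{task}\n\n"
    "Write the single best prompt I could give it, including any useful "
    "context, constraints, and output format."
)

task = "Summarise a 2,000-word article into five bullet points."
print(META_PROMPT.format(task=task))
```

The filled-in text is what you would send in one chat; the prompt the model writes back is what you then run in the other.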