The acronym GIGO, which stands for “Garbage In/Garbage Out,” has been around since 1957 and still applies to AI technologies like ChatGPT. There is a risk of inaccurate information coming out of the AI when unverified or untruthful training data goes into it. In fact, OpenAI co-founder John Schulman has expressed concern about AI models fabricating information. To mitigate these risks, it is important to craft effective prompts that encourage accurate responses from AI chatbots. Just as not all commands are equal in traditional imperative programming, not all prompts are equal when issuing non-imperative directives to an AI. The words you choose, their order, and the model used all have a huge impact. Directing an AI’s response through subtle prompting is fundamentally different from programming an old-fashioned, non-AI computer.
Prompt engineering has become a highly paid discipline, with salaries ranging from $175,000 to $335,000 per year. When interacting with ChatGPT and other AI chatbots, it is crucial to treat the AI as a conversation partner rather than an imperatively programmed computer system. Asking multi-step questions and engaging in interactive prompting can lead to more powerful results. Providing context and relevant background information in prompts helps focus the AI’s responses, and detailed prompts with specific instructions tend to yield more interesting answers. In addition, don’t be surprised if the AI remembers where you left off; it may occasionally need redirecting when the context of the task, or the results you want, changes.
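The conversational approach described above can be sketched in code. The example below is a minimal illustration, not an official API call: it assembles a multi-turn prompt in the widely used chat-message format (a system message carrying background context, followed by user messages for each step). The `build_conversation` helper and the sample prompts are hypothetical, invented here for illustration.

```python
def build_conversation(background: str, steps: list[str]) -> list[dict]:
    """Assemble a chat-style message list: background context goes first
    as a system message, then each instruction becomes its own user turn."""
    messages = [{"role": "system", "content": background}]
    for step in steps:
        messages.append({"role": "user", "content": step})
    return messages

# Hypothetical example: context up front, then multi-step instructions.
conversation = build_conversation(
    background="You are an editor checking a press release for accuracy.",
    steps=[
        "First, list any factual claims that should be verified.",
        "Then, suggest wording that hedges the unverified claims.",
    ],
)
for message in conversation:
    print(f'{message["role"]}: {message["content"]}')
```

In a genuinely interactive session you would send one step at a time, read the model’s reply, append it to the message list, and then send the next step — that running history is what lets the AI “remember where you left off.”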
The whytry.ai article you just read is a brief synopsis; the original article can be found here: Read the Full Article…