Here’s the structure of an effective prompt recommended by the SANS Institute in their ‘Introduction to AI and Leveraging it in Cybersecurity’ course:
context + the question (the task) + instructions
The screenshot below shows an example in which three paragraphs follow this structure:
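For readers who want to reproduce this structure programmatically, here is a minimal sketch. The scenario and wording are illustrative assumptions, not taken from the course, and the OpenAI Python SDK is assumed as the client:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# 1. Context: who you are and what you are looking at.
context = (
    "I am a SOC analyst reviewing authentication logs from a Windows "
    "domain controller. Several accounts show bursts of failed logons "
    "followed by one success."
)

# 2. The question (the task): what you want answered.
task = "Which attack techniques could explain this pattern?"

# 3. Instructions: how the answer should be shaped.
instructions = (
    "Please list the three most likely techniques, give the matching "
    "MITRE ATT&CK IDs, and suggest one detection idea for each."
)

prompt = f"{context}\n\n{task}\n\n{instructions}"

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```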
A few more tips to improve the prompt results (a combined sketch follows the list):
- be polite
- add emotional stimuli to your prompts
- use affirmative directions instead of negative (“do” instead of “don’t”)
- instruct the LLM to think step by step
- if you need to understand a complex topic, ask the LLM to ELI5 (“explain like I’m 5”)
- assign a role to the LLM
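Several of these tips compose naturally in a single request. A combined sketch, again with illustrative wording and the same assumed OpenAI client:

```python
from openai import OpenAI

client = OpenAI()

# Tip: assign a role to the LLM (system message).
system_msg = "You are a senior malware analyst mentoring a junior colleague."

# Tips: be polite, use an affirmative direction ("focus on" rather than
# "don't digress"), ask for step-by-step thinking, and request ELI5.
user_msg = (
    "Please walk me through how a phishing email can lead to a ransomware "
    "infection. Focus on the stages the attacker goes through, think step "
    "by step, and explain it like I'm 5."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_msg},
        {"role": "user", "content": user_msg},
    ],
)
print(response.choices[0].message.content)
```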
> We observed that impolite prompts often result in poor performance, but overly polite language does not guarantee better outcomes. The best politeness level is different according to the language. This phenomenon suggests that LLMs not only reflect human behavior but are also influenced by language, particularly in different cultural contexts.
https://arxiv.org/ftp/arxiv/papers/2402/2402.14531.pdf
> Our automatic experiments show that LLMs have a grasp of emotional intelligence, and their performance can be improved with emotional prompts (which we call “EmotionPrompt” that combines the original prompt with emotional stimuli). The implementation of EmotionPrompt is remarkably straightforward and requires only the addition of emotional stimuli to the initial prompts:
https://arxiv.org/pdf/2307.11760.pdf
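As the authors note, the implementation really is just string concatenation. A minimal sketch; the stimulus sentence is one of the examples given in the paper, and any of the others can be substituted:

```python
# EmotionPrompt: combine the original prompt with an emotional stimulus.
EMOTIONAL_STIMULUS = "This is very important to my career."

def emotion_prompt(original: str) -> str:
    """Append an emotional stimulus to the original prompt."""
    return f"{original} {EMOTIONAL_STIMULUS}"

print(emotion_prompt(
    "Summarize the attached incident report in five bullet points."
))
```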
> We present 26 guiding principles designed to streamline the process of querying and prompting large language models. The more precise the task or directive provided, the more effectively the model performs, aligning its responses more closely with our expectations. This suggests that LLMs do not merely memorize training data but are capable of adapting this information to suit varying prompts, even when the core inquiries remain constant. Therefore, it proves beneficial to assign a specific role to LLMs as a means to elicit outputs that better match our intended results.
https://arxiv.org/pdf/2312.16171.pdf
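One way to see the effect of role assignment is to send the same core inquiry with and without a role. A sketch; the role wording is an illustrative assumption:

```python
from openai import OpenAI

client = OpenAI()

QUESTION = "How should I store user passwords?"

def ask(question: str, role: str | None = None) -> str:
    """Send the question, optionally preceded by a role-setting system message."""
    messages = []
    if role is not None:
        messages.append({"role": "system", "content": role})
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

# Same core inquiry, different framing: the role-assigned answer should
# align more closely with an application-security perspective.
print(ask(QUESTION))
print(ask(QUESTION, role="You are an application security engineer."))
```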