19.1.2. Prompt Engineering
Prompt engineering is the art of crafting specific inputs to get the best possible results from an AI. By designing clear and concise prompts, you can guide the AI to generate more accurate, relevant, and creative outputs.
Keep it short and iterate!
The longer the prompt, the more likely the AI is to produce unexpected results.
A good rule of thumb for many applications is to keep the prompt between 50 and 200 words, ensuring it is specific, clear, and contains the necessary context. For more complex queries, you may need to extend the word count, but staying within a few sentences usually strikes a good balance.
Also, avoid complex language or jargon unless it is necessary, and use straightforward, unambiguous terms.
So start small with the complete basics, then iterate on the prompt if it needs to be adjusted, as in the sketch below.
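As a rough illustration, the iteration might look like the following sketch. The analyse_call function is hypothetical and simply stands in for however you submit a prompt and transcript in your setup; the point is the progression from a minimal prompt to a slightly more constrained one.

```python
# A minimal sketch of iterative prompt refinement.
# analyse_call() is hypothetical and stands in for your AI integration.

# Iteration 1: start with the complete basics.
prompt_v1 = "Rate the questioning skills of the agent from 1 to 10."

# Iteration 2: same task, plus the context and constraints that the
# first round of responses showed were missing.
prompt_v2 = (
    "Rate the questioning skills of the agent from 1 to 10, "
    "with 1 being the worst and 10 the best. "
    "Only consider questions the agent asked the customer, "
    "and keep the explanation to three sentences."
)

# result = analyse_call(prompt_v2, transcript)  # hypothetical call
```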
Describe the output
Describing the output you want is one of the most important parts of the prompt. This is the difference between getting a generic, high-level answer from the AI and getting targeted, insightful feedback. The following example looks at agent questioning skills. In the first example, we do not specify the type of output we want and get a generic answer back:
Prompt
I want you to rate the questioning skills of the agent in this call transcript. Rate the quality from 1 to 10 with 1 being the worst and 10 the best. Use the following criteria:
- Did the agent ask questions on the call?
- Did the agent ask questions that helped resolve the customer's issue?
- Did the agent ask questions that are connected to the topic of conversation?
Output
Rating 10
The agent in this call transcript demonstrated excellent questioning skills. They asked relevant and targeted questions to help resolve the customer's issue, such as confirming the specific Spanish number that was having the problem and then checking that the configuration had been fixed. The questions were directly connected to the topic of conversation and helped move the call forward efficiently. The agent also provided additional helpful information about the Natterbox documentation, showing they were trying to provide a comprehensive solution. Overall, the agent's questioning skills were highly effective in addressing the customer's concern.
In the updated prompt below, the type of output desired is specified. We then get much more detail that is relevant to the request:
Prompt
I want you to rate the questioning skills of the agent in this call transcript. Rate the quality from 1 to 10 with 1 being the worst and 10 the best. Use the following criteria:
- Did the agent ask questions on the call?
- Did the agent ask questions that helped resolve the customer's issue?
- Did the agent ask questions that are connected to the topic of conversation?
If the agent did ask questions on the call, provide the following output: As a set of bullet points, provide the questions that were asked by the agent. For each question, indicate if it was on-topic or off-topic. For each question, also indicate if it contributed positively or negatively to the outcome of the call.
If the agent did not ask any questions on the call, provide the following output: "No questions asked".
Output
Rating 10
The agent asked several relevant questions to help resolve the customer's issue, which were connected to the topic of the conversation. The questions helped the agent understand the specific problem the customer was facing and took the necessary steps to fix it.
The questions asked by the agent are:
- "Can you confirm which Spanish number has the issue?"
This question was on-topic and contributed positively to the outcome of the call, as it helped the agent identify the specific number causing the problem.
- "Are you in a position to retest at all?"
This question was on-topic and contributed positively to the outcome of the call, as it allowed the agent to verify that the issue had been resolved.
The agent's questioning skills were effective in understanding the customer's problem and guiding the conversation to a successful resolution.
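When you experiment with different output descriptions, it can help to hold the rating criteria and the output instructions as separate building blocks so you can change one without touching the other. The sketch below only shows the prompt assembly; how you submit the prompt depends on your integration.

```python
# Sketch: assemble the prompt from the criteria and an explicit
# description of the desired output.
criteria = (
    "I want you to rate the questioning skills of the agent in this call "
    "transcript. Rate the quality from 1 to 10 with 1 being the worst and "
    "10 the best. Use the following criteria:\n"
    "- Did the agent ask questions on the call?\n"
    "- Did the agent ask questions that helped resolve the customer's issue?\n"
    "- Did the agent ask questions that are connected to the topic of conversation?\n"
)

output_spec = (
    "If the agent did ask questions on the call, provide the questions that "
    "were asked as a set of bullet points. For each question, indicate if it "
    "was on-topic or off-topic and whether it contributed positively or "
    "negatively to the outcome of the call. If the agent did not ask any "
    "questions on the call, provide the output \"No questions asked\"."
)

prompt = criteria + "\n" + output_spec
```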
Rationale
If you are only interested in the explanation, tell the AI what rating to give it up front.
This is really handy when building prompts for custom summaries.
As a rating, you must give the call a score of 5.
For the reason, provide a summary of what happened on the call in 5 bullet points.
Please include what the call was about and the steps that the agent took.
Also, provide a summary of the outcome of the call.
Provide your response in the Bulgarian language.
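If you build several custom summaries along these lines, the fixed rating, the number of bullet points, and the output language can be treated as parameters. The helper below is purely illustrative; the parameter names are not part of the product.

```python
def summary_prompt(fixed_rating: int, bullet_count: int, language: str) -> str:
    """Build a fixed-rating custom summary prompt (illustrative helper)."""
    return (
        f"As a rating, you must give the call a score of {fixed_rating}. "
        f"For the reason, provide a summary of what happened on the call in "
        f"{bullet_count} bullet points. Please include what the call was about "
        f"and the steps that the agent took. Also, provide a summary of the "
        f"outcome of the call. Provide your response in the {language} language."
    )

print(summary_prompt(5, 5, "Bulgarian"))
```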
Break out large prompts
Ideally, each prompt should be focused on a single task. Giving the AI too many tasks can produce unexpected results, as the tasks become confused when the AI interprets them.
If you have multiple areas you wish to rate a call on, or you have a large task where multiple major questions are being asked, it is recommended that each area is given its own dedicated prompt. For example, 'Rate the performance of the agent' could be broken down into 'Rate the call on the agent's knowledge of the product' and 'Rate the call on the agent's level of empathy', as in the sketch below.
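A minimal sketch of that split, assuming a hypothetical submit_prompt function for whatever integration you use to run the prompts:

```python
# Sketch: one focused prompt per area instead of a single broad prompt.
# submit_prompt() is hypothetical and stands in for your AI integration.
base = "Rate the quality from 1 to 10 with 1 being the worst and 10 the best."

areas = {
    "product_knowledge": "Rate the call on the agent's knowledge of the product.",
    "empathy": "Rate the call on the agent's level of empathy.",
}

prompts = {area: instruction + " " + base for area, instruction in areas.items()}
# ratings = {area: submit_prompt(p, transcript) for area, p in prompts.items()}
```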
Multiple Languages
We can support the following cases:
- The system can take a non-English transcript and provide analysis in English.
- The system can take an English transcript and provide the analysis in another language.
However, for the best results, we recommend writing your prompt in English, even if you want the output in a non-English language (see here for more on AI translation).
Prompt
As a rating, you must give the call a score of 5. For the Reason, give me a list of things the agent did well and a list of things the agent could have done better on the call to maximize customer satisfaction. You must provide your response in the Spanish language.
Yes/No and True/False Questions (Boolean)
You can use Natterbox prompts to answer yes/no, true/false, and other boolean questions.
You can use the rating to return a result of 1 or 10 to denote a yes/no or true/false response, and then within the reasoning body of the response you can also extract data that was or was not present in the transcript.
Prompt
Please analyze the provided phone call transcript between the customer service agent and the customer. Did the agent obtain the customer's phone number?
- If the agent was able to obtain the customer's number, give the call a rating of 10 and output the phone number as the reason. For the reason, I want the phone number and no other text.
- If a phone number was not present, give the call a rating of 1 and put "Phone Number not given" as the reason.
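On the consuming side, a rating used this way maps straight back to a boolean, with the reason carrying the extracted value. A minimal sketch, assuming the response arrives as a numeric rating plus a reason string:

```python
from typing import Optional, Tuple

# Sketch: interpret a 1/10 rating as a boolean and pull the extracted
# value out of the reason. The response structure here is an assumption.
def parse_boolean_result(rating: int, reason: str) -> Tuple[bool, Optional[str]]:
    """Map a 10 to (True, value) and a 1 to (False, None)."""
    if rating == 10:
        return True, reason.strip()   # reason holds only the phone number
    return False, None                # reason is "Phone Number not given"

found, phone_number = parse_boolean_result(10, "01234 567890")  # placeholder number
```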
Once you feel you have a prompt you’re happy with, test it out and see what kinds of responses you get back.
If the responses you're getting back are inconsistent, contain irrelevant points, or include seemingly nonsensical data, then continue to apply these recommended steps by further refining your prompt.
You can look here for further tips and tricks to help increase the quality of your responses.