What’s the purpose of prompt engineering when working with large language models (LLMs)?
Select one:
To reduce the number of tokens in the LLM’s output
To achieve better text completions for your LLM-assisted projects
To make the LLM’s output 100% deterministic
Hint
What can asking better questions give you in return?
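For a concrete sense of what "better text completions" means, here is a minimal sketch of prompt engineering, assuming the OpenAI Python client and a placeholder model name (both the model and the prompts are illustrative, not part of the question):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A vague prompt leaves tone, length, and audience up to the model.
vague_prompt = "Write about Python."

# An engineered prompt spells out role, audience, format, and constraints,
# steering the model toward a more useful completion.
engineered_prompt = (
    "You are a technical writer. In exactly three bullet points, "
    "explain to a beginner why Python is popular for data analysis."
)

for prompt in (vague_prompt, engineered_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")

Running both prompts side by side usually makes the contrast obvious: the engineered prompt yields output closer to what you actually wanted, while the output remains non-deterministic and is not necessarily shorter.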