Crafting the perfect prompt for artificial intelligence is both an art and a science, according to Andrew Lo, director of MIT’s Laboratory for Financial Engineering. Speaking during a recent web presentation for Harvard University’s Griffin Graduate School of Arts and Sciences, the professor outlined some key rules for getting high-quality responses from your AI chatbot.
The art of prompt engineering
We’ve all been there: frustrated by a generic, unhelpful answer from a chatbot despite what seemed like a straightforward prompt. Lo, however, explained that this kind of response is usually the result of a faulty prompt on the user’s end, invoking the age-old adage “garbage in, garbage out”.
“I think that there’s a real art and science to prompt engineering. In fact, there is a job description now called prompt engineers,” Lo said.
A vague prompt like “How should I retire?” may seem reasonable to a human, but Lo argues it counts as a “bad prompt” for an AI model. He said that a good prompt should contain enough detail so that a large language model (LLM) — the brains behind chatbots like ChatGPT and Google Gemini — can generate a useful response.
“If you don’t provide that detail, they’ll just give you generic gobbledygook that will not be particularly helpful,” he said.
What is a good prompt, then? Lo gave an example of one such prompt: “Assume you are a fee-only fiduciary advisor. Here are my goals, constraints, tax bracket, state, assets, risk tolerance and timeline. Provide me with number one: base case strategy. Number two: key assumptions. Three: risks. Four: What could invalidate this plan? Five: What information you are missing, and in particular, what are you uncertain about?”
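Lo’s example prompt follows a repeatable pattern: a persona, the user’s specific context, and a numbered list of requested outputs. A minimal sketch of that pattern in Python — the function and field names here are illustrative assumptions, and no chat service is actually called:

```python
# Sketch of Lo's structured-prompt pattern: persona + user context +
# numbered asks. Field names and the helper itself are illustrative,
# not part of any real API.

def build_prompt(context: dict) -> str:
    """Assemble a detailed prompt from user-supplied context fields."""
    details = "\n".join(f"- {key}: {value}" for key, value in context.items())
    asks = [
        "Base case strategy.",
        "Key assumptions.",
        "Risks.",
        "What could invalidate this plan?",
        "What information are you missing, and what are you uncertain about?",
    ]
    numbered = "\n".join(f"{i}. {ask}" for i, ask in enumerate(asks, start=1))
    return (
        "Assume you are a fee-only fiduciary advisor.\n"
        f"Here is my situation:\n{details}\n"
        f"Provide me with:\n{numbered}"
    )

# Example usage with a few of the context fields Lo mentions.
prompt = build_prompt({
    "Goals": "retire at 65",
    "Risk tolerance": "moderate",
    "Timeline": "25 years",
})
```

The point of the template is that the model receives every relevant constraint up front, rather than having to guess at them.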
Golden advice for prompting AI models
Prepare before you prompt
Lo shared an important piece of advice for prompting AI models: users should spend time thinking about what they actually want to ask and writing it down. “It does actually make sense to spend a little bit of time away from the computer and making a list of your questions,” Lo said, comparing it to preparing for a meeting with a doctor, lawyer or accountant.
Expose AI’s blind spots
Another key insight Lo shared was to ask the AI what it doesn’t know. This, he stressed, is important since LLMs can deliver incorrect responses in a highly authoritative tone.
“Always ask the LLM, what are you uncertain about? What information are you missing? Because you want to understand the limitations of what they come up with,” he said.
Reverse engineering better prompts
An advanced technique Lo highlighted was using the AI itself to teach you how to prompt it better. He suggested that after arriving at a useful answer through back-and-forth conversation, users should ask the chatbot how it should have been prompted in the first place.
“I will then ask the LLM, can you please tell me what prompt I should have used in order to generate the final answer that I was really looking for?” he said.
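This reverse-engineering step can be sketched as one extra turn appended to an existing conversation. The message format below mirrors common chat-API conventions but is only an assumption for illustration; no real service is called:

```python
# Sketch of Lo's reverse-engineering technique: after a multi-turn
# exchange finally yields a good answer, append a meta-question asking
# the model to distill the prompt you should have used from the start.
# The role/content dict format is assumed, not tied to any real API.

REVERSE_ENGINEER_ASK = (
    "Can you please tell me what prompt I should have used in order to "
    "generate the final answer that I was really looking for?"
)

def add_reverse_engineering_turn(conversation: list) -> list:
    """Append the meta-question as a new user turn on a chat transcript."""
    return conversation + [{"role": "user", "content": REVERSE_ENGINEER_ASK}]

# Example: a transcript that took several refining turns to get right.
chat = [
    {"role": "user", "content": "How should I retire?"},
    {"role": "assistant", "content": "(the useful answer, after refinement)"},
]
chat = add_reverse_engineering_turn(chat)
```

The distilled prompt the model returns can then be saved and reused, turning one laborious conversation into a one-shot prompt next time.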