The promise and apparent magic of AI tools such as ChatGPT is that anybody can use them to generate high-quality, relevant content with great efficiency. All it takes is a simple “prompt” for the bot to complete, almost instantaneously, any information-processing task we need it to. The result might still include made-up facts or references, and (ironically) these tools work better with text than with numbers; but the fact remains that, for better and for worse, artificial intelligence seems capable of doing our thinking for us. In reality, however, artificial intelligence cannot think. It can only compute.
Chatbots are programs that have been trained, on billions of texts and through trial, error, and reinforcement, to predict one word after another. Quite literally, what these “stochastic parrots” do is calculate linguistic probabilities. In that sense, artificial intelligence mimics human intelligence, but does not replicate it.
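To make “calculating linguistic probabilities” concrete, here is a deliberately toy sketch in Python: a bigram counter that predicts the next word purely from frequencies in its training text. It bears no resemblance to a real chatbot’s architecture, but it shows the core idea of prediction without understanding:

```python
from collections import Counter, defaultdict

# Toy training corpus (hypothetical): a few teacher-student exchanges.
corpus = (
    "i am late because i just had pe . "
    "i am late because the bus was late . "
    "i am sorry i am late ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(prev):
    """Return the statistically most frequent word after `prev`."""
    return follows[prev].most_common(1)[0][0]

print(predict("am"))  # "late": the word that most often follows "am" here
```

The model “knows” that “late” tends to follow “am” in its data, with no mental representation of lateness, buses, or PE; scale this up by billions of parameters and you have the gist of a stochastic parrot.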
Thanks to the past experience crystalized in their mind, a teacher will understand what a student means when they say that they are late “because they just had PE”. Their words prompt an “intelligere”, or understanding, because their interlocutor knows that physical education involves activities that require changing clothes and showering, both of which take time, and often more than is allocated in tight school schedules. This means that, when the student started apologizing, the teacher already knew that “late” was probably going to be followed by “PE”. But this prediction was based on comprehension, i.e., on a mental representation, as well as on a common understanding of the world they share with their student. Trained on teacher-student conversations, a bot could make the same prediction; but this time it would not be the product of an understanding. It would only mimic, not replicate, the human intelligere. Just like John Searle in his Chinese room, AI might know what to say, and when, to seem like it knows what it is saying and is intelligent; but neither is the case.
Not only does artificial intelligence lack understanding, like a child who can paint by numbers but cannot draw; it is also a one-trick pony. As astonishing as its predictions are, predictions are all it can do. Human and artificial intelligence are sometimes both described as “information processing”, but the reality is that humans can remember, imagine, feel, reason… while chatbots cannot. All they can do is extrapolate, probabilistically, from the data they have. Though it consists purely of calculations, artificial intelligence emulates our “system 1”, based on intuition and heuristics, rather than our “system 2”, based on understanding and logic. In other words, AI resembles us when we think without thinking.
When we talk about “artificial intelligence”, it is thus important to keep in mind what the expression refers to: not an artifact that is intelligent, but an intelligence that is artificial, or illusory. For that reason, AI cannot do our thinking for us based on prompts. To the contrary, we need to do the thinking for it, which is what prompt design is all about.
Prompt design is the process of designing prompts that compensate for the lack of intelligence of artificial intelligence. Its function is to bridge the gap between what we mean by our prompts, what we want to achieve, and what AI can do, knowing that it does not share our experience of the world, or mental representations. To put it simply, it is the art of generating thinking out of bots, making artificial intelligence serve our intentions as closely as possible, even though it cannot comprehend them.
In the end, this is an issue of communication. What we ask chatbots to create for us are messages with a specific content, purpose, and audience; AI is only an intermediary. Our prompts are specifications for products ultimately meant for a human being (whether ourselves or other people) to use, and prompt design is there to ensure that what we want to communicate is not lost in AI translation.
How should we design such prompts? The most natural approach is computational thinking, which aims precisely at framing problems and goals in ways that algorithms can execute. An effective process could follow these 6A steps:
Start with the end and goal in mind. What do you want the bot to create for you, and why? Be specific: who is the target audience, and what are the intended uses, functionalities, or characteristics of the product?
An example could be: creating an engaging learning activity helping grade 11 IB Psychology students understand the scientific method by constructing and conducting a virtual experiment.
Next, break down your aim into its constituent parts, making sure you do not leave out anything important. What are your product’s bullet point requirements? Make sure to clarify any term that might be easy for a human, but hard for a computer, to understand. How would an expert envision your desired output?
For instance, based on UDL principles, an “engaging” activity might need to be active, authentic (e.g., scenario-based, in a simulated real-life context), open to choice, and aligned with each student’s initial ability level. Likewise, “helping” students might involve modeling and guiding practice, as well as checking for understanding.
Now that you have listed what you need, make sure you did not include anything unnecessary that would obscure the picture and create confusion. As you review your ideas in that light, look for the best way to arrange them in a logical sequence.
Your blueprint in mind, turn it into a series of steps and instructions that an artificial intelligence can reliably follow. Prompt design is not coding: no programming is required, since chatbots are already built to operate on natural language. Still, much better results can be achieved if we optimize our instructions for their standard procedures. For very complex tasks, JSON formatting seems effective (see this example). In all other cases, simply let the bot know WHO they are to mimic, WHAT their product should be, for WHOM it is intended, WHY (what is the purpose or end-goal), and HOW they are supposed to create it.
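As an illustration, the WHO/WHAT/WHOM/WHY/HOW structure could be rendered as a JSON-formatted prompt. This is a hypothetical sketch built from the Psychology example above; the field names and values are my own, not a standard schema:

```python
import json

# Hypothetical JSON-formatted prompt following the WHO / WHAT / WHOM /
# WHY / HOW structure; adapt fields and wording to your own task.
prompt = {
    "who": "an experienced IB Psychology teacher and instructional designer",
    "what": "a scenario-based learning activity in which students design "
            "and run a virtual experiment",
    "whom": "grade 11 IB Psychology students",
    "why": "to help students understand the scientific method by applying it",
    "how": [
        "open with a realistic research scenario",
        "model one example hypothesis before asking students for their own",
        "guide practice step by step: hypothesis, variables, procedure, results",
        "end with questions that check for understanding",
    ],
}

# Serialize for pasting into a chatbot.
print(json.dumps(prompt, indent=2))
```

The serialized output can be pasted into the chatbot as-is, keeping each requirement explicit and separable rather than buried in a paragraph.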
As a (recommended) option, provide examples whenever possible, and reiterate important, overarching instructions and expectations.
What is not optional, however, is treating the first iteration of your prompt as a draft, to be tested and improved, keeping in mind that the products of AI are, ironically, not entirely predictable. Chatbots being chatbots, a back-and-forth with them will also allow you to refine their products, even once your prompts seem stable.
It is important to note that this 6A process does not describe how prompts should be written, but the thinking that should happen before writing prompts. Likewise, proper human thinking should not only come before, but also after AI sends its probabilistic net into an ocean of text. Here, thinking outside the bots means adhering to the following Cartesian, “guilty until proven innocent” principles of critical computational thinking, or archi-critical intelligence:
We should assume that AI products are doubtful until proven otherwise and corroborated. Facts and references risk being made up, inaccurately interpreted, or biased, e.g., painting a partial picture.
PROOF OF ORIGIN
We should assume that students’ work is an AI product unless they can demonstrate their process and thinking, including in their use of AI.
NEED TO LEARN
We should assume that students will have to use AI intelligently in the future, and that they won’t be able to do so unless we teach them how; which means that we need to teach ourselves and each other first.
We should assume that AI products are only starting points that human intelligence can always improve upon.
Prompt design and output evaluation ensure that human thinking takes place before and after AI processes. In doing so, they ensure that we keep thinking outside the bots and use them to enhance, rather than replace, our ability to understand the world and each other. It cannot be stressed enough how important and urgent it is that these principles be followed, and that these skills be taught in our schools.