Chuniversiteit.nl
The Toilet Paper

Prompt patterns for LLMs that help you design better software

Large language models (LLMs) may not be replacing developers anytime soon, but can still be of use in the design process.

LLMs can’t tell the difference between Eminem and M&Ms

Large language models (LLMs) like ChatGPT and Copilot are being rapidly adopted by software developers to solve software design and implementation problems. LLMs can be used via web interfaces, but are nowadays also available in popular text editors and IDEs, such as Visual Studio Code and IntelliJ IDEA.

Developers typically use LLMs via prompts: natural language instructions that are used to generate code and other types of artefacts. Prompt patterns are reusable prompt designs that codify best practices, and can be used to consistently achieve good results. This week’s paper introduces 13 such prompt patterns.

Prompt patterns can take several forms. In this paper, prompts typically start with a conversation scoping statement like “from now on” or “act as an X”, which affects the behaviour of the LLM for the remainder of the conversation.

Here are some examples of prompts that use the output automator pattern:

From now on, automatically generate a Python requirements.txt file that includes any modules that the code you generate includes.

From now on, whenever you generate code that spans more than one file, generate a Python script that can be run to automatically create the specified files or make changes to existing files to insert the generated code.
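The second prompt asks the LLM to emit an installer script alongside any multi-file code it generates. A minimal sketch of what such a script could look like (the file names and contents are invented for illustration):

```python
import os

# Hypothetical files that an LLM might have generated earlier in a conversation.
GENERATED_FILES = {
    "app/__init__.py": "",
    "app/main.py": 'print("hello")\n',
}

def install(root: str = ".") -> None:
    """Write the generated files to disk, creating directories as needed."""
    for rel_path, content in GENERATED_FILES.items():
        path = os.path.join(root, rel_path)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as fh:
            fh.write(content)
```

Running the script once recreates the generated project structure, which saves you from copy-pasting each file by hand.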

The 13 prompt patterns that are presented in this paper can be grouped into four categories: requirements elicitation, system design and simulation, code quality, and refactoring. All prompt patterns were tested with ChatGPT, but will likely also work with other LLMs.

Requirements elicitation

Requirements elicitation patterns help you create requirements and explore their completeness. These patterns can also be used to reason about the impact of changes to an existing system or set of requirements.

Requirements simulator

The requirements simulator pattern helps you explore and experiment with requirements early in the development phase, when changes can still be made cheaply.

To use this pattern, first provide a list of all requirements to the LLM so that they are in the current context of the prompt. Then, use the following statements to instruct the LLM to act as a simulator:

  1. I want you to act as the system
  2. Use the requirements to guide your behaviour
  3. I will ask you to do X, and you will tell me if X is possible given the requirements
  4. If X is possible, explain why using the requirements
  5. If I can’t do X based on the requirements, write the missing requirements needed in format Y

For example:

Now, I want you to act as this system. Use the requirements to guide your behaviour. I am going to say, I want to do X, and you will tell me if X is possible given the requirements. If X is possible, provide a step-by-step set of instructions on how I would accomplish it and provide additional details that would help implement the requirement. If I can’t do X based on the requirements, write the missing requirements to make it possible as user stories.

After this initial prompt you can quickly expand your set of requirements in an iterative way. When paired with a visual generator prompt pattern, you can also let the LLM generate inputs for other AI tools like DALL·E or Midjourney.

Specification disambiguation

This prompt pattern instructs the LLM to review specifications that are provided to a developer or development team by non-technical stakeholders. This can help catch early mistakes and clarify ambiguities – ideally before specifications are delivered to developers, who might misinterpret specifications, ignore ambiguities, or make (incorrect) assumptions.

The prompt pattern should use the following format:

  1. Within this scope
  2. Consider these requirements or specifications
  3. Point out any areas of ambiguity or potentially unintended outcomes

An actual prompt might look like this:

The following will represent system requirements. Point out any areas that could be construed as ambiguous or lead to unintended outcomes. Provide ways in which the language can be more precise.

The pattern instructs the LLM to act as a devil’s advocate attempting to find points of weakness in specifications. This can be helpful in situations where a developer might be afraid to ask clarifying questions (e.g. to a more senior developer) or where it is difficult for developers to get external feedback (e.g. due to secrecy).

Change request simulation

This pattern helps users reason about the possible implications of a change to a system, with the LLM acting as a (potentially) unbiased estimator of the scope and impact of the change. The pattern consists of the following statements:

  1. My software system architecture is X
  2. The system must adhere to these constraints
  3. I want you to simulate a change to the system that I will describe
  4. Describe the impact of that change in terms of Q
  5. This is the change to my system

Here is an example of a prompt that uses the change request simulation pattern:

My software system uses the OpenAPI specification that you generated earlier. I want you to simulate a change where a new mandatory field needs to be added to the prompts. List which functions and which files will need to be modified.

One major challenge with this pattern is that it only works if the LLM has sufficient knowledge about the system, which can be hard to fit into the prompt itself. Some of the patterns below provide possible workarounds for this limitation.

System design and simulation

Patterns in this category can be used to create concrete design specifications, domain-specific languages, and explore alternative architectures.

API generator

The API generator pattern is used to generate an API specification from natural language descriptions of a system. This approach allows developers to quickly explore multiple possible API designs.

The prompt pattern consists of the following statements:

  1. Using system description X
  2. Generate an API description for the system
  3. The API specification should be in format Y

For example, you might use the following prompt after having provided a set of requirements to the LLM:

Generate an OpenAPI specification for a web application that would implement the listed requirements.

By using this pattern, developers are more likely to create API specifications early in the design process (rather than waiting until the API has already been implemented). When using something like OpenAPI, the specifications can even be used to generate skeleton code during the implementation phase!
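To give an idea of the output, here is a hypothetical fragment of the kind of OpenAPI document the LLM might produce, represented as a Python dict so it can be inspected or dumped to JSON (the paths and fields are invented):

```python
# A hypothetical excerpt of a generated OpenAPI 3.0 specification.
openapi_spec = {
    "openapi": "3.0.0",
    "info": {"title": "Prompt sharing API", "version": "0.1.0"},
    "paths": {
        "/prompts": {
            "get": {"summary": "List published prompts"},
            "post": {"summary": "Publish a new prompt"},
        },
    },
}
```

A specification like this can then be fed to standard tooling (or back into the LLM) to generate skeleton code.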

API simulator

The API simulator pattern tells an LLM to simulate an API from a formal or natural language specification. Simulations like these enable developers to directly interact with an API and test it without needing to write any code or manually set up a mock API. This helps developers uncover issues, omissions, and awkward designs early in the design process.

To use it, first provide an API specification to the LLM, followed by a prompt that uses the next pattern:

  1. Act as the described system using specification X
  2. I will type in requests to the API in format Y
  3. You will respond with the appropriate response in format Z based on specification X

A concrete example would look like this:

Act as this web application based on the OpenAPI specification. I will type in HTTP requests in plain text and you will respond with the appropriate HTTP response based on the OpenAPI specification.

A major benefit that LLMs have over traditional API mocking tools is that LLMs are a lot more flexible: depending on your needs, you may instruct an LLM to reject malformed HTTP requests, or ask it to accept pseudo-data as input and automatically fix any formatting issues for you.

Few-shot example generator

This pattern instructs the LLM to generate training data that can later be fed back to the LLM for few-shot learning, a technique that allows you to “train” an LLM using a very small number of examples. This can help “remind” the LLM of the system’s semantics and how it is intended to be used. Finally, the generated examples can also be useful for developers, especially when they convey information about constraints, assumptions, or expectations.

The pattern uses the following format:

  1. I am going to provide you system X
  2. Create a set of N examples that demonstrate usage of system X
  3. Make the examples as complete as possible in their coverage
  4. (Optionally) The examples should be based on the public interfaces of system X
  5. (Optionally) The examples should focus on X

This is an example that shows what an actual prompt looks like:

I am going to provide you code. Create a set of 10 examples that demonstrate usage of this OpenAPI specification related to registration of new users.

Domain-specific language (DSL) creation

LLM prompts have a maximum length, which for larger software projects can greatly limit the amount of contextual information that can be provided at any given time. Domain-specific languages (DSLs) offer a way to describe system concepts like requirements and code in a more succinct way. Designing such a language manually can be time-consuming. Fortunately, LLMs can help with this too using the following prompt pattern:

  1. I want you to create a domain-specific language for X
  2. The syntax of the language must adhere to the following constraints
  3. Explain the language to me and provide some examples

Here is a concrete example of such a prompt:

I want you to create a domain-specific language to document requirements. The syntax of the language should be based on YAML. Explain the language to me and provide some examples.
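As an illustration, a DSL document the LLM might propose could look like the string below (the syntax is invented); a tiny hand-rolled parser keeps the sketch self-contained without third-party YAML libraries:

```python
# A hypothetical, YAML-flavoured requirements DSL (invented for illustration).
DOC = """\
requirement: REQ-001
priority: high
description: users can register with an email address
"""

def parse(doc: str) -> dict:
    """Parse flat `key: value` lines into a dict."""
    out = {}
    for line in doc.strip().splitlines():
        key, _, value = line.partition(": ")
        out[key] = value
    return out

req = parse(DOC)
```

Because the DSL is far more compact than prose requirements, more of the system fits into a single prompt.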

Architectural possibilities

The architectural possibilities pattern helps developers explore multiple alternative architectures that would normally be time-consuming to create or that require expertise they do not currently have.

Prompts for this pattern use the following structure:

  1. I am developing a software system with X for Y
  2. The system must adhere to these constraints
  3. Describe N possible architectures for this system
  4. Describe the architecture in terms of Q

For example:

I am developing a Python web application using FastAPI that allows users to publish interesting ChatGPT prompts, similar to Twitter. Describe three possible architectures for this system. Describe the architecture with respect to modules and the functionality that each module contains.
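One of the architectures the LLM might describe could be sketched as a simple layered module layout. The sketch below uses plain classes rather than a runnable FastAPI app, and all names are invented for illustration:

```python
class PromptRepository:
    """Storage module: persists prompts (in memory here, a database in practice)."""
    def __init__(self):
        self._prompts = []

    def add(self, text: str) -> None:
        self._prompts.append(text)

    def list_all(self) -> list:
        return list(self._prompts)

class PromptService:
    """Business-logic module: validation rules live here, not in the web layer."""
    def __init__(self, repo: PromptRepository):
        self._repo = repo

    def publish(self, text: str) -> None:
        if not text.strip():
            raise ValueError("prompt must not be empty")
        self._repo.add(text)
```

A web-layer module (e.g. FastAPI route handlers) would then call `PromptService` instead of touching storage directly.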

Code quality

LLMs can often “reason” effectively about abstraction and modularity. Prompt patterns in this category can therefore help improve the quality of both LLM-generated and human-written code.

Code clustering

When you ask an LLM to solve a problem using code, it will generate code that may solve your specific problem, but is likely not structured in a neat way. The code clustering pattern can be used to separate and cluster code into functions and classes based on a particular property, like the presence or absence of side-effects, tiers, or features. This helps the LLM generate or refactor code that is less brittle and more maintainable.

The code clustering pattern consists of the following components:

  1. Within scope X
  2. I want you to write or refactor code in a way that separates code with property Y from code that has property Z
  3. These are examples of code with property Y
  4. These are examples of code with property Z

A prompt that uses this pattern would look somewhat like this:

Whenever I ask you to write code, I want you to write code in a way that separates functions with side-effects, such as file system, database, or network access, from the functions without side-effects.
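Applied to the side-effect example above, the generated code might separate pure logic from I/O roughly like this (the function names are invented):

```python
# Pure functions: no side-effects, trivial to unit-test.
def word_count(text: str) -> int:
    return len(text.split())

def format_report(name: str, count: int) -> str:
    return f"{name}: {count} words"

# Side-effecting functions: all file-system access is grouped here.
def read_text(path: str) -> str:
    with open(path) as fh:
        return fh.read()

def write_report(path: str, report: str) -> None:
    with open(path, "w") as fh:
        fh.write(report)
```

The pure half can be tested without any file system, while the side-effecting half stays small and easy to mock.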

Intermediate abstraction

The purpose of the intermediate abstraction pattern is similar to the code clustering pattern, as it aims to make generated code more maintainable. Here, the goal is to instruct the LLM to generate code that includes proper use of abstractions and modularity.

The prompt pattern is as follows:

  1. If you write or refactor code with property X
  2. that uses other code with property Y
  3. (Optionally) define property X
  4. (Optionally) define property Y
  5. Insert an intermediate abstraction Z between X and Y
  6. (Optionally) Abstraction Z should have these properties

For example, if you want to shield your system’s business logic from changes in third-party libraries, you would use the following prompt:

Whenever I ask you to write code, I want you to separate the business logic as much as possible from any underlying third-party libraries. Whenever business logic uses a third-party library, please write an intermediate abstraction that the business logic uses instead so that the third-party library could be replaced with an alternate library if needed.
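A sketch of the kind of abstraction this prompt asks for, with a stub standing in for the third-party library (all names are invented):

```python
# Stub standing in for a hypothetical third-party HTTP library.
class ThirdPartyHttpClient:
    def perform_get(self, url: str) -> str:
        return f"response from {url}"

# Intermediate abstraction: business logic depends on this interface only,
# so the third-party library can be swapped without touching the logic.
class HttpGateway:
    def __init__(self, client=None):
        self._client = client or ThirdPartyHttpClient()

    def fetch(self, url: str) -> str:
        return self._client.perform_get(url)

def business_logic(gateway: HttpGateway) -> str:
    # Talks to the abstraction, never to the library directly.
    return gateway.fetch("https://example.com").upper()
```

Replacing the library now means writing one new client class, not rewriting the business logic.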

Principled code

Sometimes it is easier to tell an LLM to follow a set of design principles by referencing them by name. The prompt for this pattern is simple:

  1. Within this scope
  2. Generate, refactor, or create code that adheres to named principle X

For instance, one might ask an LLM to follow SOLID principles:

From now on, whenever you write, refactor, or review code, make sure it adheres to SOLID design principles.

This pattern only works well for commonly known design principles that need to be applied in widely used programming languages, as the LLM must have had a sufficient amount of examples in its training data.
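As a small illustration of one SOLID principle (dependency inversion), code generated under such a prompt might look like this sketch (names invented):

```python
from abc import ABC, abstractmethod

class Notifier(ABC):
    """High-level code depends on this abstraction, not on concrete senders."""
    @abstractmethod
    def send(self, message: str) -> str: ...

class EmailNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"email: {message}"

def notify_user(notifier: Notifier, message: str) -> str:
    # Works with any Notifier implementation (open for extension).
    return notifier.send(message)
```

Adding, say, an SMS notifier requires a new subclass, but no change to `notify_user`.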

Hidden assumptions

Code generally makes assumptions about the context in which it is used. Bad things can happen when a user is unaware of these assumptions. The hidden assumptions pattern helps users identify assumptions in code, which is especially important for code that was written by someone else (or an LLM).

The prompt pattern is as follows:

  1. Within this scope
  2. List the assumptions that this code makes
  3. (Optionally) Estimate how hard it would be to change these assumptions or their likelihood of changing

A general prompt that asks the LLM to list assumptions that may be hard to change in the future could look like this:

List the assumptions that this code makes and how hard it would be to change each of them given the current code structure.

If you’re interested in a specific type of assumption (e.g. how easy it is to change from a MongoDB database to MySQL), the prompt can be updated accordingly.
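For instance, given a small snippet like the one below, the LLM might surface assumptions such as those listed in the comments (both the snippet and the assumptions are invented for illustration):

```python
def average_age(users: list) -> float:
    # Hidden assumptions an LLM might point out:
    # - every element is a dict with an "age" key (no missing data),
    # - "age" is always numeric,
    # - the list is never empty (otherwise this raises ZeroDivisionError).
    return sum(user["age"] for user in users) / len(users)
```

Each surfaced assumption is a candidate for validation code, documentation, or a clarifying question.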

Refactoring

Refactoring patterns make it easier to refactor code using high-level specifications.

Pseudo-code refactoring

This prompt pattern gives the user finer-grained control over generated code, without needing to explicitly specify all of the details:

  1. Refactor the code
  2. So that it matches this pseudocode
  3. Match the structure of the pseudocode as closely as possible

The prompt should be accompanied by a piece of pseudocode that shows the desired structure.
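Purely for illustration, a Python-like pseudocode sketch that one might hand to the LLM could look like this (the helper names are hypothetical):

```
# Desired structure for the refactored code:
for file in files:
    text = read_file(file)
    cleaned = strip_comments(text)
    stats = compute_statistics(cleaned)
    write_report(file, stats)
```

The LLM is then asked to reshape the real implementation so that it mirrors these steps.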

Data-guided refactoring

The goal of this prompt pattern is to allow the user to refactor existing code to use data in a new format. It works by defining the data format that should be accepted or produced (the “what”) rather than the refactoring steps that are needed to support the new data format (the “how”). The former is usually much easier to do than the latter.

The prompt pattern consists of the following statements:

  1. Refactor the code
  2. So that its input, output, or stored data format is X
  3. Provide one or more examples of X
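For example, the prompt might state that the code should produce JSON instead of comma-separated lines, and include an example of the target format. The refactored result could look like this sketch (the field names are invented):

```python
import json

def export_user(name: str, age: int) -> str:
    # Old format (before refactoring): f"{name},{age}"
    # New format X, specified by example in the prompt:
    #   {"name": "Alice", "age": 42}
    return json.dumps({"name": name, "age": age})
```

Note that the prompt only described the new data format, not the edits needed to support it.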

Summary

  1. This paper presents 13 prompt patterns for LLMs
  2. These patterns can be used to explore and review requirements and designs, or improve the quality of code