The Evolution: System Prompts

The Role of Prompts and System Instructions

  1. Initial Setup:
  • Before a conversation begins, some LLMs may accept a set of instructions, or “system prompt”, that defines their role, capabilities, and behavioral guidelines (see the first sketch after this list).
  • This prompt is part of the context but isn’t visible to the user in the conversation.
  2. Persistent Influence:
  • Unlike the rolling context of the conversation, these initial instructions persist unchanged throughout the interaction (see the second sketch after this list).
  • They shape an LLM’s responses, tone, and the type of information provided.
  3. Customization:
  • Different AI instances can be given different system prompts, leading to varied behaviors or specializations.
  • This is why one LLM’s responses might differ from another’s, even when they’re based on similar underlying models or architectures.
  4. Ethical and Behavioral Boundaries:
  • These instructions often include ethical guidelines and restrictions on what an LLM can or cannot do or discuss. In some instances these are set at a corporate level and are outside the user’s control.
  5. Task-Specific Guidance:
  • For specialized applications, the system prompt might include specific knowledge or instructions relevant to a particular domain or task.
  6. Invisible to Users:
  • Users typically don’t see these background instructions, but they significantly influence the interaction.
  7. Balancing Act:
  • LLM responses result from balancing these fixed instructions with the dynamic context of the conversation.
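To make the “initial setup” and “customization” points concrete, here is a minimal sketch assuming an OpenAI-style chat-completions API. The model name, the helper function, and the two example personas are illustrative assumptions, not a recipe tied to any particular vendor.

```python
# A minimal sketch, assuming an OpenAI-style chat-completions API.
# The model name ("gpt-4o-mini") and the two personas are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(system_prompt: str, user_message: str) -> str:
    """Send one user message under a given system prompt and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            # The system message is supplied before the conversation starts
            # and is never shown to the end user in a typical chat UI.
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

# Same underlying model, two different system prompts, two different behaviors.
pirate = ask("You are a pirate. Answer every question in pirate slang.", "What is RAM?")
formal = ask("You are a concise technical writer. Answer in one formal sentence.", "What is RAM?")
print(pirate)
print(formal)
```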
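The “persistent influence” and “balancing act” points can also be sketched in plain Python, without calling any provider. The helper below is an illustrative assumption about how a chat front end might assemble each request: the fixed system prompt is pinned in front of a rolling window of recent turns, so the instructions never scroll out of context even though old conversation turns do.

```python
# A provider-agnostic sketch of why system instructions persist while
# ordinary conversation turns roll out of the context window.
# All names and the 6-message budget are illustrative assumptions.
from typing import Dict, List

Message = Dict[str, str]

SYSTEM_PROMPT: Message = {
    "role": "system",
    "content": "You are a helpful assistant for ACME Corp. Never discuss pricing.",
}

def build_request(history: List[Message], max_turns: int = 6) -> List[Message]:
    """Assemble the messages sent on each turn.

    The system prompt is pinned at position 0 on every request; only the
    oldest user/assistant turns are dropped when the budget is exceeded.
    """
    recent = history[-max_turns:]      # rolling window of conversation turns
    return [SYSTEM_PROMPT] + recent    # fixed instructions + dynamic context

# After many turns, early chit-chat is gone but the system prompt is not.
history: List[Message] = []
for i in range(10):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

request = build_request(history)
assert request[0]["role"] == "system"  # instructions survive every turn
print(len(request), "messages sent; oldest surviving turn:", request[1]["content"])
```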

This aspect is crucial because it explains some of the consistency in LLM behavior across different conversations, as well as the limitations or specific approaches an LLM might take. It’s an important part of understanding how AIs and LLMs function beyond the visible back-and-forth of the conversation.

In a corporate context this is somewhat similar to email administration, where a sysadmin can control document sharing, block objectionable content or words, and so on.