In the context of Large Language Models (LLMs) such as ChatGPT, the AI’s Rubric represents a hypothetical set of criteria or guidelines that the AI itself follows when generating responses. For regular users, the AI’s Rubric serves as an imagined standard by which the generated text is evaluated. While LLMs don’t have an inherent rubric, this concept highlights the importance of considering the model’s behavior and aligning it with user preferences. Users can compare the output against their own expectations and adjust their prompts or interactions accordingly to achieve more desirable AI-generated responses.
In the context of prompts, a “personal rubric” is a set of criteria or guidelines that an individual uses to evaluate or assess their response to a prompt. This could be used in a variety of contexts, such as writing, art, or any other creative endeavour where there is some level of subjective judgment involved.
For example, if you’re responding to a writing prompt, your personal rubric might include criteria such as clarity, originality, emotional impact, and how well the piece addresses the prompt.
This rubric is “personal” because it’s based on your own standards and goals. Different people might have different criteria depending on what they value in their work. It’s a tool to help you reflect on your work, identify areas for improvement, and track your progress over time.
In the context of ChatGPT, a “personal rubric” could refer to a set of criteria or guidelines that the AI uses to generate responses. This rubric would be based on the training data and the model’s understanding of language, context, and user input.
Potential elements of ChatGPT’s “personal rubric” might include relevance to the user’s input, coherence and fluency, factual plausibility, and adherence to safety guidelines.
This “rubric” may not be explicitly defined or known by the AI, but it’s a useful way to think about how the AI generates responses. It’s also important to note that the AI’s performance against this rubric is not perfect and can vary depending on the specific input and context. The specific implementation details and internal workings of the models, including their hypothetical criteria or guidelines, are not disclosed or directly accessible.
The AI does not have a conscious understanding or explicit list of criteria that it checks off when generating responses. The AI doesn’t “think” in the way humans do. It doesn’t have a checklist or rubric that it consciously refers to.
Instead, the AI generates responses based on patterns it learned during its training. It was trained on a large dataset of text and learned to predict the next word in a sentence based on the words that came before. This process involves complex mathematical computations and doesn’t involve any conscious thought or decision-making on the part of the AI.
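The next-word idea above can be sketched with a deliberately simplified toy model. This is a word-pair frequency table, not a neural network, and the corpus is made up for illustration; it only shows the general principle of predicting the next word from patterns seen in training text:

```python
from collections import Counter, defaultdict

# Tiny illustrative "training corpus" (purely hypothetical).
corpus = "the cat sat on the mat the cat ate".split()

# Count which word follows which during "training".
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word):
    # Return the most frequently observed follower of `word`.
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" ("the cat" appears twice, "the mat" once)
```

A real LLM replaces this lookup table with billions of learned weights and considers the entire preceding context, but the underlying task, scoring candidate next words by learned patterns, is the same.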
In the context of natural language processing, the inference process involves understanding and generating text. The model analyzes the input text, considers the surrounding context, and generates a response or prediction based on the learned patterns and associations in the training data.
So while we can describe the AI’s behavior using concepts like a “rubric,” it’s important to understand that these are just metaphors or simplifications. The actual process is much more complex and doesn’t involve any conscious thought or decision-making.
ChatGPT, like other models based on the Transformer architecture, uses a type of machine learning algorithm called a neural network. These networks are composed of layers of nodes (or “neurons”) that are connected to each other. Each connection has a weight, which determines how much influence one node has on another.
When ChatGPT generates a response, it starts with the input text and passes it through the network. Each node in the network performs a mathematical operation on the data it receives, and the result is passed on to the next layer. These operations can be as simple as multiplying the input by the connection weight, or they can be more complex, involving functions like the softmax function, which is used to create a probability distribution.
The “complex mathematical computations” involve these operations being performed millions or billions of times as the data passes through the network. The final output is a sequence of words that the model predicts as the most likely response based on the input.
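The softmax step mentioned above can be shown in a few lines. Given a vector of raw scores (logits) for candidate next words, softmax converts them into a probability distribution; the candidate words and scores below are invented for illustration:

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability, then exponentiate and normalize.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next words.
candidates = ["cat", "dog", "mat"]
probs = softmax([2.0, 1.0, 0.1])
for word, p in zip(candidates, probs):
    print(f"{word}: {p:.3f}")
```

The resulting probabilities sum to 1, and the highest-scoring candidate receives the largest share, which is what lets the model pick (or sample) a next word.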
The weights of the connections in the network are determined during the training process, where the model is shown a large amount of text data and adjusts the weights to minimize the difference between its predictions and the actual next word in the text.
This is a simplified explanation, and the actual process involves more complex mathematical concepts like vector spaces, attention mechanisms, and gradient descent. But hopefully, this gives you a basic understanding of what is meant by “complex mathematical computations.”
In a neural network, a “weight” is a variable that determines the strength or importance of the connection between two nodes (also known as neurons). Each input into a neuron is multiplied by a weight, and the results are summed up within the neuron. This sum is then passed through an activation function to produce the output of the neuron.
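The weighted-sum-plus-activation behavior of a single neuron can be sketched directly. The input values, weights, and bias below are arbitrary, and the sigmoid is just one common choice of activation function:

```python
import math

def sigmoid(x):
    # A common activation function that squashes any value into (0, 1).
    return 1 / (1 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Multiply each input by its weight, sum the results, add a bias,
    # then pass the total through the activation function.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

output = neuron(inputs=[0.5, -1.0], weights=[0.8, 0.2], bias=0.1)
print(output)  # sigmoid(0.3) ≈ 0.574
```

A full network is many such neurons arranged in layers, with each layer’s outputs feeding the next layer’s inputs.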
The weights in a neural network are crucial because they essentially define the model’s knowledge. During the training process, the model adjusts these weights based on the data it’s shown. The goal is to find the set of weights that minimizes the difference between the model’s predictions and the actual outcomes. This is typically done using gradient descent, an optimization algorithm used to train machine learning models and neural networks.
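Gradient descent can be illustrated on a single weight. This toy example (the numbers are arbitrary) repeatedly nudges the weight in the direction that shrinks the squared error between the prediction and the target:

```python
def train(x, target, weight, learning_rate=0.1, steps=50):
    for _ in range(steps):
        prediction = weight * x
        error = prediction - target
        # Derivative of the squared error (prediction - target)^2 w.r.t. the weight.
        gradient = 2 * error * x
        # Step the weight opposite the gradient to reduce the error.
        weight -= learning_rate * gradient
    return weight

w = train(x=2.0, target=6.0, weight=0.0)
print(w)  # converges toward 3.0, since 3.0 * 2.0 == 6.0
```

Training a real LLM applies this same update rule to billions of weights at once, with the gradients computed across many layers via backpropagation.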
We will return to implications of the Rubric as a Metaphor in later lessons.
To see how the AI makes word choices, you can experiment with the OpenAI Playground: https://platform.openai.com/playground?model=text-ada-001
Note: Ada is the fastest and lowest-cost model, capable of simple tasks.
On the right-hand navigation panel, at the very bottom, is a toggle switch labelled ‘Show probabilities’. Turning this on displays each word’s probability when you hover over highlighted words.