
An Introduction to In-Context Learning

  • Author: [at] Editorial Team
  • Category: Basics

    Training AI and machine learning models is often a complex, expensive and time-consuming process that typically requires large data sets and expensive computer hardware.

    This is especially true for large language models, which often require huge data sets and millions of dollars (or more) worth of computing time. However, in some cases it is possible to teach a large language model new tasks with significantly lower costs and less complexity.

    In fact, so-called in-context learning makes it possible to adapt a model to new tasks entirely within the context window, avoiding traditional model training or fine-tuning.

    What is In-Context Learning?

    In-context learning (ICL) is a technique used with large language models (LLMs) that allows them to temporarily learn new ideas and patterns from information given within the context window, while keeping the underlying weights of the model unchanged.

    Remember that an LLM's context window is the span of input text the model can process at once; it holds the prompts, relevant information, tasks, and examples we provide. The LLM extracts patterns from the examples given in the context window, and these patterns help the model adapt to new tasks.

    In essence, in-context learning is a type of temporary learning that allows an LLM to learn new concepts and patterns exclusively within the current context window.

    An example of In-Context Learning

    To clarify the concept, let's look at a simple example of how we can use in-context learning for question answering (a specific type of LLM task, which we will explain in more detail later).

    In this example, we would provide the LLM with several question/answer pairs as examples. The purpose of the examples is to help the LLM learn the specific question/answer format we want it to work with.

    For example, you could use this text in your language model's prompt window: 
     

    Example:

    • Who directed “Lord of the Rings”? Peter Jackson.
    • Who directed “Star Wars”? George Lucas.
    • Who directed “Blade Runner”? Ridley Scott.
       

    Now, using this example format, answer the following question:

    Who directed “Die Hard”?

    In this example of in-context learning, we provide three question/answer pairs about film directors. If we then ask a new question in the same format, the LLM provides the correct answer in that format.
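
    As a rough sketch, the same few-shot prompt could also be assembled programmatically before sending it to a model. The call_llm helper below is a hypothetical placeholder for whatever client your LLM provider offers, not a real API:

    # Sketch: building a few-shot question-answering prompt (illustrative only).
    # call_llm() is a hypothetical stand-in for your provider's client; replace
    # it with a real SDK or HTTP call.

    examples = [
        ("Who directed “Lord of the Rings”?", "Peter Jackson."),
        ("Who directed “Star Wars”?", "George Lucas."),
        ("Who directed “Blade Runner”?", "Ridley Scott."),
    ]

    def build_prompt(examples, new_question):
        # Each example becomes one "question answer" line, followed by the new question.
        lines = [f"{question} {answer}" for question, answer in examples]
        lines.append("Now, using this example format, answer the following question:")
        lines.append(new_question)
        return "\n".join(lines)

    prompt = build_prompt(examples, "Who directed “Die Hard”?")
    # answer = call_llm(prompt)  # hypothetical call to the LLM of your choice
    print(prompt)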

    Overview of possible uses and applications

    We can use in-context learning for a variety of tasks, but it is particularly useful in areas such as text creation, translation, classification and summarization. For example, if we provide an LLM with several “few-shot” examples showing translations for a particular language pair, it can learn and apply these patterns to translate new sentences.
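
    For illustration, a few-shot translation prompt in the same style as the earlier example might look like this (the German/English sentence pairs are made up for this example):

    Example:

    • “Guten Morgen.” Good morning.
    • “Wie geht es dir?” How are you?
    • “Das Wetter ist heute schön.” The weather is nice today.

    Now, using this example format, translate the following sentence:

    “Ich lerne gerade maschinelles Lernen.”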

    In-Context Learning vs. Fine-Tuning

    After reading the previous parts of this article, you may notice that in-context learning shares some similarities with fine-tuning, as both techniques allow a pre-trained model to learn new things.

    Important differences between fine-tuning and in-context learning:
     

    Changing the model weights

    The main difference between fine-tuning and in-context learning is in what actually happens when we train the model.

    In fine-tuning, we take a pre-trained large language model and then further train the model with a new, task-specific dataset. Importantly, in this fine-tuning step, the model weights are updated directly through gradient descent. This process changes the model weights permanently. Furthermore, this fine-tuning process usually requires significant data preparation and incurs computational costs associated with conventional model training. 

    In contrast, with in-context learning, the model weights remain unchanged. With in-context learning, the weights of the pre-trained model remain the same, and the model adapts at the surface level strictly within the context window. 
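
    To make the contrast concrete, here is a minimal, illustrative sketch using a tiny PyTorch model as a stand-in for a pre-trained network. It is not a realistic fine-tuning recipe, but it shows where the weights do (and do not) change:

    # Illustrative contrast between fine-tuning and in-context learning.
    # A small linear layer stands in for a pre-trained model; real LLM
    # fine-tuning involves far more machinery, but the principle is the same.
    import torch

    model = torch.nn.Linear(10, 2)                       # "pre-trained" weights
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

    x = torch.randn(4, 10)                               # dummy task-specific data
    y = torch.randint(0, 2, (4,))

    # Fine-tuning: gradient descent permanently updates the weights.
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()                                     # model.weight has changed

    # In-context learning: the weights stay fixed; only the input (the prompt,
    # i.e. the context) changes between tasks.
    with torch.no_grad():
        output = model(x)                                # same weights, new context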

    Permanence

    As mentioned above, there is also a fundamental difference in how permanent the training is. 

    Since fine-tuning directly changes the model weights, fine-tuning causes a permanent change to the model.

    In contrast, in-context learning results in a superficial, temporary change to the model because the underlying model weights remain unchanged during in-context learning. Learning only takes place within the context window for a given session. 

    Model specialization

    There are also differences in the kind of model you end up with after fine-tuning versus in-context learning.

    Since we directly update the model weights with a new, task-specific dataset during fine-tuning, fine-tuning results in a new model that is more specialized for this task-specific dataset. In other words, fine-tuning creates a new and more specialized model.

    In contrast, in-context learning preserves the generality of the initial model: because the model weights are unchanged, the underlying model remains essentially the same. This makes in-context learning a good fit for situations where the model must remain flexible, or where you need to quickly adapt it to a new task without the costs of full fine-tuning.

    Performance

    Finally, let's briefly discuss the performance of models trained with fine-tuning or in-context learning.

    Full fine-tuning often produces models that perform better on specific tasks. In contrast, in-context learning often yields lower accuracy on the target task than full fine-tuning.

    Therefore, you should use fine-tuning for scenarios in which accuracy and precision are a high priority for a specific task.

    Use cases for In-Context Learning

    In-context learning can be used for a variety of tasks, including text summarization, question answering, and text classification.

    Text summarization

    You can use in-context learning for text summarization.

    For example, you could give an LLM text pairs consisting of a large block of text and a summarized version. Providing these original text/summary examples could help show a model how to summarize new text in the future.

    Using In-Context Learning for text summarization in this way can help train the model to summarize text in a specific way, such as a particular format or structure. 
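
    For illustration, such a prompt might pair each original text with its summary (the texts here are invented and kept artificially short):

    Example:

    • Text: “The meeting covered the Q3 budget, the hiring plan for the data team, and the timeline for the new website.” Summary: Q3 budget, data-team hiring, website timeline.
    • Text: “The study followed 500 participants over two years and found that regular exercise improved sleep quality.” Summary: Two-year study of 500 people links exercise to better sleep.

    Now, using this example format, summarize the following text:

    [new text to summarize]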

    Question-Answer Tasks

    Another task where you can apply in-context learning is question-answering.

    In this application, you can provide an LLM with a few question-answer pairs in the context window. This allows the model to learn the specific question-answer format and respond accordingly.

    Text classification

    In-context learning can also be used for text classification tasks such as topic labeling and sentiment analysis. 

    In this type of in-context learning, you provide the model with a few pairs consisting of a block of text and a label. By providing a few example pairs, you enable the model to learn how to categorize new text input based on the patterns observed in the examples provided. 

    This is useful for situations where you might need to quickly create a text classifier that is reasonably strong at the specific task, but want to avoid the cost of a full model fine-tuning (with all the costs of compute power, data preparation, etc.). 
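
    A minimal sketch of a few-shot sentiment-classification prompt is shown below; as before, call_llm is a hypothetical placeholder for your provider's client, and the labeled examples are invented for illustration:

    # Sketch: few-shot sentiment classification via in-context learning.
    # call_llm() is again a hypothetical stand-in for a real LLM client.

    labeled_examples = [
        ("The battery lasts all day and the screen is gorgeous.", "positive"),
        ("It stopped working after a week and support never replied.", "negative"),
        ("Does exactly what it says, nothing more, nothing less.", "neutral"),
    ]

    def build_classification_prompt(examples, new_text):
        # Each example becomes a "Text: ... / Label: ..." pair; the new text gets
        # an empty label for the model to fill in.
        lines = []
        for text, label in examples:
            lines.append(f"Text: {text}\nLabel: {label}")
        lines.append(f"Text: {new_text}\nLabel:")
        return "\n\n".join(lines)

    prompt = build_classification_prompt(
        labeled_examples,
        "Shipping was fast, but the handle broke on the first use.",
    )
    # label = call_llm(prompt)  # hypothetical call; expect something like "negative"
    print(prompt)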

    Other use cases

    The three use cases above are important, but there is a wide range of other tasks where you can use in-context learning, such as:

    • Text translation
    • Data imputation
    • Code generation
    • Named entity recognition

    Advantages and challenges

    Now that we have discussed what in-context learning is and how we use it, let's briefly look at the advantages and disadvantages of in-context learning. 

    Advantages of In-Context Learning

    The main advantages are: 

    • no downstream model training required
    • task flexibility
    • can be used for few-shot learning

    Challenges of In-Context Learning

    Although in-context learning offers many advantages and possible applications, it also has some disadvantages, such as: 

    • limited context window
    • inconsistent performance
    • higher inference costs, since the examples must be included in every prompt

    Conclusion

    In-context learning is a powerful technique that can be used to adapt LLMs to new tasks without incurring the costs of traditional model training or fine-tuning. This technique can be used to teach LLMs new tasks such as summarization, question answering, data imputation, and text classification.

    Author

    [at] Editorial Team

    With extensive expertise in technology and science, our team of authors presents complex topics in a clear and understandable way. In their free time, they devote themselves to creative projects, explore new fields of knowledge and draw inspiration from research and culture.
