Introduction to Fine-tuning

Fine-tuning often confuses people, but it doesn't need to be complicated. We're here to put it in simple terms.

Fine-tuning is the process of taking a larger dataset of examples and training the AI model on it so that you get better outputs. Think of fine-tuning like building out a prompt, but on a much larger scale. With fine-tuning you are no longer limited by the prompt's maximum token count, so you can provide a much larger set of examples than you ever could in a prompt alone.

As an example, imagine you are building a blog introduction prompt: you give a few examples and end up reaching the token limit. You may be able to include perhaps 5 or 10 examples at most. That gives the AI a good opportunity to learn the pattern, and the stronger the underlying AI model is, the better the output will be. Often this is enough to get an output you are happy with, but for more complicated scenarios you may want to consider fine-tuning.
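To make that concrete, here is a minimal sketch in Python of what a few-shot blog introduction prompt might look like. The topics, introductions, and formatting are invented for illustration, not Riku's actual prompt structure:

```python
# A minimal sketch of a few-shot blog-introduction prompt.
# The topics and introductions are invented for illustration;
# every example added here consumes part of the token limit.
examples = [
    ("Remote Work", "Remote work has reshaped how modern teams collaborate."),
    ("Healthy Eating", "Eating well doesn't have to mean giving up flavor."),
]

prompt = ""
for topic, intro in examples:
    prompt += f"Topic: {topic}\nIntroduction: {intro}\n\n"

# The topic we actually want a fresh introduction for.
prompt += "Topic: Learning to Code\nIntroduction:"
print(prompt)
```

With only a handful of examples like this, the prompt fills up quickly, which is exactly the limit fine-tuning removes.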

If we use the same example to see how fine-tuning works: instead of providing just 5 or 10 examples, we could provide 100, 500, or 10,000. That is considerably more data and more training, giving the AI a much deeper understanding of the content and of the kind of output we expect. By providing these larger datasets, you get a model that performs on a whole new level compared to a vanilla AI model.
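As a rough illustration, here is a sketch of what preparing such a dataset might look like in Python. The prompt/completion JSONL layout shown is the format several providers (for example, OpenAI's classic fine-tune endpoint) accept; the file name and the records themselves are hypothetical:

```python
import json

# A minimal sketch of preparing a fine-tuning dataset in the
# prompt/completion JSONL format. The records are invented.
examples = [
    {"prompt": "Topic: Remote Work\nIntroduction:",
     "completion": " Remote work has reshaped how modern teams collaborate."},
    {"prompt": "Topic: Healthy Eating\nIntroduction:",
     "completion": " Eating well doesn't have to mean giving up flavor."},
]

# In practice this list would hold hundreds or thousands of examples,
# far more than could ever fit inside a single prompt.
with open("blog_intros.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")
```

The file is then uploaded to the provider's fine-tuning service, and the resulting custom model can be imported into Riku, as covered in the following pages.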
