Introduction to Fine-tuning
Fine-tuning often confuses people. It doesn't need to be complicated and we're here to put it in simple terms.
Fine-tuning is the process of taking a larger dataset of example inputs and outputs and training the AI model on it, so the model itself learns to produce better outputs. Think of fine-tuning like giving the model examples the way you would in a prompt, but on a much larger scale. Because the examples are used for training rather than stuffed into the prompt, you are no longer limited by a maximum token amount, so you can provide a far larger set of examples than a prompt alone allows.
As an example, imagine you are building a blog-introduction prompt and you hit the token limit while adding examples. You may only be able to include perhaps 5 or 10 examples at most. That still gives the AI a good opportunity to learn the pattern, and the stronger the underlying model is, the better the output will be. Often this is enough to get an output you are happy with, but for more complicated scenarios you may want to consider fine-tuning.
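To make the prompt-only approach concrete, here is a minimal sketch of packing a handful of examples into a single few-shot prompt; the topics, introductions, and helper name are illustrative placeholders, not a real API.

```python
# Hypothetical example pairs of (topic, blog introduction).
examples = [
    ("Remote work productivity", "Working from home has reshaped how teams get things done..."),
    ("Beginner gardening tips", "You don't need a green thumb to grow your first tomatoes..."),
]

def build_prompt(examples, new_topic):
    """Pack each example into the prompt text, then ask for a new introduction.

    Every example consumes tokens, which is why only a handful fit
    before the model's context limit is reached.
    """
    parts = ["Write a blog introduction for the given topic.\n"]
    for topic, intro in examples:
        parts.append(f"Topic: {topic}\nIntroduction: {intro}\n")
    parts.append(f"Topic: {new_topic}\nIntroduction:")
    return "\n".join(parts)

prompt = build_prompt(examples, "Budget travel in Europe")
print(prompt)
```

The whole prompt, examples included, is sent on every request, so its size is capped by the model's context window.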
If we apply fine-tuning to the same example, instead of providing just 5 or 10 examples we could provide 100, 500, or 10,000. That is considerably more data and more training, giving the AI a deeper understanding of the content and of the type of output we expect. By training on these larger datasets, you get a model that performs on a whole new level compared to the vanilla AI model.
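By contrast, a fine-tuning dataset lives in a file rather than in the prompt. Here is a minimal sketch of preparing one in the common JSONL chat format; the field names follow an OpenAI-style convention and the sample pairs are placeholders, so check your provider's documentation for the exact schema it expects.

```python
import json

# Hypothetical (topic, introduction) pairs; in practice this list
# would hold hundreds or thousands of examples.
samples = [
    ("Remote work productivity", "Working from home has reshaped how teams get things done..."),
    ("Beginner gardening tips", "You don't need a green thumb to grow your first tomatoes..."),
]

# Write one JSON record per line (JSONL), each holding a user request
# and the assistant reply we want the model to learn to produce.
with open("train.jsonl", "w") as f:
    for topic, intro in samples:
        record = {
            "messages": [
                {"role": "user", "content": f"Write a blog introduction about: {topic}"},
                {"role": "assistant", "content": intro},
            ]
        }
        f.write(json.dumps(record) + "\n")
```

Because this file is consumed during training rather than sent with each request, its size is not constrained by the model's context window.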