In the last post, I showed how to use the llama-index wrapper to fine-tune a pre-trained embedding model to retrieve the most relevant subdocuments from a corpus of 10-Ks. This post takes a more manual approach to fine-tuning so we can track performance as we go. Since we train and evaluate on only a single document each, the results are for illustration only.


I used the llama-index code as a starting point and rewrote it to match my own understanding. My code is posted here in two files: src/adapter.py and experiments/experiment_fine_tune.ipynb.
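The core idea behind this style of adapter fine-tuning is to train a small linear layer that transforms query embeddings while leaving document embeddings (and the base model) frozen, so the corpus index never has to be rebuilt. As a minimal sketch of that idea — not the actual contents of src/adapter.py — here is a toy PyTorch version trained with in-batch negatives on random data (the `LinearAdapter` and `train_step` names are my own):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearAdapter(nn.Module):
    """A single linear layer applied to query embeddings only.
    Document embeddings stay frozen, so the index is unchanged."""
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim, bias=False)
        # Start at the identity so training begins from the base model's behavior.
        nn.init.eye_(self.linear.weight)

    def forward(self, query_emb: torch.Tensor) -> torch.Tensor:
        return self.linear(query_emb)

def train_step(adapter, optimizer, query_embs, doc_embs):
    """One step with in-batch negatives: each query's positive document
    sits at the same batch index; every other document is a negative."""
    optimizer.zero_grad()
    q = F.normalize(adapter(query_embs), dim=-1)
    d = F.normalize(doc_embs, dim=-1)
    logits = q @ d.T / 0.05          # temperature-scaled cosine similarities
    labels = torch.arange(q.size(0)) # positive pair is on the diagonal
    loss = F.cross_entropy(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy demo on random "embeddings" standing in for real model outputs.
torch.manual_seed(0)
dim, batch = 64, 16
adapter = LinearAdapter(dim)
opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)
queries = torch.randn(batch, dim)
docs = queries + 0.5 * torch.randn(batch, dim)  # noisy paired documents
losses = [train_step(adapter, opt, queries, docs) for _ in range(50)]
```

In a real run, `queries` and `docs` would be embeddings from the frozen base model for (question, relevant chunk) pairs, and at retrieval time only the query would pass through the adapter before the usual similarity search.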
