
The Fine-Tuning Process for Custom Training an AI


What is Fine-Tuning?

Fine-tuning is the process of further training an artificial intelligence (AI) model such as ChatGPT to customize it for better performance on specialized tasks. Training may include providing the AI with additional knowledge or instructing it to respond in a particular format and style.

Technically, fine-tuning entails making small adjustments to the weights (parameters) of a deep learning model to optimize its responses. In this post, we’ll explore what is needed to start, how the process works, and under what conditions it produces the best results.

The Key is Data

The key to any fine-tuning process lies in the initial data. In the case of HAL149, this involves collecting and formatting data from the company. Typically, this data exists as content in PDFs, handwritten notes, Word files, and so on. In many cases, this content needs manual cleaning.

Fine-tuning has very specific requirements for the format of the information used. This initial preparation stage is therefore the main source of friction (and therefore value) in custom training an AI: it requires specialized skills, and few automated tools exist for it. A data-preparation sketch follows below.
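To make this concrete, here is a minimal sketch of what that formatting step can look like, assuming the JSONL chat format expected by OpenAI’s fine-tuning API as the target (other providers expect similar structures). The example pair and file name are illustrative, and the cleaning here is deliberately trivial; real content usually needs manual work:

```python
import json

# Illustrative question/answer pairs extracted from company content.
raw_pairs = [
    ("What services does the company offer?",
     "We build custom-trained AI assistants for businesses."),
]

# Write one JSON object per line (JSONL), in the chat format used
# by OpenAI's fine-tuning API.
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for question, answer in raw_pairs:
        record = {
            "messages": [
                {"role": "system", "content": "You are the company's assistant."},
                {"role": "user", "content": question.strip()},
                {"role": "assistant", "content": answer.strip()},
            ]
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```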

The challenge is even greater considering that a business regularly produces new content, a significant portion of which must be fed into the training process. Fine-tuning is therefore a recurring, ongoing process rather than a one-off task.

Stages of Fine-Tuning

Next, I will describe the general stages of the fine-tuning process. The text becomes somewhat technical, but I have tried to keep the descriptions as abstract as possible (a brief code sketch follows the list):

  • Tokenization: The text is tokenized, meaning it is divided into smaller units (tokens), which are usually words or pieces of words depending on the method used.
  • Vectorization: Each token is converted into a numerical representation using techniques such as word embeddings (Word2Vec, GloVe) or sub-word embeddings (Byte Pair Encoding, SentencePiece). Each token is assigned a dense vector in a continuous space.
  • Tuning: The actual fine-tuning process using the database of vectors mentioned earlier. The model’s weights (configuration) are updated and personalized to provide responses related to the desired domain or language style.
  • Evaluation: The model is then evaluated to check its performance in the specific task or domain. This may involve specific datasets and metrics: essentially, the process is to examine the model to see how it behaves.
  • Implementation: If the previous result is satisfactory, the model can be used as a specialized AI assistant for chatbot tasks and content generation.
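As an illustration of the first three stages, here is a minimal sketch assuming the Hugging Face transformers library and the small distilgpt2 model; both are illustrative choices rather than the specific stack used in practice:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

text = "Q: What does the company offer?\nA: Custom-trained AI assistants."

# 1. Tokenization: split the text into sub-word tokens.
print(tokenizer.tokenize(text))

# 2. Vectorization: each token ID is mapped to a dense vector
#    (an embedding) in a continuous space inside the model.
inputs = tokenizer(text, return_tensors="pt")
token_embeddings = model.get_input_embeddings()(inputs["input_ids"])
print(token_embeddings.shape)  # (1, num_tokens, hidden_dim)

# 3. Tuning: one gradient step that updates the model's weights
#    toward the custom example.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
outputs = model(**inputs, labels=inputs["input_ids"])
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

A real fine-tuning run loops over the full dataset for several epochs and tracks metrics on a held-out set, which is what the evaluation stage above refers to.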

When to Use Fine-Tuning?

The fine-tuning process is slow and expensive: it requires large datasets, human labor (preparing the information), and machine processing time (updating the model). Its use should therefore be optimized.

It is best suited for training with information that either does not change or changes very little over time, so the training is performed once or at long intervals. Master data, such as laws, statutes, statistics, vocabulary, etc., is used for this purpose.

Training with news and other more “volatile” types of content needs to be done using vectorization or embeddings (I will discuss this in a separate post).


https://hal149.com

I work in AI and I believe it is an opportunity for everyone. Join me by following my posts or by signing up for HAL149, the custom-trained AI assistant for your company.