
📚 Understanding LLMs From 0 To 1

Your weekly deep dive on the latest technical topic in AI you should know about.


Hey ,

Welcome to our new deep-dive series, where each edition explores a different AI topic in depth.

Our first series aims to explain how Large Language Models (LLMs) work. Since our readers come from all walks of life, these deep-dives start from the ground up and require little to no background knowledge.

Let's get into it!

Reading time: 5 min 17 sec

DEEP DIVE
Part 1 - Text Vectorization

Machine Learning (ML) models are used to make predictions: about the weather, about whether a user will click on an ad, movie, or song, about the answer to a question, and so on. To make a prediction, a model needs to be given input data that contains information it can use.

The way input data is presented to a model is quite critical and can determine how easy it is for the model to extract information from it. LLMs are no different, so today we'll dive into how input data needs to be presented to them.

Text Vectorization: Converting text to numbers
On receiving input, ML models perform a series of mathematical operations, such as multiplications, to produce a numerical output that is translated into a prediction. In the world of LLMs, the model is provided with a prompt made up of text; however, running the mathematical operations inside an LLM requires converting that text into numerical values.
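As a toy illustration of that "numbers in, numbers out" pattern (the weights and threshold below are made up for the example and bear no relation to how an actual LLM computes its output), a minimal "model" might look like this:


import numpy as np

# Hypothetical learned weights of a tiny model
weights = np.array([0.4, -0.2, 0.7])

def predict(input_vector):
    # Multiply inputs by weights and sum them up (a dot product),
    # then translate the numerical score into a prediction
    score = np.dot(weights, input_vector)
    return "click" if score > 0 else "no click"

predict(np.array([1.0, 3.0, 0.5]))

>>> 'click'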

The conversion of text into numerical values is called text vectorization. A vector is a sequence of numbers, analogous to an array of numbers in programming. When working with ML libraries, it's common to convert arrays of numbers into vector objects, since vectorized mathematical operations run more efficiently. For example, in NumPy you'd turn a list of numbers into an array (vector) like this:


import numpy as np

# A plain Python list of numbers
a = [1, 2, 3]

# Convert the list to a NumPy array (vector)
vector_a = np.asarray(a)

Tokenization

Tokenization is the process of breaking a piece of text into units called tokens. Depending on the methodology, tokens can be individual characters, whole words, or groups of characters that make up parts of words, often called subwords. A tokenizer is the algorithm responsible for tokenizing text.
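For instance, the same text can be tokenized at different granularities (the subword split below is purely illustrative; the actual split depends on the tokenizer):


text = "unbelievable story"

# Character-level tokens
list("unbelievable")

>>> ['u', 'n', 'b', 'e', 'l', 'i', 'e', 'v', 'a', 'b', 'l', 'e']

# Word-level tokens
text.split()

>>> ['unbelievable', 'story']

# Subword tokens (illustrative split; depends on the tokenizer)
['un', 'believ', 'able', 'story']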

The simplest tokenizer one can imagine (for English) is one that splits a document at every space or punctuation character.


import re

def tokenize_document(document):
    # Regular expression that matches runs of whitespace
    # and punctuation, which the document is split on
    pattern = r"[\s.,;!?()]+"

    # Use re.split to tokenize the document
    tokens = re.split(pattern, document)

    # Remove empty tokens
    tokens = [token for token in tokens if token]

    return tokens

text = 'sample sentence. It contains punctuation!'

tokenize_document(text)

>>> ['sample', 'sentence', 'It', 'contains', 'punctuation']

Building a vocabulary

A vocabulary is the set of all tokens that an ML model can recognize. The English language has around 170k words; imagine how much larger this number would be for a multilingual use case.

We need to put a cap on the size of our vocabulary to ensure computational efficiency. The size of the vocabulary is often capped by counting the frequency of tokens in a huge corpus of text and choosing the top-k tokens, where k corresponds to the vocabulary size.


from collections import Counter

def build_top_k_vocab(corpus, k):
    # Initialize a Counter to count token frequencies
    token_counter = Counter()

    # Tokenize and count tokens in each document
    for document in corpus:
        tokens = tokenize_document(document)
        token_counter.update(tokens)

    # Keep the top k tokens by frequency
    top_k_tokens = [token for token, _ in
                    token_counter.most_common(k)]

    return set(top_k_tokens)

# Example usage:
corpus = [
    "This is a sample sentence with some words.",
    "Another sample sentence with some repeating words.",
    "And yet another sentence to build the vocabulary.",
]

build_top_k_vocab(corpus, k=5)

>>> {'sample', 'sentence', 'some', 'with', 'words'}

Converting Tokens to Numerical Values

Now that we have a vocabulary, we assign an id to each token in it. This can be done through simple enumeration, maintaining a map/dictionary between ids and their corresponding tokens. The id map will help us keep track of which tokens are present in a document.
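A minimal sketch of that enumeration, reusing the vocabulary built above (build_token_to_id is our own helper name, not a standard API):


def build_token_to_id(vocab):
    # Sort for a deterministic ordering, then enumerate
    # to assign each token a unique integer id
    return {token: idx for idx, token in enumerate(sorted(vocab))}

vocab = build_top_k_vocab(corpus, k=5)
token_to_id = build_token_to_id(vocab)

token_to_id

>>> {'sample': 0, 'sentence': 1, 'some': 2, 'with': 3, 'words': 4}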

The identified set of tokens now has to be translated into features that help an ML model extract information. While a single token can be represented as a scalar or a vector, a document is always represented as a vector built from the representations of the tokens it is made up of.

Some of the common ways to encode a document into features are listed below (short code sketches for several of them follow the list):

  1. Binary Document-Term Vector: Each document is represented as a vector whose size is equal to the size of the vocabulary. The id of each token in the vocabulary corresponds to an index position in the vector. A value of 1 is assigned to that index position when the corresponding token is present in the document, and 0 when it's absent.

  2. Bag of Words (BoW): Similar to approach 1, but the vector’s indices map to the frequency of the token in the document.

  3. N-gram vectors: This extends the approaches above by also assigning index positions to bi-grams, tri-grams, etc., i.e. sequences of two, three, or more consecutive tokens.

  4. Tf-idf: Similar to BoW, but instead of just the raw frequency, each token is assigned a value based on its frequency within a document and on how many documents in the corpus it occurs in. The intuition is that tokens that are rare in general but occur many times in a specific document are important, while tokens that are plentiful across all documents (like "and", "an", "the", "it") are not.

  5. Embeddings: This approach is used in most deep neural networks, including LLMs. Each id is mapped to a unique n-dimensional vector called an embedding. The advantage is that rather than relying on a single hand-crafted feature per token like the ones above, an LLM can learn a richer high-dimensional feature via backpropagation. An embedding is meant to capture the meaning of a token or the context in which it occurs.
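
To make approaches 1, 2, and 4 concrete, here is a rough sketch that reuses tokenize_document, corpus, and token_to_id from above (the helper names are ours, and the tf-idf formula shown is one simple variant; libraries like scikit-learn apply additional smoothing):


import numpy as np

def binary_vector(document, token_to_id):
    # One slot per vocabulary token: 1 if the token appears, else 0
    vec = np.zeros(len(token_to_id))
    for token in tokenize_document(document):
        if token in token_to_id:
            vec[token_to_id[token]] = 1
    return vec

def bow_vector(document, token_to_id):
    # Same layout, but each slot stores how often the token appears
    vec = np.zeros(len(token_to_id))
    for token in tokenize_document(document):
        if token in token_to_id:
            vec[token_to_id[token]] += 1
    return vec

def tfidf_vectors(corpus, token_to_id):
    # Term frequency per document, scaled by the inverse document
    # frequency log(num_documents / num_documents containing the token)
    tf = np.stack([bow_vector(doc, token_to_id) for doc in corpus])
    df = (tf > 0).sum(axis=0)
    idf = np.log(len(corpus) / df)
    return tf * idf

binary_vector(corpus[2], token_to_id)

>>> array([0., 1., 0., 0., 0.])

tfidf_vectors(corpus, token_to_id)[0].round(2)

>>> array([0.41, 0.  , 0.41, 0.41, 0.41])

And for approach 5, an embedding lookup boils down to indexing into a table of vectors. The table below is randomly initialized just to show the mechanics; in an actual LLM it is a learned parameter updated during training:


embedding_dim = 4   # real LLMs use hundreds or thousands of dimensions
vocab_size = len(token_to_id)

# Embedding table: one row (vector) per token id,
# randomly initialized here, learned by the model in practice
embedding_table = np.random.randn(vocab_size, embedding_dim)

def embed_document(document, token_to_id, embedding_table):
    # Map each in-vocabulary token to its id, then look up
    # the corresponding row of the embedding table
    ids = [token_to_id[token] for token in tokenize_document(document)
           if token in token_to_id]
    return embedding_table[ids]

embed_document(corpus[0], token_to_id, embedding_table).shape

>>> (5, 4)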

Wrap Up

That's all for this week's edition of our deep-dive series. Today we learned:

  • Text needs to be converted to a numerical vector for ML models to process the information in it.

  • How to break text into tokens and build a vocabulary.

  • How a set of ids representing a document is translated to a feature useful to a model.

In the next edition, we'll dive into tokenization algorithms commonly used in LLMs, such as Byte-Pair Encoding and SentencePiece.

Pramodith is a contributing writer at AlphaSignal and AI Engineer at LinkedIn with expertise in Natural Language Processing, Computer Vision, and Reinforcement Learning. A graduate of the Georgia Institute of Technology, he has a strong foundation in Conversational AI. Feel free to connect and reach out.
