llama3-from-scratch
Building LLaMA 3: One Tensor at a Time
Pitch

Dive into the intricate process of implementing LLaMA 3 from scratch, focusing on matrix and tensor manipulations. This project demystifies the architecture by loading tensors directly from the weight files Meta provides. Learn how each piece contributes to bringing a powerful language model to life, with tokenization handled by an off-the-shelf BPE library so the focus stays on the model itself rather than on building a tokenizer.

Description

Llama3 Implementation from Scratch
Welcome to the Llama3 from Scratch project! This repository presents a thorough implementation of the Llama3 model, built meticulously one tensor and matrix multiplication at a time. The aim is to provide a clear, educational path to understanding how Llama3 works, step by step, using PyTorch for the implementation.

Features

  • Basic Model Structure: We implement Llama3 primarily through tensor operations, demonstrating core concepts in neural network architecture.
  • Token Management: The tiktoken library handles tokenization, converting each input string into the token ids the model expects.
  • Direct Tensor Loading: Tensors are loaded directly from the model files Meta provides for Llama3; you need to download the model weights from the official Llama3 weights page before running the implementation (see the loading sketch below).
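
For the 8B model, the official download typically contains a single consolidated checkpoint plus a small JSON of hyperparameters. Below is a minimal loading sketch, assuming the standard layout of the Meta-Llama-3-8B download; file names may differ for other model sizes.

import json
import torch

# The checkpoint is a plain dict mapping parameter names to tensors
model = torch.load("Meta-Llama-3-8B/consolidated.00.pth")
print(sorted(model.keys())[:5])  # e.g. 'layers.0.attention.wk.weight', ...

# Architecture hyperparameters ship alongside the weights
with open("Meta-Llama-3-8B/params.json") as f:
    config = json.load(f)
print(config)  # dim, n_layers, n_heads, n_kv_heads, vocab_size, rope_theta, ...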

Tokenizer

Instead of implementing a Byte Pair Encoding (BPE) tokenizer from scratch, this project uses tiktoken; for a clean from-scratch BPE implementation, Andrej Karpathy's minbpe is linked as a reference.

Model Implementation

Here's a quick glimpse of our workflow:

# Load the tokenizer model
from pathlib import Path
import tiktoken
from tiktoken.load import load_tiktoken_bpe
import torch

# Path to the tokenizer that ships with the downloaded weights
tokenizer_path = "Meta-Llama-3-8B/tokenizer.model"

# Special tokens (abridged here) and the mergeable BPE ranks
special_tokens = ["<|begin_of_text|>", "<|end_of_text|>"] + [f"<|reserved_special_token_{i}|>" for i in range(5, 256 - 5)]
mergeable_ranks = load_tiktoken_bpe(tokenizer_path)

# Build a tiktoken Encoding from the loaded ranks and special tokens
tokenizer = tiktoken.Encoding(
    name=Path(tokenizer_path).name,
    pat_str=...,  # the pre-tokenization regex shipped with the reference tokenizer (elided)
    mergeable_ranks=mergeable_ranks,
    special_tokens={token: len(mergeable_ranks) + i for i, token in enumerate(special_tokens)},
)

Example Usage

To convert text into tokens, you can use:

# 128000 is the <|begin_of_text|> special token id
tokens = [128000] + tokenizer.encode("the answer to the ultimate question of life, the universe, and everything is ")
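
From here, the token ids are mapped to vectors using the model's embedding table. A minimal sketch, assuming the weight dict model and the config loaded from params.json earlier; for the 8B model, config["vocab_size"] is 128256 and config["dim"] is 4096.

# Look up each token id in the pretrained embedding table
tokens = torch.tensor(tokens)
embedding_layer = torch.nn.Embedding(config["vocab_size"], config["dim"])
embedding_layer.weight.data.copy_(model["tok_embeddings.weight"])
token_embeddings_unnormalized = embedding_layer(tokens).to(torch.bfloat16)
print(token_embeddings_unnormalized.shape)  # [seq_len, dim]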

Attention Mechanism

As you dive deeper, you will encounter the attention mechanism implemented from scratch, with an emphasis on the query, key, and value calculations and the resulting self-attention scores. The _rotated suffix below indicates that rotary position embeddings (RoPE) have already been applied to the queries and keys of a single head. Here's a simplified view of the score computation:

# Scaled dot-product scores between the rotated queries and keys of one head
qk_per_token = torch.matmul(q_per_token_rotated, k_per_token_rotated.T) / head_dim ** 0.5
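
The scores alone are only half of an attention head: a causal mask keeps each position from attending to future tokens, a softmax turns the masked scores into weights, and a weighted sum over the values produces the head's output. Here's a minimal sketch, assuming qk_per_token from above and a per-head value tensor v_per_token of shape [seq_len, head_dim].

# Causal mask: -inf above the diagonal blocks attention to future positions
seq_len = qk_per_token.shape[0]
mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)

# Masked scores -> attention weights -> weighted sum of the values
qk_masked = qk_per_token + mask
attention_weights = torch.nn.functional.softmax(qk_masked, dim=-1).to(v_per_token.dtype)
qkv_attention = torch.matmul(attention_weights, v_per_token)  # [seq_len, head_dim]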

Final Outputs

In the end, we project the final embedding onto the vocabulary to predict the next token, completing a full forward pass of Llama3 on the pretrained weights and showing how the model generates human-like text one token at a time.
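
Concretely, the last step projects the final position's embedding onto the vocabulary with the model's output matrix and picks the most likely token. A minimal sketch, assuming final_embedding is the normalized output of the last transformer layer and model is the weight dict loaded earlier.

# Project the last position onto the vocabulary and take the highest-scoring token id
logits = torch.matmul(final_embedding[-1], model["output.weight"].T)
next_token = torch.argmax(logits, dim=-1)
print(tokenizer.decode([next_token.item()]))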

Get Involved

Whether you're a machine learning enthusiast, an educator, or a researcher, this repository offers a systematic and efficient way to understand and replicate Llama3. Contributions that extend its scope are welcome!

Conclusion

Join us on this journey of building Llama3 from the ground up and engaging with the core principles of large language models, one tensor manipulation at a time. Happy coding!