Topher Collins

Data Scientist and Machine Learning Practitioner

Python Developer (experienced with the Flask framework)

Fine-tuning an LLM for RPG Statblocks

Aim

This project fine-tunes the Llama 3 8B LLM into a task-specific model that generates Dungeons & Dragons monster statblocks in a consistent, structured format.

Data

The original monster data can be found at D&D Monster Spreadsheet

The processed data for fine-tuning can be found at D&D Monster Hugging Face Dataset
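The conversion from spreadsheet rows to fine-tuning examples is not shown on this page. As a minimal sketch (with hypothetical field names and a hypothetical prompt template; the real spreadsheet columns and dataset schema may differ), each monster row could be turned into an instruction/response pair:

```python
def row_to_example(row: dict) -> dict:
    """Convert one monster row into an instruction/response pair
    for supervised fine-tuning (hypothetical schema)."""
    instruction = (
        f"Create a D&D monster statblock for a {row['name']} "
        f"(CR {row['cr']})."
    )
    # A simple "Key: Value" statblock layout, one field per line.
    response = (
        f"Name: {row['name']}\n"
        f"Type: {row['type']}\n"
        f"Armor Class: {row['ac']}\n"
        f"Hit Points: {row['hp']}\n"
        f"Challenge Rating: {row['cr']}"
    )
    return {"instruction": instruction, "response": response}

example = row_to_example(
    {"name": "Goblin", "type": "Humanoid", "ac": 15, "hp": 7, "cr": "1/4"}
)
print(example["instruction"])
# Create a D&D monster statblock for a Goblin (CR 1/4).
```

Pairs like this map directly onto the instruction-tuning format expected by most Hugging Face fine-tuning pipelines.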

Process
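The fine-tuning itself used LoRA adapters (linked below). As a hedged configuration sketch using the PEFT library, with hypothetical hyperparameters that may differ from the project's actual settings:

```python
from peft import LoraConfig

# Hypothetical LoRA settings for Llama 3 8B; the project's
# actual rank, alpha, and target modules may differ.
lora_config = LoraConfig(
    r=16,                 # low-rank dimension of the adapter matrices
    lora_alpha=32,        # scaling factor applied to the adapter update
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```

LoRA keeps the 8B base weights frozen and trains only these small adapter matrices, which is what makes fine-tuning a model of this size practical on modest hardware.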

Conclusion

The model's outputs consistently followed the desired format, so they could easily be processed for further use, such as database storage or serving directly to a user. There were some signs of overfitting, which is understandable given our very small fine-tuning dataset. In particular, the 'creativity' of the model seems limited, often producing names that are extremely uniform and basic.
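Because the outputs reliably follow the structured format, downstream processing can be mechanical. A minimal parsing sketch, assuming a simple "Key: Value" line format (which may not match the project's exact template):

```python
def parse_statblock(text: str) -> dict:
    """Parse a 'Key: Value' formatted statblock into a dict,
    e.g. for database storage (assumed line format)."""
    block = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            # Normalise keys: "Armor Class" -> "armor_class"
            block[key.strip().lower().replace(" ", "_")] = value.strip()
    return block

output = "Name: Cave Goblin\nArmor Class: 13\nHit Points: 9"
print(parse_statblock(output))
# {'name': 'Cave Goblin', 'armor_class': '13', 'hit_points': '9'}
```

A parser this simple only works because the fine-tuned model keeps to the format; free-form base-model output would need far more defensive handling.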

The fine-tuned LoRA adapters can be found and used here