Over the years, the field of natural language processing has seen huge improvements thanks to deep learning. In particular, in the sub-field of language modeling, the task of training a model to learn the distribution of words in a language, models such as GPT-2 have become well known for generating realistic news articles. In this thesis we assess the feasibility of using a fine-tuned GPT-2 model as a tool for lyric generation. More precisely, we compare the characteristics of generated lyrics, conditioned on a specific artist or genre of music, with those of the original lyrics. We find that the models were able to learn the characteristics and style of each genre and each artist.