ERA


Permanent link (DOI): https://doi.org/10.7939/R3513V698


Communities

This file is in the following communities:

Faculty of Graduate Studies and Research

Collections

This file is in the following collections:

Theses and Dissertations

Guitar Tablature Transcription using a Deep Belief Network Open Access

Descriptions

Subject/Keyword
instrument transcription
music information retrieval
deep learning
Type of item
Thesis
Degree grantor
University of Alberta
Author or creator
Burlet, Gregory D.
Supervisor and department
Hindle, Abram (Computing Science)
Examining committee member and department
Hindle, Abram (Computing Science)
Schuurmans, Dale (Computing Science)
Smallwood, Scott (Music)
Boulanger, Pierre (Computing Science)
Department
Department of Computing Science
Date accepted
2015-08-07T13:36:28Z
Graduation date
2015-11
Degree
Master of Science
Degree level
Master's
Abstract
Music transcription is the process of extracting the pitch and timing of notes that occur in an audio recording and writing the results as a music score, commonly referred to as sheet music. Manually transcribing audio recordings is a difficult and time-consuming process, even for experienced musicians. In response, several algorithms have been proposed to automatically analyze and transcribe the notes sounding in an audio recording; however, these algorithms are often general-purpose, attempting to process any number of instruments producing any number of notes sounding simultaneously. This work presents a transcription algorithm that is constrained to processing the audio output of a single instrument, specifically an acoustic guitar. The transcription system consists of a novel note pitch estimation algorithm that uses a deep belief network and multi-label learning techniques to generate multiple pitch estimates for each segment of the input audio signal. Using a compiled dataset of synthesized guitar recordings for evaluation, the algorithm described in this work results in a 12% increase in the f-measure of note transcriptions relative to a state-of-the-art algorithm in the literature. This thesis demonstrates the effectiveness of deep, multi-label learning for the task of guitar audio transcription.
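The abstract describes a pitch estimation stage in which a deep belief network, trained with multi-label learning, produces multiple simultaneous pitch estimates per audio segment. The following is an illustrative sketch of that output stage only, not the thesis implementation: one sigmoid activation per candidate pitch, with activations above a threshold reported as concurrently sounding notes. All function names, pitch labels, and logit values here are hypothetical.

```python
import numpy as np

def sigmoid(x):
    """Element-wise logistic function mapping logits to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def estimate_pitches(logits, pitch_names, threshold=0.5):
    """Multi-label decision rule: return every pitch whose sigmoid
    activation exceeds the threshold, allowing several notes to be
    predicted for a single audio frame."""
    probs = sigmoid(np.asarray(logits, dtype=float))
    return [name for name, p in zip(pitch_names, probs) if p > threshold]

# Hypothetical logits for four candidate guitar pitches in one frame.
pitches = ["E2", "A2", "D3", "G3"]
print(estimate_pitches([2.0, -1.5, 0.8, -3.0], pitches))  # → ['E2', 'D3']
```

The key contrast with single-label (softmax) classification is that each output unit is thresholded independently, so the model can assert that several pitches sound at once, as happens with strummed guitar chords.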
Language
English
DOI
doi:10.7939/R3513V698
Rights
Permission is hereby granted to the University of Alberta Libraries to reproduce single copies of this thesis and to lend or sell such copies for private, scholarly or scientific research purposes only. The author reserves all other publication and other rights in association with the copyright in the thesis and, except as herein before provided, neither the thesis nor any substantial portion thereof may be printed or otherwise reproduced in any material form whatsoever without the author's prior written permission.
File Details

Date Modified
2015-08-10T20:51:31.792+00:00
Characterization
File format: pdf (PDF/A)
Mime type: application/pdf
File size: 2625683 bytes (2.6 MB)
Last modified: 2016-06-24 17:05:14-06:00
Filename: Burlet_Gregory_D_201507_MSc.pdf
Original checksum: 3f08eaa07962304f317b469f8c112771