This paper proposes a novel approach to the automatic segmentation of singing voice within music signals, based on the difference between the dynamic harmonic content of the singing voice and that of musical instrument signals. The results are compared with those of another approach from the literature, using the same music database. Both techniques achieve an accuracy rate of around 80%, even though a more rigorous performance measure is applied to our approach only. As an advantage, the new procedure has lower computational complexity. In addition, we discuss results obtained by extending the tests over the whole database (maintaining the same performance level) and by discriminating among error types (boundaries shifted in time, and insertion and deletion of singing segments). The analysis of these errors suggests some ways of reducing them, for example, adopting a confidence level based on a minimum harmonic content for the input signals. Considering only signals with a confidence level equal to one, the performance improves to almost 87%.
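As an illustration of the confidence-gating idea described above (not the paper's actual method), the sketch below estimates a per-frame harmonicity proxy via normalized autocorrelation and assigns a binary confidence level to a signal. The function names, frame sizes, thresholds, and the autocorrelation-based measure are all illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch, assuming an autocorrelation-based harmonicity proxy and
# hypothetical thresholds; the paper's actual measure may differ.
import numpy as np

def frame_harmonicity(frame: np.ndarray) -> float:
    """Proxy for harmonic content: peak of the normalized autocorrelation
    away from lag 0. Values near 1 indicate strongly periodic frames."""
    frame = frame - frame.mean()
    energy = np.dot(frame, frame)
    if energy == 0.0:
        return 0.0
    # Keep non-negative lags and normalize so that lag 0 equals 1.
    acf = np.correlate(frame, frame, mode="full")[len(frame) - 1:] / energy
    min_lag = 32  # skip near-zero lags so the lag-0 peak does not dominate (assumed)
    return float(acf[min_lag:].max()) if len(acf) > min_lag else 0.0

def confidence_level(signal: np.ndarray, frame_len: int = 1024, hop: int = 512,
                     harm_thresh: float = 0.4, frac_thresh: float = 0.8) -> int:
    """Confidence 1 if a sufficient fraction of frames exceeds a minimum
    harmonicity threshold, else 0 (both thresholds are assumptions)."""
    scores = [frame_harmonicity(signal[i:i + frame_len])
              for i in range(0, len(signal) - frame_len, hop)]
    if not scores:
        return 0
    return int(np.mean(np.array(scores) >= harm_thresh) >= frac_thresh)
```

Under this sketch, signals receiving confidence 0 would simply be excluded before segmentation, mirroring the restriction to confidence-level-one signals reported in the abstract.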