AudioCommons code for generating the list of publications
JOURNALS AND BOOK CHAPTERS
**************************
* 2018
------
Choi, K., Fazekas, G., Sandler, M., Cho, K. (2018). The Effects of Noisy Labels on Deep Convolutional Neural Networks for Music Tagging. In: IEEE Transactions on Emerging Topics in Computational Intelligence Vol. 2, No. 2. URL: https://ieeexplore.ieee.org/document/8323324
Liang, B., Fazekas, G., Sandler, M. (2018). Measurement, Recognition and Visualisation of Piano Pedalling Gestures and Techniques. In: Journal of the AES, Vol. 66, Issue 2. URL: http://www.aes.org/e-lib/browse.cfm?elib=19584
Xambó, A., Lerch, A., Freeman, J. (2018). Music Information Retrieval in Live Coding: A Theoretical Framework. In: Computer Music Journal.
* 2019
------
Estefanía Cano, Derry FitzGerald, Antoine Liutkus, Mark D. Plumbley and Fabian-Robert Stöter (2019). Musical Source Separation: An Introduction. In: IEEE Signal Processing Magazine. URL: http://epubs.surrey.ac.uk/849940/
CONFERENCE PAPERS
*****************
* 2015
------
Font, F., Serra, X. (2015). The Audio Commons Initiative. In: Proc. of the International Society for Music Information Retrieval Conference (ISMIR, late-breaking demo). URL: https://www.audiocommons.org/assets/files/audiocommons_ismir_2015.pdf
* 2016
------
Allik, A., Fazekas, G., Sandler, M. (2016). An Ontology for Audio Features. In: Proc. of the International Society for Music Information Retrieval Conference (ISMIR). URL: https://wp.nyu.edu/ismir2016/wp-content/uploads/sites/2294/2016/07/077_Paper.pdf
Allik, A., Fazekas, G., Sandler, M. (2016). Ontological Representation of Audio Features. In: Proc. of the 15th International Semantic Web Conference (ISWC). URL: https://link.springer.com/chapter/10.1007/978-3-319-46547-0_1
Bogdanov, D., Porter, A., Herrera, P., Serra, X. (2016). Cross-collection evaluation for music classification tasks. In: Proc. of the International Society for Music Information Retrieval Conference (ISMIR). URL: http://mtg.upf.edu/node/3498
Buccoli, M., Zanoni, M., Fazekas, G., Sarti A., Sandler, M. (2016). A Higher-Dimensional Expansion of Affective Norms for English Terms for Music Tagging. In: Proc. of the International Society for Music Information Retrieval Conference (ISMIR). URL: https://wp.nyu.edu/ismir2016/wp-content/uploads/sites/2294/2016/07/253_Paper.pdf
Choi, K., Fazekas, G., Sandler, M. (2016). Automatic Tagging Using Deep Convolutional Neural Networks. In: Proc. of the International Society for Music Information Retrieval Conference (ISMIR). URL: https://arxiv.org/abs/1606.00298
Choi, K., Fazekas, G., Sandler, M. (2016). Towards Playlist Generation Algorithms Using RNNs Trained on Within-Track Transitions. In: Proc. of the User Modeling, Adaptation and Personalization Conference (UMAP), Workshop on Surprise, Opposition, and Obstruction in Adaptive and Personalized Systems (SOAP). URL: https://arxiv.org/abs/1606.0209
Font, F., Brookes, T., Fazekas, G., Guerber, M., La Burthe, A., Plans, A., Plumbley, M. D., Shaashua, M., Wang, W., Serra, X. (2016). Audio Commons: bringing Creative Commons audio content to the creative industries. In: Proc. of the 61st AES Conference on Audio for Games. URL: https://www.audiocommons.org/assets/files/audiocommons_aes_2016.pdf
Font, F., Serra, X. (2016). Tempo Estimation for Music Loops and a Simple Confidence Measure. In: Proc. of the International Society for Music Information Retrieval Conference (ISMIR). URL: http://mtg.upf.edu/node/3479
Juric D., Fazekas, G. (2016). Knowledge Extraction from Audio Content Service Providers’ API Descriptions. In: Proc. of the 10th International Conference on Metadata and Semantics Research (MTSR). URL: http://link.springer.com/10.1007/978-3-319-49157-8_
Porter, A., Bogdanov, D., Serra, X. (2016). Mining metadata from the web for AcousticBrainz. In: Proc. of the 3rd International Digital Libraries for Musicology workshop. URL: http://mtg.upf.edu/node/3533
Wilmering, T., Fazekas, G., Sandler, M. (2016). AUFX-O: Novel Methods for the Representation of Audio Processing Workflows. In: Proc. of the 15th International Semantic Web Conference (ISWC). URL: https://link.springer.com/chapter/10.1007/978-3-319-46547-0_24
* 2017
------
Bogdanov D., Serra X. (2017). Quantifying music trends and facts using editorial metadata from the Discogs database. In: Proc. of the International Society for Music Information Retrieval Conference (ISMIR). URL: http://hdl.handle.net/10230/32931
Bogdanov, D., Porter A., Urbano J., Schreiber H. (2017). The MediaEval 2017 AcousticBrainz Genre Task: Content-based Music Genre Recognition from Multiple Sources. In: MediaEval Workshop. URL: http://hdl.handle.net/10230/32932
Choi, K., Fazekas, G., Sandler, M., Cho, K. (2017). Convolutional Recurrent Neural Networks for Music Classification. In: Proc. of the 42nd IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). URL: https://arxiv.org/abs/1609.0424
Choi, K., Fazekas, G., Sandler, M., Cho, K. (2017). Transfer Learning for Music Classification and Regression Tasks. In: Proc. of the International Society for Music Information Retrieval Conference (ISMIR). URL: https://arxiv.org/abs/1703.09179
Fonseca, E., Gong R., Bogdanov D., Slizovskaia O., Gomez E., Serra, X. (2017). Acoustic Scene Classification by Ensembling Gradient Boosting Machine and Convolutional Neural Networks. In: Proc. of the Detection and Classification of Acoustic Scenes and Events Workshop (DCASE). URL: https://repositori.upf.edu/handle/10230/33454
Fonseca, E., Pons J., Favory X., Font F., Bogdanov D., Ferraro A., Oramas S., Porter A., Serra X. (2017). Freesound Datasets: A Platform for the Creation of Open Audio Datasets. In: Proc. of the International Society for Music Information Retrieval Conference (ISMIR). URL: http://hdl.handle.net/10230/33299
Font, F., Bandiera G. (2017). Freesound Explorer: Make Music While Discovering Freesound! In: Proc. of the Web Audio Conference (WAC). URL: http://hdl.handle.net/10230/32538
Page, K., Bechhofer, S., Fazekas, G., Weigl, D., Wilmering, T. (2017). Realising a Layered Digital Library: Exploration and Analysis of the Live Music Archive through Linked Data. In: Proc. of the ACM/IEEE Joint Conference on Digital Libraries (JCDL). URL: http://ieeexplore.ieee.org/document/7991563
Pauwels, J., Fazekas, G., Sandler, M. (2017). Exploring Confidence Measures and Their Application in Music Labelling Systems Based on Hidden Markov Models. In: Proc. of the International Society for Music Information Retrieval Conference (ISMIR). URL: https://ismir2017.smcnus.org/wp-content/uploads/2017/10/195_Paper.pdf
Pearce, A., Brookes, T., Mason, R. (2017). Timbral attributes for sound effect library searching. In: Proc. of the Audio Engineering Society Conference on Semantic Audio. URL: http://www.aes.org/e-lib/download.cfm/18754.pdf?ID=18754
Wilmering, T., Thalmann, F., Fazekas, G., Sandler, M. (2017). Bridging Fan Communities and Facilitating Access to Music Archives Through Semantic Audio Applications. In: Proc. of the 143rd Convention of the Audio Engineering Society. URL: http://eecs.qmul.ac.uk/~gyorgyf/files/papers/wilmering2017aes.pdf
* 2018
------
Bogdanov, D., Porter A., Urbano J., Schreiber H. (2018). The MediaEval 2018 AcousticBrainz Genre Task: Content-based Music Genre Recognition from Multiple Sources. In: MediaEval Workshop. URL: http://hdl.handle.net/10230/35744
Ceriani, M., Fazekas, G. (2018). Audio Commons Ontology: A Data Model for an Audio Content Ecosystem. In: Proc. of the 17th International Semantic Web Conference (ISWC). URL: https://link.springer.com/chapter/10.1007%2F978-3-030-00668-6_2
Choi, K., Fazekas, G., Sandler, M., Cho, K. (2018). A Comparison of Audio Signal Preprocessing Methods for Deep Neural Networks on Music Tagging. In: Proc. of the 26th European Signal Processing Conference (EUSIPCO). URL: https://arxiv.org/abs/1709.01922
Choobbasti, A., Gholamian, M., Vaheb, A., and Safavi, S. (2018). JSPEECH: A Multi-lingual conversational speech corpus. In: Proc. of the Speech and Language Technology Workshop (SLT).
Favory, X., Fonseca E., Font F., Serra X. (2018). Facilitating the Manual Annotation of Sounds When Using Large Taxonomies. In: Proc. of the International Workshop on Semantic Audio and the Internet of Things (ISAI), in IEEE FRUCT Conference. URL: https://arxiv.org/abs/1811.10988
Favory, X., Serra, X. (2018). Multi Web Audio Sequencer: Collaborative Music Making. In: Proc. of the Web Audio Conference (WAC). URL: https://webaudioconf.com/papers/multi-web-audio-sequencer-collaborative-music-making.pdf
Ferraro, A., Bogdanov D., Choi K., Serra X. (2018). Using offline metrics and user behavior analysis to combine multiple systems for music recommendation. In: Proc. of the Conference on Recommender Systems (RecSys), REVEAL Workshop. URL: https://drive.google.com/open?id=1_CCCZiyy7J962hcYOO3pqEvtYnd5VPSp
Ferraro, A., Bogdanov D., Yoon J., Kim K. S., Serra X. (2018). Automatic playlist continuation using a hybrid recommender system combining features from text and audio. In: Proc. of the Conference on Recommender Systems (RecSys), Workshop on the RecSys Challenge. URL: https://dl.acm.org/citation.cfm?doid=3267471.3267473
Fonseca, E., Gong R., & Serra X. (2018). A Simple Fusion of Deep and Shallow Learning for Acoustic Scene Classification. In: Proc. of the Sound and Music Computing Conference. URL: https://arxiv.org/abs/1806.07506
Fonseca, E., Plakal M., Font F., Ellis D. P. W., Favory X., Pons J., Serra X. (2018). General-purpose Tagging of Freesound Audio with AudioSet Labels: Task Description, Dataset, and Baseline. In: Proc. of the Detection and Classification of Acoustic Scenes and Events Workshop (DCASE). URL: https://arxiv.org/abs/1807.09902
Oramas, S., Bogdanov D., & Porter A. (2018). MediaEval 2018 AcousticBrainz Genre Task: A baseline combining deep feature embeddings across datasets. In: MediaEval Workshop. URL: http://hdl.handle.net/10230/35745
Pearce, A., Brookes, T., Mason, R. (2018). Searching Sound-Effects using Timbre. In: BBC Sounds Amazing.
Safavi, S., Pearce, A., Wang, W., Plumbley, M. (2018). Predicting the perceived level of reverberation using machine learning. In: Proc. of the Asilomar Conference on Signals, Systems, & Computers.
Safavi, S., Wang, W., Plumbley, M., Choobbasti, A., and Fazekas, G. (2018). Predicting the Perceived Level of Reverberation using Features from Nonlinear Auditory Model. In: Proc. of the International Workshop on Semantic Audio and the Internet of Things (ISAI), in IEEE FRUCT Conference.
Sheng, D., Fazekas, G. (2018). Feature Design Using Audio Decomposition for Intelligent Control of the Dynamic Range Compressor. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). URL: http://www.mirlab.org/conference_papers/international_conference/ICASSP%202018/pdfs/0000621.pdf
Vaheb, A., Choobbasti, A., Mortazavi, S., and Safavi, S. (2018). Investigating Language Variability on the Performance of Speaker Verification Systems. In: Proc. of the 21st International Conference on Speech and Computer (SPECOM).
Xambó, A., Roma, G., Lerch, A., Barthet, M., Fazekas, G. (2018). Live Repurposing of Sounds: MIR Explorations with Personal and Crowd-Sourced Databases. In: Proc. of the New Interfaces for Musical Expression (NIME). URL: http://www.musicinformatics.gatech.edu/wp-content_nondefault/uploads/2018/04/Xambo-et-al.-2018-Live-Repurposing-of-Sounds-MIR-Explorations-with-.pdf
* 2019
------
Ferraro, A., Bogdanov D., Serra X. (2019). Skip prediction using boosting trees based on acoustic feature of tracks in sessions. In: Proc. of the 12th ACM International Conference on Web Search and Data Mining, 2019 WSDM Cup Workshop.
SUBMITTED, IN PRESS OR PLANNED PAPERS
*************************************
* accepted
------
Pearce, A., Brookes, T., Mason, R. (accepted). Modelling Timbral Hardness. In: Journal of Applied Sciences.
* planned
------
(planned). Audio Commons: Achievements and future perspectives (working title). In: -.
* submitted
------
Fonseca, E., Plakal M., Font F., Ellis D. P. W., Favory X., Serra X. (submitted). Learning Sound Event Classifiers from Web Audio with Noisy Labels. In: Proc. of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). URL: https://arxiv.org/abs/1901.01189
import csv

# CSV export of the publications spreadsheet.
AC_PUBLICATIONS_CSV_FILENAME = 'AC publications - Sheet1.csv'

# Spreadsheet column names, passed explicitly so rows can be accessed as dicts.
FIELDNAMES = 'Partner,Type,Title,Authors,Year of publications,Title of the Journal or equivalent,Number / date,Publisher,Place of publication,Relevant pages,DOI,ISSN or eSSN,Peer reviewed?,Open access type,Has private participation?,Has been added to the AudioCommons website?,License,Download link,'.split(',')

with open(AC_PUBLICATIONS_CSV_FILENAME) as csv_file:
    data = list(csv.DictReader(csv_file, fieldnames=FIELDNAMES))

data = data[2:]  # Skip first rows (header and note)
data = sorted(data, key=lambda x: '{0}-{1}'.format(x['Year of publications'], x['Authors']))  # Sort by year and author

# Published items have a numeric year; anything else ('accepted', 'planned', 'submitted')
# goes to the last list. Book chapters are kept out of the conference list so they do not
# appear in both sections.
data_journals_books = [pub for pub in data if pub['Type'] in ('Journal Paper', 'Book chapter') and pub['Year of publications'].isdigit()]
data_conferences = [pub for pub in data if pub['Type'] not in ('Journal Paper', 'Book chapter') and pub['Year of publications'].isdigit()]
data_submitted_planned = [pub for pub in data if not pub['Year of publications'].isdigit()]


def print_publication_for_deliverable(pub):
    # Format a single entry as "Authors (Year). Title. In: Venue. URL: ..."
    print('{0} ({1}). {2}. In: {3}.{4}\n'.format(
        pub['Authors'],
        pub['Year of publications'],
        pub['Title'],
        pub['Title of the Journal or equivalent'],
        ' URL: {0}'.format(pub['Download link']) if pub['Download link'] != '' else ''))


def print_list(title, pubs):
    # Print a section title underlined with asterisks, then entries grouped by year.
    print('\n\n{0}\n{1}\n'.format(title, '*' * len(title)))
    current_year = None
    for pub in pubs:
        if pub['Year of publications'] != current_year:
            print('* {0}\n------\n'.format(pub['Year of publications']))
            current_year = pub['Year of publications']
        print_publication_for_deliverable(pub)


print_list('JOURNALS AND BOOK CHAPTERS', data_journals_books)
print_list('CONFERENCE PAPERS', data_conferences)
print_list('SUBMITTED, IN PRESS OR PLANNED PAPERS', data_submitted_planned)
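
For reference, a minimal sketch of how one spreadsheet row turns into a list entry, assuming the script above has been run (or the snippet is appended to it) so that print_publication_for_deliverable is defined. The row is hand-built from one of the publications listed above; the variable name example_pub is hypothetical.

# Minimal sketch (hypothetical example row): the dict keys mirror the CSV
# column names the script reads, and the call reproduces one entry of the list above.
example_pub = {
    'Authors': 'Font, F., Serra, X.',
    'Year of publications': '2015',
    'Title': 'The Audio Commons Initiative',
    'Title of the Journal or equivalent': 'Proc. of the International Society for Music Information Retrieval Conference (ISMIR, late-breaking demo)',
    'Download link': 'https://www.audiocommons.org/assets/files/audiocommons_ismir_2015.pdf',
}
print_publication_for_deliverable(example_pub)
# Prints:
# Font, F., Serra, X. (2015). The Audio Commons Initiative. In: Proc. of the International Society for Music Information Retrieval Conference (ISMIR, late-breaking demo). URL: https://www.audiocommons.org/assets/files/audiocommons_ismir_2015.pdf

Running the full script with the exported CSV in the working directory prints the three sections to standard output, so the result can be redirected into a text file (e.g. python generate_publications.py > publications.txt, where the script filename is assumed).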