About me
This is a page not in the main menu
Published in GLOBECOM 2017, 2017
Recommended citation: S. Hu, A. Berg, X. Li and F. Rusek, "Improving the Performance of OTDOA Based Positioning in NB-IoT Systems," GLOBECOM 2017 - 2017 IEEE Global Communications Conference, Singapore, 2017, pp. 1-7, doi: 10.1109/GLOCOM.2017.8254510. https://ieeexplore.ieee.org/abstract/document/8254510/
Published in ICPR 2020, 2020
Code: https://github.com/axeber01/dold
Recommended citation: A. Berg, M. Oskarsson, and M. O'Connor. "Deep Ordinal Regression with Label Diversity." arXiv preprint arXiv:2006.15864 (2020). https://arxiv.org/abs/2006.15864
Published in Interspeech 2021, 2021
In this paper, we apply the popular Transformer architecture to keyword spotting, where the task is to classify short audio snippets into different categories. By partitioning the audio spectrogram into different time windows and applying self-attention, we show that the Keyword Transformer outperforms other network architectures while maintaining a low latency at inference time.
Recommended citation: Berg, A., O’Connor, M., Cruz, M.T. (2021) Keyword Transformer: A Self-Attention Model for Keyword Spotting. Proc. Interspeech 2021, 4249-4253, doi: 10.21437/Interspeech.2021-1286 https://www.isca-speech.org/archive/pdfs/interspeech_2021/berg21_interspeech.pdf
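The core idea described above, treating spectrogram time windows as tokens and attending over them, can be sketched in plain NumPy. This is a minimal single-head illustration only; the patch size, embedding dimension, and random weights are assumptions for the example, not the paper's actual configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (num_patches, d) token embeddings
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # scaled dot-product
    return softmax(scores) @ V                # weighted sum of values

rng = np.random.default_rng(0)

# Hypothetical input: a log-mel spectrogram with 40 frequency bins
# and 98 time frames (shapes chosen for illustration).
spec = rng.standard_normal((40, 98))

# Partition along time: here each patch is one spectrogram column,
# giving one token per time window.
patches = spec.T                              # (98, 40)

# Linear projection of each patch into a d-dimensional token.
d = 64
W_embed = rng.standard_normal((40, d)) * 0.02
tokens = patches @ W_embed                    # (98, 64)

# Single-head self-attention over the time-window tokens.
Wq = rng.standard_normal((d, d)) * 0.02
Wk = rng.standard_normal((d, d)) * 0.02
Wv = rng.standard_normal((d, d)) * 0.02
out = self_attention(tokens, Wq, Wk, Wv)
print(out.shape)                              # (98, 64)
```

A full keyword-spotting model would stack several such attention layers with feed-forward blocks and pool the tokens into a class prediction; the sketch only shows the patching-plus-attention step.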
Published in ICPR 2022, 2022
Code: https://github.com/axeber01/point-tnt
Recommended citation: Berg, Axel, Magnus Oskarsson, and Mark O’Connor. "Points to patches: Enabling the use of self-attention for 3d shape recognition." 2022 26th International Conference on Pattern Recognition (ICPR). IEEE, 2022. https://arxiv.org/pdf/2204.03957
Published in Interspeech 2022, 2022
Code: https://github.com/axeber01/ngcc
Recommended citation: Berg, A., O'Connor, M., Åström, K., Oskarsson, M. (2022) Extending GCC-PHAT using Shift Equivariant Neural Networks. Proc. Interspeech 2022, 1791-1795, doi: 10.21437/Interspeech.2022-524 https://www.isca-speech.org/archive/pdfs/interspeech_2022/berg22_interspeech.pdf