Professor Jon Barker
PhD
School of Computer Science
Personal Chair
School Ethics Lead
Member of the Speech and Hearing (SpandH) research group


Full contact details
School of Computer Science
Regent Court (DCS)
211 Portobello
Sheffield
S1 4DP
Profile
Professor Jon Barker is a member of the Speech and Hearing Research Group. He holds a first degree in Electrical and Information Sciences from the University of Cambridge, UK. After receiving his PhD from the University of Sheffield in 1999, he worked at GIPSA-lab in Grenoble and at the IDIAP Research Institute in Switzerland before returning to Sheffield, where he has held a permanent post since 2002.
His research interests lie in noise-robust speech processing. Key application areas include distant-microphone speech recognition, speech intelligibility prediction and improved speech processing for hearing-aid users.
Research interests
Professor Barker’s research interests centre on machine listening and the computational modelling of human hearing. A recent focus has been modelling speech intelligibility, i.e. predicting whether or not a speech signal will be intelligible to a given listener.
This understanding will help produce better signal processing for applications such as hearing aids and cochlear implants. Another strand of his work takes insights gained from human auditory perception and uses them to engineer robust automatic speech processing systems.
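To illustrate what is meant by intelligibility prediction, here is a minimal sketch (not Professor Barker's actual models; the function names are hypothetical). It correlates the short-time energy envelopes of a clean reference and a degraded signal, a greatly simplified version of intrusive metrics such as STOI, which compare one-third-octave band envelopes.

```python
# Toy intrusive intelligibility metric: compare the short-time energy
# envelope of degraded speech against a clean reference. For
# illustration only; real metrics (e.g. STOI) use auditory filterbanks.
import numpy as np

def envelope(signal, frame_len=256, hop=128):
    """Short-time RMS energy envelope of a 1-D signal."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    return np.array([
        np.sqrt(np.mean(signal[i * hop:i * hop + frame_len] ** 2))
        for i in range(n_frames)
    ])

def toy_intelligibility_score(clean, degraded):
    """Correlation of energy envelopes; higher suggests more intelligible."""
    e_clean = envelope(clean)
    e_deg = envelope(degraded[:len(clean)])
    n = min(len(e_clean), len(e_deg))
    return float(np.corrcoef(e_clean[:n], e_deg[:n])[0, 1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = 16000
    t = np.arange(fs) / fs
    # Amplitude-modulated tone as a stand-in for a speech-like signal.
    clean = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
    mild = clean + 0.1 * rng.standard_normal(len(clean))
    heavy = clean + 0.8 * rng.standard_normal(len(clean))
    print(f"score (mild noise):  {toy_intelligibility_score(clean, mild):.3f}")
    print(f"score (heavy noise): {toy_intelligibility_score(clean, heavy):.3f}")
```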
Publications
Journal articles
Chapters
Conference proceedings papers
Posters
Theses / Dissertations
Other
Preprints
Grants
Current grants
- EnhanceMusic: , EPSRC, 06/2022 to 11/2026, £377,568, as PI
- , EPSRC, 10/2019 to 10/2025, £480,416, as PI
- , EPSRC, 04/2019 to 09/2027, £5,508,850, as Co-PI
Previous grants
- TAPAS: , EC H2020, 11/2017 to 06/2022, £468,000, as Co-PI
- , EPSRC, 03/2016 to 09/2019, £974,161, as Co-PI
- Deep learning of articulatory-based representations of dysarthric speech, Industrial, 02/2016 to 01/2017, £46,624, as Co-PI
- , EPSRC, 10/2015 to 09/2018, £125,493, as PI
- INSPIRE: Investigating Speech In Real Environments, EC FP7, 01/2012 to 12/2015, £308,473, as PI
- EPSRC, 07/2010 to 09/2010, £9,978, as PI
- CHIME: , EPSRC, 06/2009 to 05/2012, £326,245, as PI
- Audio-Visual Speech Recognition in the Presence of Non-Stationary Noise, EPSRC, 02/2005 to 05/2007, £116,853, as PI
Professional activities and memberships
- Member of the Speech and Hearing (SpandH) research group
- Co-founder of the CHiME series of International Workshops and Robust Speech Recognition Evaluations, 2011 onwards.
- EURASIP Best Paper Award, 2009, for the best paper published in Speech Communication during 2005.
- ISCA Best Paper Award, 2008, for the best paper published in Speech Communication, 2005-2007.