Collaborations:
- Point Blank Music School
- Huddersfield University
- St George's Hall
- Bristol Beacons
- nu-desine
- JukeDeck
- TENOR Network
Research interests:
- Music composition
- Music technology
- User experience (UX) software design
- Human-computer interaction (HCI)
- End-user digital media technologies and community e-enabling
- Digital literacies
- Pedagogic research
- Games and computer graphics
- Game audio
- Algorithmic music composition
- Creativity
Contact details:
- Qualifications: BEng (York), MPhil (TCD), PhD (Cambridge)
- Position: Senior Lecturer in Music Technology
- Department: FET - Computer Science and Creative Technologies
- Telephone: +44 117 32 83011
- Email: Chris.Nash@uwe.ac.uk
About me
Chris Nash is a professional software developer, composer, educator, and researcher, currently Senior Lecturer in Music Technology (Software Development for Audio, Sound, and Music) at UWE Bristol, where he leads the computing pathway for the BSc(Hons) music technology courses.
He completed his PhD, “Supporting Virtuosity and Flow in Computer Music”, in 2011 at the University of Cambridge (Computer Laboratory and Centre for Music & Science, under Profs. Alan Blackwell and Ian Cross), developing new techniques for modelling and building systems for music production and creativity. Supported by a two-year study of over 1,000 DAW users, the work aimed at better support for flow, expertise, virtuosity, creativity, and liveness – with findings that have influenced subsequent industry practice.
Within academia, he remains one of the few researchers focused on consumer, professional, and end-user audio technologies, actively practising end-to-end development of user-ready technologies designed for mainstream, real-world use.
With extensive expertise in DSP, C++, and HCI for audio and music applications, he has taught many of the
industry’s current generation of audio developers, contributing to textbooks on audio DSP and UI/UX design.
In his own professional practice, he has led technology projects for Steinberg/Yamaha, BT, the BBC, and numerous Bristol- and London-based start-ups, including nu:desine and JukeDeck, while also maintaining his own projects under nash.audio, including the award-winning reViSiT (a cross-platform, tracker-inspired ‘DAW-in-a-plugin’) and Manhattan (a hybrid DAW and music programming language that bridges coding and traditional music workflows to make procedural music more accessible and relevant to end-users, musicians, and interactive audio practitioners).
He has significant experience and specific knowledge of audio application architectures (in C++), synthesis and
effects processing (including virtual analogue, physical modelling, reverb and filter design), interaction design (and
testing), music notation (score rendering), realtime collaboration (multi-user sharing), hardware controllers and
embedded devices, VR and spatial audio, MIR and big data analysis, applied AI (machine vision, pitch detection),
web development (client- and server-side), documentation and support systems, and tools for audio programmers
(including language psychologies and design, and a “mini-plugin” open framework for C++ synthesis and effects).
Outside the box, he has written music for TV, radio, live performance, and installations, including several major public events and projects for the BBC, often supported by his own technologies – for example, BBC Music Day 2018, which coupled Manhattan with machine vision to generate live, crowd-driven music throughout the day (in classical or trip-hop styles), played live on the platform of Bristol station and driven by the live movement of people, trains, etc. For further details of specific projects, see http://nash.audio.
Area of expertise
Music software development, UI/UX design and theory, music composition, music content analysis, end-user computing, digital creativity research
Publications