Abstract
This paper presents the design, development, and analysis of WaveCatcher, a real-time audio visualization system I created entirely in Python using PyQt5. The system captures live audio input from a microphone and provides simultaneous visual feedback through multiple signal representations, including a time-domain waveform, a frequency-domain FFT spectrum, a scrolling spectrogram, harmonic peak visualization, dynamic range, amplitude envelope, spectral centroid, and spectral bandwidth. These features offer insight not just into the raw structure of sound, but into how humans perceive its qualities, like timbre! This terminology may seem intimidating (it certainly was when I first began learning it), but I'll explain all of these systems in depth later in the paper!
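To give a feel for the perceptual features listed above, here is a minimal NumPy sketch of spectral centroid and bandwidth. This is not WaveCatcher's actual code, and the function name is my own; it is just the standard magnitude-weighted definitions of these two features.

```python
import numpy as np

def spectral_centroid_bandwidth(frame, sample_rate):
    """Magnitude-weighted mean frequency (centroid) and spread (bandwidth), in Hz."""
    magnitudes = np.abs(np.fft.rfft(frame))  # magnitude spectrum of the frame
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    centroid = np.sum(freqs * magnitudes) / np.sum(magnitudes)
    bandwidth = np.sqrt(np.sum(magnitudes * (freqs - centroid) ** 2)
                        / np.sum(magnitudes))
    return centroid, bandwidth

# A pure 440 Hz sine over exactly one second: its energy sits in a single
# frequency bin, so the centroid lands at 440 Hz and the bandwidth stays tiny.
sr = 44100
t = np.arange(sr) / sr
c, b = spectral_centroid_bandwidth(np.sin(2 * np.pi * 440 * t), sr)
```

A bright, noisy sound pulls the centroid upward and widens the bandwidth, while a pure tone keeps both pinned near the fundamental; that difference is part of what we hear as timbre.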
Unlike pre-made tools or closed-source analyzers, this project was built from scratch and required a hands-on understanding of both the physics of sound and the principles of digital signal processing (DSP). The system showcases how audio can be transformed from subtle air vibrations into visual data by applying Fourier transforms, windowing techniques, and feature extraction, including harmonic analysis. As part of the analysis, I recorded and compared various real-world sounds, including a tuning fork, a flute, human singing, a guitar, and human speech. These comparisons reveal how harmonic structure and spectral features differ across sound sources, and how those differences map to what we hear as timbre!
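As a rough illustration of the vibration-to-visual-data step described above (a sketch under my own naming, not the paper's actual implementation), a single frame of microphone audio can be windowed and Fourier-transformed like so:

```python
import numpy as np

def frame_spectrum(frame, sample_rate):
    """Hann-windowed FFT of one audio frame -> (frequencies, magnitudes)."""
    window = np.hanning(len(frame))  # taper the frame edges to reduce spectral leakage
    magnitudes = np.abs(np.fft.rfft(frame * window))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return freqs, magnitudes

# Stand-in for one microphone frame: a 2048-sample snippet of a 440 Hz tone
sr = 44100
t = np.arange(2048) / sr
freqs, mags = frame_spectrum(np.sin(2 * np.pi * 440 * t), sr)
peak_hz = freqs[np.argmax(mags)]  # strongest bin falls within one bin width of 440 Hz
```

Plotting `mags` against `freqs` for each incoming frame gives the FFT spectrum view, and stacking those spectra over time gives the scrolling spectrogram.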
Beyond its technical goals, this project reflects a deep personal interest in connecting music, physics, and technology; it serves not only as an audio engineering tool, but also as an educational platform for exploring sound as a physical, mathematical, and perceptual marvel. By making these invisible properties of sound visible and interactive, the system encourages deeper curiosity about how the world works, both scientifically and artistically!
Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.
Recommended Citation
Roach, Aidan (2026) "From Vibration to Visualization: Building a Real-Time Audio Visualization System for Learning and Exploration using PyQt5," The Transdisciplinary STEAM+ Journal: Vol. 6: Iss. 1, Article 8.
Available at: https://scholarship.claremont.edu/steam/vol6/iss1/8
Fig 1: Tuning Fork Playing A=440
Fig 2: Flute Playing A=440
flute playing A 440.jpg (178 kB)
Fig 3: Person Singing A=440
person singing A 226.jpg (192 kB)
Fig 4: Guitar Strumming
strumming guitar.mp4 (9774 kB)
Fig 5: Person Talking
person talking.mp4 (10137 kB)
Included in
Data Science Commons, Music Education Commons, Other Physics Commons
Author/Artist Bio
My name is Aidan Roach. I am a high school student in Grand Prairie, Texas, and I'm looking to get my first paper published! My research so far has been related to physics, math, and music, so finding this journal was very exciting! I hope my work will be considered for publication.