Ashish Srivastava - 7 min

Lab Notes Proposal - Crowdsourcing the Perfect EQ Presets

I remember as a kid, I downloaded images of different EQ presets from Reddit and 9gag and tried them out in the iTunes equalizer to see how the sound changed. In an EQ (audio equalizer), you're adjusting the gain of different frequencies in the audio spectrum, basically boosting or lowering different frequency bands to shape the sound. Some presets made songs sound drastically different; others felt imperceptible. I probably couldn't tell because I was using cheap Skullcandy earphones at the time. But it was still a hell of a lot of fun playing around, trying to find the "right" setting.

Figure 1: A screenshot of the Spotify Equalizer (source: Medium)

EQ settings are weirdly personal and hardware-dependent. Your friend's bass-maxed car setup that rattles the windows might sound perfect to them, but you'd never copy those exact settings. But people share their EQ settings anyway. There are forum threads like this one on Gearspace with hundreds of screenshots of people's favorite configurations. People compare notes: we're clearly curious about this, even if we know the settings won't translate perfectly. There's something interesting about seeing someone else's audio fingerprint.

The Problem with Sharing EQ

Most music platforms (Spotify, Apple Music, even the old iPod Classic) give you EQ controls because headphones vary wildly in their frequency response. What sounds balanced on AirPods might be muddy on Sony WH-1000s. Between hardware differences and personal preference, a setting that works for you probably won't work for me.

But what if instead of sharing single configurations, we could see the distribution of people's configurations? Not "here's my perfect preset" but "here's where 1,000 people with your headphones typically boost the bass." Or, from another angle, "for this preset, what does the landscape of distributions look like across all headphones?"

My Proposal

So I've been thinking: what if you could crowdsource EQ presets in a way that actually shows you the distribution of preferences?

What I'm picturing is a web tool with standard EQ sliders (let's say 5 bands covering 60Hz to 16kHz), with density plots alongside each slider showing where other users cluster their settings. Pick your headphone model, pick a preset, listen to reference tracks that exercise different frequency ranges, adjust until it sounds right, and save anonymously. Then see where you fall on the distribution.

If you're using AirPods Pro and everyone with those headphones is boosting 3-4dB around 60Hz, maybe there's something to that. Or maybe you discover you're an outlier who likes way more treble than the median user. Either way, you're learning something about your own perception.

This only works if everyone's adjusting against the same material. You need tracks that stress-test different parts of the spectrum: something with sub-bass you can feel, something with crisp vocals in the midrange, something with shimmering hi-hats up top.

It could also double as an audio engineering learning tool: drop in a link to any YouTube video or song, adjust the EQ parameters live, and hear how the sound changes in real time. You'd start to develop an ear for what different frequency bands actually do, rather than just guessing.

A Quick Aside on Proposals

If I were submitting this to a funding committee, this is where I'd insert a literature review (citations to auditory perception research, existing EQ standardization efforts, maybe some HCI papers on collaborative filtering interfaces). I'd need to argue why this specific approach is novel, project user growth curves, outline "broader impacts."

But this is just a doodle in my head around a really interesting idea I'm curious about, so I'm not going to do any of that. I'm just going to think about how to build it.

Technical Sketch

Frontend: The Web Audio API handles the real-time EQ adjustments. The browser does all the DSP work locally, so there's no round-trip to a server. Each frequency band gets a biquad filter in peaking mode:

const audioContext = new AudioContext();
const filter = audioContext.createBiquadFilter();
filter.type = 'peaking';
filter.frequency.value = 250;   // Hz
filter.Q.value = 1.0;           // bandwidth
filter.gain.value = 3.0;        // dB boost/cut

Chain five of these together (say 60Hz, 250Hz, 1kHz, 4kHz, 12kHz), connect them to your audio source, and you've got your working EQ. When the user drags a slider, just update the corresponding filter's gain parameter.
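Here's a minimal sketch of that chain, assuming the page has an <audio> element to use as the source (the element selector, band centers, and function names are just placeholders):

const BANDS = [60, 250, 1000, 4000, 12000]; // Hz, the five bands above
const ctx = new AudioContext();
const source = ctx.createMediaElementSource(document.querySelector('audio'));

const filters = BANDS.map((freq) => {
  const f = ctx.createBiquadFilter();
  f.type = 'peaking';
  f.frequency.value = freq;
  f.Q.value = 1.0;
  f.gain.value = 0; // start flat
  return f;
});

// source -> 60Hz -> 250Hz -> 1kHz -> 4kHz -> 12kHz -> speakers
[source, ...filters].reduce((prev, next) => prev.connect(next)).connect(ctx.destination);

// Wire each slider's input event to its band
function onSliderChange(bandIndex, gainDb) {
  filters[bandIndex].gain.value = gainDb;
}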

Backend: An AWS Lambda function behind an API endpoint writes configurations to DynamoDB. Each submission could be stored as a JSON blob: `{headphone_model: "AirPods Pro", bands: [3.2, -1.5, 0.8, 2.1, -0.3]}`. No auth required if it's truly anonymous; just generate a client-side unique ID. DynamoDB works well here since we're basically collecting data points in a 5-dimensional space.
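A rough sketch of that handler in Node.js, using the AWS SDK v3 document client; the table name, item fields, and validation limits are all placeholder assumptions:

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, PutCommand } from '@aws-sdk/lib-dynamodb';

const db = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler = async (event) => {
  const { client_id, headphone_model, bands } = JSON.parse(event.body);

  // Sanity check: exactly five gain values within a plausible dB range
  if (!Array.isArray(bands) || bands.length !== 5 || bands.some((g) => Math.abs(g) > 12)) {
    return { statusCode: 400, body: 'invalid submission' };
  }

  await db.send(new PutCommand({
    TableName: 'eq-presets', // assumed table name
    Item: {
      id: `${client_id}#${Date.now()}`, // anonymous client ID plus timestamp as the key
      headphone_model,
      bands,
    },
  }));

  return { statusCode: 200, body: 'ok' };
};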

Visualization: Density plots per frequency band, filtered by headphone model. As a first pass, I think histograms with a fixed bin width would work fine. D3.js if I'm feeling ambitious, the plain Canvas API if I want it fast, or I could do it in Python with something like Dash.
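The binning itself is trivial; something like this per band, run over submissions filtered to the selected headphone model (the bin width and gain range are assumptions):

// Count submitted gains for one band into fixed-width bins
function histogram(gains, binWidth = 1, min = -12, max = 12) {
  const bins = new Array(Math.ceil((max - min) / binWidth)).fill(0);
  for (const g of gains) {
    const i = Math.min(bins.length - 1, Math.max(0, Math.floor((g - min) / binWidth)));
    bins[i] += 1;
  }
  return bins; // draw as bars next to the slider, e.g. with Canvas fillRect
}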

What I'm Actually Building

This isn't really a formal research project with milestones and deliverables (yet!), it's me sketching a plan out loud for how I can answer a question I'm curious about. It's the same thing as my Image EQ Doodle: I wondered what would happen if you could adjust spatial frequencies like audio frequencies, so I built it to find out. This is the audio version of that curiosity: what happens when you aggregate everyone's listening preferences instead of just sharing individual presets?

Maybe the data will reveal something interesting about auditory perception. Maybe it'll just confirm that everyone boosts the bass and calls it a day. Either way, I'll learn something, and hopefully it'll be a useful tool for people trying to figure out why their music sounds muddy, and for me as I learn EQing for audio engineering.

No committee needs to approve this. No grant panel needs to decide if it's "impactful enough." I want to build it because the question is interesting and I have the skills to answer it. That's the point.

That being said, if you want to chat about it or help build it, hit my line! My email's at the bottom of the page. If you think this is a waste of time... well, a lot of great projects usually are, until they're not. Only one way to find out!