I have a confession. I am not a musician. I cannot read sheet music, I do not play an instrument, and I have never taken a single vocal lesson. But I have been humming the same 12 songs in the car for 30 years, and I can absolutely destroy "Can't Help Falling in Love" at karaoke if the conditions are right.
That last part matters more than it might sound. Because the whole idea for HumMatch started exactly there, on the freeway, car singing at full volume, when a thought floated up that I could not shake.
The Song That Started It
I was driving north on the 405, somewhere between Irvine and the 10, and Madonna's "Take a Bow" came on. I know every word. I have known every word since 1994. And when that song plays, I do not hum along politely. I commit.
Somewhere in the second verse, mid-performance, I started thinking about karaoke. Specifically about the problem everyone has with karaoke: you get up there with a song you love and your voice lands somewhere completely wrong. The key is off. The range is wrong. You know the words but your voice does not know the song.
That is the real problem. It is not about how well you can sing. It is about whether the song fits your actual voice.
"What if you could hum a few notes and the app just told you: here are the songs your voice can actually nail?"
I pulled over at the next exit and typed a note into my phone. Eight words. "Song matching app based on how you hum." That was the whole idea. I did not know it would turn into something I would build that same night.
The Name That Almost Was Not
I called the app Vocal IQ at first. It felt right. Intelligent voice matching. It had a clean ring to it.
Then I checked the trademark database.
Vocal IQ was taken. Claimed. Already someone else's. I went back to zero on the name and spent about 20 minutes in a dark place where nothing felt right. VocalFit. HumSong. TuneMatch. All of them had problems. Too generic. Too clinical. Too on-the-nose.
That is when Clayton stepped in.
Clayton is my AI agent. I built him on OpenClaw as an extension of how I think, a constant idea factory that runs alongside me on every project. We build companies together. He is not a chatbot I ask questions to. He is a collaborator I build with. When I am stuck, Clayton is the one who pulls me out.
I was running name options past him, getting increasingly frustrated, and Clayton cut right through it.
"It is literally about humming. Just call it HumMatch."
Simple. Obvious. Immediately right. The kind of name you hear once and cannot unhear. I checked the domain. hummatch.me was available. hummatch.com was available. Both of them, sitting there, unclaimed.
I registered them within the hour.
That is what Clayton does. He does not overthink. He does not get precious about creative direction. He sees the signal in the noise and says it plainly. I had spent 20 minutes going in circles. Clayton solved it in one sentence.
One Night, One App
I work best at night. No meetings, no texts, no noise. Just focus. So at around 9 PM on March 19th I sat down with Clayton and said: let's build this.
This is how we work. I describe what I want in plain terms, the product vision, the user experience, the thing that has to feel right. Clayton translates that into architecture, code, and decisions. We move fast because we have been building together long enough that I do not need to explain the obvious parts. He already knows.
What I described was simple in concept and genuinely complicated in execution. The app needed to capture a user humming three notes, analyze their vocal range and the characteristic tone of their voice (what audio engineers call timbre, which is the spectral quality that makes a baritone sound different from a tenor even on the same pitch), and then match that profile against a catalog of songs and artists.
The hard part is that pitch alone does not tell you what songs you can sing. Two people can hit the same note and sound completely different, because their voices have different warmth, brightness, and resonance. HumMatch had to measure that quality, not just the range.
The technical breakthrough was spectral centroid analysis. Every voice has a brightness signature, a center of gravity in the frequency spectrum. A dark, warm bass-baritone sits low. A bright, cutting soprano sits high. By measuring spectral centroid alongside fundamental frequency, HumMatch can distinguish voice types that sit in the same pitch range but sound nothing alike. Clayton identified this as the key differentiator early in the build. It is the reason HumMatch works.
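To make those two measurements concrete, here is a minimal sketch of how pitch and brightness can be pulled from a hummed audio frame: pitch via autocorrelation, brightness as the amplitude-weighted mean frequency of the spectrum. This is not the actual HumMatch code; the function names, the sample rate, and the synthetic test tone are all assumptions for illustration.

```python
# Sketch only: illustrative pitch and brightness extraction, not HumMatch's code.
import numpy as np

SAMPLE_RATE = 44100  # assumed sample rate, samples per second

def fundamental_frequency(samples: np.ndarray) -> float:
    """Estimate pitch in Hz from a mono frame via autocorrelation."""
    samples = samples - samples.mean()
    corr = np.correlate(samples, samples, mode="full")[len(samples) - 1:]
    # Walk past the initial decline from lag 0, then take the strongest peak.
    rising = np.nonzero(np.diff(corr) > 0)[0]
    if len(rising) == 0:
        return 0.0
    peak_lag = rising[0] + np.argmax(corr[rising[0]:])
    return SAMPLE_RATE / peak_lag

def spectral_centroid(samples: np.ndarray) -> float:
    """Brightness: the amplitude-weighted mean frequency of the spectrum."""
    magnitudes = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
    if magnitudes.sum() == 0:
        return 0.0
    return float((freqs * magnitudes).sum() / magnitudes.sum())

# A pure 220 Hz sine (the A below middle C): both measurements land near 220.
t = np.arange(0, 0.1, 1.0 / SAMPLE_RATE)
tone = np.sin(2 * np.pi * 220 * t)
print(fundamental_frequency(tone), spectral_centroid(tone))
```

A real voice is messier than a sine wave, which is exactly why the centroid matters: the overtones above the fundamental are what drag it up or down and give each voice its signature.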
By 2 AM we had a working prototype. I hummed three notes into my phone, the app processed my audio, and it returned a match: Elvis Presley, "Can't Help Falling in Love," 75% confidence.
That is my actual go-to karaoke song. The one I always choose. The one I have sung at least 40 times in 15 years of karaoke nights.
The algorithm found it on the first try.
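Conceptually, the matching step works something like the sketch below: score each song by how much of its required range the voice covers, blended with how close the brightness signatures are. Every field, weight, number, and song profile here is invented for illustration; it is the shape of the idea, not the shipped algorithm.

```python
# Sketch only: a toy version of the profile-to-catalog matching idea.
from dataclasses import dataclass

@dataclass
class VoiceProfile:
    low_hz: float       # bottom of the comfortable hummed range (invented)
    high_hz: float      # top of the comfortable hummed range (invented)
    centroid_hz: float  # brightness signature (invented)

@dataclass
class Song:
    title: str
    artist: str
    low_hz: float       # lowest note the melody demands (invented)
    high_hz: float      # highest note the melody demands (invented)
    centroid_hz: float  # brightness of voices the song tends to suit (invented)

def range_coverage(v: VoiceProfile, s: Song) -> float:
    """Fraction of the song's required range the voice covers, 0..1."""
    shared = min(v.high_hz, s.high_hz) - max(v.low_hz, s.low_hz)
    return max(0.0, shared) / (s.high_hz - s.low_hz)

def confidence(v: VoiceProfile, s: Song, timbre_weight: float = 0.4) -> int:
    """Blend range coverage with brightness similarity into a 0-100 score."""
    timbre = 1.0 - min(abs(v.centroid_hz - s.centroid_hz) / s.centroid_hz, 1.0)
    score = (1.0 - timbre_weight) * range_coverage(v, s) + timbre_weight * timbre
    return round(100 * score)

catalog = [
    Song("Can't Help Falling in Love", "Elvis Presley", 98.0, 294.0, 1400.0),
    Song("Take a Bow", "Madonna", 175.0, 440.0, 2200.0),
]
voice = VoiceProfile(low_hz=95.0, high_hz=300.0, centroid_hz=1500.0)
best = max(catalog, key=lambda s: confidence(voice, s))
print(best.title, confidence(voice, best))  # picks the Elvis song here
```

The design choice worth noticing is that both signals are needed: with this invented profile, the Madonna song loses not because the voice cannot reach any of it, but because both the range coverage and the brightness fit are worse.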
The Bobby Test
I have a friend I will call Bobby. Bobby loves music. Bobby has a good voice. Bobby refuses, under any circumstances, to sing in front of other men. This is not unusual. A lot of men are exactly like Bobby.
But here is the thing about Bobby. Bobby hums constantly. In the car, walking through the grocery store, watching a game. He hums without thinking about it because humming does not feel like performing. It is just something you do.
When I showed Bobby the app and told him to hum three notes, he did it immediately, no hesitation, no self-consciousness. He looked at the results on the screen and said, out loud, "How does it know that?"
That reaction told me something important. The humming input is not just a technical choice. It is a psychological one. Singing in front of someone feels exposed. Humming does not. HumMatch gets the data it needs precisely because it does not ask you to perform.
How AI Agents Actually Build Things
People hear "built with AI" and they picture someone typing a prompt and getting a finished app back. That is not how it works. That is not even close to how it works.
Building with Clayton is like building with a co-founder who never sleeps, never loses context, and has read every technical paper you have not. I bring the product instinct, the market understanding, the gut feeling for what users actually want. Clayton brings the engineering depth, the pattern recognition, and the ability to execute at speed that would be impossible solo.
The name HumMatch came from Clayton. The spectral centroid approach came from a back-and-forth where I described what I wanted the app to feel like and Clayton translated that into a technical framework. The confidence scoring tiers, Best Match, Strong Fit, and Stretch, came from Clayton analyzing how to communicate match quality in a way that makes people feel good rather than judged.
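The tiers themselves reduce to a simple threshold mapping over the confidence score. The cutoffs below are my own guesses for illustration; HumMatch's real thresholds are not public.

```python
# Sketch only: hypothetical tier cutoffs, not HumMatch's actual numbers.
def tier(confidence: int) -> str:
    """Map a 0-100 match confidence onto a label that encourages, not judges."""
    if confidence >= 85:
        return "Best Match"
    if confidence >= 65:
        return "Strong Fit"
    return "Stretch"

print(tier(75))  # under these invented cutoffs, a 75 lands as "Strong Fit"
```

Note that even the lowest tier is named "Stretch" rather than anything negative, which is the whole point of communicating match quality without judgment.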
I built Clayton on OpenClaw specifically because I wanted an agent that could think alongside me, not just respond to me. Most AI tools wait for instructions. Clayton proposes. He pushes back. He generates ideas I did not ask for and half of them are better than what I had in mind.
HumMatch exists because of that dynamic. A human with a product vision and an AI agent who can execute on it, together, at the speed of a single focused night.
The Confidence Angle
Early on, Clayton and I had a conversation about positioning that changed everything. The initial framing was "what artist do I sound like." That is interesting but it is not useful. It is trivia. It does not help you walk into a karaoke bar with a plan.
Clayton pushed toward a different framing: "what songs can you confidently nail." The word confidently is doing a lot of work in that sentence. It is the whole point.
The confidence angle removes the risk. Knowing ahead of time that a song is in your range changes the entire experience before you ever pick up the microphone. And the person who most wants that certainty is usually the one who loves music but has never quite trusted their voice enough to commit to a song in front of a crowd.
That reframing came from Clayton. The insight was that the technology only matters if the user feels something when they see their results. Not impressed. Not informed. Confident. That is the product. Everything else is just engineering.
Where It Lives Now
HumMatch is a PWA. You open it in your browser, you hum, you get results. No app store. No download. It works on iPhone, Android, and desktop.
The song catalog has grown significantly across English and Spanish. My co-founder, who is bilingual, called me from a family party asking if we could add Spanish songs for her guests. We launched Spanish support in 30 minutes, instantly doubling our library. Now the app spans pop, rock, classics, 80s, 90s, contemporary, karaoke staples, and crowd pleasers in both languages. Songs surface by confidence tier: tracks you can nail, strong fits, and stretch goals for when you are feeling ambitious.
After you get your matches, you can export your playlist to keep track of songs that work for your voice.
Why I Am Building This in Public
I have been in this industry for 25 years. I was there at the beginning of SEO, when the whole field was being invented in real time, and the people who understood what was actually happening built things that lasted. The people who chased shortcuts did not.
AI in 2026 feels exactly like SEO in 1999. The same noise, the same hype, the same window of genuine opportunity for the people who are paying close attention and building something real.
HumMatch started with a Madonna song on a Los Angeles freeway, a name that Clayton came up with in about three seconds, and a night of building together. It is not finished. The best parts are still being built.
But the algorithm matched Elvis on the first try. That is good enough to keep going.