
Recently, a music tech company asked Ms Mavy, a notable Afrohouse DJ/producer, if she would appear in an ad for their new AI product. There was a problem, though: the product was an Amapiano music plugin that would directly compete with Mavy’s own sample library business, Afroplug.
“It was crazy,” says Mavy. “It was like I was a child – you want my face, and you’re taking my business.”
Mavy is calling from Belgium, where she lives; she has just dropped one of her sons off at school and is sitting in her car in a parking lot with her other son, who’s snacking in the back seat. Born in France and with roots in Cameroon and Guadeloupe, Mavy (born Maëva Nkouwap) built Afroplug after seeing that sample marketplaces weren’t keeping up with the exploding interest in genres like Afrobeats, Amapiano and Afrohouse. She now has a six-figure business selling samples on her own site, as well as on other sample marketplaces such as Splice.
As a businessperson, Mavy sees opportunities in building AI tools — Afroplug has released an AI music agreements generator and has plans for other AI products. But AI is also an existential threat: Mavy’s entire business rests on her deep knowledge of, and contacts with, music makers from across Africa and the Caribbean.
For now, that knowledge is her business’s “moat” against AI tools. “I know I have a treasure with my niche,” she says. The AI full-song generators she’s heard attempting Amapiano and other music from the African diaspora “don’t sound authentic,” she says. “It’s harder for AI to replicate” the dozens and dozens of genres and subgenres — she estimates around 100.
But when they do get better at it, it could be all over for Afroplug. It’s for that reason that Mavy doesn’t listen to tools like Suno; she says she needs to protect her mental health, and doesn’t want to spend too much time obsessing over them. “I cannot lie, I’m very shocked [with them]…. It’s easy to fall into a toxic mindset.”

Trust issues
“We have a trust and a tooling problem,” says BT, the electronic artist and film composer, about full-song AI tools. He’s no AI sceptic; he’s been experimenting with music and technology for decades and runs an AI music start-up called Sound Labs. But he sees a lot of “righteous anger” from fellow artists and producers about the copyright and attribution questions with full-song AI tools. In this case, he says, the classic Silicon Valley saying “‘move fast and break things’ has become ‘move fast and break musicians’.”
Generative tools now do something that many producers want to do themselves – build an entire song — with ingredients they might not have chosen. Writing a prompt to generate a song is “a really unnatural interface for a musician,” says BT. “We’re not conditioned on language.”
And while some full-song tools now offer advanced features such as stem separation, they still don’t provide the granular control that experienced producers demand. A single sound can “show me what’s missing” when BT’s making a song, he says. “Some little Rhodes piano loop — I hear it and I go, ‘Oh, shit, I know what that’s supposed to be carved into.’” If those ingredients are off, Mavy can tell right away: the song doesn’t “sound like my husband,” she says; “the guy making reggae music on my island.”

With AI song generators, “the output is more important than the input,” says Ale Koretzky, Splice’s head of AI. Despite their limitations, AI song generators pose a deep challenge to companies like Splice, which are built on a catalogue of fixed sounds.
“Static catalogues are now under existential threat,” says Koretzky. “When you can create what you want on the fly, and it’s unique to you, that is a very compelling value proposition.”
Variations, which Splice launched in April after two years of internal development, is the company’s wager on a way through. Rather than building a full-song generator, Koretzky and team built a tool that can take any sample in Splice’s catalogue and generate new versions of it. Most important, it preserves what Koretzky calls the sound’s “DNA” — its timbre and close-to-undefinable character — while giving producers the ability to change the melody and structure. A producer who loves a flute sample’s tone but needs a different melody can now generate alternatives that match its tone, with each licensed variation triggering a payout to the original creator.
Trying to compete with full-song generators directly, Koretzky says, is “a lost battle starting from the data.” Splice isn’t scraping terabytes of data from YouTube and other sources for its model. The bet is that producers will value something different. BT says he does: “If I can play one guitar part and repurpose that guitar part as a marimba part — these are tools that would be so useful for me personally.”
“What we are proposing,” says Koretzky, “is that input is just as important as output.”

Liquid sampling
A musician and electrical engineer from Argentina, Koretzky was part of the University of Southern California team that pioneered techniques for stem separation, the technology that lets you pull individual instruments out of a finished mix; he’s been at Splice for eight years.
A couple of years ago, Koretzky proposed what he describes as a moonshot: make the Splice catalogue “liquid.” Instead of a fixed library you browse, he wanted to build a system where any sound could be reshaped, extended and made your own.
The core technical challenge was one that researchers have wrestled with for decades: how do you capture the essence of a sound? The qualities that make one flute recording distinguishable from another — the room it was played in, the timbre, the player’s breath and articulation, the microphone and signal chain — have long resisted measurement. “This is literally an impossible problem in the field of psychoacoustics and signal processing,” Koretzky says. “I’m crazy enough to think that we can solve these problems, and we sort of did.”
The breakthrough came when his team trained a model to learn a sound’s identity (its DNA) by telling the model to analyse everything except measurable aspects, such as the melody and structure. The result is a system that can extract the indefinable parts of a recording — the attack of a musician’s playing, room acoustics — and then inject that fingerprint into a generative model to create new variations.
Koretzky describes it as “the world’s first universal synthesizer, able to reproduce any sound in the universe.” A cello variation can capture the air of the bow, along with reverberation and compression, which are elements that would take a sound designer hours to approximate manually, if they could approximate them at all. The same model can blend two samples into a new hybrid; Koretzky calls this “semantic sound design.” A producer can specify a ratio (30% of one sound, 70% of another), and the model synthesizes something new that carries the DNA of both. Crucially, both original creators are credited and paid.
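Koretzky doesn’t spell out the maths, but the ratio-based blend he describes can be pictured as a weighted interpolation between two identity embeddings. A minimal sketch, assuming each sample’s “DNA” is represented as a unit-length vector — the function name and representation here are hypothetical illustrations, not Splice’s actual API:

```python
import numpy as np

def blend_dna(dna_a, dna_b, ratio=0.3):
    """Hypothetical 'semantic sound design' blend: mix two identity
    embeddings at a given ratio (e.g. 30% of sound A, 70% of sound B)."""
    dna_a = np.asarray(dna_a, dtype=float)
    dna_b = np.asarray(dna_b, dtype=float)
    blended = ratio * dna_a + (1.0 - ratio) * dna_b
    # Re-normalise so the blend lies on the same unit hypersphere
    # that embedding models commonly use.
    return blended / np.linalg.norm(blended)

# A generative decoder (not shown) would then synthesise audio
# conditioned on this blended fingerprint.
hybrid = blend_dna([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], ratio=0.3)
```

In a real system the blend would likely happen inside the model’s latent space rather than on raw vectors, but the weighted-interpolation intuition is the same.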

The philosophical risk
Many Splice staffers are musicians, so internal debates around AI and music during the development of Variations reflected the outside world. Andy Thompson, Splice’s product lead for the project, describes spending hours one weekend playing with a full song generator and an AI vocal persona he’d built, generating roughly 40 versions of a track with “Maggie Rogers energy.” When he finally tried to pick up his bass to play, for the first time in two months, he couldn’t. “I just was stuck in that moment,” he says.
“After hearing the final output so many times, I didn’t know how to put myself back into it.” That experience, he says, crystallised what Splice was up against. Thompson calls it a “philosophical risk” alongside the obvious competitive one: that “the value of making music would be eroded, and the struggle that people go through to become really good at their craft would be diminished.”
The same anxiety surfaced inside Splice’s own testing process. Thompson runs an internal program called Soundcheck, in which over 80 producers, vocalists and artists give unfiltered feedback during the development process. On consecutive days during one round of testing, two creators — one from Splice’s content team, the other an external producer — heard outputs from an early version of the model and gave Thompson nearly identical reactions: “This is really cool,” followed by, in his paraphrase, “What’s my role? What happens to me in the future?”
Mavy, who played with Variations before the public launch, had a different reaction. “I love that you can play with it but it doesn’t distort the original samples or loops,” she says. She also sees a practical advantage for creators who’ve heard the complaint that too many producers use the same Splice packs — something that Julian Bunetta, producer for Sabrina Carpenter’s Espresso, heard when other Splice customers noticed he was using samples from Power Tools: Sample Pack III, made by veteran producer Vaughn Oliver. “Now you don’t have that excuse anymore,” says Mavy. “You can take the loops and make another variation.”
Mavy is optimistic about where AI music tools eventually land. “I think the real artists and real creators will have more value,” she says. “People want authenticity.”
BT, for his part, ends with the same diagnosis he started with. The industry, he says, is in “an awkward moment” that it will eventually move through — but only by working across two conversations at once. “You really can’t have one without the other,” he says. “The trust problem is still there. The tooling problem is still there. This is one of the most exciting times to be alive as a musician. We just need to fix the ethics piece of it.”
The post Generative AI’s threat to music sample libraries is existential — Splice thinks it has a solution appeared first on MusicTech.
