Text-to-Audio Models Make Music from Scratch #ASA183
Much like machine learning can create images from text, it can also generate sounds.
Media Contact:
Ashley Piccone
AIP Media
301-209-3090
media@aip.org
NASHVILLE, Tenn., Dec. 7, 2022 – Type a few words into a text-to-image model, and you’ll end up with a weirdly accurate, completely unique picture. While this tool is fun to play with, it also opens avenues for creative exploration and provides workflow-enhancing tools for visual artists and animators. For musicians, sound designers, and other audio professionals, a text-to-audio model would do the same.
As part of the 183rd Meeting of the Acoustical Society of America, Zach Evans, of Stability AI, will present progress toward this end in his talk, “Musical audio samples generated from joint text embeddings.” The presentation will take place on Dec. 7 at 10:45 a.m. Eastern U.S. in the Rail Yard room, as part of the meeting running Dec. 5-9 at the Grand Hyatt Nashville Hotel.
“Text-to-image models use deep neural networks to generate original, novel images based on learned semantic correlations with text captions,” said Evans. “When trained on a large and varied dataset of captioned images, they can be used to create almost any image that can be described, as well as modify images supplied by the user.”
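The "learned semantic correlations" Evans describes are typically realized as a joint embedding space: one encoder maps captions to vectors and another maps media to vectors, and matching caption–media pairs are trained to land close together. The toy sketch below illustrates only the retrieval mechanics of that idea; the captions, dimensions, and "encoders" (a shared point plus noise for matching pairs) are stand-ins, not anything from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # embedding dimension (arbitrary for this toy)

# Model a trained joint space: a matching caption/clip pair sits at
# nearly the same point (same vector plus small noise), while
# non-matching pairs are independent random points.
captions = ["dog barking", "piano chord", "rain on a window"]
caption_vecs = {c: rng.normal(size=DIM) for c in captions}
audio_vecs = {c: v + 0.1 * rng.normal(size=DIM) for c, v in caption_vecs.items()}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: the usual closeness measure in embedding spaces."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(caption: str) -> str:
    """Retrieve the audio clip whose embedding is closest to the caption's."""
    return max(audio_vecs, key=lambda name: cosine(caption_vecs[caption], audio_vecs[name]))
```

A generative model goes a step further than this retrieval sketch: instead of picking the nearest existing clip, it synthesizes new audio conditioned on the text embedding.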
A text-to-audio model would be able to do the same, but with music as the end result. Among other applications, it could be used to create sound effects for video games or samples for music production.
But training these deep learning models is more difficult than training their image counterparts.
“One of the main difficulties with training a text-to-audio model is finding a large enough dataset of text-aligned audio to train on,” said Evans. “Outside of speech data, research datasets available for text-aligned audio tend to be much smaller than those available for text-aligned images.”
Evans and his team, including Belmont University’s Dr. Scott Hawley, have shown early success in generating coherent and relevant music and sound from text. They employed data compression methods to generate the audio with reduced training time and improved output quality.
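The benefit of generating in a compressed representation is that the generative model only has to produce a much shorter sequence, which a decoder then expands back into audio. The release does not detail the team's codec, so the sketch below is purely illustrative: it uses block averaging and repetition as stand-ins for a learned encoder/decoder, and the sample rate and compression factor are assumptions.

```python
import numpy as np

SR = 16_000   # samples per second (assumed for illustration)
FACTOR = 64   # compression ratio (assumed)

def encode(audio: np.ndarray) -> np.ndarray:
    """Toy 'encoder': average non-overlapping blocks of FACTOR samples."""
    usable = len(audio) // FACTOR * FACTOR
    return audio[:usable].reshape(-1, FACTOR).mean(axis=1)

def decode(latent: np.ndarray) -> np.ndarray:
    """Toy 'decoder': expand each latent value back to FACTOR samples."""
    return np.repeat(latent, FACTOR)

# One second of audio shrinks from 16,000 samples to a 250-step latent
# sequence, so a generator working in latent space does 64x less work
# per second of output before the decoder restores full resolution.
one_second = np.random.default_rng(1).normal(size=SR)
latent = encode(one_second)
reconstructed = decode(latent)
```

In real systems the encoder and decoder are themselves trained neural networks, so the latent preserves far more detail than this block-averaging toy.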
The researchers plan to expand to larger datasets and release their model as an open-source option for other researchers, developers, and audio professionals to use and improve.
———————– MORE MEETING INFORMATION ———————–
Main meeting website: https://acousticalsociety.org/asa-meetings/
Technical program: https://eppro02.ativ.me/web/planner.php?id=ASAFALL22&proof=true
ASA PRESS ROOM
In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.
LAY LANGUAGE PAPERS
ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are 300- to 500-word summaries of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at https://acoustics.org/lay-language-papers/.
PRESS REGISTRATION
ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the meeting or virtual press conferences, contact AIP Media Services at media@aip.org. For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.
ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.