However, Dr. Shankler ultimately used RAVE in his performance of “The Duke of York” because RAVE’s ability to enhance the sound of individual performers “resonated thematically with the work.” For this to work, the two had to train the model on a personalized corpus of recordings. “We sang and talked for three hours straight,” Wang recalls. “I sang every song I could think of.”
Antoine Caillon developed RAVE in 2021 as a graduate student at IRCAM, a Paris research institute founded by the composer Pierre Boulez. “This model compresses the received audio signal and tries to extract the salient features of the sound in order to resynthesize it properly,” he said. “RAVE’s goal is to reconstruct that input.”
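RAVE itself is a deep neural network, but the compress-then-resynthesize idea Caillon describes can be loosely illustrated in a few lines of Python. The sketch below is not RAVE’s actual architecture; it is a toy stand-in that squeezes a waveform into a handful of “latent” values and reconstructs an approximation of the input from them by interpolation.

```python
import math

def encode(signal, latent_dim):
    """Compress a signal into latent_dim block averages
    (a crude stand-in for a learned encoder)."""
    step = len(signal) // latent_dim
    return [sum(signal[i * step:(i + 1) * step]) / step
            for i in range(latent_dim)]

def decode(latent, out_len):
    """Resynthesize a full-length signal from the latent values
    by linear interpolation (a crude stand-in for a learned decoder)."""
    out = []
    for i in range(out_len):
        pos = i * (len(latent) - 1) / (out_len - 1)
        lo = int(pos)
        hi = min(lo + 1, len(latent) - 1)
        frac = pos - lo
        out.append(latent[lo] * (1 - frac) + latent[hi] * frac)
    return out

# A toy "recording": a sine wave of 512 samples, compressed to 16 values.
signal = [math.sin(2 * math.pi * 2 * t / 512) for t in range(512)]
latent = encode(signal, 16)      # the compressed representation
recon = decode(latent, 512)      # the resynthesized approximation
```

A real model like RAVE learns its encoder and decoder from a training corpus (hence Wang’s three hours of singing), so the latent features capture perceptually salient qualities such as timbre rather than simple block averages.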
Wang felt comfortable performing with the software because she could hear herself in RAVE’s synthesized voice, regardless of what the software was doing at that moment. “The gestures were amazing, the textures were amazing,” she said. “But the tone was incredibly familiar.” And because RAVE is compatible with popular electronic music software, “we were able to create other versions of this halo around her,” Dr. Shankler said.
Musicians have been using a variety of AI-related technologies since the mid-20th century, said Tina Talon, a composer and professor of AI and arts at the University of Florida.
“We have rule-based systems, which were the artificial intelligence of the ’60s, ’70s and ’80s,” she said. “In the ’90s, you had to bring in a lot of data to make inferences about how a system would work.”