New York, June 18: In a first, researchers have developed a robot that can compose and perform its own music using artificial intelligence and deep learning.
The robot, named Shimon, was fed a vast amount of musical data by researchers at the Georgia Institute of Technology: more than 5,000 complete songs and two million motifs, riffs and short passages of music. It was then asked to compose and perform its own music.
Once it had been fed the data, it was able to use deep learning techniques to create two 30-second pieces of original music.
Georgia Tech researchers say the pieces sound like a cross between jazz and classical music. We’d call them delicately soothing. Alongside the deep learning, Shimon uses computer vision through a camera on its robo-head to detect which notes it should be playing.
“The robot analyses a large dataset of music (including pop, classical, jazz and more) in an effort to identify patterns that appear in all songs and genres in the dataset,” Gil Weinberg, the director of the Center for Music Technology at Georgia Tech, tells WIRED. “It then uses what it learned (which can include melodic, harmonic and rhythmic patterns) to generate its own personal music based on a musical seed.”
Before the robot starts to compose, it is given a starting point to work from: the first piece was seeded with eight notes, the second with 16.
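The seed-then-generate idea described above can be illustrated with a toy sketch. This is not Shimon's actual deep learning model; it is a minimal, hypothetical first-order Markov chain that learns which note tends to follow which in a tiny made-up dataset, then extends an eight-note seed, purely to show how a learned model can continue a musical seed:

```python
import random

def learn_transitions(melodies):
    """Count which notes follow each note across all training melodies."""
    transitions = {}
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            transitions.setdefault(a, []).append(b)
    return transitions

def continue_melody(seed, transitions, length, rng):
    """Extend a seed by repeatedly sampling a learned follow-up note."""
    melody = list(seed)
    for _ in range(length):
        candidates = transitions.get(melody[-1])
        if not candidates:
            # No learned continuation: fall back to the seed's first note.
            candidates = [melody[0]]
        melody.append(rng.choice(candidates))
    return melody

# Tiny illustrative "dataset" (the note sequences are invented for this sketch).
dataset = [
    ["C", "E", "G", "E", "C"],
    ["C", "D", "E", "G", "C"],
]
model = learn_transitions(dataset)
rng = random.Random(0)  # fixed seed so the output is reproducible
seed_notes = ["C", "E", "G", "E", "C", "D", "E", "G"]  # an eight-note seed
piece = continue_melody(seed_notes, model, length=8, rng=rng)
print(piece)
```

Shimon's system replaces the transition table with deep neural networks trained on millions of musical fragments, but the overall shape is the same: learn patterns from a corpus, then generate a continuation conditioned on a short seed.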