Machine Learning in Ableton Live

Machine learning comes to Ableton Live with Factorsynth, a Max for Live device that uses a data analysis technique called matrix factorization to decompose any audio clip into a set of temporal and spectral elements. By rearranging and modifying these elements you can perform powerful transformations on your clips, such as removing notes or motifs, creating new ones, randomizing melodies or timbres, changing rhythmic patterns, remixing loops in real time, applying effects selectively to certain elements of the sound, and creating complex sound textures.


Machine Learning

Factorsynth is a new kind of musical tool. It uses machine learning techniques to decompose any input sound into a set of temporal and spectral elements.

By rearranging and modifying these elements you can perform powerful transformations on your clips, such as removing notes or motifs, creating new ones, randomizing melodies or timbres, changing rhythmic patterns, remixing loops in real time, and creating complex sound textures…
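To make the idea concrete, here is a minimal sketch in Python of this kind of decomposition and manipulation, using non-negative matrix factorization from scikit-learn. The file name, hop size, component count, and the choice of which component to mute are illustrative assumptions, not Factorsynth's actual settings.

```python
import numpy as np
import librosa
import soundfile as sf
from sklearn.decomposition import NMF

# Load a clip (file name is a placeholder) and take its magnitude spectrogram.
y, sr = librosa.load("loop.wav", sr=None, mono=True)
stft = librosa.stft(y, n_fft=2048, hop_length=512)
V = np.abs(stft)

# Factorize V ~= W @ H: columns of W are spectral elements (timbres),
# rows of H are temporal elements (activations over time).
nmf = NMF(n_components=8, init="nndsvda", solver="mu",
          beta_loss="kullback-leibler", max_iter=400)
W = nmf.fit_transform(V)   # shape: (freq_bins, 8)
H = nmf.components_        # shape: (8, n_frames)

# "Remove a note or motif": mute one component and resynthesize the rest
# with a soft mask, reusing the original phase.
full = W @ H
muted = np.outer(W[:, 3], H[3])         # component 3 is an arbitrary choice
mask = (full - muted) / (full + 1e-9)   # energy of everything except component 3
y_out = librosa.istft(mask * stft, hop_length=512)
sf.write("loop_without_component3.wav", y_out, sr)
```

Soft masking with the original phase is only one plausible way to resynthesize components; it keeps the example short and avoids a separate phase-reconstruction step.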

Factorsynth is a one-of-a-kind device that uses machine learning to deconstruct any sound into elements. Two years after the initial release comes Factorsynth 2, the first major update. Following many user suggestions and requests, version 2 is an even more versatile yet easier-to-use device, with a simplified workflow and numerous new features.

It is now possible to pan components individually, allowing you to do things such as upmixing a mono clip to stereo. Another powerful new feature is quantized shifting of components, which changes the rhythmic structure of riffs and drum loops. A second, alternative decomposition algorithm is available, as well as more detailed control of the playback region.
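As a rough illustration of these two features, the sketch below shifts one component's activations by a whole number of beats and assigns each component a constant-power stereo position. It uses stand-in matrices rather than a real decomposition; the sizes, tempo, and panning scheme are assumptions for illustration, not Factorsynth's implementation.

```python
import numpy as np

# Stand-in decomposition results; in practice W and H would come from an
# NMF of the clip, as sketched earlier.
rng = np.random.default_rng(0)
n_comp, n_bins, n_frames = 8, 1025, 512
W = rng.random((n_bins, n_comp))    # spectral elements
H = rng.random((n_comp, n_frames))  # temporal elements

# Quantized shifting: move one component's activations by a whole number
# of beats, changing the rhythm without altering the component's timbre.
sr, hop, bpm = 44100, 512, 120      # assumed sample rate, hop size, tempo
frames_per_beat = int(round((60.0 / bpm) * sr / hop))
H[2] = np.roll(H[2], 2 * frames_per_beat)   # component 2, two beats later

# Per-component panning: constant-power gains give each component its own
# stereo position, e.g. to upmix a mono clip to stereo.
pan = np.linspace(-1.0, 1.0, n_comp)        # -1 = hard left, +1 = hard right
theta = (pan + 1.0) * np.pi / 4.0
V_left  = W @ (np.cos(theta)[:, None] * H)  # left-channel magnitude
V_right = W @ (np.sin(theta)[:, None] * H)  # right-channel magnitude
# V_left and V_right would then be resynthesized (e.g. with the clip's
# original phase) into the left and right audio channels.
```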

Factorsynth Workflow

Unlike traditional audio effect devices, which take the track's audio as input and generate output in real time, Factorsynth is a clip-based device. It works on audio clips from your Live set that have been loaded into Factorsynth by drag and drop. Once an audio clip has been loaded, it is decomposed into elements (exactly how to do this is covered in the next sections). The decomposition process is called factorization because it is based on a technique called non-negative matrix factorization (NMF).
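In matrix terms, NMF approximates the clip's magnitude spectrogram as the product of two smaller non-negative matrices. This is the standard formulation of the technique; the exact variant Factorsynth uses is not documented here.

```latex
\[
  V \approx W H, \qquad
  V \in \mathbb{R}_{\ge 0}^{F \times T},\;
  W \in \mathbb{R}_{\ge 0}^{F \times K},\;
  H \in \mathbb{R}_{\ge 0}^{K \times T},\;
  K \ll \min(F, T)
\]
```

Here V has F frequency bins and T time frames; the K columns of W are the spectral elements, and the K rows of H are the temporal elements.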

Factorsynth Allows You to

- Remove notes or motifs from a clip, or create new ones
- Randomize melodies or timbres
- Change rhythmic patterns
- Remix loops in real time
- Apply effects selectively to certain elements of the sound
- Create complex sound textures

About J.J. Burred

J.J. Burred is an independent researcher, software developer and musician based in Paris. Drawing on a background in machine learning and signal processing, he develops innovative tools for music and sound creation, analysis and search. After earning a PhD from the Technical University of Berlin, he worked as a researcher at IRCAM and Audionamix on topics such as source separation, automatic music analysis, sound classification, content-based search and sound synthesis. His current main activity is the exploration of machine learning techniques for new methods of sound analysis and synthesis aimed at musical creation.