Add multi-channel interface documentation to README and improve example
kevinstadler committed Oct 28, 2023
1 parent 21ca5b8 · commit 90b4f0b
Showing 2 changed files with 38 additions and 11 deletions.
README.md: 16 changes (15 additions & 1 deletion)
@@ -17,9 +17,23 @@ For detailed changelogs, have a look at the [Github releases page](https://github.com/processing/processing-sound/releases)
Audio files opened with the [`SoundFile`](https://processing.org/reference/libraries/sound/SoundFile.html) class are loaded into raw memory in full. That means your sketch will require ~20MB of RAM per minute of stereo audio.
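For reference: assuming the decoded samples are held as 32-bit floats at a 44.1kHz sample rate, one minute of stereo audio comes to 44100 samples/second × 2 channels × 4 bytes × 60 seconds ≈ 21MB.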

- if you get `OutOfMemoryError: Java heap space` errors on resource-lean platforms (e.g. Raspberry Pi), make sure to increase the heap size in Processing's `File > Preferences > Running` menu. As a rough rule, your heap size should be at least twice the size of the largest audio sample used in your sketch.
- depending on the format and length of the audio files, loading them for the first time with `sf = new SoundFile(this, "yourfilename.ext")` might block the sketch for several seconds, while subsequent calls to `sf.play()` execute instantly. It is generally advisable to create all SoundFile objects in your `setup()` (see the example below this list).
- decoding of compressed formats (mp3, ogg, etc.) can be quite slow on the Raspberry Pi (20 seconds for a 3-minute mp3, 14 seconds for ogg on a 32-bit Raspberry Pi 3B). Since all audio samples loaded by the library end up being stored as raw uncompressed data in RAM anyway, we generally recommend using the WAV format for loading audio files
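
A minimal loading sketch along these lines (the file name `yourfilename.wav` is only a placeholder for a file in your sketch's data folder):

```
import processing.sound.*;

SoundFile sample;

void setup() {
  // do the (potentially slow) decoding once, up front
  sample = new SoundFile(this, "yourfilename.wav");
}

void draw() {
}

void mousePressed() {
  // playback starts instantly because the raw sample data is already in RAM
  sample.play();
}
```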

### Multi-channel audio interface support

The newest release of the Sound library adds support for [multi-channel audio output](https://github.com/processing/processing-sound/blob/main/examples/IO/MultiChannelOutput/MultiChannelOutput.pde). Most audio interfaces should work out of the box; for the sake of completeness, we have assembled a list of devices that have been tested and confirmed to work (for the devices marked with `*`, see the note below the list). If you have trouble getting an audio interface recognized correctly, please report it in [this Github issue](https://github.com/processing/processing-sound/issues/87).

- Focusrite Scarlett 2i4
- Motu Mk5 *
- Presonus Studio 26c
- Roland Rubix24
- RME Fireface 802 *
  - output is through the 30-channel device, not the 8-channel device
  - on Windows, select a Buffer Size of 512 samples or less in the Fireface USB Settings

Devices marked with a `*` work out of the box on macOS; on Windows they are recognized, but show up as several stereo devices rather than one multi-channel device. To use them as a single multi-channel device, you will need to install ASIO drivers and add an explicit call to `MultiChannel.usePortAudio()` at the beginning of your sketch, as in the sketch below.
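
A minimal sketch of that setup, based on the calls used in the bundled MultiChannelOutput example (the channel number is arbitrary and only for illustration):

```
import processing.sound.*;

SinOsc osc;

void setup() {
  // route output through PortAudio so that ASIO devices show up as a
  // single multi-channel device (only needed on Windows)
  MultiChannel.usePortAudio();

  println("Available output channels: " + MultiChannel.availableChannels());

  // play a test tone on the third output channel (channels are counted from 0)
  MultiChannel.activeChannel(2);
  osc = new SinOsc(this);
  osc.play();
}

void draw() {
}
```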

### Contributing

Pull requests for bug fixes as well as new features and example sketches are always welcome! Check [CONTRIBUTING.md](CONTRIBUTING.md) for help on how to get started.
examples/IO/MultiChannelOutput/MultiChannelOutput.pde: 33 changes (23 additions & 10 deletions)
@@ -1,6 +1,8 @@
import processing.sound.*;

SinOsc sines[];
int initialised;
float frequency;

void setup() {
  size(640, 360);
@@ -29,28 +31,39 @@ void setup() {
println("Playing back different sine waves on the " + MultiChannel.availableChannels() + " different channels");

sines = new SinOsc[MultiChannel.availableChannels()];
initialised = 0;
frequency = 100;

textSize(128);
fill(0);
textAlign(CENTER);
}

void draw() {
  if (initialised < sines.length) {
    // add a nice theatrical break
    delay(1000);

    background(255);
    text((initialised + 1) + " of " + sines.length, width/2, height/2);

    MultiChannel.activeChannel(initialised);
    // create and start the sine oscillator.
    sines[initialised] = new SinOsc(this);
    sines[initialised].freq(frequency);
    sines[initialised].play();

    // increase frequency on the next channel by one semitone
    frequency = frequency * 1.05946;
    initialised = initialised + 1;
    return;
  }

  // as long as the oscillators are not stopped they will 'stick'
  // to the channel that they were originally added to, and we can
  // change their parameters freely
  frequency = map(mouseX, 0, width, 80.0, 1000.0);

  for (SinOsc sin : sines) {
    sin.freq(frequency);
