# ChimeraNet

An implementation of the music separation model by Luo et al.

## Getting started

### Sample separation task with a pretrained model
1. Prepare `.wav` files to separate.
2. Install the library: `pip install git+https://github.com/leichtrhino/ChimeraNet`
3. Download the pretrained model.
4. Download the sample script.
5. Run the script:

```shell
python chimeranet-separate.py -i ${input_dir}/*.wav \
    -m model.hdf5 \
    --replace-top-directory ${output_dir}
```
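The `--replace-top-directory` flag appears to re-root each input path under `${output_dir}`. A minimal sketch of that path rewriting, as a hypothetical helper (this is an assumption about the flag's behavior, not ChimeraNet's actual code):

```python
import os

def replace_top_directory(path, output_dir):
    # Drop the input path's top-level directory and re-root the
    # remainder under output_dir (assumed reading of the CLI flag).
    parts = os.path.normpath(path).split(os.sep)
    return os.path.join(output_dir, *parts[1:])

print(replace_top_directory("inputs/song.wav", "separated"))
# separated/song.wav (on POSIX)
```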

## Output in a nutshell

* The output filename format is `${input_file}_{embd,mask}_ch[12].wav`.
* `embd` and `mask` indicate that the output was inferred from the deep clustering embedding and the mask respectively.
* `ch1` and `ch2` are the voice and music channels respectively.
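The naming scheme above expands to four files per input. A small sketch that enumerates them for a given input path (illustrative only; the separation script itself produces the files):

```python
import os

def expected_outputs(input_path):
    # Build the four output names described above:
    # ${input_file}_{embd,mask}_ch[12].wav
    stem, _ = os.path.splitext(input_path)
    return [f"{stem}_{method}_ch{ch}.wav"
            for method in ("embd", "mask")
            for ch in (1, 2)]

print(expected_outputs("song.wav"))
# ['song_embd_ch1.wav', 'song_embd_ch2.wav', 'song_mask_ch1.wav', 'song_mask_ch2.wav']
```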
## Train and separation examples

See the Example section in the ChimeraNet documentation.

## Install

### Requirements

* keras
* one of Keras' backends (i.e. TensorFlow, CNTK, Theano)
* sklearn
* librosa
* soundfile

### Instructions
1. Run `pip install git+https://github.com/leichtrhino/ChimeraNet` or use any other Python package installer. (Currently, ChimeraNet is not on PyPI.)
2. Install a Keras backend if the environment does not have one. Install TensorFlow if unsure.
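The two steps above can be run together as:

```shell
# Install ChimeraNet straight from GitHub (it is not on PyPI).
pip install git+https://github.com/leichtrhino/ChimeraNet
# Install a Keras backend; TensorFlow is a safe default if unsure.
pip install tensorflow
```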

## See also