Most Common Keys For House and Deep Tech Music
Posted on | March 6, 2018 | No Comments
In an attempt to get my tracks to sound more like the tunes I like to DJ, I ran across an article about how club subwoofers can reproduce frequencies down to about 40 Hz, which is just below the note E1.
Note | Hz
C1 | 32.7
D1 | 36.7
E1 | 41.2
F1 | 43.7
G1 | 49.0
A1 | 55.0
The article (wish I could find it again) mentioned that's why a bunch of tracks are in F and G — those root bass notes fall comfortably within the range a club system can actually reproduce. A low F1 is probably one of those tones that sounds KILLER on a good rig.
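For the curious, those figures follow directly from equal temperament. Here's a quick Ruby sanity check, assuming the MIDI convention where A4 = note 69 = 440 Hz:

```ruby
# Equal-temperament frequency of a MIDI note: f = 440 * 2^((n - 69) / 12)
def note_freq(midi_note)
  440.0 * 2.0 ** ((midi_note - 69) / 12.0)
end

# MIDI note numbers for C1..A1 (C1 = 24 in this convention)
{ "C1" => 24, "D1" => 26, "E1" => 28, "F1" => 29, "G1" => 31, "A1" => 33 }.each do |name, n|
  puts format("%s | %.1f Hz", name, note_freq(n))
end
# => C1 | 32.7 Hz, E1 | 41.2 Hz, etc. — matching the table above
```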
Looking at some of my favorite house, techno, and deep tech tracks from 2017, I noticed that most are in F, Bb, Eb, G, or C. I’m not sure my analysis software (djay Pro 2) is discerning major vs minor here.
But check out the circle of fifths:
(By Just plain Bill — Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=4463183)
… those keys are all clustered in the upper-left corner, which means they're easy to transition between during a set: jumps to neighboring keys on the circle of fifths sound good.
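To make that concrete, here's a tiny Ruby sketch (major keys only, ignoring relative minors) that lists each key's circle-of-fifths neighbors:

```ruby
# The circle of fifths for major keys; adjacent keys share 6 of 7 notes,
# which is why transitions between neighbors sound smooth.
CIRCLE = %w[C G D A E B F# Db Ab Eb Bb F]

def neighbors(key)
  i = CIRCLE.index(key)
  [CIRCLE[(i - 1) % 12], CIRCLE[(i + 1) % 12]]
end

%w[F Bb Eb G C].each { |k| puts "#{k}: #{neighbors(k).join(', ')}" }
# F: Bb, C / Bb: Eb, F / Eb: Ab, Bb / G: C, D / C: F, G
# All five popular keys chain into one another around the circle.
```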
Now interestingly, this guy’s analysis of the Beatport Top 100 Tracks showed a different set of popular keys — but I don’t play tracks as “Pop”-y as the Top 100. Who knows.
Wogglebug + DPO as White Noise Source
Posted on | April 6, 2014 | No Comments
So I recently got a MakeNoise SharedSystem modular rig, and one thing it was missing was the ability to make… noise. White noise.
However, by pushing the Wogglebug and the DPO's internal modulation routing to the extreme, you can get some decent-sounding white noise. Basically, you turn most of the knobs on both modules all the way clockwise and listen to the DPO's final output.
Here’s how it sounds, going through an MMG for filter sweeps and the Echophon for some delay:
FX Halos in Ableton Live
Posted on | February 29, 2012 | No Comments
This trick is very simple to do, but it's not obvious that it's even possible. The idea is to sidechain-compress the processing on a Return bus using its own input signal, in order to clear out some "empty" space around the dry signal. It's like making a "breathing fx bus".
For example, if you have a staccato vocal sample being sent into a reverb or a delay, using this trick the effect tails will “swell in” over time after the dry signal stops. It’s similar to kick sidechaining.
Here’s an example without a halo:
Now with:
That’s not the most inspiring demo, but this can sound very organic, and helps clear space in a full mix. To set up in Live:
- Send sound from an Audio track to a Return track.
- On the Return track, add a plugin that creates a temporal tail: e.g. a reverb or delay.
- Add a compressor after the fx.
- Enable Sidechain, and set the Audio From dropdown to the same Return track you’re on.
- Set the Audio From position to “Pre FX” in order to sidechain from the dry signal.
- Set up your threshold, release, ratio etc. to get your desired “halo” pumping sound around the input signal.
This can be a really nice way to get some breathy, fluttering, organic motion across a set of Return tracks that might even be cross-sending signal to each other in a feedback network…
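If it helps to picture the signal flow, here's a toy Ruby sketch of the ducking math — purely illustrative, with made-up parameter values rather than Live's compressor settings:

```ruby
# Toy "halo": follow the dry signal's envelope and duck the wet (fx)
# signal while the dry signal is loud, so the tail swells back in after.
def halo(dry, wet, threshold: 0.1, ratio: 4.0, attack: 0.5, release: 0.001)
  env = 0.0
  dry.zip(wet).map do |d, w|
    level = d.abs
    coeff = level > env ? attack : release       # fast attack, slow release
    env += coeff * (level - env)                 # one-pole envelope follower
    gain = env > threshold ? (threshold / env) ** (1.0 - 1.0 / ratio) : 1.0
    w * gain                                     # wet ducks under the dry
  end
end
```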
MiniCommand, Machinedrum, and OS X
Posted on | May 11, 2011 | No Comments
So I’ve had a Ruin & Wesen MiniCommand for a little under a year, but haven’t been using it as much as I would like because it didn’t integrate well with my setup — until last night.
The standard way to use the MiniCommand is to connect it in a closed MIDI loop with the device in question — which makes it hard to use in a computer-based MIDI setup with a sequencer. There are ways around this, e.g. daisy-chaining the MiniCommand between the computer's MIDI interface and the device you want to control, but I've found that this introduces some small timing delays (enough to drive me crazy).
Alien Autopsy Via Sample-Rate Reduction
Posted on | January 6, 2011 | 2 Comments
Here’s a cool sound-design trick. If you want to get a vocal-sounding ‘formant filter’ effect out of a synth that only has a normal lowpass filter, you can take advantage of a quirk of sample-rate reduction effects to generate multiple “mirrored” filter sweeps through the wonder of aliasing.
Here’s a sound clip from my machinedrum with a simple sawtooth note and a resonant lowpass filter being modulated down over a quick sweep. It’s played four times, each with increasing amounts of sample-rate reduction applied:
This sample looks like this in a sonogram (I used the Sonogram View plugin that Apple includes with XCode). Horizontal axis is time, vertical is frequency:
Notice that as the aliasing (reflected frequencies) increase with the sample-rate reduction effect, you begin to see multiple copies of the filter sweep. This creates the lovely, complicated “alien voice” sound. Here’s a short MachineDrum loop I was playing around with when I realized what was going on here:
And for the Elektron-heads reading this, here’s the MD sysex for that pattern+kit:
alien-autopsy-md.syx
PS: the Wikipedia article on aliasing has a good rundown on the details of this phenomenon.
Ableton Live, The Machinedrum and The Monomachine (Part 2): Minimizing Latency
Posted on | June 6, 2010 | 4 Comments
In part one of this series, I posted tips for getting the Monomachine and Machinedrum synced and recording properly with your Live sessions. The other half of the equation is knowing which operations to avoid, because they can introduce latency and timing errors during your sessions.
Ableton Prints Recordings Where It Thinks You Heard Them
I guess this design must be intuitive for many users, but it confused me for a while. If your setup has anything but a minuscule audio buffer, and you're monitoring through a virtual instrument with a few latency-inducing plugins in the monitoring chain, you'll hear a fair amount of monitoring latency when you play a note. The same goes for recording audio.
When recording a MIDI clip, I expected Live to put the MIDI events where I actually played them — it doesn't. It shifts the MIDI notes later in time to match when you heard the output sound, trying to account for your audio buffer delay, the latency of your virtual instrument, and any audio processing delay from plugins in the downstream signal path. There's one exception: it doesn't worry about delays you might hear due to any "Sends" your track is using.
So your MIDI notes (and CCs) are recorded with a "baked-in" delay the size of your monitoring chain latency. I'm going to call this baked latency.
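To put rough numbers on it, here's a Ruby back-of-envelope (the buffer and plugin-delay figures are hypothetical):

```ruby
# How far notes get shifted: buffer delay plus plugin delay compensation.
sample_rate    = 44_100.0
buffer_samples = 512       # hypothetical audio buffer size
plugin_samples = 1_024     # hypothetical reported plugin latency

baked_ms = (buffer_samples + plugin_samples) / sample_rate * 1000.0
puts format("MIDI lands ~%.1f ms late in the clip", baked_ms)
# => MIDI lands ~34.8 ms late in the clip
```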
Ableton Live, The Machinedrum and The Monomachine: Midi Sync Notes
Posted on | June 6, 2010 | 9 Comments
Recently I’ve been (going crazy) getting the timing tight between Ableton and two outboard sequencers — the Elektron Monomachine and Machinedrum. On their own, these silver boxes have amazingly tight timing. They can sync to each other to create a great live setup.
Add a computer DAW into the loop, and you introduce jitter, latency, and general zaniness to the equation. And it’s not trivial — this is obviously-missing-the-downbeat, shoes-in-a-dryer kind of bad. I tested the jitter / latency by ear, as well as by recording audio clips and measuring the millisecond offsets from the expected hit times.
I don’t think this is fundamentally a slow-computer or poor-setup issue either — I’m running a good interface with a tiny 32-sample audio buffer. The rest of the setup is an i7 Intel Mac running OS X 10.6.3, Ableton Live 8.1.3, an Emagic Unitor 8 MIDI interface, and an Elektron TM‑1 TurboMidi interface for the Machinedrum.
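The offset measurement itself is just arithmetic against the metronome grid; here's a Ruby sketch of the idea (the onset times are made up):

```ruby
# Compare detected hit times from a recorded clip against a 16th-note grid.
bpm    = 120.0
step   = 60.0 / bpm / 4                   # 16th-note spacing in seconds
onsets = [0.003, 0.128, 0.251, 0.371]     # hypothetical detected onsets

onsets.each_with_index do |t, i|
  offset_ms = (t - i * step) * 1000.0
  puts format("hit %d: %+.1f ms", i, offset_ms)
end
# A consistent offset means latency; a varying offset means jitter.
```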
Below is a journal of what’s working, what isn’t, and my theories on why…
How To: Algorithmic Music with Ruby, Reaktor, and OSC
Posted on | November 20, 2009 | 2 Comments
The basic idea is to use a simple OSC library available for Ruby to code interesting music, with Native Instruments’ Reaktor serving as the sound engine. Tadayoshi Funaba has an excellent site with all sorts of interesting Ruby modules. I grabbed the osc.rb module and had fun with it.
I’m giving a brief presentation at the Bay Area Computer Music Technology Group (BArCMuT) meet-up tomorrow, unofficially as part of RubyConf 2009 here in San Francisco.
Here’s a link with downloads and code from my talk. It should be all you need to get started if you have a system capable of running Ruby and a copy of Reaktor 5+ (this should work with the demo version too).
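If you'd rather see what's on the wire than use a library, an OSC message is easy to hand-roll over UDP. A minimal Ruby sketch — the "/note" address and port 10000 are placeholders for whatever your Reaktor ensemble's OSC settings expect:

```ruby
require 'socket'

# OSC strings are null-terminated and padded to a 4-byte boundary.
def osc_pad(str)
  str + "\0" * (4 - str.bytesize % 4)
end

# A message is: padded address, padded type-tag string, then big-endian args.
def osc_message(address, int_arg)
  osc_pad(address) + osc_pad(",i") + [int_arg].pack("N")  # ",i" = one int32
end

sock = UDPSocket.new
sock.send(osc_message("/note", 60), 0, "127.0.0.1", 10_000)  # middle C
```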
Ruby mono sequence example:
reaktorOscMonoSequences-192 MP3

Ruby polyphonic drums example:
reaktorOscPolyphonicDrums-192 MP3

Leave a comment below if you have any questions, or cool discoveries!
Machinedrum Recursive Sampling Test 02
Posted on | November 16, 2009 | 4 Comments
So this is another example of using the MD’s internal sampler to create a recursive “feedback loop” of sampling and resampling and resampling… This tends to psychedelically twist the underlying beat. The way this stuff sounds has really surpassed my wildest dreams.
Machinedrum Recursive Sampling Test 01
Posted on | November 4, 2009 | 1 Comment
This was a first test at using the Machinedrum’s internal sampler recursively. I was trying to emulate my fractal wavetable sounds in hardware, as closely as the MD could manage.