Table of Contents
Welcome
This site and its related GitHub repository contain a collection of tutorials, lessons learned and video performances on the topic of musical algorithmic composition.
Algorithmic composition?
"Algorithmic composition is the technique of using algorithms to create music" -- Wikipedia
"Live coding is a new direction in electronic music and video, and is starting to get somewhere interesting. Live coders expose and rewire the innards of software while it generates improvised music and/or visuals. All code manipulation is projected for your pleasure. Live coding is inclusive and accessible to all." -- toplap.org
Punters watching a livecoding performance at The Shunt, London 2008
Videos ↵
Videos: livecoding performances
A collection of screencasts and other livecoding performances I have made over the years.
Tip
All of the videos live on the 'Musical Code' YouTube channel. If you can't wait till my next livecoding ramblings, feel free to take a look and subscribe there.
Techno jam (2024-07)
Overlapping basslines and industrial noise sounds. Listen with a good pair of headphones!
Rhythm patterns
Interesting rhythm patterns can be generated simply by using non-standard beats.
E.g. here we have a 5/3 downbeat, with the hi-hat (hhat) playing every 1/4. This results in two distinct syncopated beats every bar.
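For reference, this is the drum scheduling from the full source further down: the regular hi-hat pulse plays against the 5/3 cycle.
;; drums (fragment from the full source below)
(at 1 0 (playk *kit-hhat* 90))
(at 2 1/2 (playk *kit-hhat3* 90))
(at 5/3 1/4 (playk *kit-hhat3* 90))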
Synth line
Generated using a varying diatonic interval on a minor scale, which can be modified in real time for nice effects.
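This is the corresponding line from the full source below; re-evaluating the interval list (0 3 4 5) live is an easy way to vary the line.
;; bassline: a varying diatonic interval on a minor scale
(play bass (mkint C2 (oneof 0 3 4 5) 'm) 90 dur)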
Oscillators drive the drumkit
The kick drum and snare are activated only when the synth oscillator (driving the cutoff and resonance) goes beyond a certain threshold.
This allows momentum to build up semi-automatically.
;; cyclical drums
(if> osc1 60
(at 1 1/2 (playk *kit-kick* 90)))
(if< osc1 60
(at 10/3 1/4 (playk *kit-snare* 90)))
Full source
(define drones
(lambda (beat)
(let ((dur (random)))
(atbtw 14/5 0 1 (play drone C1 80 2))
(atbtw 2 0 1/2 (play glitch C4 (cosr 20 20 1/3) 4))
(callback-at dur 'drones ))))
(drones (*metro* 'get-beat 1))
(define main
(lambda (beat)
(let ((dur 1/4)
(osc1 (cosr 60 60 1/128))
(osc2 (cosr 60 60 5/256))
)
;; drums
(at 1 0 (playk *kit-hhat* 90))
(at 2 1/2 (playk *kit-hhat3* 90))
(at 5/3 1/4 (playk *kit-hhat3* 90))
;; bassline
(play bass (mkint C2 (oneof 0 3 4 5) 'm ) 90 dur)
(mcc *cutoff* osc1)
(mcc *reso* osc2)
;; cyclical drums
(if> osc1 60
(at 1 1/2 (playk *kit-kick* 90)))
(if< osc1 60
(at 10/3 1/4 (playk *kit-snare* 90)))
(if< osc1 60
(at 50 0 (play voice C4 80 50)))
(callback-at dur 'main ))))
(main (*metro* 'get-beat 1))
Also on GitHub.
Ambient, suspenseful generative music using recursive 'ping pong' mallet sounds.
Ambient music inspired by the words of Alan Watts. Tip: listen with good headphones for full effect.
Overlapping cosine driven piano lines in B minor.
Overlapping cosine driven piano lines in C minor.
Ambient music inspired by the words of Alan Watts on the self.
Chilled electronica piece, vaguely inspired by the works of Aphex Twin.
Livecoding noise sound waves with Bento and Extempore
Improvised acid loops using Extempore + Bentō.
Bentō is a standalone noise box with tape recorder, inspired by the japanoise scene. Thanks to its unstable and very unique oscillators, Bentō can create an enormous number of sounds and unpredictable noises that are not possible with traditional subtractive synthesizers.
See the PDF user manual
Take 1
Just trying to control it using MIDI-CC from Extempore.
Note: I previously created some MIDI mappings and saved them to a file I can reload each time.
Take 2
After 10 minutes the demo version unfortunately goes silent... but it doesn't take long to restart it.
Drumkit sounds generated with Ableton Live.
Slowly getting the hang of it!
Study for Cello and Double-bass (2022-04)
Game of Thrones-inspired intertwining cello melodic lines.
Creating chords using a cosine function
The main technique used in this piece is to generate chord/harmonic variations using cosine functions.
Every 8 beats the root chord (used by all instruments in order to generate musical patterns) gets updated. Two cosine functions are used to simultaneously:
- Determine the amplitude of the interval (major or minor, starting from C3) that generates the root note of the chord.
- Determine the number of notes in the chord.
The two cosine functions have different frequencies, leading to a variety of combinations of chord shapes that keep cycling around.
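This is the relevant fragment from the full source below: the first cosrfloor drives the interval from C3 (48) that produces the chord root, the second drives the number of notes; their different frequencies (1/30 vs 1/5) make the chord shapes keep cycling around.
;; every 8 beats, rebuild the root chord used by all instruments
(at 8 0
  (set! *melody*
        (:mkchord (:mkint 48 (cosrfloor 7 7 1/30) 'M)   ;; cosine 1: interval -> chord root
                  'M (cosrfloor 7 3 1/5))))             ;; cosine 2: number of notes in the chord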
The sounds I used
Sounds are generated by sending MIDI events to Ableton Live 11. I'm using two main virtual instruments:
- Simpler built-in presets
- Spitfire Audio LABS
Full source code
(define *melody* (mkchord 48 '-))
(define *durs* (list 1/2 1/2 1 1/2))
(define loop
(lambda (beat melody durs)
(let ((dur (car durs))
(p (car melody)))
(at 8 0
(set! *melody*
(:mkchord (:mkint 48 (cosrfloor 7 7 1/30) 'M)
'M (cosrfloor 7 3 1/5))
)
(play cello (octave (car *melody*) 3 4) 60 8 )
(play 2 strings (octave (:mkint (car *melody*) 3 'M) 7 9) 50 8 )
(play 5/2 strings (octave (:mkint (car *melody*) 5) 3 5) 40 6 )
(play 4 strings (octave (:mkint (car *melody*) 12) 7 9) 30 4 )
)
(play pluck p 60 (* dur .9) )
(play 3/2 pluck (add -12 p) 60 (* dur .9) )
(at 4 0
(play pluck (:mkint p (oneof 12 4 5) ) 50 (* dur 2) )
(play (oneof 1 1/2) pluck (:mkint p 24 ) 40 (* dur 2) )
)
(callback (*metro* (+ beat (* 1/2 dur)))
'loop (+ beat dur)
(cdr-or-else melody *melody*)
(cdr-or-else durs *durs*)))))
(loop (*metro* 'get-beat 1) *melody* *durs*)
Also available on GitHub.
Rhythmic Cycles (2021-04)
Minimalism-inspired rhythmic pattern progressions.
Using 'map' to generate musical textures
The gist of this experiment relies on the map function. Using map and lists of notes and offsets, it is possible to schedule repeated calls to the play note function, along these lines (a simplified fragment of the full source below):
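;; one play call gets scheduled for each (time, pitch, volume) triple
(map (lambda (t p v)
       (onbeat t 0 (play p v (* dur .9) 1)))
     times notes vols)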
When the map pattern above gets repeated via a loop, changing the input parameters generates a texture of sounds with a touch of randomness.
For example, some parameters to experiment with:
- times can be shifted up or down by 1/4 beat or so
- notes can be transposed using different chord structures
- volumes can use cyclical variations e.g. a cosine function
What you get is a mesmerising tune, which keeps repeating itself but is never exactly the same.
Rendered with a simple sine-wave synth it may sound similar to 8-bit video game music patterns, but with more interesting sounds/instruments the end result gets much more interesting too.
Source code
The full source code can be found on Github.
(define notes (list c3 g3 bb3))
(define times (:mklist 8 (oneof 1/2 1/4)))
(define inc
(lambda (alist)
(map (lambda (x)
(if (< x 1)
(ifr .7 (add 1/4 x) x) 0)
)
alist)
))
(define lp1
(lambda (beat)
(let ((dur 1/16)
(v1 (cosr (cosr 50 18 1/64) 30 1/64))
(v2 (cosr (cosr 50 18 1/150) 30 1/40))
(fc 8))
(println v1 v2)
(onbeat 4 0
(set! times (inc times)))
(onbeat 32 0
(if (< v1 40)
(set! notes (rotate notes -1))))
(map (lambda (x y z)
(onbeat x 0 (play y z (* dur .9) 1))
(onbeat x 0 (play y z (* dur .9) 3))
)
(slice fc times)
(slice fc (:mkchord (car notes) '-6 8))
(slice fc (list v1 v2 v1 v2 v1 v2 v1 v2))
)
(callback (*metro* (+ beat (* 1/2 dur)))
'lp1 (+ beat dur)))))
(lp1 (*metro* 'get-beat 1))
Piano Scales (2020-11)
Musical patterns emerging from time-triggered overlapping piano scales.
Repeated scales with a touch of randomness
The gist of this musical algorithm is amazingly simple.
Pick a scale. You play it using a variable time interval between its notes, determined by a cosine function (cosr). The variable interval gives the final result a touch of suspense and makes it less computer-like.
After each note, more notes are played programmatically after brief (random) intervals of half a beat or 3/2 of a beat: fifths, octaves, minor sevenths... as you please.
The whole thing repeats itself; at each iteration of the loop the sound volume gets quieter by a fixed amount. Eventually, when the volume reaches 0, the repetition stops.
(define xsc
(lambda (beat vel scale)
(let ((dur (cosratio 4 2 1/128)))
;; piano
(play (car scale) vel dur 1)
(play 5 (+ 12 (car scale)) 1 (* dur 2) 1)
(play (oneof 3/2 2) (+ 24 (car scale)) 1 (* dur 2) 1)
;; bass
(play 5 (car scale) 90 (* dur 2) 2)
(:chance .8 (play 6 (+ (car scale) 2) 90 (* dur 2) 2))
;; repeat
(set! scale (rotate scale -1))
(set! vel (- vel 1))
(if (> vel 0)
(callback (*metro* (+ beat (* 1/2 dur))) 'xsc
(+ beat dur)
vel
scale)))))
;; set scale to play so that scales overlap with each other
(xsc (*metro* 'get-beat 1)
50 ; vol
(:mkscale c1 'pentatonic 2))
;; run again with 'ionian, 'aeolian etc.. for interesting harmonic effects
The full source code on GitHub.
About Extempore
Extempore is a programming language and runtime environment designed by Andrew Sorensen to support livecoding and cyberphysical programming, where a human programmer operates as an active agent in the world.
Algorithmic composition is the technique of using algorithms to create music.
Ambient electronica experiment.
Just trying to get going again with livecoding. Wiring up Ableton, VSCode and Extempore already feels like quite an achievement!
Ziggurat 51 (2013-11)
Experiment using a mixer built with the Impromptu OSX UI bindings.
You do need a mixer!
In the video I'm using the mixer UI I've previously talked about here. I quite like it; as you can see, it's so much easier to focus on composition and performance when not having to worry about volumes in the code!
Also, since Impromptu's video recording functionality is broken on the latest versions of OS X, I've been testing out new software called Screenflick, which is actually pretty good (apart from the logo you can't get rid of unless you buy the software).
Enjoy!
Ambient electronica scales rollercoaster.
Recording of a workshop+gig at the 2011 Libre Software Meeting in Strasbourg (event page | facebook | lastfm) on July 7th 2011.
Ambient electronica scales rollercoaster.
Recording from a gig in Paris at La Generale, "Laboratoire artistique, politique et social". 14, avenue Parmentier Paris XIe, Métro Voltaire (facebook | lastfm)
Happy electronica piece with an almost-lyrical intermezzo.
Turbo Robot (2010-12) live @ Goldsmiths College
Recording from the Thursday Club Xmas party at Goldsmiths College, London.
Happy electronica piece with an almost-lyrical intermezzo.
The Event
Thursday Club Xmas party [ event-site | facebook | flier | map ]
6:30pm sharp til 8:30pm, 2010/12/16 at Goldsmiths Digital Studios
Enjoy live coded music from some of the UK's finest algorithmic musicians, namely:
- slub - Slub celebrate a decade since they first got a whole room of people to dance to their code (at Amsterdam Paradiso), with a hard-edged set of abstract acid with extra breakdowns. [ http://slub.org/ ]
- Wrongheaded - Conducting an algorithmic seance, where a ouija board control interface issues instructions from beyond the grave. Dimly lit but for the flickering of gas-driven projector screens, the protagonists will be appropriately moustachioed as they bring you ethereal sounds from the underworld.
- Thor Magnusson - Shaking, self-modified beats with ixilang, from the co-founder of ixi audio. [ http://www.ixi-audio.net/ ]
- Michele Pasin - Audio/Visual temporal recursion with Impromptu. [ http://www.michelepasin.org/ ]
- Forth + Yee-King - South Bank Common Lisp + SuperCollider synchronised in percussive improv. [ http://www.yeeking.net/ ]
Rather random experiment with musical randomness.
I've been experimenting a bit with picking random notes from a scale using various voices simultaneously, so as to create interesting sound textures... what came out is maybe a bit too random at the beginning, but it gets more interesting towards the end imho...
Untitled 12 (2010-02) live @ Anatomy Theatre
Livecoding @ Anatomy Theatre, King's College, London.
Untitled 12 is an electronic music experiment mixing a standard bassline with randomly generated synth sounds.
Finally I managed to shrink the video recording from our last livecoding event at King's College down to a reasonable size and upload it.
Hopefully the other people will be uploading theirs too some time soon .. Here's my piece titled Untitled 12:
The program
Experiment called Xanadu.
...
It took me a little bit to build up the song base, then I think it gets more interesting...
Kali is a Japanese musical garden in which to relax and unwind.
Ended: Videos
Extempore ↵
Quick start
What is Extempore?
Extempore is a programming language for musical livecoding and algorithmic composition. If you're new to Extempore, YouTube has many videos that show it in action.
Getting started with Extempore
Main resources
- Extempore Homepage and GitHub project: all you need to know about the Extempore language from its author Andrew Sorensen
- Mailing List Archives: questions and answers from the community
- VSCode plugin: an extension for the popular code editor that makes it easier to interact with the Extempore server
Other resources
- Ben's livecoding tricks give you a hint of how to make music with Extempore
- My own Functions navigator: a website that makes it easier to search and inspect Extempore's code base.
Installing Extempore
Warning
Section needs revision
- Install latest release
- Extract and open (some shortcuts available in bash_profile)
- Install VSCode extension by pulling shared settings from gist
- Set up keybindings eg for VsCode
Installation errors
January 4, 2021 Libportmidi error on workmac, after installing https://github.com/digego/extempore/releases/tag/v0.8.7
Evaluating expression: (sys:load "/Users/michele.pasin/Dropbox/code/extempore/xtm-hacking/init-extempore/LOAD_ALL.xtm")
Loading * "libs/core/pc_ivl.xtm" *
Loading * "libs/external/portmidi.xtm" *
Loading xtmbase library... done in 1.731826 seconds
Loading xtmportmidi library... Bad: dlopen(./libs/platform-shlibs/libportmidi.dylib, 9): no suitable image found. Did find:
./libs/platform-shlibs/libportmidi.dylib: code signature in (./libs/platform-shlibs/libportmidi.dylib) not valid for use in process using Library Validation: library load disallowed by system policy
Error: could not open dynamic library
^C
Received interrupt signal (SIGINT), exiting Extempore...
FIX:
- open up the libs/platform-shlibs folder in Finder
- right click > Open in OS X for all the executables
Extempore extensions
A collection of my own custom extensions to the Extempore programming environment. Many of the compositions on this website rely on them e.g. macros like mkchord or mkint.
GitHub
See the Extempore Extensions GitHub project: it contains a bunch of (mainly MIDI) scheme extensions that I developed in order to create Extempore musical algorithms more efficiently and more naturally.
In a nutshell
The Scheme/Lisp family of languages makes it easy to shape the language the way you want by changing its core operators (a bit like 'overloading' functions).
The extensions in this collection are just that: the Scheme abstractions that I developed in order to create Extempore musical algorithms more efficiently and more naturally. It goes without saying: they work for me, but possibly not for you!
Functions navigator
The Functions navigator extensions section is also handy to browse the code.
Ableton Live
In a nutshell
Extempore can generate musical events using MIDI and an external software sequencer like Ableton LIVE. This page shows how to control Ableton LIVE using Extempore.
Controlling Ableton Live via MIDI
Main steps:
- Create a new virtual device using the Audio Midi Setup app
- Restart Ableton LIVE, open preferences and add a Control Surface for Extempore. Ensure the INPUT sections 'Track' and 'Remote' are on.
- Start Extempore, init MIDI and run (pm_print_devices) to see what device number Ableton is on, then do the usual (define *mididevice* (pm_create_output_stream <number>))
See also
OSX Audio Midi Setup
- Open up the Audio MIDI Setup app; click on the submenu Show MIDI Studio
- Double click on the IAC driver and add a new entry called 'Extempore BUS' (any name will do). Save and close.
See also:
Ableton MIDI settings
Restart Ableton LIVE, open preferences and add a Control Surface for Extempore. In Ableton Live you can create your own MIDI mappings script, so I made one and named it 'extempore'; as a result the name appears in the dropdown in the Control Surfaces section.
Note: this is an extra step; Extempore-Live MIDI communication will work even without the script (using the default settings).
Then, on the MIDI Ports Input section:
- Set Track, Sync and Remote inputs to true
- Ready to go! (route Extempore MIDI to it)
Important Note: Only activate necessary MIDI ports!
Switching Sync to On for both In and Out of the same device may trigger a feedback loop and affect Live’s performance. Do not do this unless you have a specific reason.
See also
Extempore MIDI messages
Start Extempore, init MIDI and run (pm_print_devices) to see what device number Ableton is on, then do the usual (define *mididevice* (pm_create_output_stream <number>)).
Channel numbers
MIDI channels are 1-based in Live, but 0-based in Extempore! So adjust your function calls accordingly.
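For example (a minimal sketch, using the play form from the test snippet below, where the first argument is the MIDI channel):
;; the track set to "Ch. 1" in Live listens on channel 0 from Extempore
(play 0 C3 90 1)
;; "Ch. 2" in Live corresponds to channel 1 in Extempore
(play 1 C3 90 1)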
You can also refer to the device using its name:
(define *DEFAULT_MIDI_DEVICE_NAME*
"IAC Driver Extempore Bus")
(sys:load_verbose "libs/external/portmidi.xtm") ;; core midi lib
(pm_initialize)
(pm_print_devices)
(define *mididevice* (pm_create_output_stream
(pm_output_device_with_name *DEFAULT_MIDI_DEVICE_NAME*)))
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;
;; TEST PLAY NOTES
;;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
(let ((beat (*metro* 'get-beat))
(midichannel 1))
(play midichannel 60 90 2)
)
See also
- My own Extempore startup script, which the snippet above is taken from
- Source code: pm_output_stream function and portmidi.xtm
Using Midi CC and Automations
Arm the track
From 'arrangement view' or 'session view'. Click on the automation arm button to ensure it's selected.
Arm the track(s) you want to record on.
In Extempore, make sure you have MIDI CC mappings set up for the instrument you are using.
Click on "Record"
Note: you'll always record in the Arrangement view using this method. You can also record Session clips, but you need to select the Session Record button instead, and subsequently record an arrangement from the sessions.
Send Midi
From Extempore
Eg
(define channel 1)
; first set up the mapping as usual
; (:midicc 15 (random 1 100))
(define testmidicc
(lambda (beat dur)
;; play a note at each beat
(at 1 0 (play channel C3 20 dur ))
;; change CC every time the function runs
(:midicc 15 (cosr (cosr 16 10 1/2) 5 .05))
(callback (*metro* (+ beat (* 1/2 dur)))
'testmidicc (+ beat dur) dur)))
(testmidicc (*metro* 'get-beat 1) 1/8)
You should see the MIDI CC effects in real time - e.g. if we mapped filter-1 to Frequency:
Stop recording
Always better to stop sending MIDI from Extempore first, to avoid lingering notes.
Once that is done, stop recording in Live.
You should have a brand new clip in the armed tracks in the Arrangement view.
Warning
In Ableton's MIDI settings, do not set the Output to IAC Driver! Leave it set to None. Otherwise the track recording seems to fail, as the record button always switches back to inactive.
Did it work?
This is what automations look like in Ableton
See Also
VSCode
Warning
Section needs revision
VSCode cheats for livecoding
My usual shortcuts:
- Alt+D => Delete all right (delete line content without removing the line)
- CMD+E => Extempore Eval
- ALT+CMD+E => Extempore Connect current Buffer
- CMD+] => Indent Line
- CMD+[ => Outdent Line
- CMD+rightArrow => End of line
- CMD+leftArrow => Start of line
- CMD+L => Select Line
- CMD+D => Duplicate Line
- CMD+shift+D => Delete line
- CMD+K 1-2-3-4 => Wrap selection with letbeat / onbeat / dotimes / println
- OPT+CMD+K => Format selection
- SHIFT+CMD+K => Show snippets
Preferences > Open Keyboard Shortcuts
Example key mappings, e.g. in keybindings.json:
{
"key": "cmd+e",
"command": "extension.xtmeval",
"when": "editorTextFocus"
},
{
"key": "alt+cmd+e",
"command": "extension.xtmconnect",
"when": "editorTextFocus"
},
{
"key": "cmd+k 1",
"command": "editor.action.insertSnippet",
"when": "editorTextFocus",
"args": { "name": "wrap-letbeat" }
},
{
"key": "cmd+k 2",
"command": "editor.action.insertSnippet",
"when": "editorTextFocus",
"args": { "name": "wrap-onbeat" }
},
{
"key": "cmd+k 3",
"command": "editor.action.insertSnippet",
"when": "editorTextFocus",
"args": { "name": "wrap-dotimes" }
},
{
"key": "cmd+k 4",
"command": "editor.action.insertSnippet",
"when": "editorTextFocus",
"args": { "name": "wrap-println" }
}
VSCode settings
2023-11-26: tried removing automatic snippet suggestions. It only worked by passing an [extempore] directive:
"[extempore]": {
"editor.wordBasedSuggestions": false,
"editor.quickSuggestions": {
"other": false,
"comments": false,
"strings": false
}
},
VSCODE opts
VSCode Snippets
Backup the built-in snippets
Musical loop functions
A collection of Extempore code snippets for creating different types of musical outputs.
Loop with inner dur
The most basic version of the loop templates. dur as an inner variable is handy in case we want to change dur on the fly and derive other parameters from it in the function body.
(define lp1
(lambda (beat)
(let ((dur 1))
(monitor beat dur)
(callback-at dur 'lp1 ))))
(lp1 (*metro* 'get-beat 1))
Loop with dur as a function argument
Takes dur as a loop argument. Useful when we want to change dur INSIDE the function, i.e. based on some other process that happens in the function, so that the next iteration receives a preprocessed dur.
Clearly this is a special case: normally dur is either fixed or randomly assigned, hence it does not have to be an argument.
(define lp1
(lambda (beat dur)
(monitor beat dur)
;; EG change dur briefly every 8 beats
(if= (modulo beat 8) 0
(set! dur 2) (set! dur 1)
)
(callback-at dur 'lp1 dur)))
(lp1 (*metro* 'get-beat 1) 1)
Loop with all play arguments
I.e. the 'explicit' loop. Here we are passing all play args so that they can be changed at each iteration.
(define lp1
(lambda (beat pitch vol dur inst)
(let ((dur dur))
(monitor beat pitch vol dur inst)
(callback-at dur 'lp1
pitch
vol
dur
inst))))
(lp1 (*metro* 'get-beat 1) A3 90 1 1)
Loop with rotating melody, external
Using notes and durs patterns as arguments, which rotate and get reset to the original lists when all elements have been used.
Note: since notes and durs are global variables, they can easily be changed outside the loop or reused by other functions.
(define *melody* (mkchord 48 '-))
(define *durs* (mklist 4 (oneof 1/2 1)))
(define lp1
(lambda (beat melody durs)
(let ((dur (car durs))
(p (car melody)))
(monitor beat dur p)
(callback-at dur 'lp1
(cdr-or-else melody *melody*)
(cdr-or-else durs *durs*)))))
(lp1 (*metro* 'get-beat 1) *melody* *durs*)
Loop with rotating melody, internal
Similar to the external melody version, but the pattern arguments are internal and rotated periodically.
This allows, for example, calling the same function multiple times with different notes and durs arguments without having to duplicate code, generating overlapping musical patterns.
(define lp1
(lambda (beat notes durs)
(let ((p (car notes))
(dur (car durs)))
(monitor beat p dur)
(callback-at dur 'lp1
(rotate notes -1)
(rotate durs -1)))))
(lp1 (*metro* 'get-beat 1) '(60 67 69) '(4 2))
Loop with rotating melody, internal, with 'deep' rotation
A variation of the rotate loop that takes a nested list of plists (pitch lists) and takes one value from each plist per cycle, while rotating them, generating a two-voice alternating structure.
(define lp1
(lambda (beat notes durs)
(let ((p (caar notes))
(dur (car durs)))
(monitor beat p dur )
(callback-at dur 'lp1
(rotatedeep notes -1)
(rotate durs -1)))))
(lp1 (*metro* 'get-beat 1) '((60 67 69) (48)) '(4 2))
See also
Loop with map + play pattern
A more complex pattern-playing structure that maps over several lists. It's a terse structure that makes it possible to generate many rhythmic and harmonic variations by modifying the external times, notes and vols periodically.
(define times (mklist 4 (oneof 1/2 1/4)))
(define notes (mkchordrandom A3 'm6 4 100))
(define vols (mklist 4 (oneof 50 90)))
(define lp1
(lambda (beat)
(let ((dur 1/16))
(map (lambda (t p v)
(onbeat t 0 (monitor p v dur ))
)
times notes vols)
(callback-at dur 'lp1))))
(lp1 (*metro* 'get-beat 1))
See also
- Rhythmic Cycles on YouTube uses this pattern
Loop with every
Allows expressing conditions in relation to the loop (iteration) count, by passing an extra parameter lc.
(define lp1
(lambda (beat dur lc)
(every 2 (monitor beat dur lc) )
(callback-at dur 'lp1
dur (1+ lc))))
(lp1 (*metro* 'get-beat 1) 1 1)
See also
Loop with rect downbeats
Using a rectr function to oscillate downbeats between two states only.
Note: dur is defined internally here, to make it easier to update the snippet at runtime.
(define lp1
(lambda (beat)
(let ((dur 1/16)
(t (rectr 1/2 1 1/16 )))
(at t 0
(play 1 A2 60 t)
)
(callback-at dur 'lp1))))
(lp1 (*metro* 'get-beat 1))
Loop with 'ping pong' effect
Shorten the loop cycle duration by a fixed amount to achieve a ping pong effect. Reset the cycle duration when it gets too short.
(define lp1
(lambda (beat pitch dur inst)
(play inst pitch (+ 10 (* dur 100)) dur )
(if> dur 1/32
(callback-at dur 'lp1 pitch (- dur 1/32) inst)
(callback-at 2 'lp1 pitch 1 inst))))
(lp1 (*metro* 'get-beat 1) A3 1 1)
Loop with diatonic chord progressions
Generates chord variations based on diatonic changes.
(define lp1
(lambda (beat degree)
(let ((dur 8))
(for-each (lambda (p)
(play 1 p 30 dur))
(mkchordiatonicr 36 '- degree 3 70))
(callback-at dur 'lp1
(oneof (rest (assoc degree
'((i iv)
(iv i)
))))))))
(lp1 (*metro* 'get-beat 8) 'i)
See also
Making scales and chords
A summary of the chord functions available in the mk namespace to create harmonic structures like scales and chords.
All of these functions take either a chord symbol or a scale symbol.
; sym chords
'M|m|Msus|M4|M6|M64|M7|M65|M43|M42|M2|M7#4|M9|7|9|65|43|2|42|msus|m4|m6|m64|m7|m65|m43|m42|m2|m9|m7b5|d|d6|d64|d7|d65|d43|d42|d2
; sym scales
'M|m|pentatonic|wholetone|chromatic|octatonic|messiaen1|messiaen2|messiaen3|messiaen4|messiaen5|messiaen6|messiaen7|ionian|dorian|phrygian|lydian|lydian-dominant|lydian-mixolydian|mixolydian|aeolian|locrian
Chords
(mkchord root 'chordsym )
(mkchordrand root 'chordsym )
(mkchordsteps root '(steps) 'scalesym )
(mkchordiatonic root 'maj-min 'degree) ; chordsym either M or m TODO
These are enhanced versions of the pc functions.
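For instance, this is how they are used in the loop templates further down (the exact pitches returned depend on my extensions library, so treat the comments as assumptions):
(mkchord 48 '-)              ;; chord built on root 48, as used in the rotating-melody loops
(mkchordrandom A3 'm6 4 100) ;; as used in the map + play loop template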
Deprecated:
(pc:chord root 'chordsym )
(pc:make-chord lower upper number pcchord)
(pc:make-chord-fixed pitch number pcchord)
(pc:diatonic root '^- 'i)
The only advantage of pc:make-chord is that it can take a list of pitch classes, while mkchord only takes chord symbols.
Melody
Makes a melody from a root pitch and a scale.
Scales
These are enhanced versions of the pc functions.
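For example, this is how a scale is built in the Piano Scales piece earlier on this site (the trailing argument presumably controls the range in octaves, an assumption on my part):
(mkscale c1 'pentatonic 2)   ;; as used in Piano Scales
(mkscale C3 'aeolian 2)      ;; any scale symbol from the list above should work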
Intervals
(mkint C4 3 'scale-sym )
(mkrandom C3 C6 'pitch-list-or-scale-sym ) ; simple wrapper around pc:random
(mkquant 61 'pitch-list-or-scale-sym )
Deprecated
Callback-at
A more terse version of the callback function with metro/beat implied.
The built-in callback function is the main mechanism to build temporal recursions in the Extempore language.
Goal
Changing the simple, no-extra-arguments case (a sketch mirroring the multi-argument example below):
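;; from
(callback (*metro* (+ beat (* 1/2 dur)))
          'main (+ beat dur))
;; to
(callback-at dur 'main)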
or, if you have more arguments:
;; from
(callback (*metro* (+ beat (* 1/2 dur)))
'main (+ beat dur) dur plist)
;; to
(callback-at dur 'main dur plist )
Implementation
;; first arg is always the time delay for the recursion, then the function
(impc:aot:do-or-emit
(define-macro (callback-at dur fun . args)
`(callback (*metro* (+ beat (* 1/2 ,dur)))
(eval (quote ,fun)) (+ beat ,dur) ,@args)))
The only requirement is to have the beat and *metro* symbols defined. Note: *metro* is available by default with the standard Extempore library.
Example
;; basically, you can simply omit beat
(let ((beat (*metro* 'get-beat)))
(callback-at 1 'println 1))
;; let's play some music
(define pinco
(lambda (beat)
(let ((dur (random 1 3)))
(play 1 (random 60 80) 90 dur)
(callback-at dur 'pinco ))))
(pinco (*metro* 'get-beat 1))
See also
- Original callback function
- Implementation of make-metro function
- Extempore language homepage
Rect
Rect produces a repetitive function that returns only the min value and the max value, with no intermediate values. The max value is the sum of the base point and the variation, e.g. 1/4 + 3/4 (= 1) in this example:
(at (rectratio 1/4 3/4 1/16) 0 ;; cycle between two downbeat values
(play piano3
(mkrel C3 (cosr (cosr 5 4 .02) (cosr 5 5 7/2) 1/39) 'm) ;; use intervals of C3
(cosr 25 15 1) 1)
)
This makes it possible to produce cycled loops with two values only. Tip: tweak the pitch cosr frequency to generate different notes, e.g. (cosr 5 5 1/2).
Make it more interesting by introducing a random offset, to make it sound more natural:
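A minimal sketch of one way to do it, reusing the oneof helper from elsewhere on this site to randomize the offset argument of at:
(at (rectratio 1/4 3/4 1/16) (oneof 0 1/16 1/8)   ;; small random offset on the two downbeats
    (play piano3
          (mkrel C3 (cosr 5 4 1/39) 'm)
          (cosr 25 15 1) 1))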
Random
Useful tips about the random
function in Extempore.
Background
Apparently standard Scheme does not provide a random number generator. But no worries, there are multiple ways to achieve that, as discussed in this SO thread.
Luckily Extempore comes with its own implementation of random. However, the original Extempore version always returns an integer when arguments are provided.
That's not exactly what I wanted, as in some situations I'd like to get a random float from any two numbers. So I made a version that does that: I've updated it on lines 11-12, so that if the first argument is a float, then it returns a float.
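Not my actual patched version, but a minimal sketch of the idea: build a float-range helper on top of the zero-argument (random), which returns a float between 0 and 1.
;; sketch only: a random float between a and b (assumes b > a)
(define randomf
  (lambda (a b)
    (+ a (* (random) (- b a)))))
(randomf 2.0 5.0)   ;; e.g. 3.71...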
Random float
Generates a float from 0 to 1.
Random from list
Returns a random element from the list.
Heads up: symbols are evaluated before being returned!
Random with boundaries
Returns a random integer from 2 to 5 (excluded).
Returns a random floating number from 2 to 5 (excluded).
Defaults
With a single argument, the lower bound defaults to 0, hence this returns a random integer from 0 to 2 (excluded).
Likewise, a single float argument returns a random float from 0 to 2 (excluded).
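In code, the variants described above look like this (the float-range calls assume the patched version mentioned earlier):
(random)               ;; float between 0 and 1
(random '(60 63 67))   ;; a random element from the list
(random 2 5)           ;; integer from 2 to 5 (excluded)
(random 2.0 5.0)       ;; float from 2 to 5 (excluded), patched version
(random 2)             ;; integer from 0 to 2 (excluded)
(random 2.0)           ;; float from 0 to 2 (excluded), patched version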
Example
Ended: Extempore
Impromptu ↵
About Impromptu
Legacy software
Impromptu was released in 2005 and was superseded by Extempore around 2011. The articles in this section were originally published on another blog and have been archived here.
Impromptu is a Mac OS X programming environment for live coding, created by Andrew Sorensen. Impromptu is built around the Scheme language, which is a member of the Lisp family of languages.
Here is a classic piece made with Impromptu, titled A Study in Keith.
See also this slide deck I prepared for a tutorial in 2011.
2013-09-15: Building a master volumes UI with impromptu
Based on one of the examples packaged with Impromptu, I wrote a simple function that uses the objc bridge to create a bare-bones user interface for adjusting your audio instruments master volumes.
Using the objc
bridge
The script assumes that your audio graph includes a mixer object called *mixer*. The UI controllers are tied to that mixer's input buses' gain values.
The objc bridge commands are based on the silly-synth example that comes with the default Impromptu package.
Being able to control volumes manually rather than programmatically made a great difference for me, both in live coding situations and while experimenting on my own: it really speeds up the music creation process and the ability to work with multiple melodic lines.
The next step would be to add a midi bridge that lets you control the UI using an external device, in such a way that the two controllers are kept in sync too. Enjoy!
P.s.: this is included in the https://github.com/lambdamusic/ImpromptuLibs
2013-02-20: A metronome object for Impromptu
Metronome: a device used by musicians that marks time at a selected rate by giving a regular tick. If you ever felt that you missed a metronome in Impromptu, here is a little scheme object that can do that job for you.
Function: make-metroclick
The make-metroclick function returns a closure that can be called with a specific time in beats, so that it plays a sound for each beat and marks the downbeat using a different sound.
Possibly useful in order to keep track of the downbeats while you compose, or just to experiment a little with some rhythmic figures before composing a more complex drum kit section.
Here's a short example of how to use it:
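The exact signature isn't reproduced here, so the following is a hypothetical sketch; the names and arguments are assumptions.
;; hypothetical usage sketch (arguments are assumptions, not the real signature)
(define *metro* (make-metro 120))            ;; the metro object make-metroclick relies on
(define *metroclick* (make-metroclick 4))    ;; assumed argument: beats per bar, downbeat marked with a different sound
(*metroclick* (*metro* 'get-beat 4))         ;; call the closure with a time in beats to start ticking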
Make-metroclick relies on the standard libraries that come with Impromptu, in particular make-metro, which is described in this tutorial and in this video. Essentially, it requires you to define a metro object first, e.g. (define *metro* (make-metro 120)).
Here's the source code:
2011-12-21: Using Impromptu to visualize RSS feeds
Some time ago I experimented with processing and displaying RSS feeds within Impromptu, and as a result I built a small app that retrieves the news feed from The Guardian online and displays it on a canvas. I've had a bit of free time these days, so last night I thought it was time to polish it a little and make it available on this blog (who knows, maybe someone else will use it as a starting point for another project).
There are still a thousand improvements that could be made, but the core of the application is there: I packaged it as a standalone app that you can download here (use the 'Show Package Contents' Finder command to see the source code).
The application relies on a bunch of XML processing functions that I found within Impromptu's 'examples' folder (specifically, the example named 35_objc_xml_lib). I pruned it a bit to fit my purposes and renamed it xml_lib.scm.
By using that, I created a function that extracts title and url info from the guardian feed:
(load "xml_lib.scm")
(define feedurl "http://feeds.guardian.co.uk/theguardian/world/rss")
;;
;; loads the feed and extracts title and url
;;
(define get-articles-online
(lambda ()
(let* ((out '())
(feed (xml:load-url feedurl))
(titles (objc:nsarray->list (xml:xpath (xml:get-root-node feed)
"channel/item/title/text()")))
(urls (objc:nsarray->list (xml:xpath (xml:get-root-node feed)
"channel/item/link/text()"))))
(for-each (lambda (x y)
(let ((xx (objc:nsstring->string x))
(yy (objc:nsstring->string y)))
(set! out (append out (list (list xx yy))))))
titles urls)
out)))
Some feed titles are a bit longish, so I added a utility function formattext that wraps the titles' text if they exceed a predefined length.
(define formattext
(lambda (maxlength txt posx posy)
(let ((l (string-length txt)))
(if (> l maxlength)
(let loop ((i 0)
(j maxlength) ;; comparison value: it decreases at each recursion (except the first one)
(topvalue maxlength)) ;; komodo value : must be equal to j at the beginning
(if (equal? (- topvalue i) j) ;; the first time
(loop (+ i 1) j topvalue)
(begin ;(print (substring txt (- topvalue i) j))
(if (string=? (substring txt (- topvalue i) j) " ")
(string-append (substring txt 0 (- topvalue i))
"n"
(substring txt (- topvalue i) (string-length txt)))
(if (< i topvalue) ;;avoid negative indexes in substring
(loop (+ i 1) (- j 1) topvalue))))))
txt))))
And here's the main loop: it goes through all the feed items at a predefined speed and displays them on the canvas, using a cosine oscillator to vary the colours a bit. At the end of it I also update 3 global variables that are used by the mouse-click-capturing routine.
(define displayloop
(lambda (beat feeds)
(let* ((dur 5)
(posx (random 0 (- *canvas_max_x* 350)))
(posy (random 10 (- *canvas_max_y* 150)))
(txt (formattext 40 (car (car feeds)) posx posy))
(dim ;(+ (length feeds) 10))
(if (= (length feeds) 29)
60 ;; if it's the first element of the feed list make it bigger
(random 25 50)))
(fill (if (= (length feeds) 29)
'(1 0 (random) 1) ;; if it's the first element of the feed list make it reddish
(list (random) 1 (random) 1)))
(style (gfx:make-text-style "Arial" dim fill)))
(gfx:clear-canvas (*metro* beat) *canvas* (list (cosr .5 .6 .001) 0 (cosr .5 .6 .001) .5 ))
(gfx:draw-text (*metro* beat) *canvas* txt style (list posx posy))
(set! *pos_x* posx)
(set! *pos_y* posy)
(set! *current_url* (cadr (car feeds)))
(callback (*metro* (+ beat (* 1/2 dur))) 'displayloop (+ beat dur)
(if-cdr-notnull feeds
(get-articles-online))))))
In order to capture the clicks on the feed titles I simply create a rectangle path based on the x,y coordinates randomly assigned when displaying the title on the canvas. These coordinates are stored in global variables so that they can be updated constantly.
(io:register-mouse-events *canvas*)
(define io:mouse-down
(lambda (x y)
(print x y)
(when (gfx:point-in-path? (gfx:make-rectangle *pos_x* *pos_y* 200 200) x y )
(util:open-url *current_url*))))
Finally, the util:open-url opens up a url string in your browser (I've already talked about it here).
You can see all of this code in action by downloading the app and taking a look its contents (all the files are under Contents/Resources/app).
If I had the time…
Some other things it'd be nice to do:
- Creating a routine that makes the transitions among feed items less abrupt, maybe by using canvas layers.
- Refining the clicking events creation: right now you can click only on the most recent title; moreover, the clicking event handler is updated too quickly, so unless you click on a title as soon as it appears you won't be able to trigger the open-url action.
- Refining the xml-tree parsing function, which now is very very minimal. We could extract news entries description and other stuff that can make the app more informative.
- Adding some background music to it.
Impromptu 2.5 has been out for a while now, but I had never realised it contained this handy new feature: an 'error hook':
The interpreter will now throw to an error hook providing you with greater control over exception handling. You initiate the livecoding error hook by calling (sys:livecoding-error-hook #t). Errors are then passed to the *livecoding-error-hook* function - which you may rebind. By default the function simply returns 1 but can be modified for more specialised behaviour.
This is extremely useful, to say the least, if you are performing live and want to avoid situations in which a (stupid) typo or parenthesis error will mess up your entire gig. The error hook in many cases will prevent your looping function from crashing, giving you time to fix the error. Really neat.
Here's an example from the official release notes:
;; turn on livecoding error hook
(sys:livecoding-error-hook #t)
;; with livecoding-error-hook on
;; this temporal recursion will continue
;; to play the second note even though 'pitch
;; is an unbound symbol
(define loop
(lambda (beat)
;; symbol pitch is not bound but the loop continues to function
(play piano pitch 80 1)
(play piano 63 80 1)
(callback (*metro* (+ beat (* 1/2 1))) 'loop (+ beat 1))))
(loop (*metro* 'get-beat 4))
;; by redefining the error hook we can provide
;; additional specialisation - such as replacing
;; any unbound symbol with 60!
;;
;; eval below and both notes will play
;; 'pitch being replaced by 60
(define *livecoding-error-hook*
(lambda (msg a)
(cond ((symbol? a) 60)
(else 0))))
Happy (and safer) livecoding!
2011-03-29: An alternative to the 'play' macro: 'iplay' and 'with-instrument'
The other day I was thinking: when I use the play macro in Impromptu (video tutorial), in my head it's already obvious which virtual instrument I want to play. So why do I have to specify it all the time? Wouldn't it be more natural to just be able to say, for example: get this instrument, and now play this and that note with it...
Let me clarify this with an example. Let's set up the standard Apple's DLS-synth audiounit and the metronome:
(au:clear-graph)
(define dls (au:make-node "aumu" "dls " "appl"))
(au:connect-node dls 0 *au:output-node* 0)
(au:update-graph)
(au:print-graph)
(define *metro* (make-metro 100))
Now imagine that we want to play some (dumb) melody with Apple's DLS. We can use the usual play macro to achieve this quite easily (that's because above we set up the metronome, which is needed in order to use play - check out the docs if this sounds odd to you).
Sequencing a bunch of notes is thus just a matter of sequencing play macros:
(define the-usual-play
(lambda (beat)
(play dls 60 90 1)
(play dls 64 90 1)
(play 1/2 dls 72 90 1)
(play 3/2 dls 67 90 1)
(callback (*metro* (+ beat (* 1/2 2))) 'the-usual-play (+ beat 2))))
(the-usual-play (*metro* 'get-beat 4))
That's where I started getting nervous (so to speak). Having to write 'dls' each time I play a new note seemed redundant and illogical; I know dls is the instrument I want to play - I heard my mind screaming - so why can't I focus on the music instead of making sure I type in the instrument name all the time?
Taking advantage of Scheme, the self-modifying language
Luckily though we're using Scheme, which unlike most other computer languages allows you to change the language grammar as you like, thanks to macros. So here we go: we can create a new macro similar to play that lets you omit the instrument you're playing. We'll call it iplay (short for instrument-play, not iTunes :-)):
(macro (iplay args)
(cond ((equal? (length (cdr args)) 3)
`(let ((note ,(cadr args))
(vol ,(caddr args))
(dur ,(cadddr args))
)
(play my-inst note vol dur)))
((equal? (length (cdr args)) 4)
`(let ((offset ,(cadr args))
(note ,(caddr args))
(vol ,(cadddr args))
(dur ,(cadddr (cdr args))))
(play (eval offset) my-inst note vol dur)))
(#t (print "Error: the function only accepts 3 or 4 argument"))))
Essentially what we're telling the interpreter here is that every time iplay is used, the original play should be called and the symbol my-inst should be passed as the variable representing our instrument. Now we can modify the simple loop defined above like this:
(define the-usual-play-modified
(lambda (beat)
(let ((my-inst dls))
(iplay 60 90 1)
(iplay 64 90 1))
(play 1/2 dls 72 90 1)
(play 3/2 dls 67 90 1)
(callback (*metro* (+ beat (* 1/2 2))) 'the-usual-play-modified (+ beat 2))))
(the-usual-play-modified (*metro* 'get-beat 4))
If you run that, you'll see that this loop sounds exactly the same as the previous one, although the first two play calls are now iplay macro calls. The whole thing works because we introduced a local variable my-inst and bound it to the dls audio instrument (created at the beginning). Notice that the new macro iplay knows nothing about which instrument is playing: it just blindly uses the my-inst variable, under the assumption that we've associated it with a valid audio instrument.
Some more syntactic sugar
The only hassle now is that each time we want to use iplay we are forced to use the (let ((my-inst dls)) ...) form. Typing this stuff doesn't feel very natural either. Rather, in my head, I tend to see things like this: get an instrument first, then play a bunch of notes with it.
So, let's create some syntactic sugar for the 'let' form, by defining another macro, 'with-instrument':
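Here it is (the same definition is repeated in the full listing at the end of the post):
(macro (with-instrument args)
   `(let ((my-inst ,(cadr args)))
      ,@(cddr args)))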
As you can see, this macro doesn't do much: it just rephrases the let form above in a way that is probably more natural to me (and to others too I believe..).
For example, now we can use iplay like this:
(define justatest
(lambda (beat)
(with-instrument dls
(iplay 48 40 1) ;; iplay with 3 args: pitch, vol and dur
(iplay (random (list 1/2 1/4 1/8)) (random 60 80) 40 1)) ;; 4 arguments: offset
(callback (*metro* (+ beat (* 1/2 1))) 'justatest (+ beat 1))))
(justatest (*metro* 'get-beat 4))
Finally, let's remember that because of the way we defined iplay above, we can pass it 3 or 4 arguments: in the first case, the macro assumes that we're providing a pitch, a volume, and a duration. In the second case instead the first argument is assumed to be an offset value (while the others remain unchanged).
The original play macro can also take another argument: the channel (or MIDI port). I haven't included it here because I normally don't need it, but if you do, I'm sure you can fiddle a bit with the code above and make it do whatever you want!
Conclusion: here is the code you need to evaluate within Impromptu if you want to use the with/iplay constructs (all the source code is also available on BitBucket):
(macro (with-instrument args)
`(let ((my-inst ,(cadr args)))
,@(cddr args)))
(macro (iplay args)
(cond ((equal? (length (cdr args)) 3)
`(let ((note ,(cadr args))
(vol ,(caddr args))
(dur ,(cadddr args))
)
;(print inst beat note vol dur)
(play my-inst note vol dur)))
((equal? (length (cdr args)) 4)
`(let ((offset ,(cadr args))
(note ,(caddr args))
(vol ,(cadddr args))
(dur ,(cadddr (cdr args))))
;(print inst beat note vol dur)
(play (eval offset) my-inst note vol dur)))
(#t (print "Error: the function only accepts 3 or 4 argument"))))
Finally.. a 'pastebin' function
Also, here is a pastebin function similar to pb:cb (check out this video tutorial if you don't know what I'm talking about) that returns a template with the with-instrument macro:
(define pb:iplay (lambda (name dur inst) '())) ;; fake function definition, useful for autocompletion!
;; macro wrapper for pb-iplay
(define-macro (pb:iplay name dur inst . args)
`(pb-iplay (sexpr->string (quote ,name))
(sexpr->string (quote ,dur))
(sexpr->string (quote ,inst))
,@(map (lambda (expr)
(sexpr->string expr))
args)))
(define pb-iplay
(lambda (name dur inst . args)
(let ((a (apply string-append (map (lambda (e) (string-append "\n " e)) args))))
(sys:set-pasteboard
(string-append
"(define " name "
(lambda (beat)
(with-instrument " inst "
" a "
(callback (*metro* (+ beat (* 1/2 " dur "))) '" name " (+ beat " dur ")))))\n\n(" name " (*metro* 'get-beat 4))")))))
;;;;;;;;;;;;;;;;
;; call it like this:
;(pb:iplay myloop 1/4 dls)
;;
;;;;;;;;;;;;;;;;;
;; returns:
;;;;;;;;;;;;;;;;
;
;(define myloop
; (lambda (beat)
; (with-instrument dls
;
; (callback (*metro* (+ beat (* 1/2 1/4))) 'myloop (+ beat 1/4)))))
;
;(myloop (*metro* 'get-beat 4))
I got this info while reading the Kontakt 4 documentation, and I thought it was useful to pass it on. It helps in understanding the significance of the 'reaper time' setting in Impromptu, which I often play around with without really getting it... (check the Reaper stuff on the IM mailing list).
Kontakt 4 is an award-winning sampler from Native Instruments; I was just going through the docs the other day to figure out what it does and doesn't do, and in the section that explains the various settings parameters I found the description of the 'latency optimization' very useful (2.1.2 Latency Optimization, KONTAKT 4 Getting Started, page 11 - get the PDF of the manual here).
The Latency slider controls the size of the playback buffer.
And here is the explanation of that setting:
The load that typical digital audio calculations generate on your processor is often not constant and predictable; parameter changes, additional voices or other processes can all cause momentary peaks in the load, which can result in drop-outs or other audio artifacts if not properly compensated for. That is why audio programs do not send the audio signals they generate directly to the hardware, but write them to a short buffer in memory instead, whose contents are in turn being sent to the actual hardware.
This concept allows the program to bridge short irregularities in the stream calculation and thus be more resistant to processing peaks. Of course, this "safety net" comes at a price: the buffering causes a delay, known as latency, between the triggering of a note and the actual sound. This delay gets longer with increasing buffer sizes. Hence, it's vital to tune the buffer size in order to find a good compromise between latency and playback reliability. The optimal value depends on such diverse factors as your CPU, memory and hard disk access times, your audio hardware and drivers, and your operating system environment.
In order to find the optimal buffer size for your system, we recommend that you begin by setting the Latency slider described in the previous section to a healthy middle value between 384 and 512 samples, then gradually decrease the value during your normal work. When you begin to notice drop-outs, increase the buffer again by a small amount. Generally, it's a good idea to have as few other applications as possible running in the background when working with audio software. Also, if you can't get below a certain buffer size without getting drop-outs, consult the documentation of your audio hardware to find out whether you can access it via an alternate driver architecture, as some architectures allow more efficient low-level access to the hardware than others.
Imagine you've got a bunch of audio samples you want to load up while livecoding with Impromptu, but you can't remember their exact names - it'd be handy to be able to open up the corresponding Finder window directly from Scheme, without too much clicking around. Doable or not?
I spent some time trying to figure this out, and the answer is yes! Quite a nice learning experience... although it came with a surprise at the end.
Originally I thought: let's do it via Impromptu's ObjC bridge. I don't know much about ObjC, but after a bit of googling it seemed evident that the quickest way to accomplish this is by writing ObjC code that, in turn, runs a simple AppleScript command that opens a window:
NSString *s = [NSString stringWithFormat:
    @"tell application \"Terminal\" to do script \"cd %@\"", folderPath];
NSAppleScript *as = [[NSAppleScript alloc] initWithSource: s];
[as executeAndReturnError:nil];
So I translated the bottom part of the code above into impromptu/scheme:
(define run_applescript
(lambda (script)
(objc:call (objc:call (objc:call "NSAppleScript" "alloc")
"initWithSource:"
script)
"executeAndReturnError:" )))
That is a generic script-running function, that is, you can pass any script and it'll run it, eg:
(define script0 "
tell application \"Finder\" to open folder \"Macintosh HD:Users\"
tell application \"Finder\" to activate")
(define script1 "
tell application \"Terminal\" to do script \"cd ~; open .\"")
(define script2 "
tell application \"System Events\"\n
tell dock preferences\n
set properties to {autohide:false}\n
end tell\n
end tell")
;; finally, let's choose randomly one of the scripts above and run it
(run_applescript (random '(script0 script1 script2)))
Now, back to the original problem: in order to open a Finder window at a specified location we need to pass the location variable to our run_applescript function; also, we want to be able to pass Unix path expressions (e.g. '/Users/mike/music/'), so we've got to transform that path syntax into the colon-delimited Macintosh syntax ("Macintosh HD:Users:mike:music") needed by the AppleScript function we're using. That's easily done with string-replace, so here we go:
(define open_finder_at
(lambda (location)
(let* ((llocation (string-replace location "/" ":"))
(script (string-append "tell application \"Finder\" to activate open folder \"Macintosh HD" llocation "\"")))
(objc:call (objc:call (objc:call "NSAppleScript" "alloc")
"initWithSource:"
script)
"executeAndReturnError:" ))))
(open_finder_at "/Users/me/")
That's pretty much it really... now we can easily open OsX Finder's windows from within Impromptu.
But as I said above, there's a surprise: after getting this far I thought I'd search Impromptu's mailing list for more examples of objc:call... and guess what, there's already a system function that runs AppleScripts: it's called sys:run-applescript!
Too bad, it's been a rewarding learning experience anyways...
Good news for livecoders: a new version of Impromptu is available (direct link to the 2.5 dmg package).
Apart from various bug fixes, it looks as if the major development is the ICR (Impromptu compiler runtime), a new set of Scheme functions that facilitate the creation of faster bytecode, so that computationally intensive tasks such as real-time audio processing or OpenGL can perform more efficiently.
The new compiler is a far more robust and serious attempt at providing low level, but completely dynamic programming support within the Impromptu ecosystem. It is designed for situations which require low-level systems style programming. Two primary examples are audio signal processing and opengl programming - both of which require the production of highly efficient code. While attempting to achieve runtime efficiency the ICR also tries to maintain Impromptu's love of all things dynamic and is designed to coexist with the rest of Impromptu's infrastructure (Scheme, Objective-C, AudioUnits etc..).
The new compiler seems to introduce some pretty fundamental improvements:
It is important to state from the outset that the new Impromptu compiler is NOT a scheme compiler. It is really its own language. This language looks like scheme (sexpr's and alike) but is in many ways semantically closer to C (a nice functional version of C :-). This is because the ICR is designed for purposes that are not suitable for the standard scheme interpreted environment. Low level tasks that require efficient processing - such as audio signal processing, graphics programming, or general numerical programming.
Unlike Scheme the Impromptu compiler is statically typed. You may not always see the static typing but it is always there in the background. The reason that you won't always see the typing is that the compiler includes a type-inferencer that attempts to guess at your types based on static analysis of your code. Sometimes this works great, sometimes it doesn't. When things fail, as they surely will, you are able to explicitly specify types to the compiler.
A big thanks to Andrew Sorensen for keeping up (and free) this great work!
When you're Impromptu-ing but don't know the meaning or syntax of a function, the usual thing to do is to call (help function-name) to get some help about that function, or (help function-name #t) if you also want to see the examples associated with it. The help text gets displayed in the log view, so that you can then copy/paste what you need from there. Quite useful, but nonetheless I always find myself fighting with the log window: too small, hidden away by other canvases, or not readable anymore because after calling the help function I've evaluated other stuff that has pushed up the much needed help text.
Impromptu has also had a wiki for a couple of months now - so I thought it'd be nice to see a function's help in a browser window, and possibly contribute to its explanation too.
So, that's the rationale for this little script. By calling 'wiki' you can open up a web browser at the relevant Impromptu wiki page... as simple as that.
First off, we need a couple of utility functions that are not included in Impromptu by default, for better manipulating strings, lists and webpages (UPDATE 9-Nov-2010: some of these symbols have been included in Impromptu 2.5, so I prefixed the ones below with the utils: namespace):
;;;;;;;
;; utilities
;;;;;;;
;; (utils:list-flatten '(9 9 (9 9 9 )))) = (9 9 9 9 9)
(define utils:list-flatten
(lambda (l)
(cond ((null? l)
'())
((atom? l)
(list l))
(#t (append (utils:list-flatten (car l)) (utils:list-flatten (cdr l)))))))
;; returns a char from a string of length 1, or a list of chars from a longer string
(define utils:char
(lambda (string_char)
(if (string? string_char)
(if (> (string-length string_char) 0)
(if (> (string-length string_char) 1)
(string->list string_char)
(car (string->list string_char))))
(print 'please 'enter 'a 'string))))
;; matches a single character in a string, and replaces it
(define utils:string-replace
(lambda (s match replacement)
(let ((ll (string->list s))
(match1 (utils:char match))
(replacement1 (utils:char replacement)))
(if (= (string-length match) 1)
(let ((z (map (lambda (x)
(if (equal? x match1)
replacement1
x))
ll)))
(list->string (utils:list-flatten z)))
;z)
(print "i can match only single characters for now")))))
;; makes a string upper case
(define utils:string-capitalize
(lambda (s)
(string-append (string (char-upcase (string-ref s 0))) (substring s 1 (string-length s)))))
;; open-url: calls the default mac browser with a url argument
;; disclaimer: I'm not an objc programmer... found an example at
;; http://macosx.com/forums/software-programming-web-scripting/18422-how-do-i-launch-url-using-cocoa-objective-c.html
(define utils:open-url
(lambda (urlstring)
(let ((urlobj (objc:call "NSURL" "URLWithString:" urlstring))
(workspace (objc:call "NSWorkspace" "sharedWorkspace")))
(objc:call workspace "openURL:" urlobj))))
Finally, the functions for opening the wiki page:
;;;;;;;;;;
;; wiki url caller
;; e.g. (wiki objc:from-address) => goes to http://moso.com.au/wiki/index.php?title=Objc:from-address
;;;;;;;;;;
;; wikiescape: composes the url so that it matches the ones of the online wiki
(define wikiescape
(lambda (funname)
(for-each (lambda (x)
(set! funname (utils:string-replace funname (car x) (cadr x))))
'(("+" "%2B")
("=" "%3D")
("<" "lessthan")
(">" "greaterthan")
("*" "%2A")
("?" "%3F")
("!" "%21")
))
(utils:string-capitalize funname)))
(define wiki-inner
(lambda (funname)
(let* ((urlbase "http://moso.com.au/wiki/index.php?title=")
(newname (wikiescape funname))
(url (string-append urlbase newname)))
(utils:open-url url))))
;; macro wrapper and main function that gets called
(define-macro (wiki name)
`(wiki-inner (sexpr->string (quote ,name))))
That's it: load all of this code (or put it in a single file and load it at startup time) and you've got the wiki procedure available!
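For example, reusing the function mentioned in the comments above, this should open the corresponding wiki page in your default browser:
;; opens http://moso.com.au/wiki/index.php?title=Objc:from-address
(wiki objc:from-address)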
2009-10-29: Impromptu: If-mod macro
Hey there - this morning I checked out a nice screencast by Ben Swift and was struck by the if-mod construct he's using. It's a really useful shortcut that saves you from writing a few (possibly distracting) parentheses, so I tried to recreate it myself.
Implementing if-mod
To recap.. normally with Impromptu, if you want to play notes at a specific time expressed in beats, you'd have to set up a metronome first [have a look here for more info about how to use *metro*] and then check for the right beat using the modulo function.
For example, something like this will play a central C every first beat of a 4/4 measure:
(define *metro* (make-metro 100))
(define test
(lambda (beat)
(if (equal? (modulo beat 4) 0)
(play dls 60 60 3))
(callback (*metro* (+ beat (* 1/2 1/4))) 'test (+ beat 1/4))))
(test (*metro* 'get-beat 4))
Another way of doing this is by using case. Same approach, but probably faster to code, as it lets you specify 'multiple beats' very easily:
(define test2
(lambda (beat)
(case (modulo beat 4)
((0)
(play dls 60 60 3))
((2 5/2)
(play dls 67 60 1/2)))
(callback (*metro* (+ beat (* 1/2 1/4))) 'test2 (+ beat 1/4))))
(test2 (*metro* 'get-beat 4))
Still quite a few parentheses though... which, especially when playing live, means more chances to mess up! So when I saw Ben's video I realized that a macro to facilitate the creation of case/modulo expressions would be quite handy.. Here is how it can be done:
(define-macro (if-mod x y args)
`(for-each (lambda (step)
(if (equal? (modulo beat ,x) step)
,args))
(if (list? ,y)
,y
(list ,y))))
Now, by using the if-mod macro we've just created we can re-write the second example above much more concisely:
(define test2-new
(lambda (beat)
(if-mod 4 0 (play dls 60 60 3))
(if-mod 4 '(2 5/2) (play dls 67 60 1/2))
(callback (*metro* (+ beat (* 1/2 1/4))) 'test2-new (+ beat 1/4))))
(test2-new (*metro* 'get-beat 4))
That's all! Notice also that the if-mod construct can take either a list of beats or a single one.
2009-10-18: Impromptu & Zebra: a perfect match!
I've been having so much fun using Impromptu with the Zebra audiounit lately (I just got the demo version for now). Nice sounds, very stable, clear but captivating interface, easy to control programmatically via Impromptu. Think I'm gonna buy it!
Zebra features
Zebra is a wireless modular synthesizer. It combines numerous synthesis techniques (subtractive, additive, fm, wavetable, etc.) with a powerful modulation engine that even smoothly integrates with the built-in effects section.
Unlike its analog predecessors Zebra has got an adaptive user interface that shows only what you can hear. You don't have to worry about complexity - but it's available when you need it!
A demo version (AU, VST Mac & Win, RTAS Mac) is available to musicians with fastidious sound requirements:
Zebra can be ordered online for only $199 (USD)!
Screenshots
2007-05-12: Impromptu: scheme-based music and video
Live & interactive programming
In one word? SUPER COOL!!!
Impromptu is an OSX programming environment for composers, sound artists, VJ's and graphic artists with an interest in live or interactive programming. Impromptu is a Scheme language environment, a member of the Lisp family of languages.
Time is the essence:
Time plays a major role in the Impromptu environment allowing accurate real-time scheduling of events and code. Impromptu is a dynamic environment designed for the creation and manipulation of running programs in live performance.
AudioUnit Host for OSX:
Impromptu is a programmable AudioUnit host. A powerful environment for creating AudioUnit graphs of arbitrary complexity with precise programmatic control over individual AU nodes. Musical material can be precisely scheduled for performance by any AudioUnit instrument node and parameters, program changes and presets can be programmatically changed on-the-fly as well as directly via the AU's user interface.
Graphics routines can be applied with the same temporal accuracy as audio material allowing artists to tightly integrate audio and visual components. OpenGL, live video processing, vector drawing routines, image rendering, CoreImage filters, text rendering and quicktime movie support are a few of the graphics features available for artists to play with.
Impromptu also includes a bidirectional ObjC-Bridge allowing Scheme to instantiate and call ObjC objects and ObjC objects to call back into the Scheme interpreter.
Ended: Impromptu
Miscellaneous ↵
Composing at the metalevel
I've started reading "Notes from the Metalevel: An Introduction to Computer Composition", by Heinrich Taube, and realised I should have done that a long time ago!
About the book
Notes From the Metalevel is a practical introduction to computer composition. It is primarily intended for student composers interested in learning how computation can provide them with a new paradigm for musical composition.
I happened to have a pdf version of the book, but the good news is that there's an html version of it too, which also includes all the midi files of the numerous examples in the book. So make sure you check that out if you're interested in computer-based composition. You might also be interested in this review in the Computer Music Journal, and these course materials from Taube's class at Illinois.
What is metalevel composition?
The preface to the first chapter contains this suggestive excerpt from Leonard Shlain's book, The Alphabet Versus the Goddess, which Taube (pages 19-20) uses as a metaphor for what algorithmic composition (i.e., metalevel composition) is:
"The one crowded space in Father Perry's house was his bookshelves. I gradually came to understand that the marks on the pages were trapped words. Anyone could learn to decipher the symbols and turn the trapped words loose again into speech. The ink of the print trapped the thoughts; they could no more get away than a doomboo could get out of a pit. When the full realization of what this meant flooded over me, I experienced the same thrill and amazement as when I had my first glimpse of the bright lights of Konakry. I shivered with the intensity of my desire to learn to do this wondrous thing myself." (spoken by Prince Modupe, a west African prince who learned to read as an adult)
It is impossible to know exactly how Prince Modupe felt when he discovered a process by which his very thoughts could be trapped and released at will again into speech. But I think his epiphany must be close to what I experienced when, as a young composer, I was first shown how I could use a computer to represent my musical ideas and then "release them" into musical compositions. At that instant it became clear to me that there was an entire level of notation above the scores that I had been writing in my regular composition classes, a level I knew nothing about! But I could see that in this level it was possible to notate my compositional ideas in a precise manner and work with them in an almost physical way, as "trapped words" that could be unleashed into musical sound whenever I wanted.
So what does it mean to compose at the meta level?
Given the existence of the acoustic and score representations one might ask if there is yet another representation that constitutes a level of abstraction above the performance score? The answer, of course, is yes; it is what this book terms the metalevel. If the score represents the composition then the metalevel represents the composition of the composition. A metalevel representation of music is concerned with representing the activity, or process, of musical composition as opposed to its artifact, or score.
This book is about using the computer to instantiate this level: to define, model and represent the compositional processes, formalism and structures that are articulated in a musical score and acoustic performance but are not literally represented there. By using a computer the composer can work with an explicit metalevel notation, or language, that makes the metalevel as tangible as the performance and acoustic levels.
Special issue of CMJ DVD on livecoding
The latest issue of the Computer Music Journal is now available, and it includes a DVD full of livecoding bonanza.
Because this is the Winter issue, it includes the annual CMJ DVD, whose program notes appear near the end of the issue. The curators for the compositions on this year’s DVD are specialists in live coding, the practice of onstage computer programming whose real-time output is an improvised and often collaborative musical performance. As always, the DVD also includes sound and video examples to accompany recent articles, as well as related files on the DVD-ROM portion of the disc.
More information
A full description of the contents of the DVD is available here (and here if you're not benefitting from an academic subscription), and I'm very proud to say that it also includes one of my livecoding pieces, Untitled 12, performed live at the Anatomy Museum livecoding event in 2010.
Article: Thought and Performance, Live Coding Music, Explained to Anyone
I bookmarked this article on createdigitalmusic.com a while ago (it's from Jul 2010) and ran into it again today.. "Thought and Performance, Live Coding Music, Explained to Anyone – Really" by Peter Kirn contains several simple but thought provoking ideas about livecoding and its relevance in the (traditional) music world.
Is livecoding an elitist activity?
Secrets such as why the programming language Lisp inspires religious devotion, or how someone in their right mind would ever consider programming onstage as a form of musical performance, represent the sort of geekery that would seem to be the domain of an elite.
Commenting on Ramsay's video (Algorithms are Thoughts, Chainsaws are Tools):
I doubt very seriously that live coding is the right performance medium for all computer musicians. [..] But Ramsay reveals what live coding music is. It’s compositional improvisation, and code simply lays bare the workings of the compositional mind as that process unfolds. Not everyone will understand the precise meaning of what they see, but there’s an intuitive intimacy to the odd sight of watching someone type code. It’s honest; there’s no curtain between you and the wizard.
An interesting comment from a reader puts forward what I'd call the 'livecoding as programming virtuosity' view:
The live coding thing is clearly an amazing talent. I admire anyone who can do that, but it does seem pretty much a sophisticated parlor trick unless the music resulting can stand on its own. The question becomes, were you to hear the piece without observing the live coding performance, would it stand up, or is the quality of the piece augmented by the way in which it was composed? Is a decent painting painted by someone who paints blindfolded something I would rather see than an excellent painting by someone who paints in a conventional fashion? Cause unless the live coder can spin something up that I would enjoy listening to on my portable media player, I feel like music takes a back seat to the musician, which is a truly peculiar something. […] This is not to say live coding is something to be ignored, but where from ever in history have we asked this question? Does the musician matter more than the music?
And another, even more critical comment:
It is not about letting the audience in at all. It's about cultivating a stage presence of virtuosic technical wizardry. No one in the audience understands the code and that's why everyone marvels at the "magic". Worse still it's Lisp, a particularly archaic and obfuscated computer language.
So what?
I think this is all very useful to read, as it shows what non-specialists may think of livecoding. I've been asking myself similar questions a lot of times, but never really reached a clear conclusion. Is livecoding a music making activity, or is it just programming wizardry?
I personally got into livecoding as a musician first, and only afterwards as a programmer. As a result I tend to see it as some sort of advanced music-making tool. However, interestingly enough, in order to make that tool match my music taste and composition style I had to become an expert at programming the livecoding environment. While doing that, I sort of lost the closeness to the 'instrument', which is something you'd have all the time if you play a piano or a guitar. With no closeness, you end up in the role of 'music programmer', worrying about mathematical structures and time recursions rather than notes and feelings.
It's a cyclical process, actually. You gain competency with some programming pattern that lets you express your musical ideas quickly and efficiently. Then you think of different ideas, but you can't put them into code easily, so you've got to step back, abandon the musical dimension temporarily, and hack some new programming structures. Which makes me think: maybe that's what's so cool about it. Livecoding environments are malleable meta-instruments that let you create (software) music instruments.
So the music - the end result - is definitely part of it. But the process, the how of the music creation business, is also what's in focus here. In fact this process is also eminently creative (and here lies the difference with many other digital music 'creation' tools) and, maybe most importantly, this process is so abstracted and codified that it feels as if it represented some sort of essence of creativity.
Article: Algorithmic Composition: Computational Thinking in Music
An article by Michael Edwards on algorithmic composition was published last month in the Communications of the ACM journal. The article is titled Algorithmic Composition: Computational Thinking in Music.
Although the article is quite introductory (Edwards makes it clear that the article "is more illustrative than all-inclusive, presenting examples of particular techniques and some of the music that has been produced with them") it is definitely an interesting read. I found quite a few nice ideas in it, and also references to music and musicians I wasn't familiar with.
What follows is a list of 'highlights' from my iPad reader, to which I have added hyperlinks to relevant explanatory materials:
Serialism As A Continuation Of Early Algorithmic Composition
After World War II, many Western classical music composers continued to develop the serial technique invented by Arnold Schönberg (1874–1951) et al. Though generally seen as a radical break with tradition, in light of the earlier historical examples just presented, serialism’s detailed organization can be viewed as no more than a continuation of the tradition of formalizing musical composition. Indeed, one of the new generation’s criticisms of Schönberg was that he radicalized only pitch structure, leaving other parameters (such as rhythm, dynamic, even form) in the 19th century. They looked to the music of Schönberg’s pupil Anton von Webern for inspiration in organizing these other parameters according to serial principles. Hence the rise of the total serialists: Boulez, Stockhausen, Pousseur, Nono, and others in Europe, and Milton Babbitt and his students at Princeton.
Composers: Hiller And "The Illiac Suite For String Quartet"
Lejaren Hiller (1924–1994) is widely recognized as the first composer to have applied computer programs to algorithmic composition. The use of specially designed, unique computer hardware was common at U.S. universities in the mid-20th century. Hiller used the Illiac computer at the University of Illinois, Urbana-Champaign, to create experimental new music with algorithms. His collaboration with Leonard Isaacson resulted in 1956 in the first known computer-aided composition, The Illiac Suite for String Quartet (wiki | video), programmed in binary, and using, among other techniques, Markov Chains in “random walk” pitch generation algorithms.
Cage On The Difference Between Traditional And Computer-Assisted Composition
Cage said in an interview during the composition of HPSCHD (wiki | video), "Formerly, when one worked alone, at a given point a decision was made, and one went in one direction rather than another; whereas, in the case of working with another person and with computer facilities, the need to work as though decisions were scarce—as though you had to limit yourself to one idea—is no longer pressing. It’s a change from the influences of scarcity or economy to the influences of abundance and—I’d be willing to say—waste."
Composers: Xenakis
Known primarily for his instrumental compositions but also as an engineer and architect, Iannis Xenakis was a pioneer of algorithmic composition and computer music. Using language typical of the sci-fi age, he wrote, “With the aid of electronic computers, the composer becomes a sort of pilot: he presses buttons, introduces coordinates, and supervises the controls of a cosmic vessel sailing in the space of sound, across sonic constellations and galaxies that he could formerly glimpse only in a distant dream. [...] Xenakis’s approach, which led to the Stochastic Music Programme (henceforth SMP) and radically new pieces (such as Pithoprakta, 1956), used formulae originally developed by scientists to explain the behavior of gas particles (Maxwell’s and Boltzmann’s Kinetic Theory of Gases). He saw his stochastic compositions as clouds of sound, with individual notes as the analogue of gas particles. [...] His Eonta (1963–1964) for two trumpets, three tenor trombones, and piano was composed with SMP. The program was applied in particular to the creation of the massively complex opening piano solo.
Composers: Koenig
Koenig saw transcription (from computer output to musical score) as an important part of the process of algorithmic composition, writing, "Neither the histograms nor the connection algorithm contains any hints about the envisaged, ‘unfolded’ score, which consists of instructions for dividing the labor of the production changes mode, that is, the division into performance parts. The histogram, unfolded to reveal the individual time and parameter values, has to be split up into voices."
The Contemporary Landscape: A Division Between Composers And AI Researchers
Contemporary (late 20th century) techniques tend to be hybrids of deterministic and stochastic approaches. Systems using techniques from artificial intelligence (AI) and/or linguistics.. [...] While naturally significant to AI research, linguistics, and computer science, such systems tend to be of limited use to composers writing music in a modern and personal style that perhaps resists codification because of its notational and sonic complexity and, more simply, its lack of sufficient and stylistically consistent data [...] Thus we can witness a division between composers concerned with creating new music with personalized systems and researchers interested in developing systems for machine learning and AI. The latter may quite understandably find it more useful to generate music in well-known styles not only because there is extant data but also because familiarity of material simplifies some aspects of the assessment of results. Naturally though, more collaboration between composers and researchers could lead to fruitful, aesthetically progressive results.
Algorithmic Composition Outside Academia: Brian Eno
Application of algorithmic-composition techniques is not restricted to academia or to the classical avant garde. Pop/ambient musician Brian Eno (1948–) is known for his admiration and use of generative systems in Music for Airports (1978) [wiki | video] and other pieces. Eno was inspired by the American minimalists, in particular Steve Reich (1936–) and his tape piece It’s Gonna Rain (1965) [wiki | video]. [...] Eno said about his Discreet Music (1975) [wiki | video], "Since I have always preferred making plans to executing them, I have gravitated towards situations and systems that, once set into operation, could create music with little or no intervention on my part. That is to say, I tend towards the roles of planner and programmer, and then become an audience to the results".
Ligeti On The Relation Between Music And Mathematics
After leaving his native Hungary in the late 1950s, Ligeti worked in the same studios as Cologne electronic music pioneers Karlheinz Stockhausen and Gottfried Michael Koenig though produced little electronic music of his own. However, his interest in science and mathematics led to several instrumental pieces influenced by, for example, fractal geometry and chaos theory. But these influences did not lead to a computer-based algorithmic approach. He was quoted in Steinitz saying, "Somewhere underneath, very deeply, there’s a common place in our spirit where the beauty of mathematics and the beauty of music meet. But they don’t meet on the level of algorithms or making music by calculation. It’s much lower, much deeper—or much higher, you could say."
Example: An Algorithmic Model Of Ligeti's Desordre
I have implemented algorithmic models of the first part of Désordre in the open-source software system Pure Data, which, along with the following discussion, is based on analyses by Tobias Kunze, used here with permission, and Hartmut Kinzler. It is freely downloadable from my Web site http://www.michael-edwards.org/software/desordre.zip [...] The main argument of Désordre consists of foreground and background textures.. [...] In Désordre we experience a clear, compelling, yet not entirely predictable musical development of rhythmic acceleration coupled with a movement from the middle piano register to the extremes of high and low, all expressed through two related and repeating melodic cycles with slightly differing lengths resulting in a combination that dislocates and leads to metrical disorder. I invite the reader to investigate this in more detail by downloading my software implementation.
On The Negative Reception Of Algorithmic Composition
There has been (and still is) considerable resistance to algorithmic composition from all sides, from musicians to the general public. This resistance bears comparison to the reception of the supposedly overly mathematical serial approach introduced by the composers of the Second Viennese School of the 1920s and 1930s. Alongside the techniques of other music composed from the beginning of the 20th century onward, the serial principle itself is frequently considered to be the reason the music—so-called modern music, though now close to 100 years old — may not appeal. [...] Algorithmic composition is often viewed as a sideline in contemporary musical activity, as opposed to a logical application and incorporation of compositional technique into the digital domain. Without wishing to imply that instrumental composition is in a general state of stagnation, if the computer is the universal tool, there is surely no doubt that not applying it to composition would be, if not exactly an example of Luddism, then at least to risk missing important aesthetic developments that only the computer can facilitate, and that other artistic fields already take advantage of.
Composing Using Algorithmic Methods: Misconceptions
Much of the resistance to algorithmic composition that persists to this day stems from the misguided bias that the computer, not the composer, composes the music. In the vast majority of cases where the composer is also the programmer, this is simply not true. As composer and computer musician Curtis Roads pointed out more than 15 years ago, it takes a good composer to design algorithms that result in music that captures the imagination. [...] Furthermore, using algorithmic-composition techniques does not by necessity imply less composition work or a shortcut to musical results; rather, it is a change of focus from note-to-note composition to a top-down formalization of compositional process. Composition is, in fact, often slowed by the requirement that musical ideas be expressed and their characteristics encapsulated in a highly structured and non-musical general programming language. Learning the discipline of programming is itself a time-consuming and, for some composers, an insurmountable problem.
I just got back from Strasbourg (France) where I gave a talk about my experience with livecoding and Impromptu at the Cultures et Arts Libres Workshop, part of the 2011 Libre Software Meeting. In a nutshell, livecoding is the process of writing software in realtime, as a form of improvised time-based art. Many thanks to the organizers for inviting me, it's been quite a rewarding experience. Here I'm posting the slides from the talk in case people want to follow up on the things I mentioned.
The slides are very introductory, so I strongly encourage anyone interested to find out more about the world of Impromptu by following the links provided in the presentation.
The other two livecoders who gave talks at the workshop were Marije Baalman and Dan Stowell; both of them do very interesting stuff with SuperCollider (another livecoding environment), so you'd better check them out too!
By the way, after the workshop there was also a livecoding performance - but I'm going to report about that here..
Just ran into this interesting article by Brian Eno. It struck me as quite a fair representation of what livecoders do most of the time, when they create (maybe I should say 'sculpt') musical structures that evolve in time, as part of their performance:
It’s intuitive to think that anything complex has to be made by something more complex, but evolution theory says that complexity arises out of simplicity. That’s a bottom-up picture. I like that idea as a compositional idea, that you can set in place certain conditions and let them grow. It makes composing more like gardening than architecture
It's from http://www.soundonsound.com/sos/oct05/articles/brianeno.htm.
Here is one of the songs from the album Eno is talking about in that article, Another Day on Earth.
A nice documentary about livecoding practice by Louis McCallum and Davy Smith. Some short excerpts from my performance at the Anatomy Museum are included too (00:32 and 08:45).
jMusic is a project designed to provide composers and software developers with a library of compositional and audio processing tools. It provides a solid framework for computer-assisted composition in Java™, and is also used for generative music, instrument building, interactive performance, and music analysis.
I have no intention of leaving Scheme and Impromptu, but when I ran into jMusic the other day the thing that struck me was the extensive tutorial available on that website (the references section is also worth checking out).
I think I'm going to start re-doing all of those exercises with Impromptu, as a learning exercise (by the way, it surprises me that there's no link to this site on the Impromptu one, given that Andrew Sorensen, the author of Impromptu, worked on jMusic too)..
In particular, there are various interesting sections that focus on how to represent classic harmonic progressions algorithmically. For example, the Chords in the cycle of fifths tutorial.
This example plays a progression of dominant seventh chords, each a fifth away from the previous chord. In this way each of the 12 notes of the chromatic scale is used once as the root of a chord. This chord progression is common in western musical styles from Baroque to Jazz, and is often called the Cycle of Fifths.
The musical result is beautiful; here's how I implemented it in scheme via Impromptu:
(define progression
  '((0 4 -2 -5) (-7) (0 4 7 10) (5)))
(define loop
  (lambda (beat note)
    (print note)
    (let ((dur (random '(1 2 4 1/2))))
      (begin
        (for-each (lambda (x)
                    (play dls x 90 dur))
                  (map (lambda (x) (+ x note)) (list-ref progression 0)))
        (set! note (+ note (car (list-ref progression 1))))
        (for-each (lambda (x)
                    (play (/ dur 2) dls x 90 dur))
                  (map (lambda (x) (+ x note)) (list-ref progression 2)))
        (set! note (+ note (car (list-ref progression 3)))))
      (if (> note 50)
          (callback (*metro* (+ beat (* 1/2 dur))) 'loop (+ beat dur) note)
          (for-each (lambda (x)
                      (play (eval dur) dls x 90 dur))
                    (pc:make-chord-fixed note 4 '(0 4 7 14)))))))
(loop (*metro* 'get-beat) 50)
Learning resources about Scheme
So you've decided to learn everything about Scheme and rock the world using fast-paced programming environments like Impromptu. Well, I confess I did think that on several occasions, but I still haven't made it even halfway through the schemer pilgrim's path. But I've collected quite a few useful resources in the process, and those I can certainly share!
So in what follows I've put together a list of learning resources about Scheme that I found useful.. First off, two links that might be useful in all situations:
- Little Scheme, an online interpreter that you can use for testing things out while you're on holidays
- Schemers.org, semi-official website containing news and lots of links to other resources
Now, why don't we start with the definition offered by the self-regulating wikipedia collective intelligence? Here we go:
Scheme is one of the two main dialects of the programming language Lisp. Unlike Common Lisp, the other main dialect, Scheme follows a minimalist design philosophy specifying a small standard core with powerful tools for language extension. Its compactness and elegance have made it popular with educators, language designers, programmers, implementors, and hobbyists, and this diverse appeal is seen as both a strength and, because of the diversity of its constituencies and the wide divergence between implementations, one of its weaknesses
If this blurb hasn't made you proud of learning such a slick language, you'll surely find more interesting ideas in what follows. I divided the list into two sections: generic learning materials about Scheme, and tutorials about specific topics (for now, only macros are included).
----------------------------------
1. Learning Resources About Scheme:
- Scheme for Common Lispers, article
The Scheme dialect of Lisp was created in 1975 by Guy Steele and Gerry Sussman to explore ideas in programming-language semantics. They showed that a powerful language can be made "not by piling feature on top of feature, but by removing the weaknesses and restrictions that make additional features appear necessary". Scheme pioneered lexical scope in Lisp, first-class continuations, and tail recursion, and more recently added an advanced macro system. It's the best-known Lisp dialect after Common Lisp (which it influenced). This note summarizes the differences from CL that might slow down a CL programmer trying to read a Scheme program; people without a CL background, or wanting to write programs of their own, should see the references.
- The Schematics Scheme Cookbook
The Schematics Scheme Cookbook is a collaborative effort to produce documentation and recipes for using Scheme for common tasks. See the BookIntroduction for more information on the Cookbook's goals, and the important ContributorAgreement statement.
- Harvey, Wright, Simply Scheme: Introducing Computer Science, 1999, MIT Press [a classic]
Symbolic programming is one aspect of the reason why we like to teach computer science using Scheme instead of a more traditional language. More generally, Lisp (and therefore Scheme) was designed to support what we've called the radical view of computer science. In this view, computer science is about tools for expressing ideas. Symbolic programming allows the computer to express ideas; other aspects of Lisp's design help the programmer express ideas conveniently. Sometimes that goal comes in conflict with the conservative computer scientist's goal of protection against errors.
- Felleisen, Findler, Flatt, Krishnamurthi, How to Design Programs: An Introduction to Computing and Programming, 2001, MIT Press
[..] programming is more than just a vocational skill. Indeed, good programming is a fun activity, a creative outlet, and a way to express abstract ideas in a tangible form. And designing programs teaches a variety of skills that are important in all kinds of professions: critical reading, analytical thinking, creative synthesis, and attention to detail. We therefore believe that the study of program design deserves the same central role in general education as mathematics and English. Or, put more succinctly, everyone should learn how to design programs. On one hand, program design teaches the same analytical skills as mathematics. But, unlike mathematics, working with programs is an active approach to learning. Interacting with software provides immediate feedback and thus leads to exploration, experimentation, and self-evaluation. Furthermore, designing programs produces useful and fun things, which vastly increases the sense of accomplishment when compared to drill exercises in mathematics. On the other hand, program design teaches the same analytical reading and writing skills as English. Even the smallest programming tasks are formulated as word problems. Without critical reading skills, a student cannot design programs that match the specification. Conversely, good program design methods force a student to articulate thoughts about programs in proper English.
- Dybvig, The Scheme Programming Language, 2003, MIT Press
This book is intended to provide an introduction to the Scheme programming language but not an introduction to programming in general. The reader is expected to have had some experience programming and to be familiar with terms commonly associated with computers and programming languages. The author recommends that readers unfamiliar with Scheme or Lisp also read The Little Schemer [see below] to become familiar with the concepts of list processing and recursion. Readers new to programming should begin with an introductory text on programming.
- Nils M Holm, "Sketchy LISP" [you can download the book here. Update 08/12: this book has become 'Sketchy Scheme' and is now for sale here]
Sketchy LISP is a step-by-step introduction to functional programming in Scheme. It covers various aspects of the language including data types, conditional evaluation, list processing, lexical scoping, closures, recursion, dynamic typing, etc. By means of numerous examples of varying complexity, it takes the reader on an entertaining and informative tour through the language. The Scheme language achieves what only few languages have managed before: to bring fun back to programming. Its simple syntax, clean semantics, and powerful functions open the door to a fresh perspective on program design. Programming in Scheme is fun, and this book is an attempt to share some of that fun.
- Friedman and Felleisen, The Little Schemer, 1996, MIT Press
The goal of this book is to teach the reader to think recursively. Our first task, therefore, is to decide which language to use to communicate this concept. There are three obvious choices: a natural language, such as English; formal mathematics; or a programming language. Natural languages are ambiguous, imprecise, and sometimes awkwardly verbose. These are all virtues for general communication, but something of a drawback for communicating concisely as precise a concept as the power of recursion. The language of mathematics is the opposite of natural language: it can express powerful formal ideas with only a few symbols. We could, for example, describe the entire technical content of this book in less than a page of mathematics, but the reader who understands that page has little need for this book. For most people, formal mathematics is not very intuitive. The marriage of technology and mathematics presents us with a third, almost ideal choice: a programming language. Programming languages are perhaps the best way to convey the concept of recursion. They share with mathematics the ability to give a formal meaning to a set of symbols. But unlike mathematics, programming languages can be directly experienced---you can take the programs in this book and try them, observe their behavior, modify them, and experience the effect of your modifications.
- The Weiner Lectures Archives [various videos, but not complete lectures unfortunately]
The goal of this project is to make knowledge of computer science easily available not only to the students at Berkeley, but also to the entire community. For several years, faculty members have been videotaping lectures in CS large lower division courses, mainly as an aid to students with time conflicts that prevent them from attending lectures. By hosting an archive storing all CS lectures that were recorded, we hope the computing knowledge that has been gathered can be easily shared. As a teaching aid, a 'greatest hits' lecture will also be compiled for each course covering all major topics addressed in the corresponding class. The best parts of many different past lectures will be linked together and presented along with slides to make this greatest hits lecture. This lecture should represent the best teaching abilities in the lower division CS lectures and should be a valuable resource in the computer community for basic CS knowledge. Thanks to the generous donation of Larry Weiner this online site should become a permanent resource.
----------------------
2. Specific topics:
On Macros and metaprogramming:
- The art of metaprogramming, Part 1: Introduction to metaprogramming, IBM developerWorks
Summary: One of the most under-used programming techniques is writing programs that generate programs or program parts. Learn why metaprogramming is necessary and look at some of the components of metaprogramming (textual macro languages, specialized code generators). See how to build a code generator and get a closer look at language-sensitive macro programming in Scheme.
- Lisp Macros -- How to Define Entirely New Languages in Lisp
This is a very interesting lesson if you want to deeply understand Lisp, and some very deep things about programming, but it's also entirely optional; we suggest that you go through it, but not worry too much about understanding it in detail. If you get very deeply into programming, you'll find that Lisp macros are an amazing tool, but they are also somewhat mind-bending, and used rather rarely in simple programming. So, feel free to skip this lesson, or at least, if you do study it, let it flow over you, and maybe come back to it later on if you find yourself wanting to know more about some of the deep and subtle reaches of Lisp programming.
- Scheme FAQ Macros, on schemewiki.org
- Sources for learning about Scheme Macros: define-syntax and syntax-rules, a thread on StackOverflow
- A scheme syntax rules primer, an interesting blog post
----------------------
That's all for now... I'll be adding more stuff as I run into it!
A short movie about livecoding, Andrew Sorensen (creator of Impromptu), TOPLAP and other related stuff. If you're not familiar with this world, I'd say it's the most pleasant introduction to it available at the moment!
Thanks Stephen Ramsay for doing this!
Update 2023: unfortunately the video is no longer available :-( There is an Electronic Book Review essay Critical Code Studies Week Five Opener – Algorithms are thoughts, Chainsaws are tools that talks about the original video..
Event: Livecoding night @ King's College coming up
If you happen to be around London next Thursday (14th) you might be interested in joining us for an evening of livecoding! Entrance is free, and it's going to be a dense evening of algorithmically generated music and graphics.
The program:
About livecoding
Live coding is a new direction in electronic music and video, and is starting to get somewhere interesting. Live coders expose and rewire the innards of software while it generates improvised music and/or visuals. All code manipulation is projected for your pleasure. Live coding is inclusive and accessible to all. For more info see: http://toplap.org/
Event: Livecoding at the Shunt, London
Initially this song was called 'Voices Slowly Talk To Me' - then.. as usual.. I lost control of its direction! So I don't know how much the title still applies. Anyway, it's my second experiment with recording an Impromptu performance (by the way, I also played it live the other night at the Shunt in London, with various mistakes and delays, but somehow I got to the end - thanks to the toplap crew for their support!).
Livecoding practice: lessons learned
Somehow I'm becoming wiser with doing this type of stuff, you know, just trying to learn from past experiences. So here are a few tips I've developed over the last few weeks:
- keep it simple. Especially when playing live. Long and convoluted functions are a giant source of errors especially when you're a bit tense
- use 'paste' templates. Stuff like the pb:cb function that comes by default with Impromptu (see the minimal skeleton after this list). At the beginning I thought it wouldn't look too good, because you've gotta show that you're coding the whole thing from scratch. But actually, when you're livecoding, time is very, very precious and what you want to focus on is sound, primarily (well, at least this is what I like to do). It's also important to remember that many other environments for live performance are much, much higher level than Impromptu - meaning that it's quicker to emit sounds or musical structures and change their properties... so let's make sure we're not comparing apples and oranges here!
- make variations often. Even if they look stupid to you, change something, add another melody, double the drumkit, stuff like that. The audience is more interested in new audio-visual things happening than in seeing you code a Bach prelude.
- practice a lot. I initially felt weird about this, mainly because playing with Impromptu means coding, and when I code I usually take my time and think. But livecoding transforms the coding practice into a musical performance. Which means that you don't have time to think: things should just come out automatically and sound good. Only then can you take the liberty of 'jamming' without a plan. I play guitar, and that's exactly how it works there... I must have forgotten about it. When playing a song I can't waste time trying to remember how to lay out my fingers on the neck; that has to happen automatically.
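For reference, here is roughly what such a paste template looks like (a minimal sketch of the usual temporal-recursion skeleton; pb:cb generates something along these lines, which you then rename and fill in):
;; skeleton loop: paste, rename, then fill in the body
(define my-loop
  (lambda (beat)
    ;; play / tweak stuff here
    (callback (*metro* (+ beat 1/2)) 'my-loop (+ beat 1))))
(my-loop (*metro* 'get-beat 4))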
That's it for now - I'll touch base again about this when the next livecoding performance happens! Rock on, live-coders!
Ended: Miscellaneous
Utils ↵
Making a screencast
This post shows how to make a livecoding screencast using free OSX technologies.
Using QuickTime and RecordIt
Capturing system audio and screen-recording your livecoding performance can be done in multiple ways. Here's a method based on Apple's QuickTime and an audio plugin that is part of a third-party application, Record It.
Note: both of these software components are free. The Record It Audio Device is a free extension that enables you to capture system sounds on your Mac. It acts as a virtual audio input device and sends the sound from music, videos, and system alerts that you would normally hear through your speakers to the input channel.
Recording a screencast: steps
- Get the Record It audio plugin (RecordItAudioDevice.pkg). PS: this is a free audio extension, even if it is part of a paid-for application
- Install it by right-clicking on the app and selecting 'Open...' to avoid the permissions issue
- In Audio MIDI Setup, create an Aggregate Device to combine multiple audio interfaces
- Increase the sample rate to 96kHz, if necessary (make sure this is done before opening Live)
- Open QuickTime and set it to record your screen, making sure you get audio from the Record It audio plugin
- Set your DAW (e.g. Ableton Live) to output audio to Record It
- Set your computer to output sound to the Aggregate Device (so that you can also hear what's going on, e.g. if you are using headsets)
DONE!
Post-processing tips
Often the audio I get from a Quicktime screencast recording is rather weak.
I found an easy way to fix that by using iMovie, which normally comes with any Mac. Here are the steps:
- Open the livecoding screencast in iMovie
- Detach Audio (right-click on movie bar to see the options menu)
- Adjust the volume in the audio track editor, by dragging the mouse up/down (avoid red zones!)
- Additionally, you can try using the top left menu: Music Settings > Equalizer > Music Enhance
- Save the new movie via Share > File
- Resolution: 720p 60
- Quality: High
- Compress: Faster
Results: for a 10-minute video, that gives me an .mp4 file of ~250MB.
Upload to YouTube and off you go ;-)
Using Screenflick (legacy)
Legacy method
This method requires a license from Screenflick
https://www.araelium.com/support/screenflick
Official docs:
- When you start a recording with system audio turned on, Screenflick switches the system-wide default audio output device to the "Soundflower" virtual audio device, which Screenflick then uses as an input to record audio from.
- Some programs unfortunately will play all audio over a specific output device determined when the application launched rather than always using the current system setting which can lead to problems like this.
- What needs to happen is the program playing audio needs to decide that it should play that audio to the Soundflower device, rather than your speakers. There are two ways this can happen.
- Launch the program which plays the audio after you start a Screenflick recording. (The system output device will be Soundflower and the program will therefore use it.)
- Before starting the program, go into System Preferences and change the system output audio device to "Soundflower (2ch)" manually, start the program which will play audio, and then start the recording in Screenflick when you're ready. (After the recording ends, don't forget to change the system output device back to your speakers.) [BEST]
- A third option which doesn't always exist, is that the program playing audio may have a preference setting for which audio device to play audio to. Check the program's preferences just in case. If it does have a setting, set it to "Soundflower (2ch)" while recording.
My own tips
- Frame rate: 15 FPS
- Seems to lead to fewer MIDI delays
- Frame rate is not an issue since it's just a screen recording
- Then export to QuickTime
- Audio: 128 Kbps
- All other options default
- Changing the frame rate seems to have no effect
Performing
Warning
Section needs revision
Getting started with live performance
The trick is to have something simple that you can progressively enhance.
E.g. add an octave, some random variation, then cosr, etc.
Eventually the base structure is there, and you can take more time to build other stuff that requires more code.
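Here is a minimal sketch of that idea (assuming the usual Impromptu setup with the dls instrument and a *metro* object, as in the examples elsewhere on this site): start with a bare one-note loop, then layer in the octave, a random interval and a cosr-driven dynamic as the performance builds.
;; start as simple as possible: one note per beat
(define riff
  (lambda (beat)
    (let ((dur 1)
          (root 60))
      (play dls root 80 dur)
      ;; enhancement 1: double it an octave below
      (play dls (- root 12) 70 dur)
      ;; enhancement 2: add a random variation on top
      (play dls (+ root (random '(0 3 7))) 75 dur)
      ;; enhancement 3: cosr-driven dynamics on a high line
      (play dls (+ root 12) (cosr 70 20 1/8) dur)
      (callback (*metro* (+ beat (* 1/2 dur))) 'riff (+ beat dur)))))
(riff (*metro* 'get-beat 4))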
Printing code (for guiding performance)
Recipe to export from VSCode:
- Copy from VSCode.
- Open in TextMate. Pick the Scheme language syntax.
- Open print dialog: pick a font size and theme that suits. Ideally make it fit within one page.
PS on home printer, choose B/W printing.
iMovie
Tips on how to use Mac's iMovie to produce livecoding screencasts.
Editing a screencast on a mac using iMovie
Often the audio I get from a screencast recording is rather weak.
There's an easy way to fix that by using iMovie, which normally comes with any Mac. Here are the steps:
- Open the livecoding video in iMovie
- Detach Audio (right-click on movie bar to see the options menu)
- Adjust the volume in the audio track editor, by dragging the mouse up/down (avoid red zones!)
- Additionally, you can try using the top left menu: Music Settings > Equalizer > Music Enhance
- Save the new movie via Share > File
- Resolution: 720p 60
- Quality: High
- Compress: Faster
Results: for a 10-minute video, that gives me an .mp4 file of ~250MB.
Upload to YouTube and off you go ;-)
Adding backgrounds
I often use Hydra to create interesting backgrounds, and then overlay them on the livecoding video.
- Launch Hydra and pick some visuals you like
- Save as a local movie file (e.g. .mov, using QuickTime and 'screen capture')
- Import into iMovie
You can easily add video overlay effects in iMovie on an iOS device. In particular the Green/Blue Screen effect can be handy:
Green/Blue Screen: Adds the clip so that during playback, the clip appears with the green-screen or blue-screen parts of the clip removed, and the remaining parts of the clip are superimposed on the main clip in the timeline.
First menu on the left:
Tips about saving videos to YouTube
Useful links
- Help page about creating a new channel
- Channel switcher: add a new channel
- My channel: The Musical Code
YouTube aspect ratio
It's 16:9 e.g. 1920 x 1080
The standard aspect ratio for YouTube on a computer is 16:9. If your video has a different aspect ratio, the player will automatically change to the ideal size to match your video and the viewer's device -- Google support
Sample resolutions
Top 16:9 Resolutions:
640 x 360 (nHD)
854 x 480 (FWVGA)
960 x 540 (qHD)
1024 x 576 (WSVGA)
1280 x 720 (HD/WXGA)
1366 x 768 (FWXGA)
1600 x 900 (HD+)
1920 x 1080 (FHD)
2048 x 1152 (QWXGA)
2560 x 1440 (QHD)
3200 x 1800 (WQXGA+)
3840 x 2160 (UHD)
5120 x 2880 (UHD+)
7680 x 4320 (FUHD)
15360 x 8640 (QUHD)
30720 x 17280 (HHD)
61440 x 34560 (FHHD)
122880 x 69120 (QHHD)
If the video is pixelated or out of focus..
Export with a higher resolution, e.g. 4K. This forces the YouTube optimiser to use the higher-res compressor by default.
Standard YouTube screencast description
Sample title
Many pianos - Study No 2 In B Minor // Extempore Livecode Algorithmic Music
Sample description:
Title:
Many pianos - Study No 2 in B Minor
Description:
Me having fun with various overlapping piano numerical patterns.
01:04 Music starts
02:59 Second line
04:35 Third line
06:04 Fourth line
---
Credits:
Livecoding software: Extempore
https://extemporelang.github.io/
Sounds: Ableton Live
- syn1: Kontakt Pad: Lesotho
- syn2: Spitfire Audio: Siren Songs - Gossip
- syn3: Zebralette: Lead
---
Author: @lambdamusic
Code: https://github.com/lambdamusic/The-Musical-Code/blob/main/works/TODO.xtm
Hydra Sketches
Hydra is a free tool for livecoding visual patterns.
Hydra is live code-able video synth and coding environment that runs directly in the browser. It is free and open-source and made for beginners and experts alike.
Reusable snippets
One of the coolest aspects of Hydra is that it is self-contained and browser-based.
A program can be easily shared, either as code or via a URL. E.g. try this link, which renders the following snippet:
// ee_5 . FUGITIVE GEOMETRY VHS . audioreactive shapes and gradients
// e_e // @eerie_ear
//
s = () => shape(7.284).scrollX([-0.5, -0.2, 0.3, -0.1, -0.062].smooth(0.139).fast(0.049)).scrollY([0.25, -0.2, 0.3, -0.095, 0.2].smooth(0.453).fast(0.15));
//
solid().add(gradient(3, 0.05).rotate(0.05, -0.2).posterize(0.56).contrast(0.016), [1, 0.541, 1, 0.5, 0.181, 0.6].smooth(0.9)).add(s()).mult(s().scale(0.8).scrollX(0.01).scrollY(-0.01).rotate(0.303, 0.06).add(gradient(4.573).contrast(0.008), [0.684, 0.118, 1, 0.43].smooth(1.496), 0.5).mult(src(o0).scale(0.142), () => a.fft[0] * 4.226)).diff(s().modulate(shape(644.351)).scale([1.7, 1.2].smooth(0.392).fast(0.05))).add(gradient(2).invert(), () => a.fft[2]).mult(gradient(() => a.fft[3] * 8)).blend(src(o0, () => a.fft[1] * 40)).add(voronoi(() => a.fft[1], () => a.fft[3], () => a.fft[0]).thresh(0.7).posterize(0.419, 4).luma(0.9).scrollY(1, () => a.fft[0] / 30).colorama(0.369).thresh(() => a.fft[1]).scale(() => a.fft[3] * 2), () => a.fft[0] / 2).out();
//
speed = 1;
Livecoding
Snippets can be modified / composed on the fly.
There is also a handy 'make random change' button that identifies parameters in the code and changes them automatically.
Wish we had something that easy to use yet powerful for musical livecoding!
A couple of nice examples
// Sumet
// by Rangga Purnama Aji
// https://ranggapurnamaaji1.wixsite.com/portfolio
osc(0.5,1.25).mult(shape(1,0.09).rotate(1.5))
.diff(gradient())
.add(shape(2,2).blend(gradient(1)))
.modulate(noise()
.modulate(noise().scrollY(1,0.0625)))
.blend(o0)
.color(0.2,-0.1,-0.5)
.out()
// Puertas II
// por Celeste Betancur
// https://github.com/essteban
osc(4.226, 0.122, 1).kaleid().mask(shape(4, 0.523, 1.91)).modulateRotate(shape(4, 0.1, 1)).modulateRotate(shape(1.428, 0.1, 0.633)).modulateRotate(shape(5.023, 0.143, 1.001)).scale(0.3).add(shape(4, 0.062, 0.071).color(0.433, 1, 1, 0.5)).rotate(() => time).out();
//corrupted screensaver
//by Ritchse
//instagram.com/ritchse
voronoi(350,0.15)
.modulateScale(osc(8).rotate(Math.sin(time)),.5)
.thresh(.8)
.modulateRotate(osc(7),.4)
.thresh(.7)
.diff(src(o0).scale(1.8))
.modulateScale(osc(2).modulateRotate(o0,.74))
.diff(src(o0).rotate([-.012,.01,-.002,0]).scrollY(0,[-1/199800,0].fast(0.7)))
.brightness([-.02,-.17].smooth().fast(.5))
.out()
Behringer Mixer
Warning
Section needs revision
Miscellaneous tips from an everyday user.
Fx send with Behringer Mixer
- Mixer review: https://www.youtube.com/watch?v=K2x17AN7glE
- Plugin guide: https://www.youtube.com/watch?v=MOG6VIZgDAc
From this forum
The way you can do it is like this:
1. Connect the drum machine to any of the stereo channels (5/6, 7/8 or 9/10).
2. Connect a lead from the Xenyx FX send output to the Virtualizer.
3. Connect a lead from the output of the Virtualizer to any remaining free channel on the Xenyx.
The FX knob on the channel you have the drum machine in sets the level of signal to go to the virtualizer. The FX send knob under the phantom power switch is the master volume for this. In the Virtualizer you will have to adjust the reverb setting so that you only get wet coming back.
The fader on the channel you have the virtualizer plugged into controls the amount of reverb coming back.
Plugging the drum machine into the RCA inputs is ok, but you cannot apply FX to this.
Alternatively, you can do your other option, i.e. plug drum machine into virtualizer, then plug virtualizer into one of the Xenyx channels. In this case, you would have to adjust the reverb setting in the virtualizer to get a desired balance of wet and dry.
An Aux Return is a level-control-only input that generally feeds directly into the Mix Bus (rather than using up a fully-featured channel), and has no aux send from it (automatically preventing feedback through the device).
FinalCut
Final Cut settings
- 1080p and 4K seem to have the same audio quality
- 4K seems to occupy less space, unclear why
- originals live in the Final Cut folder; at some point you can get rid of them
- the export in the iTunes folder seems the most useful thing to keep
Markdown Tips
Warning
Section needs revision
VSCode Shortcuts
VSCode is pretty handy for editing Markdown. Here are some tips:
- Press ⌃Space (Trigger Suggest) while editing to see a list of suggested Markdown snippets
- Use ⇧⌘O to quickly jump to a header in the current file
- Use ⌘T to search through headers across all Markdown files in the current workspace
- Path suggestions are automatically shown when you type / or can be manually invoked by using ⌃Space
- You can drag and drop a file from VS Code's Explorer or from your operating system into a Markdown editor: start by dragging a file from VS Code's Explorer over your Markdown code and then hold down Shift to start dropping it into the file
- Previews: to switch between views, press ⇧⌘V in the editor. You can view the preview side-by-side (⌘K V) too
See also
- Official vscode help
MKDocs material
Material for MkDocs is a framework for rendering Markdown-based documentation.
This website uses it to turn a collection of Markdown notes into an HTML website.
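For reference, a minimal mkdocs.yml for a setup like this might look roughly like the sketch below (the site name and page names are placeholders, not this site's actual configuration):
# mkdocs.yml -- minimal sketch, values are placeholders
site_name: My music notes
theme:
  name: material                # use the Material for MkDocs theme
markdown_extensions:
  - admonition                  # enables the !!! admonition blocks shown below
  - pymdownx.details            # enables the collapsible ??? variant
nav:
  - Home: index.md              # placeholder page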
Admonitions
See the Material admonitions documentation. E.g.:
!!! note "Phasellus posuere in sem ut cursus"

    Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod
    nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor
    massa, nec semper lorem quam in massa.

??? note

    Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod
    nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor
    massa, nec semper lorem quam in massa.
Supported types
Material supports a range of admonition types out of the box, including note, abstract, info, tip, success, question, warning, failure, danger, bug, example and quote.
MP3
Warning
Section needs revision
Transform audio to mp3
ffmpeg is a pretty powerful command line tool:
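For example, converting a WAV file to MP3 at a high variable bitrate looks roughly like this (file names are placeholders):
# single file: encode with the LAME MP3 encoder at VBR quality 2 (roughly 190 kbps)
ffmpeg -i input.wav -codec:a libmp3lame -qscale:a 2 output.mp3
# whole folder: convert every .wav in the current directory
for f in *.wav; do ffmpeg -i "$f" -codec:a libmp3lame -qscale:a 2 "${f%.wav}.mp3"; done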
See also
- Thread on stackexchange: Is there a way to convert audio files in Mac OS X or the command line without using iTunes?
Plugins & DAWs
Warning
Section needs revision
Kontakt instruments
The default instruments are pretty good.
However, if you open up any of mine from the 2.instruments library, they go into DEMO mode.
~~The correct way is to open them up in Live directly.~~
It used to be possible to import them using Sample, but from Live 11 this is no longer possible.
See: Kontakt Player Demo Mode – Explained
The “demo mode” means that you can only use the library for fifteen minutes. After that, there will be no sound on the output, and you won’t be able to access the editing features
Free libraries for Kontakt:
- https://projectsam.com/libraries/the-free-orchestra/
Spitfire Audio
How to reset:
I occasionally had to reset everything because I was getting errors about instruments not being found, even after reinstalling them as per error 4.
- delete everything
- install the app and log in
- the libraries installed previously may still appear, so you have to:
- click on the library icon, select the cog, click on 'reset', then 'reset entire library'
- optionally, reinstall the library afterwards
This youtube video helps!
Pianobook samples
Get the Decent Sampler plugin
Get an account on https://www.pianobook.co.uk/faq/
Arturia Minilab
PROS: nice keyboard, good quality, MIDI control works, knobs
CONS: no MIDI out, no arpeggiator
Arturia Analog 4 Vs Analog 5
August 15, 2023: I have both of them installed
Analog Lab V (5) can play all Arturia's sounds, including legacy sounds, if you have any.
Analog Lab 4 can't play the sounds from applications that are new in V-Collection 8. But perhaps you'll find the workflow better for you, as things have changed from AL 4 to AL V.
- Arturia Analog Lab 4 vs 5 (is it worth it to double install?)
- https://legacy-forum.arturia.com/index.php?topic=108084.0
- https://splice.com/blog/difference-between-v-collection-analog-lab/
Akai MPK Mini Play
March 9, 2021: the Editor does not work with OS X! - Homepage
Launchkey Mini MK3
- Getting-Started-With-Launchkey-Mini-MK3
- update firmware
- Getting-Started-With-Launchkey-Mini-MK3-Ableton-Live-10-Setup
March 9, 2021: returned it; felt very cheap
Logic Pro X
2021-03-14: tested and eventually gave up. TODO: write a blog article summing up the problems.
- Logic Pro X: A Guide to Multitrack MIDI Recording https://www.macprovideo.com/article/logic-pro/logic-pro-x-a-guide-to-multitrack-midi-recording
- Too many midi events https://www.logicprohelp.com/forum/viewtopic.php?t=57319
- Playing midi instruments w/o selecting the track https://www.logicprohelp.com/forum/viewtopic.php?t=128623
- IAC Driver, ports etc https://www.logicprohelp.com/forum/viewtopic.php?t=121829
- Official help desc https://support.apple.com/en-gb/guide/logicpro/lgcpbc10f1ea/mac#:~:text=%E2%80%9CMIDI%20data%20reduction%E2%80%9D%20checkbox%3A,only%20a%20few%20MIDI%20ports.
- Midi sustain https://www.reddit.com/r/Logic_Studio/comments/2lqv3q/midi_notes_sustain_indefinitely_anyone_else_ever/
- Generic Tutorial on Logic and Midi Controller https://www.musicsequencing.com/article/controlling-logic-with-a-midi-controller
- Arturia Minilab support issues https://forum.arturia.com/index.php?topic=93486.0
Music Audio Visualizer
May 2, 2022
- magic visuals: seems nice but hard to get going with
- https://animusvisualizer.webflow.io/
- Interesting thread
- https://lairdkruger.github.io/Audio-Visualizers/
- https://www.uberviz.io/
- Searching on google... really hard to find anything!
About
I am not a computer music professional. I actually work in AI and research analytics. But music (of all kinds) has always been a driving force in my life.
Why this site?
So you may wonder why I'm writing about this.
First and foremost, I would like to share the stuff I've learned on this topic with all the computer music enthusiasts out there. I assume you may take inspiration from my explorations, the same way I did from a zillion other sites...
Secondly - and sometimes most importantly - this is a bit like a personal collection of notes and tips about my learning journey in algorithmic composition. A knowledge base I routinely come back to, since my memory often fails me!
Gist
This is what I write to the future me who has forgotten how I did the stuff I want to do again.
Me looking dumb in my office, 2021
Changelog
See the CHANGELOG.md file on Github.