Live Coding

Live coding[1] (sometimes referred to as 'on-the-fly programming'[2] or 'just-in-time programming') is a programming practice centred on improvised interactive programming. Live coding is often used to create sound- and image-based digital media, and is particularly prevalent in computer music, combining algorithmic composition with improvisation.[3] Typically, the process of writing is made visible by projecting the computer screen in the audience space, with ways of visualising the code an area of active research. http://en.wikipedia.org/wiki/Live_coding

A live-performance format for Electronic Music. http://cacm.acm.org/magazines/2013/12/169929-the-lure-of-live-coding-the-attraction-of-small-data/fulltext

TOPLAP (The (Temporary|Transnational|Terrestrial|Transdimensional) Organisation for the (Promotion|Proliferation|Permanence|Purity) of Live (Algorithm|Audio|Art|Artistic) Programming) is an informal organization formed in February 2004 to bring together the various communities that had formed around live coding environments.[15] The TOPLAP manifesto asserts several requirements for a TOPLAP compliant performance, in particular that performers' screens should be projected and not hidden. http://toplap.org/

http://on-the-fly.cs.princeton.edu/

Andrew Sorensen performance at TEDxQUT, using the Extempore software

Algorithms are Thoughts, Chainsaws are Tools - A short film on livecoding presented as part of the Critical Code Studies Working Group, March 2010, by Stephen Ramsay. Presents a "live reading" of a performance by composer Andrew Sorensen. It also talks about J. D. Salinger, the Rockettes, playing musical instruments, Lisp, the weather in Brisbane, and kettle drums.

http://toplap.org/wiki/ToplapSystems

http://stackoverflow.com/questions/392449/whats-available-for-livecoding-music

ChucK — a strongly-timed, concurrent audio programming language designed for on-the-fly programming, developed by Ge Wang and Perry Cook at Princeton.

Python and Beat Lounge? http://djfroofy.github.io/beatlounge.html

SonicPi is a live coding environment based on Ruby, originally designed to support both computing and music lessons in schools, developed by Sam Aaron in the University of Cambridge Computer Laboratory[1] in collaboration with Raspberry Pi Foundation... Thanks to its use of the SuperCollider synthesis engine and accurate timing model,[4] it is also used for live coding and other forms of algorithmic music performance and production, including at algoraves. https://en.wikipedia.org/wiki/Sonic_Pi
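The "accurate timing model" mentioned above is a key idea shared by these systems: events are scheduled against a logical clock (absolute beat times computed from a fixed start point), rather than by sleeping a fixed interval between notes, so lateness and rounding never accumulate. A minimal Python sketch of that idea — the class and method names are mine, not Sonic Pi's API:

```python
import time

class LogicalClock:
    """Schedule beats on an absolute timeline so timing error never accumulates."""

    def __init__(self, bpm=120):
        self.beat_len = 60.0 / bpm          # seconds per beat
        self.start = time.monotonic()       # fixed reference point
        self.beat = 0

    def next_beat_time(self):
        """Absolute time of the next beat, derived from the start time, not from 'now'."""
        self.beat += 1
        return self.start + self.beat * self.beat_len

    def sleep_until_next_beat(self):
        # Sleeping for (target - now) instead of a fixed beat_len means any
        # lateness on this beat is corrected on the next one, so the clock
        # never drifts relative to the musical grid.
        target = self.next_beat_time()
        delay = target - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        return target
```

Because each target time is `start + n * beat_len`, consecutive beats are exactly one beat length apart no matter how long the work between them took.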

  • SuperCollider is an environment and programming language originally released in 1996 by James McCartney for real-time audio synthesis and algorithmic composition... The SuperCollider programming language is a dynamically typed, garbage-collected, single-inheritance object-oriented and functional language similar to Smalltalk,[5] with a syntax similar to Lisp or the C programming language. Its architecture strikes a balance between the needs of realtime computation and the flexibility and simplicity of an abstract language. Like many functional languages, it implements functions as first-class objects, which may be composed. https://en.wikipedia.org/wiki/SuperCollider
  • Python?
    • FoxDot is an easy-to-use Python library that creates an interactive programming environment and talks to the powerful sound synthesis engine, called SuperCollider to make music. FoxDot schedules musical events in a user-friendly and easy-to-grasp manner. https://foxdot.org/
      • I'm the developer of FoxDot and happy to answer any of your questions. FoxDot is designed as a live performance language and - as you probably know - doesn't do any synthesis itself. Like you I started working with SuperCollider but I was interested in live coding; programming music as a performance. However, SuperCollider just seemed too complex to me and I had to do a lot of typing to get not a lot done. As part of a project at Uni I began working on FoxDot as a quick and easy interface for manipulating synths / samples stored on SuperCollider in a semi-autonomous way. I did consider using Pyo instead of SuperCollider as the sound engine a while back, but there seems to be a lot more support for SuperCollider and a bigger community of users - especially in the live coding scene - so I just stuck with it. As far as I'm aware, Pyo does pretty much everything that SuperCollider does, and in a nice user-friendly way. If you're wanting to learn Python I would maybe suggest going with Pyo as it is a Python library for audio programming whereas FoxDot is a mini-language inside of Python on top of SuperCollider. Programming with FoxDot isn't really like programming with Python at all to be honest! However, if you're interested in live coding algorithmic music, do give FoxDot a try!
    • Pyo is more of a raw signal processing / sound design tool closer in spirit to something like SuperCollider but with the benefit of Python's much more concise and streamlined syntax—as you're likely well aware, the SC language can be pretty daunting at first.
    • This (sc3) project is a port of core features of SuperCollider's language to Python 3. It is intended to be the same library in a different language and to keep sclang elegance in a pythonic way (if possible). https://github.com/smrg-lm/sc3
  • mixing with another source (like a Vindor)?
    • LNX_Studio is a Digital Audio Work Station (DAW) created in the SuperCollider language. It has a powerful set of tools for creating music, all of which can be networked. Co-location gigs or real-time collaborations don't have to be in the same room.... It’s best compared to the main view of FL Studio, or the basic rack in Reason, or the devices in Ableton Live, in that the focus is building up songs through patterns and instruments and effects. What you don’t get is audio input, multitracking, or that sort of linear arrangement. Then again, for a lot of electronic music, that’s still appealing – and you could always combine this with something like Ardour (to stay in free software) when it’s time to record tracks.
    • Ardour is a hard disk recorder and digital audio workstation application (DAW)... Ardour's recording abilities are limited by only the hardware it is run on; there are no built-in limits in the software. When recording on top of existing material, Ardour can do latency compensation, positioning the recorded material where it was intended to be when recording it.... Ardour's core user group: people who want to record, edit, mix and master audio and MIDI projects.
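Most of the systems above share one core abstraction: a pattern is a value (or a function) that a running clock queries beat by beat, and the performer mutates the pattern while the clock keeps going. A hedged Python sketch of that abstraction — the names are illustrative, not FoxDot's or SuperCollider's actual API:

```python
class Pattern:
    """A looping sequence of values, queried by beat index.

    Transformations return new Pattern objects, so patterns compose like
    the first-class functions described in the SuperCollider notes above.
    """

    def __init__(self, values):
        self.values = list(values)

    def __getitem__(self, beat):
        # Wrap around: the pattern loops forever as the clock advances.
        return self.values[beat % len(self.values)]

    def reversed(self):
        return Pattern(self.values[::-1])

    def rotate(self, n=1):
        return Pattern(self.values[n:] + self.values[:n])

    def stutter(self, n=2):
        # Repeat each value n times: [0, 4] -> [0, 0, 4, 4]
        return Pattern(v for v in self.values for _ in range(n))

# A live coder rebinds `melody` mid-performance; the next beat simply
# reads the new definition — no restart, no recompile.
melody = Pattern([0, 4, 7, 12])
print([melody[b] for b in range(6)])   # wraps around after 4 beats
```

The point of the design is that the clock and the pattern are decoupled: the clock only ever asks "what is the value at beat *n*?", which is what makes on-the-fly redefinition safe.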
