Stranger

 
 
 
 

January 3, 2009

Wrap-up

2008 is over, and I need to wrap up this blog. The music is soooo 2008! ;))

There were big plans to possibly turn this research into a paper, but everything that happened in 2008 didn't leave me the time to devote to that. And now it feels too late. The piece is done, I'm happy with it, and I'm preparing a CD release. So I guess it turns out that I find the music much more important than talking about it. The method behind this work was a good way to get started, but I didn't work it through as rigorously as intended, at all. I'd still like to do this again some time, with the following restrictions, learned through this process:

  • use far fewer sounds to start with, and much shorter ones (27 samples of 0:30 - 1:00 min gave me too much material)
  • restrict the outcome of the processes in time - in making Stranger the processes delivered samples between 30 sec and 3 min - again, too much material
  • differentiate the various parts of the composition in articulation

In the meantime the composition has been performed at CultureLab (diffused using the Resound system) and at STEIM (the stereo version). I'm looking for more opportunities, submitting it to various conferences and festivals.

So long...




September 24, 2008

Renamed

I renamed the work from Vreemdeling to Stranger. It seemed odd to have a Dutch title, and it still felt like a working title.

More info on the progress later.




June 12, 2008

At CultureLab Newcastle [3]

So it's already Thursday of the second week. This week was much fuller than I expected - but in a good way.

On Monday I presented a preview of Vreemdeling for a group of staff members, PhD students, Kazuhiro Jo (who will be a research fellow at CultureLab next year) and John Bowers (who was visiting for a lecture in computing) - and received a lot of useful feedback.
A selection of the various thoughts:

  • I believe most of them liked the lack of artificial reverb - something quite common in more traditional diffusion works, where it is used to exaggerate distance in the sound. I wasn't really aware of it, because (as I realized then) in my work I generally don't use artificial reverb (or any other 'regular' effect like delay or compression). In fact, that has been one of the criticisms of my live electronics work: that it is very dry and close. I like that ;)
  • What was also generally recognized is that the work mainly plays with sound coming out of separate speakers - not so much exposing the space. This is again my preference for dry sound, I guess. The day after, I opened all the curtains to hear how it would sound to let the space reverberate more - I didn't like it. Too fuzzy, unclear.
  • Talking about how and why to move sounds around, the general idea was that I could do a bit more with diffusing 'active' sounds: sounds that suggest a lot of gesture lend themselves (and are in fact expected) to move around more in space.
  • John liked the volume level - I have been struggling a bit with the levels. I like them quite high, but the Genelec speakers disagreed at first. Rolling off the bass on the speaker itself didn't do much, but applying a highpass filter in the software, before the signal even gets to the speakers, allowed me to boost their volume. The low frequencies were dealt with anyway by the two subs.
  • On my question about alternative speaker setups we talked a bit about how that could be made more interesting than the more or less traditional all-around setup I have now. Jaime mentioned (and we talked about this the evening before during dinner) using different speakers, putting them in various places, under tables, in boxes. But that's not feasible for now - especially since I'd have to reprogram SuperCollider and Resound, and there's not enough time. It would be interesting to work on this another time.
  • Regarding strategies of diffusing - I guess that's mainly a matter of gaining experience. I'm curious to hear James' treatment of the various groups during Friday's concert.

The days after that I've been working on details of the diffusion strategies - trying out various gestures with the various sound layers in the composition, moving some speakers around, and repairing a failing joystick by re-scaling its output in SuperCollider. In Nuendo (from which the piece is played back) I added markers with descriptive titles as a sort of diffusion score. When performing the diffusion I stay zoomed in in Nuendo, so I can see some of the movements in the sound coming up and anticipate their position in space.
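
The joystick repair was purely a software fix. Here's a minimal sketch of the idea - the broken range values are made up for illustration, not the actual numbers:

    // Suppose the worn axis only delivers values between 0.1 and 0.8
    // instead of the full 0.0 - 1.0. linlin re-scales (and clips) the
    // raw readings back to the full range.
    ~fixAxis = { |raw| raw.linlin(0.1, 0.8, 0.0, 1.0) };
    ~fixAxis.(0.45).postln;  // -> 0.5, the centre of the range again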

Next to this there were the social gatherings of course - meeting with the staff of CultureLab and the local arts scene for dinner, and having some discussions with Jaime, Sally Jane and Atau.

Last but not least I've been playing a bit with Bennett (Hogg), Paul (Bell) and John (Ferguson) as a 4tet for the concert, and quite a bit in a duo with John on guitar - we've also recorded some of it, and I'm very curious to hear what we put down, as it felt really good while we were playing.

(not even mentioning dealing with emails - the first week I could pretend for a little while that STEIM and the current situation there were far away, but not this week, with all the activities next week when I'll be back)

Finally here's a video of me explaining the current setup with the diffusion system (also see Wikipedia - acousmatic music).





June 6, 2008

At CultureLab Newcastle [2]

Friday afternoon, the end of my first week at CultureLab.

The week was very fruitful - I have a working multichannel setup, with three stereo channels that can be independently diffused with the joystick, through SuperCollider 3, to the Resound system. So whatever audio I send to channels 1+2, 3+4 and 5+6 can be processed - I tried it briefly with my live performance setup, and it works great! An interesting side-effect is that the joystick then has two functions. A welcome restriction in diffusion-world, where I still have to find my way. The dilemma as always is about balance: between vulgar effect (sweeping sounds through the space) and subtle musical movements. Too much effect and it gets boring very quickly - too subtle and the audience (and myself too, actually) wonders what the hell you're grabbing that joystick for. I have to find the balance between performance and spacebar-music. Between just playing back tracks and gestural treatment of the sound in the moment.

This weekend I'll just be playing with the system - finding out its response. But also taking a break: visiting Tynemouth for a walk along the coast and the used-stuff market there.




June 5, 2008

At CultureLab Newcastle [1]

I arrived here on Monday afternoon. The university is very central, and there's a good coffee place right around the corner from CultureLab. So I'm set!

After the first couple of days here is where I stand:

  • SPACE: I'm working in the concert space - very convenient, as I can hear it all as it will be during the concert.

  • HARDWARE: Currently the diffusion setup consists of 8 Genelec 8050A speakers (with the bass roll-off set to -6 dB) and two EAW subwoofers - I must say that the Genelecs can only barely deal with the sound levels I'd like to have. On Monday we'll add 4 EAW tops - this will no doubt solve that issue. The speakers are set up as in the scheme below (where the EAWs are high up).

    [scheme: speaker layout]

  • SOFTWARE: I'm using the backup Resound system, meaning that I can only have 8 outputs. The Resound system consists of a server that deals with all audio - from my system I send it 6 channels of audio, and it spits out 8 channels to the Genelecs - and a client computer that deals with the configuration of the diffusion. Both run Ubuntu (Linux) and communicate through OSC (Open Sound Control). Since OSC is very open, everyone basically makes their own subprotocol, resulting in my case in an incompatibility between STEIM's junXion and Resound. With junXion I can translate my joystick data into OSC, but only in messages of the format /junXion/controllers/1 - while Resound wants its OSC in the format /fader/1. [.. sound of wrong answer in a gameshow..] So no connection there. I decided to use SuperCollider 3.

  • CONFIGURATION: this is not the place to go into the workings of Resound in detail, but I'll try to explain the basic setup as I have it after two days of working. Resound is basically a matrix, where one can control the amount of signal from every input channel to every output channel - with the 6 inputs and 12 outputs (8 Genelecs and 4 EAWs) this makes for 72 parameters to control (excluding controllers for the main volume, the input and output levels for all channels, and other options). There are 32 faders available in the system - so smart programming is required.
    I use the Logitech joystick's X and Y axes to pan the sound in the 2D plane. Now this sounds easy, but it wasn't. There are ways to do these kinds of things in Resound, but I couldn't really figure them out. So I put some of the logic in SuperCollider and set the 32 faders in Resound to control the level of 32 matrix nodes - not using the more advanced features of Resound (a sketch of this glue logic follows after this list).

  • DIFFUSION: Basically I divided the plane into 4 quadrants, and deal with fading the audio levels in and out in separate speakers for each of these quadrants. Then I decided to output 3 stereo channels to Resound, and I want to control the diffusion of these 3 channels independently. I use three buttons on the joystick to activate/deactivate the joystick's X/Y controllers for the 3 channels. Furthermore I use the slider controller on the joystick to crossfade between the lower plane (Genelecs) and the higher plane (the EAWs). Finally, the hat switch on the joystick increments/decrements the main volume level. Are you still with me?
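
To make that a bit more concrete, here is a minimal SuperCollider sketch of the kind of glue logic described above (in present-day SuperCollider syntax) - not the actual patch; the server address, port, and fader numbering are made-up placeholders:

    (
    var resound = NetAddr("192.168.0.10", 5678);  // hypothetical Resound server

    // Map joystick X/Y (both -1..1) to equal-power gains for the four
    // quadrant nodes of one stereo channel, sent as Resound-style faders.
    ~panQuadrants = { |x, y, firstFader = 1|
        var gains = [
            (1 - x) * (1 + y),  // front left
            (1 + x) * (1 + y),  // front right
            (1 - x) * (1 - y),  // rear left
            (1 + x) * (1 - y)   // rear right
        ] / 4;
        gains.sqrt.do { |g, i|
            resound.sendMsg("/fader/%".format(firstFader + i), g);
        };
    };

    // Translate incoming junXion-style OSC into the format Resound expects
    // (assuming the message carries x and y already scaled to -1..1).
    OSCdef(\joyXY, { |msg| ~panQuadrants.(msg[1], msg[2], 1) },
        '/junXion/controllers/1');
    )

Three instances of this, one per stereo pair and toggled by the joystick buttons, would cover the independent diffusion of the 3 channels described above.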

This is where I stand right now. Next things to do:

  • Determine which tracks of Vreemdeling (played back from Nuendo) go to which stereo output - fixing the three separate layers that I can control independently, as mentioned above.
  • Work on the placement of the speakers - John Ferguson, a music PhD student at CultureLab, mentioned some non-conventional speaker setups that have been used before. I'd like to experiment with that.
  • And last but certainly not least: practicing! Playing with the placement of the audio, and creating the performance.

Aside from my own work I'm also talking a lot with the PhD students here. There was a seminar on Tuesday, with Chris Leary talking about his thesis subject. I met John Ferguson yesterday to talk about his performance setup, and today I'll have a meeting with Paul Bell. After the weekend we'll have an initial listening session for Vreemdeling with some of them, and we'll play a bit in the studio - just to see what's possible for the concert on Friday (the 13th!).

[photos: Space 4&5 and the Resound system]




May 1, 2008

Working at STEIM - 1

First day of a long weekend at studio 1 in STEIM.

The presentation at LOOS last Sunday went well. The crowd was mixed: some Sonology guys, some composition students and teachers, some HKU people.

I played the last 9 minutes of the 25 minutes I have now, and did not say much about the piece before asking the following questions:

Q: Did you recognize any of the material?
Nobody recognized Bladerunner, which is very good. As I explained to them, Bladerunner has been sampled so much that I wanted to use the sounds from the movie without them being recognizable.

Q: About the structure of the work, please choose:
A. There is an overall structure
B. It's just multiple short sections without any connection

Q: About the material used in the work, please choose:
A. It is too fragmented - all over the place
B. It's coherent throughout the composition
C. It is very similar - it gets boring

On both questions I did not get many hands in the air, but they did start some discussion. Some people heard two parts in the piece, but the most important feedback I boiled down from the various remarks was that the piece might be a bit too homogeneous - with the material and the grouping I have, I should add more variety in articulation.

So I've been trying to achieve that today, by thinning out here and there, and adding some space in some parts - allowing small silences. I might need more, but I'm not sure yet.

Also, I've collected all the separate parts into one Nuendo setup, so I can make subtler crossfades between the pieces - before, I would mix down the sections (1 through 27 - each group being all descendants of one axiom) and then make the overall edits in another Nuendo setup.




April 25, 2008

Progress of phase 2

As a reminder: phase 2 is the stage where I will be creating the stereo composition.
Currently I have a 25-minute sketch that I'm quite happy with.

I feel the need for a recapitulation. So here it is.

I think there are three aspects of a composition: concept, material and working method. For Vreemdeling they are as follows:


  1. Concept: Coincidence, chance, rational thought, concepts (and how they are reflected in the piece). How much of the creation of a composition is conscious thought, and how much is just intuition? Is it worth working on a piece for weeks, analysing it, making minor changes, or is the first version just as good as the last?
    This is simultaneously the research topic: how does one go about creating a structured piece of electronic music?
  2. Material: samples of the film Bladerunner
  3. Working method: using the family tree concept: starting with 27 samples, defining 9 processes (6 in LiSa, 2 in Adobe Audition, 1 analog process) and applying these processes iteratively to the samples, building a family tree with 3 generations

I'd like to talk about the working method a bit more, chopping it up into a couple of subphases.

Phase 2A: selection
I sorted the enormous number of samples that were generated (about 500) into groups numbered 1 - 27 (all the descendants of sample 1 are in the first group, all descendants of sample 2 in the second group, etc.). I went through the samples group by group and made a rough selection. This selection process is of course very intuitive.

Phase 2B: subcompositions
Then I started to make a subcomposition per group - making sure not to mix samples from different groups. In that process I just put all samples from one group on a pile; I did not keep them in the order of the tree. This is where I left the rationality path and started to work more intuitively - moving blocks of sound around, taking some chances here and there, being helped by chance. While working on these subcompositions I discarded some more samples, and also discarded 3 whole groups. This gave me 24 subcompositions, ranging in length from 1 to 7 minutes.

Phase 2C: cutting down to 25 minutes
Then I loaded these 24 subcompositions into Nuendo, and lined them up in order of group number. That way the composition follows the linear progress of Bladerunner: the start of the composition uses the opening titles, and the end uses sounds from the last scene. This gave me a composition of about 1 hour.... Since I'm aiming at a composition of about 20 minutes, I discarded 9 subcompositions. I also went back to some of the subcompositions and made them shorter. Again, a very intuitive process. So now I'm down to 25 minutes, and I'm quite happy with the whole thing.

This Sunday I will present a part of it at LOOS in The Hague. I'm planning on asking the audience two questions right after the piece, before I say anything about it (they might read this blog, but I'm taking that chance ;):
* Structure: does it feel like one piece of music with some kind of concept or idea behind it, or like multiple short sections without much connection?
* Material: do you find coherence in the material? Or is it too fragmented? Maybe too similar?

Furthermore I'm working on another version of the material, consisting only of the more ambient sounds - I feel the need to give that material a bit more space too.




March 22, 2008

Phase 2 - Starting up

Phase 2 of the project started.

I grouped all soundfiles according to their ancestors, which were numbered 1 through 27. Then I just started to work with the samples of group 1, trying to add more structure to the files. This gave me a pretty interesting structure of about 6 minutes. I didn't stick to the order within the generations - I basically put all descendants of 1 on a pile and picked the ones I liked, in the order that seemed interesting. After this I continued with groups 2, 7 and 8 (3 - 6 just didn't seem that interesting, and I can be picky as I have a ton of material). This gave me a total of 20 minutes' worth of composition, which I'm really starting to like after repeated listening.

A couple of thoughts:
* If I continue this way, the piece could end up being pretty fragmented, with no overall structure. Of course there will be an internal structure within every group (1 - 27), but since I'd then just concatenate them, the order seems arbitrary. But maybe the processes (which after all are the same in every generation) will give it coherence?
* Should I shorten the 'pieces' I've put together? Or should I just go on, and end up with a CD's worth of material?
* If I decide on a composition of 1 hour+, would it stay interesting? Or would the fact that the processes are similar throughout make the piece repeat itself?




February 11, 2008

EMS Pictures


The studio that was my home for 6 days.


The old brewery houses EMS, Fylkingen, and various other smaller companies.





EMS - Saturday

Back in Amsterdam.

Saturday at EMS was a good day: I generated some more 2nd generation files, and experimented with the 3rd generation. Then I organized it all in one Nuendo setup. It's over 400 soundfiles!

For the 3rd generation I limited myself to files from the 2nd generation that had been processed at some stage with process no. 4 (applying a highpass filter with the joystick), and then used mainly processes 2 (pitching the sound up extremely) and 4 again. I figured that would push the sound into the high frequencies and hopefully create some interestingly different sounds. That actually worked quite well, and some of the 3rd generation files are quite long, as I felt they kept developing.

The processes again, now all in a row:

  1. LiSa ch2 LPF: this is basically playing the sample twice, on both speakers, but a little out of sync and with a low pass filter applied - this one turned out to be interesting mainly when applied to the axioms - repeated application doesn't give distinctive enough versions.
  2. LiSa ch2 pitch+48: this process plays the sample pitched up 48 semitones together with the same sample pitched down 48 semitones - the high pitched version is of course much more prominent, and works in some cases and not in others - the low pitched version only gives some rumble every now and then - all the time there is some sample length change going on, so the loop is not so obvious (see the sketch after this list).
  3. LiSa ch6 broken + distortion: a joystick action, scratching through the sample with extreme panning, some subtle pitch change and variable distortion - all kinda broken and cut up.
  4. LiSa ch6 filter + distortion: similar, but with a high pass filter changed by the joystick's Y axis, and no panning.
  5. LiSa ch6 regular + distortion: my main instrument in the live set, scratching through the sample and applying extreme pitch change with the Y axis.
  6. LiSa ch5: this is a weird, but a little too obvious, stuttering of the sound - I didn't use this one much.
  7. Audition pitch bender: changing the pitch of a sample over time, sometimes subtly, sometimes quite drastically - very much depending on the sound material.
  8. Audition noise reduction (keep only noise) (with 'noise' from the next sample): this is a kind of crossbreeding: take a snippet from the next soundfile in line, and use that to apply the noise reduction filter in Audition - but instead of getting rid of what you filter out (the 'noise'), keep just that - in a way this resembles vocoding.
  9. Aluminum foil: attach pieces of aluminum foil to the speakers, play back sound with lots of low frequencies, and record the playback with two microphones. This I applied mainly to the axioms that were treated with process no. 1 (the lowpass filter).
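
As a rough SuperCollider approximation of process 2 (the real process ran in LiSa; the constant sample length changes are left out, the file path is a placeholder, and a mono file is assumed):

    (
    SynthDef(\pitch48, { |bufnum|
        // +48 semitones = 4 octaves up = playback rate 16; -48 = rate 1/16
        var up   = PlayBuf.ar(1, bufnum, 16,   loop: 1);
        var down = PlayBuf.ar(1, bufnum, 1/16, loop: 1);
        Out.ar(0, ((up + down) * 0.5).dup);  // same mix on both channels
    }).add;
    )

    b = Buffer.read(s, "/path/to/axiom_017.wav");  // hypothetical axiom file
    x = Synth(\pitch48, [\bufnum, b.bufnum]);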

The names of the LiSa processes refer to the MIDI channels they are assigned to - not very interesting to know, but that's the way I memorize them.

I organized the samples by the following naming convention:

  • starting out with 27 samples, numbered 001 - 027.
  • for every applied process I append a period and the process number; e.g. 017.4.6 is part of the 2nd generation, resulting from applying first process 4 and then process 6 to axiom 017.
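
A little SuperCollider sketch, purely for illustration, that enumerates the names this convention produces for one axiom, two generations deep (9 + 81 = 90 names):

    (
    var processes = (1..9);
    var grow;
    grow = { |name, depth|
        if(depth == 0) { [] } {
            processes.collect({ |p|
                var child = name ++ "." ++ p;
                [child] ++ grow.(child, depth - 1)
            }).flatten(1);
        };
    };
    grow.("017", 2).postln;  // [ 017.1, 017.1.1, ... 017.9, ... 017.9.9 ]
    )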

A couple of notes:

  • I realize now that I should have started with far fewer axioms - 10 at most would have been fine.
  • Applying process 2 twice seems to result in more interesting sound than applying it only once! Which doesn't make sense, as the process mainly duplicates the soundfiles, with one voice pitched up 4 octaves and one voice pitched down 4 octaves. If you apply this twice you'd think they would cancel each other out (with 8 octaves up and 8 octaves down not being audible, I'd think). Where's the catch?
  • Some axioms just don't work - mainly the noisy ones: action sequences in the film with lots of traffic, crowds, rain. The more sparse ones worked much better.

The upcoming composition process: I will definitely need to cut down extremely on the soundfiles. After sequencing them in Nuendo the total length was 5 hours! This means that I'll probably skip some branches of the tree altogether. Axioms 005 and 006 (and their descendants), for example, will certainly be skipped (sorry guys...).

Furthermore I'll probably make different versions: one where I just edit a couple of seconds out of each soundfile and sequence those - this is the Rigid Way - and probably (hopefully) this mindless (or less 'mindful') way of working will not result in an interesting composition.
The more sensible way will probably be more tedious: listen to the different variations of the samples, filter filter filter, select select select, until I end up with a more manageable number of them. Then put them together in an order that makes sense.

And next to the composition (which is supposed to be approx. 20 minutes) I'm thinking of compiling a CD of 60-second versions, calling it 'De geluidjesfabriek' ('The little-sound factory'), after a good friend of mine who used that term to describe my music. The idea would be that every one of those 60-second pieces would be a little composition in itself, as that's how I've been thinking of all the files I generated: basically I tried to give them some micro-structure, some sense of small-scale development.




February 9, 2008

EMS - Friday

I'm ambivalent about how this week went. I generated 155 soundfiles in the first generation, and 169 in the second - all between 30 sec and 3 min, a horrifying amount for a 20-minute composition! I'm also not sure about the variety of the sounds, since in quite a lot of cases you clearly hear the processes.

In generating the 2nd generation, I became more and more critical of the sounds - I'd discard them much more quickly, also because I found quite a lot of them resembled 1st generation sounds too much. But it's also interesting that the order in which I work on them matters: after working for 2 hours I'd get tired of certain processes and not even try them. Kill the unborn in the womb, to quote Iron Maiden ('2 Minutes to Midnight')...

I haven't consistently generated all possibilities for the 2nd generation, but I don't think that's necessary. Also, thinking about the next step, where I'll have to go through all this material and make sense of it - it's probably good to start selecting during the generation process already.

Another thought: I have already failed the initial goals I set myself. The processes I chose are too much about performing - while generating the files I'm already making all kinds of artistic judgements, instead of meticulously, mindlessly applying processes to the sounds.

For today I'll just generate some more 2nd generation files - being picky about which 1st generation files I choose, so as not to generate any more trash (well, again defeating the purpose, but that's why it's called research I guess). Then I'll try some 3rd generation. Not too much, just playing around. The last thing I'd like to do here at EMS is organize the sounds a bit - thinking of a way to present them so I can more easily go through them and monitor them.




February 7, 2008

EMS - Wednesday & Thursday

Two days in one post. Yesterday wasn't very exciting. I was mainly routinely generating material.

Last night though I was playing around with Audition, and decided to add two more processes:
* Pitch bender: changing the pitch of a sample over time, sometimes subtly, sometimes quite drastically - very much depending on the sound material.
* Noise reduction (keep only noise) (with 'noise' from the next sample): this is a kind of crossbreeding: take a snippet from the next soundfile in line, and use that to apply the noise reduction filter in Audition - but instead of getting rid of what you filter out (the 'noise'), keep just that.

So all day today I've again been generating material, starting on the 2nd generation. I'm not applying every process to every sample though - that would get me way too many files, and some combinations just don't work (e.g. LP filter over LP filter). Some outcomes are surprising!

I actually realized today that the problem is not that I have too many processes, but that I started with too many soundfiles (the 27 'axioms').

Then tonight I added (yet) another process, an analog one this time:
* Aluminum foil: attach pieces of aluminum foil to the speakers, play back sound with lots of low frequencies, and record the playback with two microphones.
I'm hoping this will add some different aspects to the whole. I'm sometimes not sure if the family tree is varied enough - too inbred?

Just keep going - tomorrow is another day. Hopefully finishing the 2nd generation, then looking at what I have and whether I should generate more. Maybe try out some 3rd generation examples.

To bed.




February 6, 2008

EMS - Tuesday

Very interesting how this works, setting yourself up with strict rules and trying to stick to them. More than a couple of times I was tempted to leave the path I had set myself - thinking of other samples to use, getting ahead of myself by applying more than one process at a time. But I resisted.

What I have now: 27 samples with lengths between 30 sec and 2 minutes. I allowed myself to edit the initial sourcefiles a bit to get a length of at least 30 sec. And to keep that a bit more interesting than just repeating the same sample 2 or 3 times, I reversed the sample, or added a slight pitch change over time, and edited these various versions of the same sample together into one.

Then the processes. I decided on 6 processes in LiSa:
1. ch2 LPF: this is basically playing the sample twice, on both speakers, but a little out of sync and with a low pass filter applied - this gives me a lot of low drones (probably too many). A rough approximation of this process follows after this list.
2. ch2 pitch+48: this process plays the sample pitched up 48 semitones together with the same sample pitched down 48 semitones - the high pitched version is of course much more prominent, and works in some cases and not in others - the low pitched version only gives some rumble every now and then - all the time there is some sample length change going on, so the loop is not so obvious.
3. ch6 broken + dist: a joystick action, scratching through the sample with extreme panning and some subtle pitch change - all kinda broken and cut up.
4. ch6 filter + dist: similar, but with a high pass filter changed by the joystick's Y axis.
5. ch6 reg + dist (fixed pitch): my main instrument in the live set, scratching through the sample and applying extreme pitch change with the Y axis.
6. ch5: this is a weird, but a little too obvious, stuttering of the sound - probably the one I'll get tired of first.
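
The rough SuperCollider approximation of process 1 mentioned above - the real version runs in LiSa, and the file path here is a placeholder:

    (
    SynthDef(\ch2lpf, { |bufnum, offset = 0.08, cutoff = 400|
        var direct = PlayBuf.ar(1, bufnum, BufRateScale.kr(bufnum), loop: 1);
        var late = DelayN.ar(direct, 0.5, offset);   // the out-of-sync copy
        Out.ar(0, LPF.ar([direct, late], cutoff));   // one per speaker: low drones
    }).add;
    )

    b = Buffer.read(s, "/path/to/axiom_001.wav");  // hypothetical path
    x = Synth(\ch2lpf, [\bufnum, b.bufnum]);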

After applying these processes to 6 sourcefiles (already quite a lot of work!) I can already see that I'll end up with much too much material, every 1st generation file being at least 1 minute, but most of the time 2 or 3 minutes. When generating them I actually add quite a lot of 'musicality' - they are sometimes small compositions in themselves. Also, in some cases I allowed myself not to use a certain process, as it didn't seem interesting on that particular sample. I shouldn't do this too much though, as in other cases I also initially expected a certain process not to be interesting on a certain sound, and it ended up being quite nice.

A thought I had yesterday: if I had executed this composition process at home or in a STEIM studio, I'd have abandoned it already, or at least changed its direction. Being here in Stockholm with no diversions and plenty of time forces me to keep working. So I'm glad I took on this challenge as an experiment.

So today I'll be continuing the 1st generation creation with the LiSa processes, and then decide if I still want to add other processes - more regular ones, using Adobe Audition. It seems I'm already generating too much material, but maybe Audition will give me another direction - LiSa can sound a bit too 'LiSa' sometimes.

I also finished Do Androids Dream Of Electric Sheep yesterday. Interesting how different in story but similar in atmosphere it is to Bladerunner. And it has even more focus on those intense initial questions this process started with (see the pdf with the plan, above).




February 5, 2008

EMS - Monday

So yesterday I arrived at EMS. The place is nice, very quiet, good for getting work done. Friendly people.

I started out roaming through Bladerunner, taking samples. Not an easy task, since I don't want voices or music from the film. So what's left is a little one-dimensional: a lot of droning, crowds, atmospheres. Not many details or fast-changing sounds. I guess that's what I have to add then.... Still, I'd like to stick to the plan of using only sounds from the film. After getting 20-some soundfiles, ranging from 3 seconds to 2 minutes, I tried some processes. Adobe Audition, my favorite audio editor (yes, Windows on my MacBook!), is a good tool for filtering. The dynamics processing with extreme parameter values is also promising, as is using noise reduction with random other pieces of sound as the 'noise' (this is like 'subtracting' one sound from another).

The plan for today is to organize the samples, figure out a way to keep track of them in the family tree, and adjust their lengths so they are a bit more practical. Then I should test and decide on the LiSa processes to use. This is hard, as I want to choose only 3 or 4 processes, and they should be narrowed down: the processing I do with LiSa is usually very broad, meaning that with one process I can do a wide variety of things. But I think the processes used to build the family tree should be a bit more contained.

As a general direction I'd like to explore a soundfield with low-frequency drones, highly filtered sounds, and not much mid frequencies (maybe some sweeps through those mid frequencies). This seems the best take on the sound material I have right now. It is actually a good thing that I have this kind of rigid plan, as I'm constantly tempted to consider other sounds to use: there's a Buchla analog synthesizer in the studio, and a lot of room noises in both the studio and the room in my family hostel. It is also a little scary though: will I be able to generate enough interesting material from those sourcefiles to create a composition that is any good?

To work.




February 4, 2008

Start of the work.

On this blog I will keep a log of the work on Vreemdeling, a composition and research project. The full description of the project can be found here.

The first task, creating this blog, is (obviously) done. I'm currently in EMS's studio 4. EMS director Mats Lindström was very kind in enabling my stay here at EMS (the Institute for Electroacoustic Music in Sweden). I will be here for one week, starting up the work on Vreemdeling.