Documentation from University Submission:
I’ve always had an interest in accessible instruments. Music software and hardware are quite daunting to the inexperienced, and this project aims to expand the reach of musicianship in a visually stimulating and enticing way. It also functions as a performance tool for people who want more expressive and aesthetically dramatic control. I created a 4 x 4 grid of light sensors mounted on an MDF board, with a clouded perspex sheet over the top to diffuse the light. A spotlight is placed above, and the shadow projected onto the board creates sounds. (Any directional light could be used, and the setup also works backwards, i.e. in a dark room where you use a torch to make sounds, if the calibration is done in reverse: setting the light value as low and the dark value as high.) I used an Arduino to capture the sensor readings and Max/MSP to process them. Unfortunately, because the sensors are spaced quite far apart, the diffusion provided by the perspex did not work too well, so I had to create a kind of ‘diffusion averaging’ in the Max/MSP code.
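To make the calibration and ‘diffusion averaging’ ideas concrete, here is a minimal sketch in plain C++ (the project itself does this inside Max/MSP, so this is an illustration rather than the actual patch; the calibration figures and the neighbour weighting are invented for the example). Swapping the two calibration endpoints gives the backwards, torch-in-a-dark-room mode described above.

    #include <algorithm>

    constexpr int GRID = 4; // 4 x 4 grid of light sensors

    // Calibration endpoints (example values). Swapping them, i.e.
    // lightValue = 200 and darkValue = 900, inverts the mapping so a
    // torch in a dark room drives the output instead of a shadow.
    float lightValue = 900.0f; // raw reading under the spotlight
    float darkValue  = 200.0f; // raw reading when fully shadowed

    // Normalise a raw sensor reading to 0..1, where 1 = fully shadowed.
    float normalise(float raw) {
        float t = (lightValue - raw) / (lightValue - darkValue);
        return std::clamp(t, 0.0f, 1.0f);
    }

    // 'Diffusion averaging': blend each cell with its grid neighbours so
    // that a shadow falling between widely spaced sensors still registers.
    void diffuse(const float in[GRID][GRID], float out[GRID][GRID],
                 float neighbourWeight = 0.5f) {
        for (int r = 0; r < GRID; ++r) {
            for (int c = 0; c < GRID; ++c) {
                float sum = in[r][c];
                float w   = 1.0f;
                if (r > 0)        { sum += neighbourWeight * in[r - 1][c]; w += neighbourWeight; }
                if (r < GRID - 1) { sum += neighbourWeight * in[r + 1][c]; w += neighbourWeight; }
                if (c > 0)        { sum += neighbourWeight * in[r][c - 1]; w += neighbourWeight; }
                if (c < GRID - 1) { sum += neighbourWeight * in[r][c + 1]; w += neighbourWeight; }
                out[r][c] = sum / w; // weighted average of cell + neighbours
            }
        }
    }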
The board is split into different sections: a sine chord section, a square arp section, and an effects / manipulation section. To the left is a diagram of the board’s layout to help explain which section does what. The blue highlighted section (sensors 1-3 and 5-7) controls the square arp: when these sensors are obscured (or lit up, depending on calibration), the volumes of the notes in the playing chord change. In the Max patch, neighbouring sensors are related through averaging, so each sensor is not exclusive in the sound it controls; it also has some effect on the sensors next to it. The brown highlighted section (sensors 9-11 and 13-15) controls the sine sound, which adds body to the sound created. Its mechanics are the same as the square arp’s, volume-control-wise.

The green highlighted section controls the effects / modulation. Sensor 4 triggers a resonant delay on the square sound which fades in and out. Sensor 8 triggers a pulse width modulation which also fades in and out, but more instantaneously than the resonant delay. Sensor 12 controls a small FM modulation arpeggio on the sine wave sounds. Sensor 16 controls the chord change. All of these control sensors fire when they are covered and then uncovered again, which is better than firing the moment they are first covered: you cannot always tell where your shadow is going to fall, and could otherwise trigger things too early. Working on the release helps, because even if you obscure a sensor too early, you can still control when you stop obscuring it. There are further comments in the Max patch, and in the accompanying video, which explain how things work in slightly more detail.

One point to note is that in the Arduino subpatch, the inputs from the Arduino are reordered to correspond to the right sensors. When I implemented the wiring, it was inefficient to wire it in the grid formation; it was better to have the wires meet in the middle and run out from the centre. This meant that, for example, sensor 5 does not correspond to input 5 on the Arduino and therefore has to be remapped. Once this is done, the values sent from the ‘s’ object in the Max patch correspond to the right sensors. Another point is that the sensor numbers run up and across: originally the board was designed to be landscape, but I felt this offered less control. It can, of course, be reprogrammed to work in a landscape format. A sketch of the remapping and release behaviour follows below.
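As a concrete illustration of the remapping and the release-on-uncover behaviour, here is a minimal sketch in plain C++ (again, the project does this inside the Max patch; the remap table values and the 0.7 / 0.3 thresholds are invented for the example, not taken from the actual wiring):

    #include <cstdio>

    // Illustrative remap table: index = Arduino input, value = board
    // sensor (numbered up and across in portrait orientation). These
    // particular values are a made-up example; the real mapping depends
    // on how the wires were routed from the centre of the board.
    const int remap[16] = { 4, 0, 12, 8, 5, 1, 13, 9,
                            6, 2, 14, 10, 7, 3, 15, 11 };

    // Hypothetical action for a control sensor, e.g. sensor 16's chord change.
    void trigger(int sensor) { std::printf("control sensor %d fired\n", sensor); }

    // Release-triggered control: fire only once a sensor has been covered
    // and is then uncovered again, so a shadow arriving early does nothing
    // until the performer chooses to reveal the sensor.
    bool covered[16] = { false };

    void updateControlSensor(int sensor, float shadow) { // shadow in 0..1
        const float coverThresh   = 0.7f; // counts as covered
        const float releaseThresh = 0.3f; // hysteresis avoids flickering
        if (!covered[sensor] && shadow > coverThresh) {
            covered[sensor] = true;   // shadow has arrived; nothing fires yet
        } else if (covered[sensor] && shadow < releaseThresh) {
            covered[sensor] = false;
            trigger(sensor);          // fire on the release
        }
    }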
I used Ethernet cables to transmit the data, as this was easier than using individual wires. I had to add another two cores (in the form of a speaker cable) to send power, as the sensors used up the 16 conductors of the Ethernet cables. I think the key to a project like this working effectively is a very direct link between the movements the performer makes and the sound produced. I wanted to make this project quite obvious in that sense, and in doing so also make it quite simple but stable and well designed. I chose sine chords with lots of reverb, as they seemed to fit the ‘sci-fi’ atmosphere the board elicits, and a square arpeggio, which gives the sine sound a lot of space and resembles the typical ‘bleep bloop’ noises of spacey computers as portrayed in TV programmes. I also wrote a pleasant chord progression with satisfying changes, as I think that is important to drawing people in. Attractive and direct sounds make the instrument more satisfying to use, and therefore more creatively effective.

I think this project was successful, both creatively and technically, and I’m happy with the sounds produced and the interaction used. All of the family and friends I tested the controller on were fascinated by it and wanted to keep playing with it more and more, which I would count as a success. However, as with any project of this nature, there is room for expansion. Most obvious would be to include different ‘states’ which change the instrumentation, the chord structure, or what each of the chords does, to give just a few examples. Gesture control is also a huge possibility: if you covered certain sensors in a specific order, something new would happen. It would add a whole new dimension to the project, and a whole new level of discovery for whoever happens to be using the system. I would probably implement gesture control with Wekinator, as it is simple to use but very effective. Technically, the wiring is not perfect either. I used 4-core telephone cable, but only used three of the cores for each sensor (+5V, ground and control). I could have halved the amount of wire by using the fourth core as the control wire for a second sensor, running two sensors per cable; +5V and ground could easily be shared by the sensor one over.