Mickey Li, Suet Lee
Last updated on Jun 26, 2023
We played around with the idea of patches and attachable objects, and linking those to sounds. Concepts we came up with included:
Velcro patches to play sound, with detection achieved via RFID reader
Power/volume control with a dynamo-like crank
Hanging objects on hooks to trigger sounds
Buttons on the tree to enable voting actions
Looking through these, we identified the need for a detection mechanism and a method of producing sound. With these requirements, we set out to create a tabletop prototype of the internal systems of a single pillar.
Tom started prototyping the use of RFID readers for detection, and we found two Raspberry Pis to act as our onboard compute. The Pis were ideal as an audio HAT with an onboard amplifier could be easily sourced. A Pi also supports the multithreading that is probably required for playing multiple tracks simultaneously (in comparison to an Arduino, anyway…).
During the hackathon, I had attempted to use JavaScript to play multiple notes and samples simultaneously. Unfortunately, a naive approach simply did not work: I could only get one note to play at any time, and I could not easily generate music dynamically. This led me to read up on sound generation, on how multithreading is required to play multiple tracks, and on how most synthesisers/music libraries follow a compile-then-play model rather than a live-performance one.
Bringing this forward, after speaking to some colleagues, I settled on Sonic-Pi as our music generation program. Sonic-Pi is a fantastic program: it was designed to teach programming through music, but has been used a lot for “live coding” DJ performances at clubs and raves. Its syntax is derived from Ruby, but it essentially surfaces its own language for the live playing of multi-track pieces. In particular, the live_loop functionality allows for live editing of indefinitely running sample loops. Sonic-Pi also supports sending and receiving external signals such as OSC (Open Sound Control) and MIDI messages, designed for interfacing with programs such as Ableton Live.
Therefore I settled on the following two-part architecture:
A Python script which reads from sensors and RFIDs and generally manages the running of a pillar. When a sound needs to be played, it sends an OSC message to Sonic-Pi.
Sonic-Pi running a listening script which, on receipt of an OSC message, triggers the playing (or stopping) of a particular sound sample.
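The Python half of the glue can be very small. Here is a minimal sketch, assuming the python-osc package and Sonic-Pi's default incoming OSC port (4560 on Sonic-Pi 3.x); the /start and /stop address names are our own convention rather than anything built into Sonic-Pi.

```python
# Pi-side sender: one OSC message per event, carrying the sample name.
from pythonosc.udp_client import SimpleUDPClient

# Sonic-Pi runs on the same Pi; 4560 is its default OSC-in port on 3.x.
client = SimpleUDPClient("127.0.0.1", 4560)

def start_sample(name):
    client.send_message("/start", [name])

def stop_sample(name):
    client.send_message("/stop", [name])
```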
From the above requirements, there need only be two messages, (1) start and (2) stop, both including the name of the music sample. The Sonic-Pi script therefore consists of two blocking listeners for these messages. The start message creates a new thread which plays the specified sample with given ramp-up and ramp-down times.
Stopping a sample was unfortunately more complicated due to the threaded nature. Unfamiliar with Sonic-Pi’s inter-thread comms structures, and unable to leverage the underlying Ruby tools, I devised a method using global lists of currently playing and currently stopping samples: while playing a sample, the thread busy-loops and checks the contents of these lists. While it worked, it was, perhaps unsurprisingly, full of race conditions and weird hang-ups. Much closer to the time, I discovered the existence of cue and sync, where a thread blocks until the receipt of a specific cue in an atomic manner. I therefore replaced the busy waiting with a sync, with the stop command generating a cue <sample_name>. This much simpler setup works assuming that a start is always followed by a stop; a problem might arise if a thread is started but never stopped, but thankfully that can never happen in our current use case.
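Put together, the Sonic-Pi side looks roughly like the sketch below. This is a reconstruction of the approach rather than our exact script: the /osc* cue prefix is Sonic-Pi 3.2+ syntax (earlier 3.x versions use /osc), and the ramp times and cue names are illustrative.

```ruby
# Start listener: spawn a thread per sample, fade in, then block on a cue.
live_loop :start_listener do
  use_real_time
  name = (sync "/osc*/start")[0]   # first OSC argument: the sample name/path
  in_thread do
    s = sample name, attack: 2, sustain: 100000  # long-running ambient loop
    sync "stop_#{name}"            # block until the matching stop cue
    control s, amp: 0, amp_slide: 2  # ramp down rather than a hard cut
  end
end

# Stop listener: translate an OSC stop message into the per-sample cue.
live_loop :stop_listener do
  use_real_time
  name = (sync "/osc*/stop")[0]
  cue "stop_#{name}"
end
```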
There was one final issue with Sonic-Pi, which we only found out on the day; it will be discussed later.
The samples themselves were found on Freesound. We were aiming to create ambient sounds from natural or urban phenomena, so I looked for live recordings of waterfalls, birds in forests, storms and so on for “nature”. Closer to the date, Avgi also stepped in to help find more “urban” sounds such as construction, cars on roads, sirens, etc. These were mixed externally and placed on each Pi.
Hardware-wise, the Raspberry Pi audio HAT supported up to 12W of speakers, so we bought four speakers and wired them up in parallel. It was a bit of a pain getting sound reliably through the speakers until we figured out how to set the default audio device to the audio HAT. We also had issues running Sonic-Pi in headless mode, as it is primarily designed to run from its frontend GUI. Thankfully this could be solved by export DISPLAY=:0, while putting our Sonic-Pi script in ~/.sonic-pi/config/init.rb. Later on we also added it as an autostart application so that it starts at desktop boot (note: it did not work as a systemd service).
Tom was a whizz with testing, first with single RFID readers connected to the Pi, and then working out how to daisy-chain multiple readers into one Pi. From my understanding, the readers connect over the SPI bus; however, each one requires an extra GPIO pin in order to reset it after detecting a read. This made daisy-chaining the RFIDs awkward, as seen from the first attempt at soldering! Tom then designed a special set of PCBs, including a custom Pi HAT and RFID daisy-chain connectors, to make wiring them up easier. Software-wise, it is simply a case of continually polling the state of all the RFID readers.
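The polling loop itself is conceptually tiny. The sketch below is illustrative rather than our actual driver code: the reader objects and their read_id method stand in for whatever MFRC522 wrapper is in use, with the per-reader reset handling hidden inside them.

```python
# Illustrative polling loop: track which tag (if any) sits on each reader
# and fire a callback on change. The reader driver interface is assumed.
import time

def on_change(reader_index, tag_id):
    # e.g. look up tag_id -> sample name and send an OSC start/stop
    print(reader_index, tag_id)

def poll_forever(readers, period=0.1):
    present = {}                       # reader index -> last seen tag ID
    while True:
        for i, reader in enumerate(readers):
            tag_id = reader.read_id()  # hypothetical: None when no tag present
            if tag_id != present.get(i):
                present[i] = tag_id
                on_change(i, tag_id)
        time.sleep(period)
```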
There are two types of RFID tag: ones which can be programmed to store data, such as a sample name, and ones which can only be read for their ID. We bought a number of RFID stickers of the read-only kind. We began by assuming we could do something with the first type, but in the end we decided to use the stickers for our objects, along with a registration procedure storing the mapping between tag ID and sound sample.
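The registration procedure can be as simple as a persisted dictionary from sticker UID to sample name; the file path and JSON format below are illustrative choices, not necessarily what ran on the pillars.

```python
# Map read-only sticker UIDs to sample names, persisted as JSON.
import json

REGISTRY = "tag_registry.json"  # illustrative path

def load_registry():
    try:
        with open(REGISTRY) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def register(tag_id, sample_name):
    mapping = load_registry()
    mapping[str(tag_id)] = sample_name
    with open(REGISTRY, "w") as f:
        json.dump(mapping, f, indent=2)
```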
We wanted individually addressable LEDs to adorn each pillar, with options for generating fun effects: for example, when a participant places an object onto the pillar, the LEDs would immediately animate, highlighting that action. In the end we bought 20m of off-brand NeoPixels from Amazon. Going from my work with drones, I was hoping that the LEDs could be controlled directly from the Raspberry Pi, and after some initial testing it looked promising, even though it was hard to find ready-made libraries with a large number of effects. However, during integration with the RFID readers we found a huge problem: the two systems would break each other. The LEDs would function properly until an RFID reader was activated, at which point the reader would stop functioning. We still do not know the full reason, but our hypothesis was that they were sharing the same PWM source, and using one would interrupt the other.
With this in mind, we quickly switched over to using an Arduino to control the LEDs, connected to the Pi over serial. Sourcing two Arduino Unos from the flight arena in the BRL, I found a well-regarded library called WS2812FX which includes a large number of pre-programmed effects. I then built a simple serial parser to allow the Pi to send simple commands to the Arduino to change the effect on different segments of the pillar.
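On the Pi side this only needs a serial port and an agreed line format. A minimal sketch, assuming pyserial; the "segment:effect" command format is illustrative of the kind of simple line-based protocol parsed on the Arduino, not our exact one.

```python
# Pi-side serial commands to the LED microcontroller, assuming pyserial.
import serial

arduino = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

def set_segment_effect(segment, effect):
    # One command per line, e.g. "0:12\n" -> segment 0, effect number 12.
    arduino.write(f"{segment}:{effect}\n".encode())

set_segment_effect(0, 12)
```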
After messing with the Arduino Uno, we found that it was struggling to light up all 400 or so LEDs that we had allocated for each pillar. It turns out that the Uno only has 2 KB of SRAM, and it likely simply did not have enough memory to store the state of each LED: each LED takes 3 bytes of state, so 400 × 3 gives 1,200 bytes, or 1.2 KB! Include the footprint of the effects library and it is highly likely that this was the problem. We therefore frantically tried to find a compatible microcontroller. Thankfully one of the BRL technicians had some ESP8266s at hand, which have far more RAM (tens of kilobytes), more than enough. These had their own problems: the board does not provide a 5V line, and we had to take it from the Raspberry Pi, causing concerns that the LEDs might drag down the Pi's power supply. Later we replaced the ESP8266s with Arduino Mega clones, which have 8 KB of SRAM and could take their 5V line from an external power supply. Other problems appeared later on, but that'll be discussed when we get there!
With only a few days left, we finally got down to building it. We began with the urban pillar, where I sorted out the electronics while Tom went ahead with designing the slices. In the end, for simplicity, we decided to put the LEDs in the extrusion channel, where they would be protected and simple to install. The next day we did a long shift where we put together the nature pillar in the flying arena and performed integration testing with everything in place.
It mostly went smoothly, with only some hiccups. The LEDs were a bit painful to attach, as we were plagued by fiddly soldering jobs and dodgy connections. Since each slice slides down the extrusion, the LED wires at the top of each pillar had a tendency to get caught and break. We also had some issues with wiring the RFID readers, since we only had a limited number of breakout boards and the rest were soldered by hand, so we were judicious with their placement. Wires in general were a bit of a pain, trailing everywhere up and down! We also decided to install a couple of screens for the Pis, as they were useful for debugging Sonic-Pi issues in place (I also enabled VNC on the Pis, though the connection was awful over Wi-Fi).
That being said, the integration across the whole team was pretty successful, with a full test of the system coming online and sounds being produced at 3am, in the presence of myself, Tom and Georgios, on the Friday before the weekend of FoN!
(Photos: concept for the urban pillar; Suet attaching willow to the nature pillar; first light-up of the urban pillar; the nature and urban pillars; debugging and construction of the pillars at midnight; team takeout dinner; 1am debugging coming to an end!)
All that's left is the event itself!
In part 3 we talk about the Festival of Nature!