Technologies that harness their users' unconscious multisensory associations are faster to learn, more aesthetically appealing, and less cognitively demanding to use. These factors become increasingly important for applications that convert large quantities of information from one sense into another. One such application is the sensory substitution device (SSD), which converts visual information into sound to assist visually impaired users, as well as to open new opportunities for multisensory art and sensory augmentation. Here we present the Synaestheatre, an SSD that turns 3D space, size, shape, and colour information into patterns of spatially distributed sounds that vary in pitch, persistence, and timbre in real time. The Synaestheatre's sonification method is informed by natural hearing processes together with multisensory associations found in synaesthesia, and held unconsciously by the wider population. In combination, these produce easy-to-learn, responsive, and aesthetically appealing soundscapes. We show that novice users can rely on their intuitions to track the location, size, colour, and movement of visual objects through sound alone. Our demo showcases this by letting users listen to the soundscape change as arrangements of coloured objects are manipulated in front of the Synaestheatre's camera.
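The abstract does not specify the Synaestheatre's exact visual-to-sound parameter mappings, so the following is only a minimal, illustrative sketch of how per-object properties (direction, depth, size, colour) might be turned into parameters for a spatialised sound source. The class name, field names, and every mapping choice (e.g. brightness to pitch, hue to timbre, size to persistence) are assumptions for illustration, not the device's actual method.

```python
import colorsys
from dataclasses import dataclass


@dataclass
class DetectedObject:
    """Hypothetical per-object description extracted from the depth camera."""
    azimuth_deg: float    # horizontal angle from camera centre
    elevation_deg: float  # vertical angle from camera centre
    depth_m: float        # distance from camera in metres
    size_m: float         # approximate object diameter in metres
    rgb: tuple            # (r, g, b), each in 0..1


def sonify(obj: DetectedObject) -> dict:
    """Map one detected object to parameters for a single spatialised sound source."""
    hue, _, value = colorsys.rgb_to_hsv(*obj.rgb)
    return {
        # Spatial distribution: the object's direction becomes the virtual source direction.
        "pan_azimuth_deg": obj.azimuth_deg,
        "pan_elevation_deg": obj.elevation_deg,
        # Nearer objects sound louder (clamped to avoid blow-up at very small depths).
        "gain": min(1.0, 0.5 / max(obj.depth_m, 0.5)),
        # Brighter colours map to higher pitch (assumed mapping).
        "pitch_hz": 220.0 * (2 ** (2 * value)),
        # Hue selects one of a small bank of timbres (assumed mapping).
        "timbre_index": int(hue * 8) % 8,
        # Larger objects persist longer in the soundscape (assumed mapping).
        "duration_s": 0.1 + 0.5 * obj.size_m,
    }


if __name__ == "__main__":
    # Example: a small red object, slightly right of centre, about 1.2 m away.
    obj = DetectedObject(azimuth_deg=30.0, elevation_deg=-10.0,
                         depth_m=1.2, size_m=0.15, rgb=(0.9, 0.2, 0.2))
    print(sonify(obj))
```

In a real-time pipeline, a mapping of this kind would be re-evaluated for every detected object on every camera frame and the resulting parameters fed to a spatial audio renderer, so that moving or recolouring an object changes the soundscape immediately.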