In this article, the authors examine algorithmic synaesthesia, a form of sonic intermedia in which computer-mediated processes manipulate sound and image synchronously. In algorithmic synaesthesia, the two media come to share extensive common features. The article considers what an audience member can cognitively access in such synaesthesia: the fact that a machine can process image and sound in parallel, and by the same algorithm, does not establish that the human brain can. The transparency of an algorithmic process to a listener-viewer-screener is a core issue in auditory display (or ‘sonification’). Sonification aims to make the segmentation of a data set more accessible than numerical or visual representation allows, and it has many practical and creative applications. Current approaches in experimental cognition may help in evaluating these issues.
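
To make the idea of sonification concrete, here is a minimal sketch (not the authors' method) of the common parameter-mapping approach: each value in a data series is rescaled onto a pitch range, so rises and falls in the data become rises and falls in pitch. The function name `sonify` and all parameter choices are illustrative assumptions.

```python
import math
import struct
import wave

RATE = 44100  # samples per second


def sonify(values, out_path="sonification.wav", note_dur=0.25,
           f_lo=220.0, f_hi=880.0):
    """Parameter-mapping sonification sketch (illustrative only).

    Each data value is mapped linearly onto [f_lo, f_hi] Hz and
    rendered as a short sine tone, so the contour of the data
    becomes a melodic contour.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data
    n = int(RATE * note_dur)  # samples per note
    samples = []
    for v in values:
        freq = f_lo + (v - lo) / span * (f_hi - f_lo)
        for i in range(n):
            samples.append(math.sin(2 * math.pi * freq * i / RATE))
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit PCM
        w.setframerate(RATE)
        w.writeframes(b"".join(
            struct.pack("<h", int(s * 0.8 * 32767)) for s in samples))
    return out_path
```

For example, `sonify([1, 3, 2, 5, 4])` writes a five-note WAV file whose pitch contour rises and falls with the data, a form of access to the data's shape that a table of numbers does not directly afford.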