Beyond the panopticon: Machine vision and school architecture
Elizabeth de Freitas, Project Co-I and Professor in the School of Education, Adelphi University
The relationship between architecture and vision is newly inflected in the digital age, where surface and envelope become a palimpsest for data and sensation, where affect and movement are traced, extracted, and fed forward into instrumental futures. That which is incomputable – the contingency within the algorithm itself, the aesthetic noise of all lived architecture, and the practical limits of any dataset – makes for a paradoxical mixture of analog and digital in the smart school building. Students learn how to hack the surveillance systems and evade the controlling eye, even as they submit and surrender their passive data.
But there is more than evasion to this new visuality. Our experiments with sensors and GoPro 360 cameras create a mobile point of view, but they also operationalize a metastable view, a view sustained by multiple lenses that are learning together to make sense of a wandering sensation. This is a multi-ply image, a “virtual” view animated through crossing space and making volume, activating walls and corridors; the building becomes a synchronically projective participant. The student videos and scanned rooms are manipulable models at the sub-digital scale of the algorithm. This experiment reveals the folded space within the conventional school panopticon, a topological fold of proliferating and collaborative lenses, refusing the monocular vision of the single point of view and the singular status of perspective.
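To give one concrete sense of what “manipulable model” means here, the sketch below loads a scanned room and cuts a cross-section through it. This is a minimal illustration, not the project’s actual pipeline: the filename room.glb and the use of the open-source trimesh library are assumptions.

```python
import trimesh

# Load a scanned room exported from a lidar scanning app.
# ("room.glb" is a hypothetical filename for illustration.)
mesh = trimesh.load("room.glb", force="mesh")

# The scan is now a manipulable model: query its overall dimensions...
print("bounding box extents:", mesh.bounding_box.extents)

# ...or slice a horizontal cross-section through the room at its centroid.
section = mesh.section(plane_origin=mesh.centroid, plane_normal=[0, 0, 1])
if section is not None:
    planar, _ = section.to_planar()
    print("cross-section area:", planar.area)
```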
Machine vision composes space as multi-dimensional and layered, where sensors and cameras converge towards a rendering process that absorbs every minor gesture, scraping pixel arrays of light and shadow in search of edges and shapes. Architecture is allied with machine vision in being a sculpting and 3D rendering process, whose historical lineage runs not to Muybridge’s chronophotography – sequenced still images animated into “motion pictures” of locomotion – but to another precedent: François Willème and his 1860 “photosculpture” process.
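The pixel-level search for edges and shapes can be sketched in a few lines. The following is a hedged illustration using OpenCV’s standard Canny detector; the filename and threshold values are assumptions, and this stands in for whatever detection the project’s sensors actually perform.

```python
import cv2

# Read a camera frame as a grayscale pixel array.
# ("corridor.jpg" is a hypothetical filename for illustration.)
frame = cv2.imread("corridor.jpg", cv2.IMREAD_GRAYSCALE)

# Soften sensor noise so gradients reflect light and shadow, not grain.
blurred = cv2.GaussianBlur(frame, (5, 5), 1.4)

# Scan the array for strong intensity gradients: the edges of the scene.
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

# Trace connected contours from the edge map: candidate shapes.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"{len(contours)} candidate shapes found")
```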
Screen capture of the Polycam app processing a lidar scan
Rather than a panopticon in which the central controlling eye is situated in a “point of view” that sees all, spanning and surveilling those who have no place to hide, Willème’s method was a kind of proto-3D printing, in which 24 cameras surrounded a central figure; the camera eye proliferated around the subject, generating a series of partial profile slices. These slices were then used to create wood cut-outs, which were assembled into a volumetric sculpture: a palm-sized statue for everyone to hold, embodied and virtually real. A model.
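The geometric logic of that assembly survives in today’s silhouette-based reconstruction. Below is a minimal numpy sketch of a visual hull, in which 24 profiles carve a shared voxel block; the grid size and the cylindrical stand-in for Willème’s sitter are assumptions for illustration.

```python
import numpy as np

# Begin with a solid block of voxels: the sculptor's uncut material.
N = 64
volume = np.ones((N, N, N), dtype=bool)

# Coordinate grid centred on the figure being photographed.
axis = np.linspace(-1.0, 1.0, N)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")

# 24 cameras spaced evenly around the subject, as in Willème's studio.
# The subject here is a hypothetical cylinder of radius 0.5, whose
# side-on silhouette does not constrain height (z).
for theta in np.linspace(0.0, 2.0 * np.pi, 24, endpoint=False):
    # Coordinate across this camera's image plane (perpendicular to
    # its viewing direction in the horizontal plane).
    u = -x * np.sin(theta) + y * np.cos(theta)
    # The profile slice this camera records of the cylinder.
    silhouette = np.abs(u) <= 0.5
    # Carve away every voxel falling outside the recorded profile.
    volume &= silhouette

print(f"{volume.sum()} voxels survive all 24 profiles")
```

What remains after all 24 carvings is a many-sided prism approximating the cylinder: a volume produced by the intersection of partial profiles, not by the projection of a single all-seeing eye.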
Scanned models of school spaces
Media theorist Alexander Galloway (2021) argues that the manufacturing of space and the spatializing of vision in “photosculpture” align with the haptic vision of current digital sensors. This is a process that eliminates the very idea of point of view, as cameras spread out and dissipate across a volumetric and responsive space. Machine vision saturates and feels what it sees. The recursive palimpsest of machine vision collapses the point of view, expanding vision into the ecological posthuman, neither the “god’s eye view” nor the “view from nowhere”; visuality is now molecular and invaginated.
Reference
Galloway, A. (2021). Uncomputable: Play and politics in the long digital age. Verso.