The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the ‘knowledge’ in knowledge-based vision or form the ‘models’ in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong.