Measurements were made of human observers' performance both in segmenting regions of line-elements and in detecting line-element targets in stimuli containing several orientations. Performance was modelled by four artificial neural networks constructed from processing units trained to mimic the gross functionality of certain loosely defined classes of cortical cells. Model 1 contained modules sensitive to absolute orientation only, and it provided a poor fit to the human-performance data. Model 2 contained modules sensitive to orientation contrast: the outputs of these modules could be suppressed with fields of uniformly oriented line-elements. Model 3 contained orientation-contrast-sensitive modules of a different type: their outputs could be suppressed with fields of randomly oriented line-elements. Models 2 and 3 both successfully processed line-element arrays with orientation heterogeneities, but these models still provided inadequate fits to the human-performance data. Model 4 contained both types of orientation-contrast-sensitive modules; this model was able to account for human performance in the segmentation and detection tasks, both qualitatively and quantitatively.
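The models' orientation-contrast-sensitive modules can be illustrated with a toy sketch (this is an assumption-laden illustration, not the paper's implementation): a unit that responds to the circular orientation difference between a line-element and its immediate neighbours. Such a unit gives no output in a uniformly oriented field, matching the suppression property attributed to Model 2's modules, and a large output for a deviating target element. The function name `orientation_contrast` and the 8-neighbour averaging scheme are hypothetical choices for this sketch.

```python
import numpy as np

# Toy sketch (not the paper's model): an "orientation-contrast" unit that
# responds when a line-element's orientation differs from its neighbours'.
# Orientations are angles in [0, pi); distances are circular on that range.

def orientation_contrast(field, i, j):
    """Mean circular orientation difference between element (i, j)
    and its up-to-8 immediate neighbours (edge elements have fewer)."""
    h, w = field.shape
    diffs = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                d = abs(field[i, j] - field[ni, nj]) % np.pi
                diffs.append(min(d, np.pi - d))  # circular distance
    return float(np.mean(diffs))

# A uniformly oriented field: contrast is 0, so the unit is suppressed.
uniform = np.full((5, 5), np.pi / 4)

# The same field with one deviating target element at the centre:
# the unit responds strongly (pi/2, the maximum orientation difference).
target = uniform.copy()
target[2, 2] = 3 * np.pi / 4

print(orientation_contrast(uniform, 2, 2))  # 0.0
print(orientation_contrast(target, 2, 2))   # ~1.5708 (pi/2)
```

In this sketch, suppression by a uniform field falls out of the differencing itself; capturing the Model 3 property (suppression by randomly oriented fields) would require a different pooling rule, which the abstract does not specify.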