The visual environments we inhabit are populated by three-dimensional objects constructed from many different physical materials. The shape and material of an object can influence diverse behaviours – from deciding whether an object is edible (e.g. imitation vs. real fruit) to programming movements to pick it up (e.g. gripping a porcelain vs. rubber ornament). How does the human brain extract the visual signals that allow us to perceive similar objects to be made of different materials, and different objects to be made of the same material?

Here we aim to combine approaches from computer graphics, behavioural measurement, brain imaging and machine learning to understand the network of brain areas involved. We will examine the combination of binocular information (i.e. the slightly different views of objects we get from each eye) with monocular, image-based information about differences between objects made of different materials (e.g. smooth vs. rough surface texture; matte vs. shiny surface reflectance; velvet vs. hessian composition).

First, we will measure participants' responses when matte, specular or velvety stimuli are presented while they undergo fMRI scanning, to identify the network of brain areas involved in the discrimination process. Then, we aim to build a machine learning classifier that discriminates patterns of brain activity related to changes in 3D shape and material: our goal is for the classifier to predict the surface appearance of an unknown object from the information contained in the fMRI signals. Finally, we will test the functional importance of nodes in this brain network using transcranial magnetic stimulation (TMS), which can temporarily disrupt processing within particular regions of cortex. We expect the project to improve our understanding of how humans perceive 3D shapes and their material composition.
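The classifier step above can be sketched in miniature. The following is an illustrative toy example only, not the project's actual analysis pipeline: it simulates multi-voxel fMRI response patterns for three hypothetical material conditions (matte, specular, velvet) and asks whether a linear classifier can decode the material from the pattern using cross-validation. All numbers (trial counts, voxel counts, noise levels) are invented for illustration; a real analysis would use preprocessed per-trial response estimates from the scanner.

```python
# Toy MVPA sketch (illustrative assumptions throughout): can a linear
# classifier decode surface material from simulated voxel patterns?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200
materials = ["matte", "specular", "velvet"]

# Assume each material evokes its own (hypothetical) mean activity
# pattern across voxels; individual trials are noisy versions of it.
prototypes = rng.normal(0.0, 1.0, (len(materials), n_voxels))
y = rng.integers(0, len(materials), n_trials)          # material label per trial
X = prototypes[y] + rng.normal(0.0, 2.0, (n_trials, n_voxels))  # noisy trials

# 5-fold cross-validated decoding accuracy; chance level is 1/3.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.33)")
```

Accuracy reliably above chance on held-out trials is the evidence that the pattern of activity carries information about material; the same logic applies whether the features are simulated or real fMRI signals from a candidate brain region.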