Supervisor: Dr Laszlo Talas
Application deadline: Friday, November 15, 2024
About the project
Background:
The core challenge of understanding animal colouration is that animals exist in intricate ecosystems and interact differently with various observers, including predators, prey, potential mates and competitors [1]. Modelling optimal colouration for concealment or signalling is hard simply because of the sheer number of variables: the possible colours, patterns and shapes of animals, as well as the environments they inhabit. Studying these systems with biological observers is severely constrained by time, complexity and ethical considerations [2-4].
Advances in machine learning enable us to create artificial agents that mimic biological behaviour. Deep neural networks have demonstrated success in automated target detection, but they typically do not integrate biological vision: networks learn the best way to detect targets limited only by their mathematical complexity, rather than by the constraints of biological visual input [5].
Our team have previously shown that networks can model human performance in detecting camouflaged targets, effectively replicating aspects of human vision for certain tasks. Networks were used to predict human reaction times to colours [6] and textures [7], allowing us to test and validate vast parameter spaces to establish optimal concealment. Notably, the neural networks were not presented with images containing targets, but only with parameters describing those targets, offering the opportunity for the practical continuation of the methodology proposed here.
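To make the parameters-only approach concrete, here is a minimal sketch in PyTorch of a network that maps a target-descriptor vector to a predicted reaction time. The parameter count (N_PARAMS = 8), the architecture and the training data are placeholders for illustration, not values from refs [6,7]:

```python
import torch
import torch.nn as nn

# Hypothetical setup: each target is described by a small parameter vector
# (e.g. colour and texture statistics), and the network learns to map those
# parameters to a mean human reaction time.
N_PARAMS = 8  # assumed size of the target-parameter vector
model = nn.Sequential(
    nn.Linear(N_PARAMS, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),  # predicted reaction time (e.g. in seconds)
)

# Dummy training data standing in for psychophysics measurements.
params = torch.rand(256, N_PARAMS)
rts = torch.rand(256, 1) * 2 + 0.3

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(params), rts)
    loss.backward()
    opt.step()
```

Once trained, such a model can score arbitrary points of the parameter space, so vast candidate sets can be ranked without further human trials.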
Aims and objectives:
1. Create deep neural network models with both targets and their backgrounds included, enabling artificial observers to process the complete visual scene rather than focusing solely on target parameters.
2. Establish a framework that allows comparison of how biological observers (humans) and artificial agents perform target detection, in order to develop valid digital twins; for example, understanding which parts of a target each type of observer detects first.
3. Extend the methodology to non-human animals with different visual systems, in particular domestic chickens (Gallus domesticus), with a view to estimating optimal presentation contexts [8] (e.g. lighting conditions).
Methods:
1. Use texture-generating algorithms (e.g. based on reaction-diffusion equations [7,9]) to create large pattern spaces and parameterise them to control for visual similarity [10]; a minimal generator is sketched after this list.
2. Run computer-based psychophysics experiments on human participants to collect reaction-time data for a large set of targets, including where participants click on each target, with potential extensibility to eye tracking; a minimal trial loop is sketched after this list.
3. Develop convolutional deep neural network architectures that can predict reaction times to targets presented in particular contexts. Networks will also be able to output weights over image areas showing their relative importance in target detection (e.g. using Class Activation Mapping [11]); see the sketch after this list.
4. Develop and implement a paradigm to present a large number of experimental trials to non-human animals, with a focus on domestic chickens. The trials may use either physical objects or a computer-based display, as long as targets can be represented in a parameter space and reaction times to detection are measurable. The paradigm should also accommodate changes in the environment, for example adjustments in light levels.
5. Implement a system to efficiently validate the predictions using a limited number of biological observers (e.g. using Genetic Algorithms [4,7]); a toy version of this loop is sketched below.
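For Method 1, the following is a minimal sketch of a Gray-Scott reaction-diffusion texture generator in NumPy. The specific constants below (F, k, Du, Dv) are illustrative example values, not the project's; sweeping F and k moves the output through spots, stripes and labyrinths, so they can serve as axes of a parameterised pattern space:

```python
import numpy as np

def gray_scott(n=128, steps=5000, F=0.037, k=0.06, Du=0.16, Dv=0.08):
    """Generate a Turing-style texture with the Gray-Scott reaction-diffusion model."""
    U = np.ones((n, n))
    V = np.zeros((n, n))
    # seed a small central square to kick off pattern formation
    r = n // 10
    U[n//2-r:n//2+r, n//2-r:n//2+r] = 0.25
    V[n//2-r:n//2+r, n//2-r:n//2+r] = 0.5

    def lap(Z):
        # five-point Laplacian with periodic (wrap-around) boundaries
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0)
                + np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

    for _ in range(steps):
        uvv = U * V * V
        U += Du * lap(U) - uvv + F * (1.0 - U)
        V += Dv * lap(V) + uvv - (F + k) * V
    return V  # the V field holds the emergent pattern

texture = gray_scott()  # sweeping F and k yields a large, controllable pattern space
```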
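For Method 2, a reaction-time trial can be run with any display toolkit; the sketch below uses pygame purely as an assumed example, showing a low-contrast rectangle on a uniform background and recording the reaction time and click position for each trial:

```python
import random
import sys
import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))
clock = pygame.time.Clock()
results = []  # one (reaction_time_ms, click_x, click_y) tuple per trial

for trial in range(5):  # a real experiment would run many more trials
    target = pygame.Rect(random.randint(0, 750), random.randint(0, 550), 50, 50)
    screen.fill((128, 128, 128))                    # uniform background stand-in
    pygame.draw.rect(screen, (90, 90, 90), target)  # low-contrast target stand-in
    pygame.display.flip()
    onset = pygame.time.get_ticks()

    waiting = True
    while waiting:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit(); sys.exit()
            # only clicks landing on the target end the trial in this sketch
            if event.type == pygame.MOUSEBUTTONDOWN and target.collidepoint(event.pos):
                rt = pygame.time.get_ticks() - onset
                results.append((rt, *event.pos))
                waiting = False
        clock.tick(60)

pygame.quit()
print(results)
```

In a real experiment the grey rectangles would be replaced by generated textures and scenes, and misses, timeouts and calibration would need handling.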
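For Method 3, Class Activation Mapping [11] requires a convolutional backbone followed by global average pooling and a linear head; the head's weights then re-weight the final feature maps into a spatial importance map. Below is a minimal, untrained PyTorch sketch (the architecture and placeholder image are illustrative assumptions) adapted to reaction-time regression:

```python
import torch
import torch.nn as nn

class RTNet(nn.Module):
    """CNN predicting reaction time from a full scene (target plus background).
    Global average pooling before the final linear layer makes the network
    compatible with Class Activation Mapping."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(64, 1)  # regression: predicted reaction time

    def forward(self, x):
        fmap = self.features(x)                    # (B, 64, H', W')
        rt = self.head(self.gap(fmap).flatten(1))  # (B, 1)
        return rt, fmap

model = RTNet()
image = torch.rand(1, 3, 128, 128)  # placeholder scene
rt, fmap = model(image)

# CAM for a regression head: weight each feature map by its head coefficient
# and sum, giving a map of each image area's contribution to the predicted RT.
weights = model.head.weight[0]                       # (64,)
cam = (weights.view(-1, 1, 1) * fmap[0]).sum(dim=0)  # (H', W')
```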
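For Method 5, a Genetic Algorithm can search the pattern space using the network's predicted reaction times as a fitness proxy, so that only the fittest candidates need validating with biological observers. A toy sketch, where predict_rt is a hypothetical stand-in for the trained model from Method 3:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_rt(pop):
    """Stand-in for the trained network's predicted reaction times;
    here just a toy fitness surface for demonstration."""
    return -np.sum((pop - 0.5) ** 2, axis=1)

POP, DIM, GENS = 40, 8, 50
pop = rng.random((POP, DIM))  # each row is one target-parameter vector

for _ in range(GENS):
    fitness = predict_rt(pop)  # longer predicted RT = better concealed
    # truncation selection: keep the best half as parents
    parents = pop[np.argsort(fitness)[-POP // 2:]]
    # uniform crossover between randomly paired parents
    a = parents[rng.integers(len(parents), size=POP)]
    b = parents[rng.integers(len(parents), size=POP)]
    mask = rng.random((POP, DIM)) < 0.5
    pop = np.where(mask, a, b)
    # small Gaussian mutation, clipped back into the valid parameter range
    pop = np.clip(pop + rng.normal(0, 0.05, pop.shape), 0, 1)

best = pop[np.argmax(predict_rt(pop))]  # candidate to show biological observers
```

Concentrating validation on the top candidates of each generation is what lets a limited pool of biological observers cover an otherwise intractable parameter space.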
Key references:
1. Cuthill et al. (2017), https://doi.org/10.1126/science.aan0221
2. Bond & Kamil (2002), https://doi.org/10.1038/415609a
3. Bond & Kamil (2006), https://doi.org/10.1073/pnas.0509963103
4. Hancock & Troscianko (2022), https://doi.org/10.1111/evo.14476
5. Talas et al. (2019), https://doi.org/10.1111/2041-210X.13334
6. Fennell et al. (2019), https://doi.org/10.1098/rsif.2019.0183
7. Fennell et al. (2021), https://doi.org/10.1111/evo.14162
8. Lambton et al. (2010), https://doi.org/10.1016/j.applanim.2009.12.010
9. Turing (1952), https://doi.org/10.1098/rstb.1952.0012
10. Talas, Baddeley & Cuthill (2017), https://doi.org/10.1098/rstb.2016.0351
11. Minh (2023), https://doi.org/10.48550/arXiv.2309.14304
How to apply:
Please visit the Bristol Veterinary School website ('Funded 4-year PhD Scholarship | Bristol Veterinary School | University of Bristol') for details of how to apply and the information you must include in your application. If your application is shortlisted, you will be invited to interview on or before 17th January. Interviews will take place on Microsoft Teams on 29th January. The start date is September 2025.
Candidate requirements: Standard University of Bristol eligibility rules apply. Please visit the 'PhD Veterinary Sciences | Study at Bristol | University of Bristol' page for more information.
Contacts: Please contact fohs-pgadmissions@bristol.ac.uk with any queries about your application. Please contact the project supervisor with project-related queries: L.Talas@bristol.ac.uk

