On January 1st, 2019, I’ll start as Senior Data Scientist at Anchormen. Looking forward to returning to my roots in Artificial Intelligence.
For more information about Anchormen, visit here.
Utrecht Young Academy podcast by @sanli about my work and why I moved to an industry job: https://soundcloud.com/utrechtyoungacademy/voice-of-uya-ron-dotsch
The belief in physiognomy—the art of reading character from faces—has been with us for centuries. People everywhere infer traits (for example, trustworthiness) from faces, and these inferences predict economic, legal and even voting decisions. Research has identified many configurations of facial features that predict specific trait inferences, and detailed computational models of such inferences have recently been developed. However, these configurations do not fully account for trait inferences from faces. Here, we propose a new direction in the study of inferences from faces, inspired by a cognitive–ecological and implicit-learning approach. Any face can be positioned in a statistical distribution of faces extracted from the environment. We argue that understanding inferences from faces requires consideration of the statistical position of the faces in this learned distribution. Four experiments show that the mere statistical position of faces imbues them with social meaning: faces are evaluated more negatively the more they deviate from a learned central tendency. Our findings open new possibilities for the study of face evaluation, providing a potential model for explaining both individual and cross-cultural variation, as individuals are immersed in varying environments that contain different distributions of facial features.
Dotsch, R., Hassin, R. R., & Todorov, A. (2016). Statistical learning shapes face evaluation. Nature Human Behaviour, 1, 1. Retrieved from http://dx.doi.org/10.1038/s41562-016-0001. [materials & data]
On March 17, my PhD student Carmel Sofer successfully defended his dissertation “What is typical is good: The influence of face typicality on perceived trustworthiness” and received his PhD.
Congratulations Carmel!
(Dutch newspaper Trouw covered his work [read it on Blendle (Dutch)])
I contributed to an article in Quest about how accurate impressions from faces can be. Want to read it? Click here (€0.49 on Blendle).
For the Dutch speakers: Some of our work was featured in Daniël Wigboldus’ lectures at Universiteit van Nederland. See all lectures here. The lecture on our reverse correlation work is the third one.
Daniël Wigboldus and I wrote a commentary on Klaas Sijtsma’s “Playing with Data—Or How to Discourage Questionable Research Practices and Stimulate Researchers to Do Things Right” Psychometrika paper. Read it here in open access (and click here for the commentary on our commentary).
An interesting discussion about the message of the commentary is emerging on the Open Science Framework listserv, here.
If you have generated reverse correlation stimuli with my older Python script and now want to analyze your data with the much more user-friendly rcicr, feel free to download these two Python-to-R conversion scripts.
Then follow these steps:
1) Make sure you have rcicr version 0.3.0 or higher installed. Currently, this means you have to install the development version (see here).
2) Adapt single_gzipped_pickle_to_csv.py to point to the right .pkl file generated by the reverse correlation Python scripts (this is the file that contains the contrast parameters for each stimulus and is also used in the analysis) and run it. It will output a contrasts.csv file (see the sketch below these steps for what this conversion boils down to).
3) If your base image is not 256×256, 512×512, or 1024×1024, rescale it with any image editor you have available.
4) Then edit create_r_data_file.r. You only need to edit the lines between “Customize below” and “Until here”. Make sure the contrasts.csv file and the base image JPEG are in the working directory you specified, then run the script in R.
And you’re done. A new .rdata file is generated, which you can reference with rcicr.
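For reference, the conversion in step 2 boils down to something like the following minimal sketch. This is not the actual script; it assumes the .pkl file is a gzipped pickle holding a mapping from stimulus id to its vector of contrast parameters, and the file name is a placeholder.

```python
# Hypothetical sketch of the .pkl -> contrasts.csv conversion (step 2).
# Assumption: the gzipped pickle maps stimulus id -> contrast parameter vector.
import csv
import gzip
import pickle

with gzip.open("stimuli_parameters.pkl", "rb") as pkl_file:  # placeholder file name
    contrast_params = pickle.load(pkl_file)

with open("contrasts.csv", "w", newline="") as csv_file:
    writer = csv.writer(csv_file)
    for stimulus_id, contrasts in sorted(contrast_params.items()):
        # one row per stimulus: stimulus id followed by its contrast parameters
        writer.writerow([stimulus_id, *contrasts])
```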