Mechanisms of intentional joint visual attention
DOI: https://doi.org/10.4454/philinq.v7i1.239

Keywords: Intention, Joint Visual Attention, Computational Model, Human-Robot Interaction

Abstract
People communicate with others through intention. This is likewise true for the primitive behavior of joint visual attention: directing one's attention to an object another person is looking at. However, the mechanism by which intention, a kind of internal state, causes this behavior is unclear. In this paper, we construct a simple computational model for examining these mechanisms, investigating in particular mechanisms for categorizing visual input and for recalling and comparing these categories. In addition, we lay out interaction experiments involving a human and a robot equipped with the constructed computational model, which serve as a platform for verifying the intentionality demonstrated by these mechanisms.
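To make the abstract's two mechanisms concrete, the sketch below illustrates one possible reading of "categorizing visual input" and "recalling and comparing these categories" as a nearest-prototype scheme driving a joint-attention decision. This is not the authors' implementation; the class and parameter names (Categorizer, match_threshold, joint_attention_step) are hypothetical and the feature vectors are toy data.

```python
# Illustrative sketch only (assumed, not taken from the paper): visual inputs
# are categorized against stored prototypes; the category of the partner's
# gaze target is compared with the category of the robot's current view to
# decide whether to shift attention (joint visual attention).
import numpy as np


class Categorizer:
    def __init__(self, match_threshold=0.5):
        self.prototypes = []                  # stored category centers
        self.match_threshold = match_threshold

    def categorize(self, features):
        """Return the index of the closest stored category, creating one if none match."""
        if self.prototypes:
            dists = [np.linalg.norm(features - p) for p in self.prototypes]
            best = int(np.argmin(dists))
            if dists[best] < self.match_threshold:
                # Recall: nudge the matching prototype toward the new input.
                self.prototypes[best] = 0.9 * self.prototypes[best] + 0.1 * features
                return best
        self.prototypes.append(features.copy())
        return len(self.prototypes) - 1


def joint_attention_step(categorizer, own_view, partner_gaze_view):
    """Compare categories of the robot's current view and the partner's gaze target."""
    own_cat = categorizer.categorize(own_view)
    partner_cat = categorizer.categorize(partner_gaze_view)
    return "shift attention" if own_cat != partner_cat else "keep attention"


# Toy usage: two distinct "objects" represented as 3-D feature vectors.
cat = Categorizer()
print(joint_attention_step(cat, np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])))
```

In this reading, "intention" would enter as whatever internal state selects when the comparison is made and whether its outcome is acted on; the sketch only covers the categorization and comparison steps named in the abstract.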