This paper describes a project currently being undertaken at the National Research Council. The project addresses two much-discussed topics:
• the relation between the semantics of language and vision
• the notion of symbol-grounding.
In this paper we concentrate on spatial prepositions; more specifically, we are interested here in projective prepositions (e.g., “in front of”, “to the left of”), which have in the past been treated as semantically uninteresting. We demonstrate that projective prepositions are in fact problematic...
This paper proposes a parallel and distributed computational model called Cellular Frame Model for hierarchical labelling of partial and global parts of images by words. The labelling is regarded as a process for integrating images and words. An objective model world is described by defining a...
Document understanding, the interpretation of a document from its image form, is a technology area which benefits greatly from the integration of natural language processing with image processing. We have developed a prototype of an Intelligent Document Understanding System (IDUS) which employs...
This contribution is based on two previously published approaches, one of which automatically extracts vehicle trajectories from image sequences of traffic scenes and associates these trajectories with motion verbs. The second approach exploits machine vision in order to maneuver autonomous road...
In the last few years, within cognitive science, there has been a growing interest in the connection between vision and natural language. The question of interest is: How can we discuss what we see? With this question in mind, we will look at the area of incremental route descriptions. Here, a...
The advent of virtual reality (VR) introduced a paradigm for human-to-human communication in which 3-D shapes can be manipulated in real time in a new kind of computer supported cooperative workspace (CSCW) (Takemura and Kishino 1992). However, mere manipulation — either with 3-D input devices...
In this paper, we present two major parts of an interface for American Sign Language (ASL) to computer applications, currently under development: a hand tracker and an ASL parser. The hand tracker extracts information about handshape, position, and motion from image sequences. As an aid in this process,...