

Implementation details for the observer agents

Observer agents are responsible for detecting and tracking objects of interest, and for adjusting the viewing direction of the associated camera in order to follow the current object or to search for new ones. Since the desired system consists of several observers, besides the coordinate systems of the image $(o,x,y)$ and the camera $(C,X,Y,Z)$ it is necessary to define a common reference coordinate system of the scene, $(O,K,L,M)$. An important property of the considered scenes is that the objects of interest move within a horizontal ground plane $\pi$. It is therefore convenient to align the pan axis of the camera with the normal of $\pi$, and to choose camera and world coordinate systems whose upright axes $Z$ and $M$ coincide with that direction (see Fig. 3).

Figure 3: The observer agent imaging geometry.
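With this choice of axes, mapping an observer's measurement from its camera frame $(C,X,Y,Z)$ into the common scene frame $(O,K,L,M)$ reduces to a rotation about the shared upright axis plus a translation. The following Python sketch illustrates this; the names `cam_pos` and `cam_yaw` are illustrative assumptions, not details taken from the paper:

```python
import math

def camera_to_scene(p_cam, cam_pos, cam_yaw):
    """Transform a point from camera coordinates (C,X,Y,Z) into the
    common scene frame (O,K,L,M).

    Because the upright axes Z and M both coincide with the
    ground-plane normal, the transform reduces to a rotation about
    the vertical axis by the camera's yaw angle `cam_yaw`, followed
    by a translation by the camera position `cam_pos` (both names
    are hypothetical).
    """
    x, y, z = p_cam
    c, s = math.cos(cam_yaw), math.sin(cam_yaw)
    k = c * x - s * y + cam_pos[0]
    l = s * x + c * y + cam_pos[1]
    m = z + cam_pos[2]  # vertical axes coincide, so no rotation here
    return (k, l, m)
```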

In order to infer the 3D position in camera coordinates, $P(X_P,Y_P,-h)$, from the position of the object in the image plane, $p(x_p,y_p)$, several transformations must be performed, based on precalibrated intrinsic and extrinsic [15] camera parameters and the known angular position of the camera $(\phi,\theta)$. In theory, the only error of the obtained position is caused by the finite height of the tracked object, but in practice several other error sources come into effect: imperfect estimates of the camera parameters, incomplete compensation of lens distortion, and geometric imperfections of the camera controller (the offset of the projection center from the intersection of the pan and tilt axes).
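As an illustration of the transformations involved, the following Python sketch back-projects an image point onto the ground plane under an idealised pinhole model. The axis conventions, the focal length `f` (in pixels), and the principal point `(cx, cy)` are assumptions for the sake of the example, not details taken from the paper:

```python
import math

def backproject_to_ground(x_p, y_p, f, cx, cy, pan, tilt, h):
    """Estimate the 3D position P(X_P, Y_P, -h) of a ground-plane
    point from its image projection p(x_p, y_p), given the camera's
    angular position (pan, tilt) and its height h above the plane.

    Assumed conventions: pinhole camera with focal length f (pixels)
    and principal point (cx, cy); in the camera frame the optical
    axis points along +X when pan = tilt = 0, image x maps to -Y,
    image y maps to -Z, and the Z axis is the upright (pan) axis.
    """
    # Viewing ray of the pixel in the unrotated camera frame.
    d = (f, -(x_p - cx), -(y_p - cy))
    # Tilt: rotation about the Y axis (tilt > 0 looks downward).
    ct, st = math.cos(tilt), math.sin(tilt)
    d = (d[0] * ct + d[2] * st, d[1], -d[0] * st + d[2] * ct)
    # Pan: rotation about the upright Z axis.
    cp, sp = math.cos(pan), math.sin(pan)
    d = (d[0] * cp - d[1] * sp, d[0] * sp + d[1] * cp, d[2])
    if d[2] >= 0:
        return None  # the ray never reaches the ground plane
    s = -h / d[2]    # scale the ray so that its Z coordinate is -h
    return (s * d[0], s * d[1], -h)
```

For instance, the principal point viewed with a downward tilt of $\arctan(h/D)$ lands on the ground plane at distance $D$ in front of the camera, as expected from the geometry of Fig. 3.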

The main requirement for observer agents is real-time detection and tracking of objects of interest within the current field of view. Additionally, they are required to exchange the following data with the coordinator: (i) clock synchronization and extrinsic camera parameters (at registration time), (ii) the current viewing direction (after each change), and (iii) a time-stamped list of detected objects in camera coordinates (after each processed image). With respect to autonomous camera movement, observers operate in one of the following modes: seeking (the camera searches for an object, and on success the mode is switched to 'tracking'), tracking (the viewing direction follows the active object), or immobile (the viewing direction does not change). Finally, they listen for control messages from the coordinator and switch operating modes or move the camera accordingly.
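The three operating modes and the coordinator override can be summarised as a small state machine. This Python sketch is illustrative only; the class and method names are hypothetical and not taken from the paper:

```python
from enum import Enum

class Mode(Enum):
    """The three observer modes with respect to autonomous camera movement."""
    SEEKING = "seeking"
    TRACKING = "tracking"
    IMMOBILE = "immobile"

class ObserverModeLogic:
    """Minimal sketch of an observer's mode transitions (names hypothetical)."""

    def __init__(self):
        self.mode = Mode.SEEKING

    def on_detection(self, found):
        # While seeking, a successful detection switches the observer
        # to tracking the newly found object.
        if self.mode is Mode.SEEKING and found:
            self.mode = Mode.TRACKING

    def on_target_lost(self):
        # Losing the active object returns the observer to seeking.
        if self.mode is Mode.TRACKING:
            self.mode = Mode.SEEKING

    def on_coordinator_message(self, requested_mode):
        # Control messages from the coordinator override the
        # observer's autonomous behaviour.
        self.mode = requested_mode
```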


Sinisa Segvic 2003-02-25