2026

  1. Tabrizi, M. K., Chi, M., Dey, B. B., Yuan, K., Solbach, M. D., Liu, Y., Jenkin, M., and Tsotsos, J. K., DIJIT: A Robotic Head for an Active Observer, IEEE Robotics and Automation Letters, 1-8, 2026.
    We present DIJIT, a novel binocular robotic head expressly designed for mobile agents that behave as active observers. DIJIT's unique breadth of functionality enables active vision research and the study of human-like eye and head-neck motions, their interrelationships, and how each contributes to visual ability. DIJIT is also being used to explore the differences between how human vision employs eye/head movements to solve visual tasks and current computer vision methods. DIJIT's design features nine mechanical degrees of freedom, while the cameras and lenses provide an additional four optical degrees of freedom. The ranges and speeds of the mechanical design are comparable to human performance. DIJIT attains 85% of the peak human saccade speed. Our design includes the ranges of motion required for convergent stereo, namely, vergence, version, and cyclotorsion. Here, we present DIJIT and some aspects of its performance. We also present a novel method for saccadic camera movements, using a direct relationship between camera orientation and motor values. The resulting saccadic camera movements are close to human movements in terms of their accuracy, with 1.17 deg and 1.14 deg mean error for the left and right cameras, respectively.
  2. Hogan, F., Tremblay, J.-F., Baghi, B. H., Jenkin, M., Siddiqi, K., and Dudek, G. L., US Patent 12,515,352 B2, 2026.
    A method for identifying and manipulating objects may include obtaining, from an image sensor, image sensor data; identifying, using the image sensor data, a location of an object; controlling a robotic element, which includes the image sensor, to move towards the location of the object; determining a slippage based on contact between the image sensor and the object; and controlling a movement of the robotic element based on the determined slippage.