Embedded Vision Systems

blackfin_microncamera_small.png

The laboratory conducts research on low-level vision systems. Current work investigates how Digital Signal Processors (DSPs) can be combined with special-purpose computing hardware, implemented in a Field-Programmable Gate Array (FPGA), to produce mixed hardware/software systems that are more efficient in terms of power, performance, space, and stability.

Stereoscopic 3D Reconstruction using Motorized Zoom Lenses

within an Embedded System

Pengcheng Liu, Andrew Willis, Yunfeng Sui

UNC - Charlotte, 9201 University City Blvd., Charlotte, NC 28223

ABSTRACT

Motivation

Stereoscopic reconstruction systems are found in a number of environments and have been under development for over 20 years. Ongoing changes in imaging technologies have driven continual theoretical development and technical variations on the stereoscopic reconstruction problem. This paper describes a novel stereoscopic 3D reconstruction system meant to act as a 3D sensing payload for a terrestrial robot.

Methods

Novel theoretical and technical aspects of the system are tied to two aspects of the system design that deviate from typical stereoscopic reconstruction systems: (1) incorporation of a 10x zoom lens (Rainbow-H10x8.5) and (2) implementation of the system on an embedded DSP/FPGA system.
Hardware: The system is implemented on a mixed DSP/FPGA platform consisting of a Blackfin DSP (BF537-ezkit) and a Xilinx Spartan3 FPGA. The DSP orchestrates data flow through the system and performs the complex computational tasks. The FPGA acts as an interface between the DSP and the system devices, which include the camera CMOS sensors and the servo motors that rotate (pan) each camera. The entire system runs on an embedded version of the Linux operating system called μClinux, with 64 MB of available memory.

Software: Calibration of the camera pair is accomplished using a collection of stereo images that view a common chess board calibration pattern for a set of pre-defined zoom positions. Calibration settings for an arbitrary zoom setting are then obtained by interpolation of the camera parameters. Classical techniques are used to rectify images using the estimated calibration parameters. Dense stereo matching is performed on the stereo image pairs using a custom adaptation of a dynamic programming algorithm proposed by Filho & Aloimonos in 2006, which requires little memory and provides high performance compared to other techniques while sacrificing accuracy by limiting the number of paths considered in the disparity space. Subsequent 3D surface reconstruction is accomplished by classical triangulation of the matched points from the disparity map.
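The zoom-dependent calibration step above can be sketched as follows. This is a minimal illustration, not the system's actual code: the zoom positions and focal lengths below are invented placeholder values, and a real implementation would interpolate all intrinsic (and possibly extrinsic) parameters, not just the focal length.

```python
import numpy as np

# Hypothetical calibration table: focal length (in pixels) estimated at a
# few pre-defined zoom positions (values are illustrative only).
zoom_positions = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # normalized zoom setting
focal_lengths = np.array([850.0, 1400.0, 2100.0, 3000.0, 4200.0])

def interpolate_focal(zoom):
    """Linearly interpolate the focal length for an arbitrary zoom setting."""
    return float(np.interp(zoom, zoom_positions, focal_lengths))

# A zoom setting between the 0.5 and 0.75 presets gets an in-between focal length.
f = interpolate_focal(0.6)
```

The same one-dimensional interpolation would be applied per calibrated parameter; a denser table of pre-defined zoom positions trades calibration effort for interpolation accuracy.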

Results:

The paper includes descriptions of and results for our solutions to the following problems: (1) automatic computation of the focus and exposure settings for the lens and camera sensor, (2) calibration of the system for various zoom settings, (3) automatic control of the extrinsic parameters as a function of the zoom setting to ensure the camera pair has an overlapping field-of-view, (4) our adaptation of the dense matching algorithm and (5) stereo reconstruction results for several free form objects. Preliminary results for reconstructing a small figurine are provided in Figure 1.

eimaging_stereo_fig1.png

Figure 1. (a) shows an image of the embedded system. (b,c) show images of the figurine from the left (b) and right (c) cameras respectively. (d) is a preliminary 3D reconstruction of the figurine.

A large community of developers has emerged over the past five years in support of open-source software for embedded applications. Towards this end, the Linux operating system has been customized to run on a wide variety of the special-purpose microprocessors commonly used in embedded systems. This embedded version of Linux is called uClinux ("you-see-linux"). One group within this community focuses on making uClinux compatible with the Analog Devices Blackfin series of DSPs. We are currently researching mixed FPGA/DSP embedded applications and found that using the Analog Devices EZ-FPGA daughterboard in conjunction with the Analog Devices evaluation boards for the BF537 processor caused problems in uClinux. This article describes our approach for successfully using the two boards together, which requires hardware modifications to the EZ-FPGA board that we describe in detail below.

Hardware modifications to the FPGA daughter card were necessary to make the card work with the BF537 EZKit board (or the BF537 STAMP board). The modifications involve disconnecting the wires associated with the BF537 Ethernet interface and several other connections, all of which are located on the 90-pin header J3 that is part of the "U"-shaped connector on the bottom of the board, also referred to as the Expansion Interface type B. When the FPGA board is connected to the BF537 DSP board through this interface, some of the pins on the J3 connector interfere with pins on the network chip of the BF537 board; disconnecting them allows Ethernet on the BF537 to work properly. The following image indicates the pins that need to be disconnected (marked with red blocks):

fpga_modification.jpg

An image taken from the schematics of the FPGA daughter card is included. The connections which must be disabled for the daughterboard to operate are shown in red. We accomplished this by lifting the corresponding feet of these connections off the J3 expansion interface, which, if necessary, may be re-attached at a later time.

There are also some useful tips that one should be aware of when using the FPGA daughter card with these BF537 development boards. We use Xilinx ISE 9.2i to write Verilog HDL code for the FPGA. There are some important default settings that must be changed to make sure the BF537 will still operate properly when connected to the FPGA daughterboard. Specifically, in the Xilinx ISE process properties window one must change the configuration option for unused I/O pins. The default setting for unused I/O pins is "PULL UP"; this setting must be changed to "FLOATING."
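For reference, the same setting can be applied from the command line when generating the bitstream. This is a sketch only: the file names are placeholders, and the exact option spelling should be checked against the documentation for your ISE version.

```shell
# Generate the bitstream with unused I/O pins left floating (no pull resistor),
# the command-line equivalent of the "FLOATING" process property in ISE.
# design.ncd / design.pcf / design.bit are placeholder file names.
bitgen -g UnusedPin:Pullnone design.ncd design.bit design.pcf
```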

As a final note, one must also properly configure the FPGA board jumpers to ensure that both the FPGA and the BF537 can boot up and that the FPGA program is loaded from the onboard flash on reset.

 

Project/Proposal Title : Power Efficient Implementation of a Hardware Accelerated Real-Time 3D Reconstruction System

Source of Support : North Carolina Space Grant (NC Spacegrant)

Description : This project's goal is to develop a power-efficient system intended to act as a potential 3D scanning instrument for a spacecraft payload. The system adopts a hybrid hardware approach that combines the flexibility of a Digital Signal Processor (DSP) with the speed of purpose-built algorithms implemented in a Field-Programmable Gate Array (FPGA) to generate real-time 3D surface measurements using stereoscopic reconstruction techniques. Research contributions specific to this project include a power-sensitive overall instrument design, varifocal 3D reconstruction capable of maintaining accuracy farther away from the spacecraft, and FPGA-based hardware acceleration of this computationally intensive problem.

Project/Proposal Title : Senior Design Project : RealTime Stereoscopic 3D Reconstruction on Low-Power FPGA Systems

Source of Support : NC Space Grant : ESMD Design Award

Description : This project funds a senior design project, i.e., a 4-member team of senior-level undergraduate students, who will design and implement an FPGA system capable of performing real-time stereo reconstruction of 3D objects from streaming input image data.

Summary:

Our goal here is to calculate the 3D surface locations of an object. There are several ways of accomplishing this, but in this article we concentrate on stereoscopic reconstruction, which estimates the depth of an object using images generated from two cameras that view it. As a project in my Computer Vision course (ECGR6090), I implemented such a system.

Below is the setup of a simple stereoscopic reconstruction system, with two cameras imaging the same object. Using the images, the known relative geometric positions and orientations of the cameras, and knowledge of how each camera forms images, i.e., details regarding their lenses and image sensors, we can reconstruct the 3D positions of points that lie on the viewed object surface.

blendertwocamera.jpg
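The depth computation behind this setup can be sketched for the simplest case of a rectified camera pair. The numbers in the usage line are illustrative assumptions, not measurements from the system described above.

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth of a matched point in a rectified stereo pair.

    For rectified cameras, a point matched at disparity d (pixels)
    lies at depth Z = f * B / d, where f is the focal length in
    pixels and B is the baseline (distance between camera centers).
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px

# Illustrative values (assumptions): f = 1000 px, B = 0.12 m, d = 40 px.
z = depth_from_disparity(1000.0, 0.12, 40.0)  # depth in meters
```

Note the inverse relationship: nearby points produce large disparities and distant points produce small ones, which is why depth resolution degrades with range for a fixed baseline.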

A Simple, Cost-Effective Active Range Sensing System

Christopher Mack

My ongoing thesis project is designing and implementing a simple, cost-effective active range sensing system. An active range sensing system uses a camera and a light source to calculate the 3D coordinates of an object. Knowing the positions of the light projector and the camera, the depth of the imaged laser light on the object can be calculated using triangulation. The main issues for the project are designing and building a cost-effective system, implementing the needed image processing and computer vision code in Java, and refining the system to provide accurate results. Below is the basic setup of an active range sensing system.

activerangesystemnopt.jpg
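The triangulation at the heart of such a system can be sketched for a single laser point. The coordinate convention and the numbers in the usage line are illustrative assumptions, not details of the thesis system.

```python
import math

def range_by_triangulation(baseline_m, cam_angle_rad, laser_angle_rad):
    """Depth of a laser spot seen by a camera (single-point triangulation).

    Convention (an assumption for this sketch): the camera sits at the
    origin looking along +Z, the laser projector sits a baseline b away
    along +X, and both ray angles are measured from the Z axis toward
    each other. The two rays then intersect at depth
        Z = b / (tan(a_cam) + tan(a_laser)).
    """
    return baseline_m / (math.tan(cam_angle_rad) + math.tan(laser_angle_rad))

# Illustrative values: b = 0.5 m with both rays at 45 degrees.
z = range_by_triangulation(0.5, math.radians(45), math.radians(45))
```

In practice the camera angle is recovered from the pixel position of the laser spot via the camera calibration, and the laser angle from the projector's known orientation; sweeping the laser over the object yields a full range image one stripe or point at a time.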