Research News

Research results and progress are posted here.
UI Engineering alumnus Thaddeus Thomas (BS 2004, MS 2007 biomedical engineering) recently received the Clinical Biomechanics Award at the 34th annual meeting of the American Society of Biomechanics.  Thomas was recognized for outstanding new biomechanics research targeting a contemporary clinical problem. The award is sponsored by Elsevier Science, Ltd., publishers of Clinical Biomechanics, an international multidisciplinary journal of musculoskeletal biomechanics.

Two finalists were selected from the top 10th percentile of the more than 500 abstracts submitted to the Society’s annual meeting; the finalists then made competitive podium presentations judged by the ASB Awards Committee. The award consists of an engraved plaque and a check for $1,000.

The ASB was founded in October 1977 by a group of 53 scientists and clinicians, and its first annual meeting was held that year in Iowa City. The ASB mission is to encourage and foster the exchange of information and ideas among biomechanists working in different disciplines and fields of application and to facilitate the development of biomechanics as a basic and applied science.

Thomas is a graduate research assistant in the Department of Orthopaedics and Rehabilitation, as well as a PhD candidate in the Department of Biomedical Engineering at The University of Iowa. His presentation was entitled “Virtual pre-operative reconstruction planning for comminuted articular fractures,” co-authored by Donald D. Anderson, J. Lawrence Marsh, and Thomas D. Brown (University of Iowa), and by Andrew R. Willis (University of North Carolina at Charlotte).

Also at the annual meeting, Donald Anderson, research associate professor of orthopaedics and rehabilitation, and biomedical engineering, was elected president of the ASB.

scanner_image
Research at the UNCC Vision Lab has produced an inexpensive 3D scanner that is portable, accurate, and capable of "wrapping" photographs over the 3D meshes it produces. The system is powered by a SICK LMS 200 LIDAR sensor that captures 3D (x,y,z) coordinates at a rate of up to 27k points per second. Each measurement records the (x,y,z) position of surfaces within the line-of-sight of the scanner, with an average error of about 2 cm. The 3D surface samples are integrated in real time with photographs from a web camera, also controlled by the scanning software, to create a textured 3D mesh of the scene in the vicinity of the scanner. The system is highly configurable and allows users to specify a region of interest for data capture ranging from a small surface patch (~1 sq. m.) to a 360-degree view of all surfaces within 60 m of the scanning sensor. A dense 360-degree scan can take 2-3 minutes to capture; less dense scans covering smaller areas may be captured much faster. The scanner output is a sequence of Alias-Wavefront (Maya-compatible) OBJ files; each file includes a portion of the 3D scan and an image from the web camera that is overlaid onto the mesh using texture mapping. The system was successfully used to capture data from Mayan architecture in the Puuc region of the Yucatan peninsula in Mexico in May 2010.
===============
Technical Specs
===============
DataRate: up to 27k 3D points/sec
Vertical Field of view: Configurable from straight up (0 degrees) to almost straight down (150 degrees) -- occlusion occurs due to tripod mount.
Horizontal Field of view: Configurable up to 360 degrees
Accuracy: ~2 cm.
Weight: ~22 kg.
Output: OBJ format 3D files and JPG images (for texture mapping)
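The basic data path, from a polar LIDAR sample to a textured OBJ file, can be sketched as below. This is a minimal illustration of the output format only; the function names, the angle convention, and the material-file naming (`scan.mtl`, `scan_texture`) are assumptions for illustration, not the lab's actual scanning software.

```python
import math

def polar_to_cartesian(pan_deg, tilt_deg, range_m):
    """Convert a (pan, tilt, range) LIDAR sample to an (x, y, z) point.

    Convention (an assumption for illustration): pan rotates about the
    vertical axis; tilt is measured down from straight up (0 degrees),
    matching the 0-150 degree vertical field of view in the specs.
    """
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    x = range_m * math.sin(tilt) * math.cos(pan)
    y = range_m * math.sin(tilt) * math.sin(pan)
    z = range_m * math.cos(tilt)
    return (x, y, z)

def write_textured_obj(path, vertices, uvs, faces, texture_jpg):
    """Write an Alias-Wavefront OBJ mesh with per-vertex texture coordinates.

    `faces` holds 0-based vertex-index triples; the JPG named by
    `texture_jpg` supplies the texture map referenced by the material.
    """
    with open(path, "w") as f:
        f.write(f"# texture image: {texture_jpg}\n")
        f.write("mtllib scan.mtl\nusemtl scan_texture\n")
        for (x, y, z) in vertices:
            f.write(f"v {x:.4f} {y:.4f} {z:.4f}\n")
        for (u, v) in uvs:
            f.write(f"vt {u:.4f} {v:.4f}\n")
        for (a, b, c) in faces:
            # OBJ indices are 1-based; vertex and texture indices pair up here.
            f.write(f"f {a+1}/{a+1} {b+1}/{b+1} {c+1}/{c+1}\n")
```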

Two views of a scan of a Mayan facade from the Kiuic archaeological site are shown below.
mayascan_01
mayascan_02
2009_iccv
During the period September 22 to October 4, Yunfeng Sui attended the International Conference on Computer Vision (ICCV), where he presented joint work with Dr. Willis on the development of a fast corner detector. This work is described in detail in the paper entitled "An Algebraic Model for Fast Corner Detection." The paper revisits the classical problem of detecting interest points, popularly known as "corners," in 2D images by proposing a technique based on fitting algebraic shape models to contours in the edge image. Our method for corner detection is targeted for use on structural images, i.e., images that contain man-made structures, for which corner detection algorithms are known to perform well. Further, our detector seeks image regions that contain two distinct linear contours that intersect. We define the intersection point as the corner, and, in contrast to previous approaches such as the Harris detector, we consider the spatial coherence of the edge points, i.e., the fact that the edge points must lie close to one of the two intersecting lines, an important aspect of stable corner detection. Comparisons between results for the proposed method and those for several popular feature detectors are provided using input images exhibiting a number of standard image variations, including blurring, affine transformation, scaling, rotation, and illumination variation. A modified version of the repeatability rate, which requires a 1-to-1 mapping between matched features, is proposed for evaluating the stability of the detector under these variations. Using this performance metric, our method is found to perform well in comparison with several current methods for corner detection. Discussion is provided that motivates our method of evaluation and explains the observed performance of our algorithm relative to the other algorithms.
Our approach is distinct from other contour-based methods in that we need only compute the edge image, from which we explicitly solve for the unknown linear contours and their intersections to obtain image corner location estimates. The key benefits of this approach are (1) performance in space and time, since no image pyramid (space) and no edge linking (time) are required, and (2) compactness: the estimated model includes the corner location and the directions of the incoming contours, i.e., a complete model of the local corner geometry.
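The core geometric step, fitting two lines to edge-point clusters and taking their intersection as the corner, can be sketched as follows. This is a generic total-least-squares illustration of the idea, not the algebraic shape model from the paper itself, and the function names are assumptions.

```python
import numpy as np

def fit_line(points):
    """Total-least-squares 2D line fit: returns (centroid, unit direction).

    The direction is the principal singular vector of the centered points,
    so edge points are assumed to lie close to a single line.
    """
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[0]

def intersect_lines(c1, d1, c2, d2):
    """Intersection of lines c1 + t*d1 and c2 + s*d2 (assumed non-parallel)."""
    A = np.column_stack([d1, -d2])
    t, _ = np.linalg.solve(A, c2 - c1)
    return c1 + t * d1

def corner_from_edge_points(cluster1, cluster2):
    """Corner estimate = intersection of the two fitted edge lines."""
    c1, d1 = fit_line(cluster1)
    c2, d2 = fit_line(cluster2)
    return intersect_lines(c1, d1, c2, d2)
```

Because each cluster is fit as a whole, edge points far from either line degrade the fit, which reflects the spatial-coherence idea described above.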

iccv_3dim_pic
Student Yunfeng Sui traveled to the IEEE International Workshop on 3-D Digital Imaging and Modeling, held October 3-4, 2009 in Kyoto, Japan. There he presented a paper entitled "Virtual 3D Bone Fracture Reconstruction via Inter-Fragmentary Surface Alignment," which details how this task is accomplished. The system takes as input a collection of bone fragment models represented as surface meshes, typically segmented from CT data. Users interact with the fragment models in a virtual environment to reconstruct the fracture. In contrast to other approaches that are either completely automatic or completely interactive, the system attempts to strike a balance between interaction and automation. There are two key fracture reconstruction interactions: (1) specifying matching surface regions between fragment pairs and (2) initiating pairwise and global fragment alignment optimizations. Each match consists of two fragment surface patches hypothesized to correspond in the reconstruction. Each alignment optimization initiated by the user triggers a 3D surface registration that takes as input (1) the specified matches and (2) the current positions of the fragments. The proposed system leverages domain knowledge via user interaction and incorporates recent advancements in surface registration to generate fragment reconstructions that are more accurate than manual methods and more reliable than completely automatic methods.
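The pairwise alignment step, a rigid registration driven by corresponding points on the user-matched surface patches, can be sketched with the classical Kabsch/Procrustes solution. This is a generic stand-in for one step of the pipeline, not the paper's full surface-registration method.

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid transform (R, t) mapping `source` onto `target`.

    Inputs are N x 3 arrays of corresponding points, e.g. samples drawn
    from two matched fracture-surface patches. Classical Kabsch solution.
    """
    P = np.asarray(source, dtype=float)
    Q = np.asarray(target, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force a proper rotation (det(R) = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

In an interactive loop like the one described above, this solve would be re-run each time the user adds a match or perturbs a fragment's position.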

Experimental Low-Cost LIDAR Scanner Implementation Complete 

The UNCC Machine Vision Lab has acquired an AR4000 range sensor from Acuity Laser Measurement. The system is capable of capturing 3D surface range data at distances of up to 40 feet from the sensor.

ar4000_sensor.jpg 

We have recently developed a Linux driver for this sensor. The source for the driver is available from our Subversion repository under the link "laser-driver." The driver is currently compatible with Linux kernel versions 2.6.16-2.6.18.


Small Code Footprint DICOM Image Loader Completed 

Recent work involving medical images has motivated the implementation of a robust DICOM image reader. This work has now been completed, resulting in one of the most complete open-source Java implementations of the DICOM specification. The implementation is capable of loading 8-bit grayscale, 8-bit color, 16-bit grayscale, and 24-bit color DICOM images, where the image data may be uncompressed, run-length encoded, JPEG-lossless compressed, or JPEG-lossy compressed. Also of interest is the implementation of the spatial (sequential) lossless encoding mode (SOF3) of the ISO/IEC JPEG standard, also known as JPEGL. Note that this IS NOT an implementation of JPEG-LS. It is an implementation of the original lossless JPEG coding scheme as specified in the ORIGINAL JPEG International Organization for Standardization (ISO) specifications:

  • ISO/IS-10918-1 (JPEG Part 1)
  • ISO/IS-10918-2 (JPEG Part 2)

JPEG-LS, by contrast, is specified in ISO/IS-14495-1 (JPEG-LS Part 1).

I can find no other easy-to-use, small-footprint, open-source Java implementation capable of decoding these streams at full resolution. One nice thing about the implementation is that it requires just a few new classes to run (approximately six).
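The SOF3 lossless mode codes each sample as an entropy-coded residual relative to a prediction from neighboring samples. The seven predictors defined in ISO/IS-10918-1 can be sketched as below; this is a Python illustration of the scheme the Java loader implements, not an excerpt from it.

```python
def jpegl_predict(selector, a, b, c):
    """Prediction functions from the original lossless JPEG spec (SOF3).

    a = sample to the left, b = sample above, c = sample above-left.
    Selector values 1-7 as defined in ISO/IS-10918-1; the decoder adds the
    entropy-decoded residual to this prediction to recover the sample.
    """
    if selector == 1:
        return a
    if selector == 2:
        return b
    if selector == 3:
        return c
    if selector == 4:
        return a + b - c
    if selector == 5:
        return a + ((b - c) >> 1)
    if selector == 6:
        return b + ((a - c) >> 1)
    if selector == 7:
        return (a + b) >> 1
    raise ValueError("selector must be 1..7")
```

Selector 4 (a + b - c) is the planar predictor commonly chosen by DICOM encoders; the others trade horizontal against vertical correlation.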