Analysis of 2D Color Spaces for Highlight Elimination in 3D Shape Reconstruction
Karsten Schlüns and Matthias Teschner Computer Vision Group, Department of Computer Science Technical University Berlin, Sekr. FR 3-11 Franklinstr. 28/29, 10587 Berlin, Germany email: karsten@cs.tu-berlin.de

Abstract
We develop a method to separate matte and specular reflection components in color images using the dichromatic reflection model. Three-dimensional color descriptors are transformed to a pair of two-dimensional descriptors. This representation allows us to perform the reflection analysis for multi-colored objects in a fast and robust manner. Further, it enables us to consider multiple dichromatic clusters in a single dichromatic plane. The method suggests that a pure color space analysis is sufficient in most situations. The matte reflection component is applied to a fast three-dimensional surface reconstruction scheme. With the proposed method, reflection parameters such as matte and specular albedo or roughness values need not be known in advance for surface recovery.

1 Introduction

The separation of reflection components plays an important role in two-dimensional image processing as well as in three-dimensional interpretation of objects. The existing techniques are mostly time consuming and therefore rarely applied. The separation of reflection components is necessary, for instance, in physically based image segmentation, feature extraction, object recognition, and in almost all three-dimensional approaches, such as binocular stereo, motion from optical flow, shape-from-shading, and active range scanners. For modeling the reflection it is usual to use an additive composition of two reflection components, the interface reflection L_x,s (specular) and the body reflection L_x,b (matte) at each image location x. We will call materials with such reflection characteristics hybrid materials. Shafer [1] combines this with continuous as well as RGB-color information in the Dichromatic Reflection Model (DRM):

    L_x = L_x,s + L_x,b = c_x,s m_x,s + c_x,b m_x,b .

Here, c_x,s and c_x,b are the interface and body reflection color vectors, respectively, and m_x,s and m_x,b are geometrical scaling factors. Later on, the surface shape is recovered from m_x,b. In general, c_x,s is assumed to be the illumination color [2]. We use this assumption in our separation technique. For objects with curved shape the RGB-colors form dense clusters, but for partially curved or polyhedral objects there are gaps in the clusters. The colors of a dichromatic material lie on a unique plane (the dichromatic plane). Among other things, the gaps complicate a three-dimensional analysis of the RGB-space of scenes with more than one surface material [3]. Instead of using a three-dimensional color representation, our method works with two consecutively applied two-dimensional color representations.

2 Separation of reflection components

In our approach we first use the normalized u and v values of a Y'U'V'-space, the uv-chromaticity. The Y'U'V'-space is defined as follows:

    (Y', U', V') = (R, G, B) [ 1/3    1       0
                               1/3   -1/2    √3/2
                               1/3   -1/2   -√3/2 ] .
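As a minimal illustration, this transform and the chromaticity normalization can be sketched in a few lines of Python. The matrix entries (reconstructed as the standard intensity/opponent transform) and the normalization (u, v) = (U'/Y', V'/Y') are our assumptions; the paper leaves the normalization implicit.

```python
import numpy as np

# Assumed Y'U'V' transformation matrix: Y' is the mean intensity,
# U' and V' span an opponent-color plane in which hue is an angle.
M = np.array([
    [1/3,  1.0,            0.0],
    [1/3, -0.5,  np.sqrt(3)/2],
    [1/3, -0.5, -np.sqrt(3)/2],
])

def uv_chromaticity(rgb):
    """Map an RGB vector to its uv-chromaticity (intensity-normalized U', V')."""
    rgb = np.asarray(rgb, dtype=float)
    y, u, v = rgb @ M          # row vector times matrix, as in the equation above
    if y == 0:                 # black pixels carry no chromaticity
        return np.array([0.0, 0.0])
    return np.array([u / y, v / y])   # normalization by intensity is an assumption

# A gray pixel has zero chromaticity:
print(uv_chromaticity([100, 100, 100]))  # -> [0. 0.]
```

Because of the normalization, the chromaticity is invariant to intensity scaling, which is what lets a matte cluster collapse to a single uv point.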

Surfaces of ideal matte materials correspond to exactly one point in the uv-space. Therefore, a dichromatic matte cluster (with or without gaps) also corresponds to a single point. Dichromatic specular highlight clusters without gaps form line segments in the uv-space. These line segments start at the point representing the matte cluster and are directed toward the chromaticity of the illumination. Due to the additive composition the dichromatic line segments do not reach the illumination color. If the highlight cluster has gaps, the corresponding line segment in the uv-space shows gaps, too. It will become clear later that this does not cause problems for the analysis. The theory of a generic chromaticity model (which includes the uv-space) for reflection analysis is described in [4]. Figure 1 illustrates the structure of dichromatic clusters in uv-space and shows the clusters of one matte and three hybrid surfaces. The chromaticities are represented by the uv-space throughout this paper. We have examined different chromaticity representations, including the rg-space, the uv-space, the HS-space with Cartesian coordinates, and the HS-space with polar coordinates. Among these representations the uv-space has several advantages: the transformation can be done with a standard frame grabber, and the influence of Gaussian noise on the color data can be treated consistently within the subsequent processing.
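The angular statistics that the following subsections build on can be sketched as follows. This is an illustrative Python sketch: the function and variable names are our own, and the angle α is measured at the illumination chromaticity between the pixel chromaticity and a fixed reference point, as described in Section 2.1.

```python
import numpy as np

def hue_angle(uv_pixel, uv_illum, uv_fix):
    """Angle alpha at the illumination chromaticity between the pixel
    chromaticity and a fixed reference point (names are illustrative)."""
    a = np.asarray(uv_pixel, float) - np.asarray(uv_illum, float)
    b = np.asarray(uv_fix, float) - np.asarray(uv_illum, float)
    # signed angle, wrapped so that alpha covers the full [0, 360) range
    ang = np.degrees(np.arctan2(a[1], a[0]) - np.arctan2(b[1], b[0]))
    return ang % 360.0

def h_space(chromaticities, uv_illum, uv_fix, bins=360):
    """One-dimensional hue space: number of pixels per angle alpha."""
    angles = [hue_angle(c, uv_illum, uv_fix) for c in chromaticities]
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, 360.0))
    return hist

# Two chromaticities on opposite sides of the illumination chromaticity
# end up 180 degrees apart in h-space:
illum = np.array([0.0, 0.0])
print(hue_angle([1.0, 0.0], illum, [0.0, 1.0]),
      hue_angle([-1.0, 0.0], illum, [0.0, 1.0]))
```

Matte colors of one material and the specular colors on its dichromatic line segment share the same α, which is why a maximum in h-space identifies a material.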

2.1 Hue space

A one-dimensional hue space (h-space) is used which holds the number of pixels per angle α, where α is an angle in the uv-space spanned by the following three points: the chromaticity of the illumination color, the chromaticity of the pixel color, and a defined fixed point. The fixed point can be any value except the illumination chromaticity. Usually chromaticities are defined with respect to the white point; here the illumination color is used as reference color. The difference is usually small, since in most applications the illumination color is approximately white. This definition is similar to that given in [5].

2.2 Estimation of reflection components

To eliminate specular highlights (the specular reflection component) from a color image, both the interface reflection color c_x,s and the body reflection color c_x,b must be known for each image location x. The elimination is done by setting the geometrical scaling factor m_x,s of the interface reflection to zero [3,7].

Interface color estimation: For the estimation of the interface reflection color it is usually necessary to distinguish between different materials in the scene. In the robust method proposed by Tominaga and Wandell [8] the materials are separated by hand. The method of Lee [9] uses a LoG to detect edges within highlights. Its success depends highly on the choice of the Gaussian standard deviation, which in turn depends on the surface roughness. This roughness dependency also holds for the method of Klinker [3,7]. Besides, the quality of the matte image generation depends strongly on the accuracy of the interface color estimation. For the application in shape recovery (Sections 3 and 4) we estimate the interface color with the Macbeth ColorChecker [10].

Matte color estimation: The algorithm for body reflection color (matte color) estimation first transforms each measured RGB-color vector to the uv-space and to the h-space. To facilitate the further processing in h-space, local maxima are enhanced by using a morphological filter [6]. This data set is searched for maxima. Each maximum in h-space corresponds to the hue component of a body reflection chromaticity we are looking for. For each local maximum m_h = h(α) found in h-space, local maxima m_uv = uv(h(α)) in the uv-space are searched along the line l_α referring to the actual α. Each value m_uv is a matte color, and the number of maxima gives the number of different materials in the scene. If all materials in the scene possess different hue values, we have to look along the line l_α for a uv-space value with a maximal distance to the illumination color. To reduce the influence of noise this can be combined with seeking local frequency maxima. If the line l_α contains more than one local maximum, then the scene comprises materials which have identical hue values but different saturation values. This does not affect the matte color estimation procedure.

Figure 1: Chromaticity representation of three hybrid and one matte surface (matte chromaticities c_b1 to c_b4, illumination chromaticity c_s).

2.3 Pixel classification

Using a single color image it is not possible to estimate the reflection components for each image location independently. Therefore, one of the matte colors estimated with the procedure given in Section 2.2 must be assigned to each image location. If there is a single matte color on a line l_α, each chromaticity lying in an angular segment in the neighborhood of l_α is identified with this matte color (see Figure 1). The angular segments are defined in h-space. Chromaticities outside these segments remain unclassified. Alternatively, segments spanned by two matte chromaticities and the illumination chromaticity can be bisected. If a line l_α contains two or more matte colors, then all RGB-colors lying in the segment must be rotated into the dichromatic plane spanned by the 3D counterpart of the line l_α and the origin (black point). This keeps the analysis in a two-dimensional color space, since these colors depend only on intensity and saturation with respect to the illumination color. A sketch of three hybrid color clusters is illustrated in Figure 2. Each point represents a hue-less color descriptor.

Figure 2: Dichromatic color clusters in a single dichromatic plane (clusters c_ba, c_bb, c_bc).

The color clusters in Figure 2 belong to hybrid materials with the typical L- and T-shape [3,7]. Colors in the linearly shaped part are matte; the other colors comprise a matte and a non-zero specular component. The segmentation of colors in the neighborhood of each line l_α is done as follows: First, analogous to the h-space, a one-dimensional intensity space (i-space) is used to count the number of pixels having the respective intensity and hue value. The highest significantly occurring intensity value in i-space is a measure of the length of the matte color cluster. Then, each color which possesses a specular component is projected along the vector c_s onto each of the physically possible matte color vectors appertaining to l_α. If the intensity of the projected color is greater than the maximal intensity value associated with a matte color c_b, then c_b is excluded from the matte color candidates. This happens if the saturation of a color with a specular component is greater than the saturation of a matte color associated with another material. Among the remaining matte colors the projection distances are compared; the correct matte color has the shortest distance. Theoretically the color clusters can overlap each other. In that case the pure color space analysis should be combined with support from image space relations (neighborhoods).
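Projecting a hybrid color onto its matte color vector amounts to solving the DRM for the two scaling factors and discarding the interface term. Below is a minimal sketch, assuming the body color c_b and the interface color c_s of the material are already known (e.g. from the matte and interface color estimation steps described above); the function name is illustrative.

```python
import numpy as np

def remove_specular(rgb, c_b, c_s):
    """Decompose a color according to the DRM, L = m_b * c_b + m_s * c_s,
    and drop the interface (specular) part, i.e. set m_s to zero.
    c_b, c_s: body and interface reflection color vectors (assumed known)."""
    A = np.column_stack([c_b, c_s])
    # least-squares scaling factors; m_s may be slightly negative due to noise
    (m_b, m_s), *_ = np.linalg.lstsq(A, np.asarray(rgb, float), rcond=None)
    return m_b * np.asarray(c_b, float)   # matte component only

# Example: a reddish matte color plus a white highlight
c_b = np.array([0.8, 0.2, 0.1])
c_s = np.array([1.0, 1.0, 1.0])
pixel = 100 * c_b + 40 * c_s             # hybrid pixel containing a highlight
print(remove_specular(pixel, c_b, c_s))  # matte part, approximately [80, 20, 10]
```

The least-squares solution is exact for noise-free colors lying in the dichromatic plane; for real pixels it is the orthogonal projection into that plane followed by the decomposition.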

3 Applications

To demonstrate the performance of the proposed method we apply the separated matte reflection component to the Photometric Stereo Method (PSM) [11,12]. PSM calculates surface normals from the shading variations in three images, taken of the objects with the light source in different positions and consecutive illumination. Unlike [12] we apply an albedo-independent look-up-table method to recover surface normals. Using this look-up-table method, knowledge of the illumination parameters (direction and strength) is not required. From the surface normals the depth can be calculated using an integration technique. Comparisons with several approaches have shown that an FFT-based method [13] produces the best results. We conjecture that PSM is a good candidate to evaluate the performance, because PSM only works with matte (Lambertian) images. In Section 4 we show some PSM results with and without reflection component separation. Using our separation method, the geometric information of the 3D-objects calculated from the PSM images can be combined with photometric and colorimetric attributes. Since PSM additionally recovers surface albedo values, illumination-independent surface color descriptors can be obtained. From these color descriptors a set of CIE tristimulus values can be assigned to each surface element [14-16] without using a spectroradiometer. Moreover, from the separated specular reflection component and the surface normals it is possible to extract roughness values of uniformly colored surface regions that possess a non-zero specular reflection component [17].

4 Results

We have tested the separation method with several synthetic and real object scenes. For image generation we have used a Sony 3-CCD-DXC 730P camera. The gain control and the gamma correction were turned off. Since the dynamic range of CCD-cameras is strongly limited, it is sometimes expedient to combine images taken with different irises to avoid clipping and pre-kneeing. In the experiments reported here a single iris setting is always used. The objects were illuminated with a slide projector. The highlight elimination method can handle scenes illuminated with more than one source, but such images are not useful for the PSM application. Figure 3 shows the image of a real plastic sphere composed of eight colored segments.

Figure 3: Input image for matte image generation.

In Figure 4 the generated matte image of the multi-colored sphere is shown. The matte image construction is done for each of the three PSM input images. Due to very high intensities in the highlight, some colors are distorted by the pre-kneeing circuit of the camera. This causes lower intensities than expected when the colors are projected onto the matte color vector. The distortion can be avoided by changing the iris setting.

Figure 4: Matte image of the sphere shown in Figure 3.

Figures 5 and 6 demonstrate the influence of highlights on surface reconstruction. Figure 5 illustrates the rotated reconstructed surface of the sphere using the original input images. Our albedo-independent PSM shape recovery scheme is utilized. A texture mapping (original image shown in Figure 3) is applied to the reconstructed range data. The widespread highlights on two sphere segments cause a bump on the surface.

Figure 5: Recovered surface of the sphere distorted by highlights.

The reconstructed surface using the three matte images produced by our separation method is shown in Figure 6. The matte image shown in Figure 4 is mapped onto the surface.

Figure 6: Surface of the sphere using the proposed method.

Another example of surface reconstruction is given for a plastic watering can. One of the original images is shown in Figure 7.

Figure 7: Plastic watering can with widespread highlights. The marked region refers to Figure 11.

On the watering can the highlight area is quite large. The generated matte image is given in Figure 8.

Figure 8: Matte counterpart of the watering can shown in Figure 7.

A mesh plot of the recovered surface of the can without applying the separation of reflection components is given in Figure 9. The body of the can is strongly deformed.

Figure 9: Mesh plot of reconstructed surface deformed by highlights.

The result of surface reconstruction using three matte images is shown in Figure 10. Note that camera and object position were not altered during image generation.
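The FFT-based integration technique [13] used to obtain depth from the recovered surface normals can be sketched as a minimal Frankot-Chellappa implementation. The grid-spacing arguments and the periodic test surface below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def frankot_chellappa(p, q, dx=1.0, dy=1.0):
    """Integrate a gradient field (p = dz/dx, q = dz/dy) to a depth map by
    enforcing integrability in the Fourier domain [13]. Sketch under the
    periodic-boundary assumption inherent to the FFT."""
    rows, cols = p.shape
    wx = 2 * np.pi * np.fft.fftfreq(cols, d=dx)   # angular frequency grids
    wy = 2 * np.pi * np.fft.fftfreq(rows, d=dy)
    wx, wy = np.meshgrid(wx, wy)
    denom = wx**2 + wy**2
    denom[0, 0] = 1.0                             # avoid division by zero at DC
    Z = (-1j * wx * np.fft.fft2(p) - 1j * wy * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                                 # depth is recovered up to an offset
    return np.real(np.fft.ifft2(Z))

# Smooth periodic test surface: recover it from its analytic gradients
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
X, Y = np.meshgrid(x, x)
z = np.sin(X) + np.cos(Y)
p, q = np.cos(X), -np.sin(Y)                      # dz/dx, dz/dy
h = x[1] - x[0]
z_rec = frankot_chellappa(p, q, dx=h, dy=h)
print(np.allclose(z - z.mean(), z_rec - z_rec.mean(), atol=1e-6))  # True
```

For band-limited periodic surfaces the recovery is exact up to a constant offset; for real gradient fields from PSM the method returns the integrable surface closest to the data in the least-squares sense.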

Figure 10: Mesh plot of the recovered surface using generated matte images.

The reconstructed surface of the watering can is quantitatively analyzed by calculating slant angle histograms of the rectangular region marked in Figure 7. The slant is defined as the angle between the optical axis and the reconstructed surface normal. The slant histograms for surface normals recovered with and without highlight elimination are shown in Figure 11. Using the original images the calculated slants scatter strongly. The correct slants are scattered in a narrow Gaussian distribution.

Figure 11: Histograms of slant angles ("matte can" vs. "original can") in the rectangular region shown in Figure 7.

5 Conclusion

We have developed a method for separating reflection components using a consecutive two-dimensional color space analysis rather than the three-dimensional analysis commonly used so far. This approach has several advantages. It is less geometry dependent. It facilitates the treatment of more than one hybrid material in the scene. Further, it takes into consideration that more than one dichromatic cluster can appear in a dichromatic plane. The quality of the results has been demonstrated by applying the matte reflection component to a shading based shape recovery scheme.

6 References

[1] S.A. Shafer, "Using Color to Separate Reflection Components", COLOR research and application, 10(4), Winter 1985, pp. 210-218.
[2] C.-H. Lee, E.J. Breneman and C.P. Schulte, "Modeling Light Reflection for Computer Color Vision", IEEE Trans. on PAMI, Vol. 12, No. 4, 1990, pp. 402-409.
[3] G.J. Klinker, S.A. Shafer and T. Kanade, "A Physical Approach to Color Image Understanding", IJCV, 4, 1990, pp. 7-38.
[4] K. Schlüns and O. Wittig, "Photometric Stereo for Non-Lambertian Surfaces Using Color Information", Proc. 7th Int. Conf. on Image Analysis and Processing, Monopoli, Italy, Sept. 20-22, 1993, pp. 505-512.
[5] A.R. Smith, "Color Gamut Transform Pairs", Computer Graphics, 12(3), 1978.
[6] P. Maragos, "Tutorial on advances in morphological image processing and analysis", Optical Engineering, Vol. 26, No. 7, 1987, pp. 623-632.
[7] G.J. Klinker, "A Physical Approach to Color Image Understanding", PhD Thesis, Technical Report CMU-CS-88-161, Carnegie Mellon University, Computer Science Department, May 1988.
[8] S. Tominaga and B.A. Wandell, "Standard surface-reflectance model and illuminant estimation", J. Opt. Soc. Am. A, Vol. 6, No. 4, 1989, pp. 576-584.
[9] C.-H. Lee, "Method for Computing the Scene-Illuminant Chromaticity from Specular Highlights", J. Opt. Soc. Am. A, Vol. 3, No. 10, 1986, pp. 1694-1699.
[10] C.S. McCamy, H. Marcus and J.G. Davidson, "A Color-Rendition Chart", J. of Applied Photographic Engineering, Vol. 2, No. 3, 1976, pp. 95-99.
[11] R.J. Woodham, "Photometric Method for Determining Surface Orientations from Multiple Images", Optical Engineering, Vol. 19, No. 1, 1980, pp. 139-144.
[12] R.J. Woodham, "Gradient and curvature from the photometric-stereo method, including local confidence estimation", J. Opt. Soc. Am. A, Vol. 11, No. 11, 1994, pp. 3050-3068.
[13] R.T. Frankot and R. Chellappa, "A Method for Enforcing Integrability in Shape from Shading Algorithms", IEEE Trans. on PAMI, Vol. 10, No. 4, 1988, pp. 439-451.
[14] R.L. Lee, "Colorimetric Calibration of a Video Digitizing System: Algorithm and Applications", COLOR research and application, Vol. 13, No. 3, 1988, pp. 180-186.
[15] M.S. Drew and B.V. Funt, "Natural Metamers", CVGIP: Image Understanding, Vol. 56, No. 6, 1992, pp. 139-151.
[16] N.J.C. Strachan, P. Nesvadba and A.R. Allen, "Calibration of a Video Camera Digitising System in the CIE L*u*v* Colour Space", PRL 11, 1990, pp. 771-777.
[17] G. Kay and T. Caelli, "Inverting an Illumination Model from Range and Intensity Maps", CVGIP: Image Understanding, Vol. 59, No. 2, 1994, pp. 183-201.

