Study of Three-Dimensional Image Brightness Loss in Stereoscopy

Appl. Sci. 2015, 5, 926-941; doi:10.3390/app5040926
OPEN ACCESS - applied sciences - ISSN 2076-3417 - www.mdpi.com/journal/applsci

Article

Hsing-Cheng Yu 1,*, Xie-Hong Tsai 1,†, An-Chun Luo 2,†, Ming Wu 3,† and Sei-Wang Chen 2,†

1 Department of Systems Engineering and Naval Architecture, National Taiwan Ocean University, 2 Pei-Ning Road, Keelung 20224, Taiwan; E-Mail: jack_tsai@itri.org.tw
2 Institute of Computer Science and Information Engineering, National Taiwan Normal University, Taipei 10610, Taiwan; E-Mails: anchunlou@itri.org.tw (A.C.L.); schen@csie.ntnu.edu.tw (S.W.C.)
3 Electronics and Optoelectronics Research Laboratories, Industrial Technology Research Institute, Hsinchu 31040, Taiwan; E-Mail: itria40212@itri.org.tw
† These authors contributed equally to this work.
* Author to whom correspondence should be addressed; E-Mail: hcyu@ntou.edu.tw; Tel.: +886-2-2462-2192 (ext. 6059); Fax: +886-2-2462-5945.

Academic Editors: Wen-Hsiang Hsieh and Takayoshi Kobayashi

Received: 26 July 2015 / Accepted: 13 October 2015 / Published: 21 October 2015

Abstract: When viewing three-dimensional (3D) images, whether in cinemas or on stereoscopic televisions, viewers experience the same problem of image brightness loss. This study aims to investigate image brightness loss in 3D displays, with the primary aim being to quantify the image brightness degradation in the 3D mode. A further aim is to determine the image brightness relationship to the corresponding two-dimensional (2D) images in order to adjust the 3D-image brightness values. In addition, the photographic principle is used in this study to measure metering values by capturing 2D and 3D images on television screens. By analyzing these images with statistical product and service solutions (SPSS) software, the image brightness values can be estimated using the statistical regression model, which can also indicate the impact of various environmental factors or hardware on the image brightness. In the analysis of the experimental results, comparison of the image brightness between 2D and 3D images indicates 60.8% degradation in the 3D image brightness amplitude. The experimental values, from 52.4% to 69.2%, are within the 95% confidence interval.

Keywords: 3D; image brightness loss; stereoscopy; three-dimensional image

1. Introduction

One of the primary challenges encountered in the development of three-dimensional (3D) displays is image brightness. Even under application of current stereoscopic techniques, viewers experience a certain amount of image brightness degradation when viewing 3D images in cinemas or on stereoscopic televisions. Regardless of whether the images are recorded with half-mirror or double-parallel type filming equipment, the brightness is impaired. Both polarizing-film displays and the viewing of 3D images with 3D glasses cause brightness degradation. 3D displays must send separate images to the right and left eyes. However, image brightness loss occurs as a result of crosstalk, which manifests in the case of an overly high contrast between the images. If the brightness is increased to compensate for the 3D image brightness loss in stereoscopy only, spectators will not have the same visual experience as with two-dimensional (2D) images.
Furthermore, some of the light is scattered when spectators wear 3D glasses; only about half of the light intensity from the screen reaches the eyes, and the image brightness is reduced by 30%–50%. Clearly, the image brightness of 3D stereoscopic images is lower than that of 2D images during film playback. In addition, some spectators have a strong antipathy towards 3D films as a result of dizziness and other uncomfortable physiological phenomena [1–5]. Because the inter-ocular distance between human eyes is 5–6.5 cm, the two retinas receive slightly different perspectives of the same scene; this difference is also called visual disparity. Rods and cones are the retinal cells that control visual signals; they are responsible for converting the brightness of light, color, and other information into optic nerve messages sent to the brain. Fusing the two different perspectives generates depth perception, so that human eyes can discern 3D objects visually [6,7]. This study focuses on whether the 3D images in cinemas or on stereoscopic televisions can be adjusted to compensate for the 3D light degradation.

2. Three-Dimensional Image and Brightness Loss

2.1. Theory of Stereo Vision

The 3D visual phenomenon is generated by the parallax effect, which includes both binocular parallax and motion parallax. Binocular parallax is due to the different perspectives of the eyes, resulting in slightly different image messages being received by the left and right eyes. The two received images are combined in the brain to synthesize the stereoscopic effect [8,9], as shown in Figure 1. Motion parallax is due to changes in the observation point, and is affected by the distance of an object relative to a moving background, as shown in the simplified illustration in Figure 2. When observed from point A, the object appears to be to the left of the butterfly, but when observed from point B, the object appears to be to the right of the butterfly.

Figure 1. Two images sent to the brain to synthesize the stereoscopic effect.

Figure 2. Motion parallax.

Using either of the parallax effects, human eyes can produce 3D visual effects when viewing an object. To construct an image display device that is capable of producing 3D visual effects, one can utilize the parallax technique to adjust the brightness of light, color, and viewing direction, so that the messages received by the left and right eyes differ in viewing angle, causing a stereoscopic effect. The brain activity during the synthesis process depends on complex factors related to human evolution, psychology, and medicine. However, current planar 3D imaging technology remains focused on the concept of the "right and left eyes receiving different images," which is related to binocular parallax. In other words, provided some analogy of the "right and left eyes receiving different images" of the visual environment is employed, viewers can observe 3D images on a 2D plane screen [10,11]. Currently, on the basis of the binocular parallax theory, 3D image display technologies are roughly divided into stereoscopic and auto-stereoscopic types.

2.2. Three-Dimensional Display and Three-Dimensional Glasses

There are two kinds of stereoscopic displays: passive and active. Passive polarization glasses, which are also called passive 3D glasses, are an example of passive stereoscopic technology.
The working principle of this technology is to paste a micro-retarder layer on the front of a general television or monitor, using the polarization direction of the light to separate the left- and right-eye images. The passive polarization glasses then ensure that each of the viewer's eyes sees the appropriate left or right image, producing a 3D effect. The advantage of this approach is its low cost, but the screen resolution is reduced to half the original 2D image resolution, and the overall brightness is also reduced. Color-coded anaglyph 3D glasses can be divided into those using red and cyan, green and red, or blue and yellow filters, as shown in Figure 3.

Figure 3. Classification by color code: (a) red and cyan and (b) red and green anaglyph 3D glasses used to create 3D stereo images.

Active 3D glasses, shown in Figure 4, are another type of 3D glasses, also called shutter glasses. In this approach, the 3D screen continually alternates the display between the left- and right-eye images at a frequency of 120–240 Hz, while the shutter glasses quickly and alternately shield the right and left eyes. This causes the left and right eyes to see the correct respective images, leading the brain to perceive a 3D image.

Figure 4. Active 3D glasses: (a) right and (b) left frame displays.

The main advantage of this technology is that the picture resolution, color, and stereoscopic 3D effect are not sacrificed. However, some users of active 3D glasses suffer from dizziness and discomfort. The other advantages are that there is less blur, the technology is low cost, and it is applicable to television or computer screens as well as projectors, provided their refresh rates can meet the requirements of the glasses. Therefore, active shutter glasses are being used in the majority of the 3D display systems being introduced onto the market at present, including 3D televisions, 3D glasses, and 3D cinema screens.

2.3. Three-Dimensional Image Brightness Loss

The sources of the loss in brightness can be divided into two categories: (a) those due to the half-mirror 3D camera rig used to film the 3D content; and (b) those resulting from the 3D glasses worn while viewing the 3D device. Note that the degrees of brightness degradation caused by polarization glasses and shutter glasses are almost identical, as both of these devices allow only half of the light to penetrate the lenses, whether in the spatial or the temporal domain.

The sources of the brightness differences between 2D and 3D images that can have a visual impact on the eye can also be divided into two categories: (a) control during filming of a scene, with adjustments being made in accordance with the ambient light (for example, the aperture for the right-eye image may need to be increased so that the brightness of the images received by both eyes is within a similar range); this prevents the eyes from becoming rapidly fatigued, while also reducing the time and cost of dimming the film during post-production. (b) The screen brightness may be too low or too high during 2D or 3D viewing; in this case, the eyes can begin to suffer from significant discomfort within a short period of time. This study focuses on the brightness of 2D and 3D images within the range where a viewer can experience the same level of brightness with both eyes, while not experiencing physical discomfort such as eye soreness, dizziness, or vomiting during the viewing of the 3D images.
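Before turning to the measurement setup, the "half of the light" argument in Section 2.3 can be made concrete with a small Python sketch. This is an illustration we added, not part of the paper: the 50% multiplexing share follows from the argument above, while the screen luminance and lens transmittance values are purely illustrative assumptions.

```python
# Illustrative estimate of the per-eye luminance when viewing 3D content.
# Both polarized (spatial) and shutter (temporal) systems deliver at most half
# of the emitted light to each eye; the lenses absorb a further fraction.
# The numbers below are assumptions for illustration, not measurements from this study.

def per_eye_luminance(screen_luminance_nits, share_of_light=0.5, lens_transmittance=0.9):
    """Luminance reaching one eye after multiplexing and lens absorption."""
    return screen_luminance_nits * share_of_light * lens_transmittance

if __name__ == "__main__":
    screen = 400.0  # assumed 2D screen luminance in nits
    print(f"2D viewing (no glasses): {screen:.0f} nits")
    print(f"3D viewing (per eye):    {per_eye_luminance(screen):.0f} nits")
```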
3. Experiment Design for Brightness Loss Measurement

In general, commercially available 3D LCD televisions are adjusted by standard measurement methods when the products are manufactured and leave the factory [12]. Moreover, the average screen image brightness can be obtained with a light meter. A local, block-by-block measurement method does not need to acquire precise brightness values, because human eyes perceive the average brightness of the screen image. Hence, 3D LCD televisions were adopted in this study to compare the brightness difference between 2D and 3D films. When comparing 2D and 3D films with the same base brightness, viewers tend to find that 2D images are brighter than 3D images. The adjustments necessary to ensure similar image brightness perception by both eyes are mainly decided based on experimental results. Then, the increases and decreases in brightness are controlled by tuning the filming aperture setting and the shutter speed. This can reduce the overall filming cost and time.

In this study, the ambient brightness was first set for the filming of a scene. Two lighting groups, providing 3200 and 7500 lx, were used as spatial lighting. Then, test images were recorded in an indoor studio, as shown in Figure 5. The experimental and hardware conditions are listed in detail in Table 1. In the experiment, the sensitivity for the normal ambient exposure value (EV) was set to ISO 400, which is the most commonly used value. Five aperture values were selected for the experiment, as listed in Table 1. The shutter speed (4–1/3200 s) employed in conjunction with each of the five aperture values was used as a cross-aperture experimental variable.

Figure 5. Recording environment.

Table 1. Detailed environmental and hardware parameters.

Aperture | EV       | Shutter Speed (s) | ISO | Scene Recording Brightness (lx)
F4       | −2 to +2 | 1/50–1/3200       | 400 | 3200 and 7500
F5.6     | −2 to +2 | 1/6–1/500         | 400 | 3200 and 7500
F7.1     | −2 to +2 | 1/6–1/500         | 400 | 3200 and 7500
F9       | −2 to +2 | 3–1/500           | 400 | 3200 and 7500
F10      | −2 to +2 | 4–1/500           | 400 | 3200 and 7500

The experimental equipment, listed in Table 2, included an illuminometer (CL-200A, Konica Minolta, Tokyo, Japan), a Canon 400D camera (Canon, Fukushima Prefecture, Japan), Vizio 32- and 47-inch television screens (WUSH, Irvine, CA, USA) with polarized glasses, and a Sony 52-inch television screen (Sony, Tokyo, Japan) with flash (shutter) glasses. The illuminometer is a brightness measuring tool that produces readings in lx. The height of the darkroom was equivalent to three televisions, and the optical axis of the optical test equipment was oriented perpendicular to the center of the display screen, at a distance of three (high-definition television, HDTV) or four (standard-definition television, SDTV) times the display screen height. This was so that all of the light was received as the average light of a single image [13], as shown in Figure 6. A test image was displayed in both the 2D and 3D modes on a 3D TV, and luminance measurements were performed for each mode. In this way, the range of luminance for the 2D mode and the 3D mode can be found. In addition, only grayscale images were used in this experiment; color images were not needed for the actual measurement images. The grayscale test images are shown in Figures 7 and 8, respectively.

Figure 6. Schematic of darkroom for display brightness measurement [13].
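As a concrete illustration of the measurement geometry described above (three screen heights for HDTV), the short Python sketch below converts screen diagonals into heights and measurement distances. It is our illustration, not part of the study's procedure, and it assumes 16:9 panels; the paper states only the diagonal sizes.

```python
import math

# Measurement distance of three screen heights (the HDTV case described above).
# Assumes 16:9 panels; the paper specifies diagonals only, so the aspect ratio
# here is an illustrative assumption.
ASPECT_W, ASPECT_H = 16, 9

def screen_height_m(diagonal_inch: float) -> float:
    """Physical screen height in metres for a 16:9 panel of the given diagonal."""
    diag_m = diagonal_inch * 0.0254
    return diag_m * ASPECT_H / math.hypot(ASPECT_W, ASPECT_H)

for diag in (32, 47, 52):  # diagonals used in the study (see Table 3)
    h = screen_height_m(diag)
    print(f"{diag}-inch screen: height = {h:.3f} m, HDTV distance (3H) = {3 * h:.2f} m")
```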
Table 2. 3D TV vendors.

Vendor | Model              | 3D Glasses        | Used Time (h) | Year
Vizio  | VL320M 32-inch     | Polarized Glasses | 50            | 2012
Vizio  | M420KD 42-inch     | Polarized Glasses | 45            | 2012
Sony   | KDL-52XBR7 52-inch | Flash Glasses     | 70            | 2011

Figure 7. F5.6 aperture 2D/3D test images.

Figure 8. F10 aperture 2D/3D test images.

Note that the CL-200A illuminometer is one of the models commonly used in industry and is also relatively easy to obtain. According to statistical-method-aided analysis, the results of such experiments can be applied to the industrial sector. Furthermore, the specifications of this device conform to Japanese Industrial Standards (JIS) C1609-1:2006 Class AA, and are highly consistent with the International Commission on Illumination (CIE) standard observer curves [14].

After fixing the ISO and aperture conditions, images were recorded under different shutter conditions. To avoid obtaining different results for the same ambient light, the parameters were typically only adjusted after the last shooting iteration for a given setup. The camera was turned to aperture priority (AV) mode, and the camera function key "*" was pressed after the aperture adjustment. Then, the following steps were performed in order to obtain the optimal shutter value:

1. The camera was turned to M mode and images were recorded at the given shutter value, ensuring that the EV was 0.
2. The images were recorded within the EV −2 to EV +2 range. As a result, each group had 19 datasets.
3. The obtained images were presented on the television screen, and the image brightness was measured in the 2D and 3D modes.
4. The data were collected for analysis using statistical product and service solutions (SPSS) software (IBM, Chicago, IL, USA).

4. Experimental Results

An image brightness regression model was used to analyze the relationship between the screen image brightness and the following variables:

- Dependent variable (Y): screen image brightness;
- Independent variables (X): (1) screen size; (2) scene recording brightness; (3) mode (2D or 3D); (4) photographic equipment EV; (5) interactions between variables, as shown in Table 3.

The standardized regression residuals were used to determine whether the sample distribution was normal; a perfect bell curve would correspond to a completely normal distribution. Because of sampling errors, there was a gap between the actual observed-value histogram and the normal distribution curve (Figure 9). However, no extreme values beyond three standard deviations were found in this experiment. As a result, the sample values can be regarded as following the normal distribution. The study then examined the variables' standardized regression residuals on the normal P-P diagram, which exhibits a 45° line from lower left to upper right (Figure 10). Therefore, the sample observations are approximately in line with the basic assumptions, as shown in Tables 4–6.
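The residual diagnostics just described (a standardized-residuals histogram and a normal probability plot) can be reproduced with standard tools. The study used SPSS; the sketch below is our stand-in using scipy and matplotlib, with a placeholder array in place of the model's actual standardized residuals.

```python
# Minimal sketch of the residual-normality checks described above.
# The study used SPSS; scipy and matplotlib are stand-ins here, and
# `residuals` is a placeholder for the model's standardized residuals.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
residuals = rng.standard_normal(1065)  # placeholder: replace with the real standardized residuals

fig, (ax_hist, ax_prob) = plt.subplots(1, 2, figsize=(9, 4))

# Histogram of standardized residuals with a reference normal curve (cf. Figure 9).
ax_hist.hist(residuals, bins=30, density=True, alpha=0.7)
x = np.linspace(-4, 4, 200)
ax_hist.plot(x, stats.norm.pdf(x))
ax_hist.set_title("Standardized residuals")

# Probability plot against the normal distribution (cf. Figure 10; scipy's
# probplot is technically a Q-Q plot, used here as a close substitute for a P-P plot).
stats.probplot(residuals, dist="norm", plot=ax_prob)

plt.tight_layout()
plt.show()
```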
Table 3. Dependent and independent variables.

Variable Type             | Name                                | Values
Dependent Variable (Y)    | Screen Image Brightness             | Because the luminance variable is not normally distributed, a Box-Cox transform (λ = 0.3) is used to convert the variable so that ε (Note 1) has a normal distribution.
Independent Variables (X) | (1) Screen Size                     | 32, 47, and 52 inch
                          | (2) Field Brightness                | 3200 and 7500 lx
                          | (3) 2D or 3D Mode                   | 2D and 3D modes
                          | (4) Photographic Equipment EV       | Converted from the camera's shutter-aperture combination. Conversion formula: EV = log2(N^2/t), where N is the aperture (F value) and t is the shutter speed (s).
                          | (5) Interactions between Variables  | The interactions between each pair of variables.

Note 1: ε is a random error term drawn from a normal distribution with a mean of 0 and a standard deviation of 1.
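As a quick illustration of the EV conversion formula given in Table 3, the following small Python sketch (ours, not part of the paper's tooling) computes EV for a few aperture/shutter combinations drawn from the ranges in Table 1.

```python
import math

def exposure_value(aperture_f: float, shutter_s: float) -> float:
    """EV = log2(N^2 / t), with N the f-number and t the shutter speed in seconds
    (the conversion formula given in Table 3)."""
    return math.log2(aperture_f ** 2 / shutter_s)

# A few aperture/shutter combinations from the ranges in Table 1 (illustrative picks).
for f_number, shutter in [(5.6, 1 / 125), (5.6, 1 / 250), (10.0, 1 / 20), (10.0, 1 / 40)]:
    print(f"F{f_number:g}, {shutter:.4f} s  ->  EV = {exposure_value(f_number, shutter):.2f}")
```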
Figure 9. Standardized residuals histogram.

Figure 10. Standardized regression residuals on the normal P-P diagram.

Table 4 shows the measures used in the brightness regression model. In this model, R² is used to indicate the explanatory power of the entire model. However, this measure tends to overestimate the explanatory power depending on the sample size; the smaller the sample, the more prone the model is towards overestimation. Therefore, the majority of researchers use the adjusted R², in which the error variance and the variance of Y are each divided by their degrees of freedom.

Table 4. Brightness regression model summary.

                            | Correlation Coefficient (R) | Coefficient of Determination (R²) | Adjusted Coefficient of Determination (adjusted R²)
Brightness Regression Model | 0.995                       | 0.990                             | 0.990

Analysis of variance (ANOVA) is a particular form of statistical hypothesis testing that is widely applied in the analysis of experimental data (Table 5). Statistical hypothesis testing is a method of decision-making based on data. If the test results (calculated under the null hypothesis) fall within a certain likelihood of not being accidental, they are deemed to be statistically significant. For example, when the p value calculated from the data is less than the defined critical significance level, the null hypothesis can be deemed invalid. A regression coefficient of 0 would indicate that the variable has no effect on the model.

Table 5. Analysis of variance (ANOVA).

Source     | Sum of Squares | Degrees of Freedom | Mean Square | F Value
Regression | 5382.379       | 10                 | 538.238     | 10200.217
Residual   | 55.617         | 1054               | 0.053       |
Total      | 5437.996       | 1064               |             |

The statistical coefficients of the linear regression model are presented in Table 6. Note that mode switching between 2D and 3D images has the most significant impact on the screen image brightness. That is, once the screen is switched from the 2D to the 3D mode, the image brightness on the screen exhibits a very significant decline.

Table 6. Statistical coefficients of the linear regression model.

Brightness Regression Variables     | B (Non-Standardized) | Standard Error | β (Standardized) | T
(A) Constant                        | 21.259               | 0.115          |                  | 184.892
(1) Screen Size 1                   | 1.905                | 0.145          | 0.399            | 13.180
(2) Screen Size 2                   | 0.707                | 0.145          | 0.147            | 4.890
(B) Shooting Scene Brightness       | 0.690                | 0.020          | 0.153            | 35.235
(C) 2D or 3D Mode                   | −5.749               | 0.119          | −1.270           | −48.340
(D) Photographic Equipment EV       | −1.342               | 0.008          | −1.037           | −169.750
(E) Interaction Values of Variables |                      |                |                  |
  (1) and (C)                       | −0.314               | 0.030          | −0.051           | −10.499
  (1) and (D)                       | −0.096               | 0.010          | −0.293           | −9.753
  (2) and (D)                       | −0.045               | 0.010          | −0.137           | −4.549
  (B) and (C)                       | −0.137               | 0.029          | −0.026           | −4.809
  (C) and (D)                       | 0.311                | 0.008          | 0.991            | 37.541

Each of the independent variables is explained and analyzed below.

(1) Screen size. After the 2D/3D mode variable, the screen size is the most important environmental variable when measuring brightness in the darkroom. The linear regression model treats the three screen sizes as separate dummy variables. The statistical results show a significant impact on the brightness, and larger screens have a positive effect on the brightness value, falling within the range of reasonable consideration. Therefore, the screen size and its interactions with other factors significantly affect the brightness.

(2) Scene recording brightness. Only two brightness values were used in this study: 3200 and 7500 lx. The statistical results show that these two values are within the range of reasonable consideration.

(3) Mode (2D or 3D). This variable is the core consideration of the experiment, and its significance is apparent in the results of the statistical analyses. Thus, this coefficient affects the screen image brightness very significantly. The experimental measurements and the linear regression model confirm that the 3D image brightness is lower than that of the 2D images. To estimate the screen image brightness value, the linear regression model takes the 2D and 3D modes as dummy variables. The expected screen display mode is set in the linear regression model so that the estimated image brightness value can be obtained.

(4) Photographic equipment EV. In the experiment with different aperture and shutter conditions, this setting has a direct impact on the image brightness. A larger EV indicates less exposure, and the statistical results are consistent with this finding.

(5) Interaction values of variables. Each of the variables interacts with the others to a greater or lesser degree, but these interaction coefficients have a smaller effect on the screen brightness than the main independent variables.

Figure 9 is a standardized residuals histogram that shows the distribution of the standardized regression residuals. Figure 10 is a normal probability plot (P-P diagram). In statistical analysis, it is often necessary to determine whether a dataset comes from a normal population when using regression analysis or multivariate analysis. Of all the analysis methods, the use of statistical graphics to make such a judgment is relatively easy and convenient. With a P-P diagram and a least-squares line, the user can easily determine whether or not the entered data come from a normal population, and the plot also aids researchers in interpreting its meaning. The least-squares line is obtained from the linear equation derived by the method of least squares, i.e., the linear equation that minimizes the sum of the squared residuals between the line and the data.

The regression equations adopted in this study are primarily based on concepts from reference [15], which is used as the mathematical model for the derivation of the basic theory. After transforming the dependent variable of the regression model via Box-Cox with λ = 0.3, ε matches the normal distribution, consistent with the regression model summary in Table 4. The linear regression model is expressed as

    Y_i = β_0 + β_1 X_i + ε_i,  i = 1, ..., n,    (1)

where Y_i is a random variable (the response value corresponding to X_i in the i-th test), X_i is a known fixed constant, and ε_i is an unobservable random error. Expressing the main variables in the experimental linear model yields

    Y = 21.259 + 1.905 X_1 + 0.707 X_2 + 0.69 X_B − 5.749 X_C − 1.342 X_D + μ,    (2)

where X_1 is the dummy variable for screen size 1, X_2 is the dummy variable for screen size 2, X_B is the scene recording brightness, X_C indicates the mode (2D or 3D), X_D is the photographic equipment EV, and μ collects the interaction terms of the variables.
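To show how Equation (2) can be used to estimate screen brightness, the sketch below (an illustration we added, not the authors' code) evaluates the main-effect terms with the coefficients from Table 6. The dummy coding of X_C (0 for 2D, 1 for 3D), the placeholder value for the scene-brightness term, the omission of the interaction term μ, and leaving the response on the Box-Cox transformed scale are all simplifying assumptions on our part.

```python
# Illustrative use of Equation (2) with the coefficients from Table 6.
# Assumptions (not stated explicitly in the paper): X_C is coded 0 for 2D and 1 for 3D,
# the scene-brightness coding is a placeholder, the interaction term mu is ignored,
# and the response is left on the Box-Cox transformed scale (lambda = 0.3).

COEF = {
    "const": 21.259,
    "screen_size_1": 1.905,    # dummy: 1 if the first screen size, else 0
    "screen_size_2": 0.707,    # dummy: 1 if the second screen size, else 0
    "scene_brightness": 0.690,
    "mode_3d": -5.749,         # dummy: 1 for 3D mode, 0 for 2D mode (assumed coding)
    "ev": -1.342,
}

def predicted_brightness(screen1, screen2, scene_brightness, mode_3d, ev):
    """Main-effect part of Equation (2); interaction terms are omitted here."""
    return (COEF["const"]
            + COEF["screen_size_1"] * screen1
            + COEF["screen_size_2"] * screen2
            + COEF["scene_brightness"] * scene_brightness
            + COEF["mode_3d"] * mode_3d
            + COEF["ev"] * ev)

# Same conditions, 2D vs. 3D mode: the predicted (transformed) brightness drops by 5.749.
for mode in (0, 1):
    y = predicted_brightness(screen1=1, screen2=0, scene_brightness=1, mode_3d=mode, ev=10)
    print(f"mode_3d={mode}: predicted transformed brightness = {y:.3f}")
```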
From the analysis of the image brightness using this linear regression model, different variables affect the screen brightness to different degrees, although the interactions between variables affect the screen brightness only minimally. However, changing from the 2D to the 3D mode has the most significant effect on the brightness: once the screen changes from the 2D to the 3D mode, the screen brightness declines noticeably.

As detailed above, the linear regression model attempts to account for the effects of the main environmental and hardware factors when estimating the screen brightness value. The brightness values are affected by numerous external factors; however, conversion from the 2D to the 3D mode has the largest impact. In practice, 3D image professionals require a certain amount of time to adjust the 3D image brightness in such scenarios. Therefore, the linear regression model can help to estimate the image brightness if the other environmental factors or hardware conditions are controlled. The faster the shutter speed, the lower the screen image brightness, and low-brightness images that are difficult to observe can even be obtained. The 3D image brightness degradation in response to increased shutter speed is clearly shown in Table 7; further, the 3D image brightness values are significantly lower than the 2D image brightness values for specific shutter conditions.

Table 7. F5.6 aperture experimental results.

Shutter Speed (s) | 2D Mode Brightness Value (lx) | RGB Values          | 3D Mode Brightness Value (lx)
1/125             | 6.6                           | R:125, G:131, B:126 | 2.2
1/250             | 2.2                           | R:74, G:79, B:74    | 0.7

The experimental results show that, when the aperture is F5.6 (Table 7), the image brightness value in 2D mode with a shutter speed of 1/250 s is the same as that in 3D mode with a 1/125-s shutter speed. Therefore, the 3D display exhibits approximately 50% image brightness degradation. The experimental results for the F10 aperture are listed in Table 8. For this aperture setting, the image brightness value in 2D mode with a shutter speed of 1/40 s is the same as that in 3D mode with a 1/10-s shutter speed. Thus, the 3D display has only 50% of the 2D image brightness.

Table 8. F10 aperture experimental results.

Shutter Speed (s) | 2D Mode Brightness Value (lx) | RGB Values          | 3D Mode Brightness Value (lx)
1/20              | 19.8                          | R:193, G:198, B:194 | 6.6
1/40              | 9.4                           | R:138, G:144, B:139 | 3.2
1/50              | 6.6                           | R:117, G:123, B:117 | 2.2
1/80              | 3.2                           | R:81, G:87, B:82    | 1.1

If the low-brightness (value 0) data are removed from the experimental dataset, the average 3D image brightness rises to approximately 39.2% of the 2D image brightness value. This means that, comparing the polarizing 3D and 2D image brightness values, the 3D image brightness decreases by approximately 60.8%. For a 95% confidence interval, degradation values of 52.4%–69.2% are within the reasonable range of consideration, as shown in Table 9. Because the polarized 3D image brightness is approximately 60% less than that of the corresponding 2D image, the 3D image brightness must be increased in order to achieve the same image brightness as the 2D image.

Table 9. Comparison of 2D and 3D image experimental results.

Experimental Item                                           | Values
3D Image Brightness Degradation                             | 60.8%
3D Image Brightness Degradation within 95% Confidence Level | 52.4%–69.2%
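The roughly 2.5-fold lighting increase mentioned below follows directly from the measured retention ratio. This short Python sketch (our arithmetic, not the authors' code) derives the compensation factor from the 60.8% degradation figure and its 95% confidence bounds.

```python
# Brightness compensation factor implied by the measured degradation:
# if 3D retains (1 - degradation) of the 2D brightness, the source must be
# brightened by 1 / (1 - degradation) to match the 2D level.
for label, degradation in [("point estimate", 0.608),
                           ("95% CI lower", 0.524),
                           ("95% CI upper", 0.692)]:
    retained = 1.0 - degradation
    factor = 1.0 / retained
    print(f"{label}: 3D retains {retained:.1%} of 2D brightness -> boost by {factor:.2f}x")
```

For the point estimate, the factor is 1/0.392 ≈ 2.55, consistent with the approximately 2.5-fold increase cited in the discussion below.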
The above reference data are applicable to the production of 3D stereoscopic displays, and show that the 3D image brightness must be increased in order to achieve the same image brightness as a 2D image on a screen. The use of these data can reduce the time required for setting adjustments. Complementary adjustments taking the various environmental factors into consideration are recommended for practical imaging designs, so that the gap in display brightness between 2D and 3D images can be reduced. In summary, the main aim of this study is to measure the brightness loss (or difference) between filming a 3D movie and watching it. The results show a 60.8% brightness loss, which means that the lighting intensity during filming must be increased by roughly 2.5 times to reach the standard brightness. This adjustment brings the viewed 3D movie to a proper brightness.

5. Conclusions

In order to improve image brightness degradation in three-dimensional (3D) displays utilizing stereoscopic images in cinemas or on television, this study aimed to quantify the 3D image brightness degradation in such cases. It also aimed to estimate the image brightness relationship between 3D and two-dimensional (2D) images and, hence, to modify the brightness values of the former. The values measured by capturing a single 2D and a single 3D image were estimated using the photographic principle. Moreover, image brightness data were collected in the 2D and 3D modes for analysis using statistical product and service solutions (SPSS) software, so that the image brightness values could be estimated by the statistical regression model for different environmental factors or hardware devices. Finally, a comparison of the polarizing 3D image brightness value with that of a 2D image based on the experimental results indicated that the 3D image brightness can be decreased by 60.8%. Furthermore, the degradation values of 52.4%–69.2% are within the 95% confidence interval.

Acknowledgments

This work was supported by the Ministry of Science and Technology of Taiwan (Grant No. MOST 103-2221-E-019-045-MY2).

Author Contributions

Hsing-Cheng Yu conceived and designed the experiment and wrote the main part of the manuscript. Xie-Hong Tsai and Ming Wu contributed to the implementation and setup of the experiment. An-Chun Luo and Sei-Wang Chen contributed to the corresponding data analysis. All authors contributed to polishing the paper and improving its fluency.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Banno, A.; Ikeuchi, K. Omnidirectional texturing based on robust 3D registration through Euclidean reconstruction from two spherical images. Comput. Vis. Image Underst. 2010, 114, 491–499.
2. Gao, Z.; Zhang, Y.N.; Xia, Y.; Lin, Z.G.; Fan, Y.Y.; Feng, D.D. Multi-pose 3D face recognition based on 2D sparse representation. J. Vis. Commun. Image Represent. 2013, 24, 117–126.
3. Zhang, Y.N.; Guo, Z.; Xia, Y.; Lin, Z.G.; Feng, D.D. 2D representation of facial surfaces for multi-pose 3D face recognition. Pattern Recognit. Lett. 2012, 33, 530–536.
4. Berretti, S.; Bimbo, A.D.; Pala, P. 3D face recognition using iso-geodesic stripes. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 2162–2177.
5. Queirolo, C.C.; Silva, L.; Segundo, O.R.B.; Segundo, M.P. 3D face recognition using simulated annealing and the surface interpenetration measure. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 206–219.
6. Teng, C.H.; Chen, Y.S.; Hsu, W.H. Constructing a 3D trunk model from two images. Graph. Models 2007, 69, 33–56.
7. Lenz, R.K.; Tsai, R.Y. Techniques for calibration of the scale factor and image center for high accuracy 3-D machine vision. IEEE Trans. Pattern Anal. Mach. Intell. 1988, 10, 713–720.
8. Michael, H.L.; Armstrong, T.J. The effect of viewing angle on wrist posture estimation from photographic images using novice raters. Appl. Ergon. 2011, 42, 634–643.
9. Lowe, B.D. Accuracy and validity of observational estimates of wrist and forearm posture. Ergonomics 2004, 47, 527–554.
10. Nawrot, M.; Joyce, L. The pursuit theory of motion parallax. Vision Res. 2006, 46, 4709–4725.
11. David, G.; Woods, V.; Li, G.Y.; Buckle, P. The development of the quick exposure check (QEC) for assessing exposure to risk factors for work-related musculoskeletal disorders. Appl. Ergon. 2008, 39, 57–69.
12. Zhang, J.; Li, S.; Shen, L.; Hou, C. A comparison of testing metrics between 3D LCD TV and 3D PDP TV. Commun. Comput. Inf. Sci. 2012, 331, 125–132.
13. Zhao, X.; Song, H.; Zhang, S.; Huang, Y.; Sun, Q.; Fan, K.; Hu, J.; Fan, G. 3D definition certification technical specifications for digital TV displays. CESI001-2011. 2011, 5–8.
14. CL-200A Chroma Meter. Available online: http://sensing.konicaminolta.asia/products/cl-200a-chroma-meter/ (accessed on 20 February 2014).
15. Li, P. Box-Cox transformations: An overview, 2005. Available online: http://www.ime.usp.br/~abe/lista/pdfm9cJKUmFZp.pdf (accessed on 11 April 2005).

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).

Study of Three-Dimensional Image Brightness Loss in Stereoscopy

Loading next page...
 
/lp/multidisciplinary-digital-publishing-institute/study-of-three-dimensional-image-brightness-loss-in-stereoscopy-9rJgRciL9n

References

References for this paper are not available at this time. We will be adding them shortly, thank you for your patience.

Publisher
Multidisciplinary Digital Publishing Institute
Copyright
© 1996-2019 MDPI (Basel, Switzerland) unless otherwise stated
ISSN
2076-3417
DOI
10.3390/app5040926
Publisher site
See Article on Publisher Site

Abstract

Appl. Sci. 2015, 5, 926-941; doi:10.3390/app5040926 OPEN ACCESS applied sciences ISSN 2076-3417 www.mdpi.com/journal/applsci Article Study of Three-Dimensional Image Brightness Loss in Stereoscopy 1, 1,† 2,† 3,† 2,† Hsing-Cheng Yu *, Xie-Hong Tsai , An-Chun Luo , Ming Wu and Sei-Wang Chen Department of Systems Engineering and Naval Architecture, National Taiwan Ocean University, 2 Pei-Ning Road, Keelung 20224, Taiwan; E-Mail: jack_tsai@itri.org.tw Institute of Computer Science and Information Engineering, National Taiwan Normal University, Taipei 10610, Taiwan; E-Mails: anchunlou@itri.org.tw (A.C.L.); schen@csie.ntnu.edu.tw (S.W.C.) Electronics and Optoelectronics Research Laboratories, Industrial Technology Research Institute, Hsinchu 31040, Taiwan; E-Mail: itria40212@itri.org.tw These authors contributed equally to this work. * Author to whom correspondence should be addressed; E-Mail: hcyu@ntou.edu.tw; Tel.: +886-2-2462-2192 (ext. 6059); Fax: +886-2-2462-5945. Academic Editors: Wen-Hsiang Hsieh and Takayoshi Kobayashi Received: 26 July 2015 / Accepted: 13 October 2015 / Published: 21 October 2015 Abstract: When viewing three-dimensional (3D) images, whether in cinemas or on stereoscopic televisions, viewers experience the same problem of image brightness loss. This study aims to investigate image brightness loss in 3D displays, with the primary aim being to quantify the image brightness degradation in the 3D mode. A further aim is to determine the image brightness relationship to the corresponding two-dimensional (2D) images in order to adjust the 3D-image brightness values. In addition, the photographic principle is used in this study to measure metering values by capturing 2D and 3D images on television screens. By analyzing these images with statistical product and service solutions (SPSS) software, the image brightness values can be estimated using the statistical regression model, which can also indicate the impact of various environmental factors or hardware on the image brightness. In analysis of the experimental results, comparison of the image brightness between 2D and 3D images indicates 60.8% degradation in the 3D image brightness amplitude. The experimental values, from 52.4% to 69.2%, are within the 95% confidence interval. Appl. Sci. 2015, 5 927 Keywords: 3D; image brightness loss; stereoscopy; three-dimensional image 1. Introduction One of the primary challenges encountered in the development of three-dimensional (3D) displays is image brightness. Even under application of current stereoscopic techniques, viewers experience a certain amount of image brightness degradation when viewing 3D images in cinemas or on stereoscopic televisions. Regardless of whether the images are recorded with half-mirror or double-parallel type filming equipment, the brightness is impaired. Both polarizing film group displays and the viewing of 3D images with 3D glasses cause brightness degradation. 3D displays must send separate images to the right and left eyes. However, image brightness loss occurs as a result of crosstalk, which manifests in the case of an overly high contrast between the images. If the brightness is increased to compensate for the 3D image brightness loss in stereoscopy only, spectators will not have the same visual experience as with two-dimensional (2D) images. Furthermore, some of the light is scattered when spectators wear 3D glasses; only half the light intensity from the screen is reflected upon the eyes, and the image brightness is reduced by 30%–50%. 
Clearly, the image brightness in 3D stereoscopic images is lower than that in 2D images during film playback. In addition, some spectators have strong antipathy towards 3D films, as a result of feelings of dizziness and other uncomfortable physiological phenomena [1–5]. Because the inter-ocular disparities between human eyes are 5–6.5 cm, the resulting perspective is slightly different when both retinas receive the same images. This difference is also called visual disparity. Rods and cones are the retinal cells that control visual signals; they are responsible for converting the brightness of light, color, and other information into optic nerve messages in the brain. Fusing of the two different perspectives generates depth perception, so that human eyes can discern 3D objects visually [6,7]. This study focuses on whether the 3D images in cinemas or on stereoscopic television can be adjusted for decreased 3D light degradation. 2. Three-Dimensional Image and Brightness Loss 2.1. Theory of Stereo Vision The 3D visual phenomenon is generated by the parallax effect, which includes both binocular parallax and motion parallax. Binocular parallax is due to the different perspectives of the eyes, resulting in slightly different image messages being received by the left and right eyes. The two received images are combined in the brain to synthesize the stereoscopic effect [8,9], as shown in Figure 1. Motion parallax is due to changes in the observation point, and is affected by the distance of an object relative to a moving background, as shown in the simplified illustration of this parallax in Figure 2. When observed from point A, the object appears to be left of the butterfly, but for observation from point B, the object appears to be to the right of the butterfly. Appl. Sci. 2015, 5 928 Figure 1. Two images sent to the brain to synthesize the stereoscopic effect. Figure 2. Motion parallax. Using either of the parallax effects, human eyes can produce 3D visual effects when viewing an object. To construct an image display device that is capable of producing 3D visual effects, one can utilize the parallax technique to adjust the brightness of light, color, and viewing direction, so that the messages received by the left and right eyes have a difference in viewing angle, causing a stereoscopic effect. The brain activity during the synthesis process depends on complex factors related to human evolution, psychology, and medicine. However, the current plane 3D imaging technology remains focused on the concept of the “right and left eyes receiving different images,” which is related to Appl. Sci. 2015, 5 929 binocular parallax. In other words, provided some analogy of the “right and left eyes receiving different images” of the visual environment is employed, viewers can observe 3D images on a 2D plane screen [10,11]. Currently, on the basis of the binocular parallax theory, 3D image display technologies are roughly divided into stereoscopic and auto-stereoscopic types. 2.2. Three-Dimensional Display and Three-Dimensional Glasses There are two kinds of stereoscopic displays: passive and active. Passive polarization glasses, which are also called passive 3D glasses, are an example of passive stereoscopic technology. The working principle of this technology is to paste a micro-retarder layer on the front of a general television or monitor, using the polarization direction of the light to separate the left- and right-eye images. 
Then, the passive polarization glasses can correctly cause each of the viewer's eyes to see the appropriate right and left images, producing a 3D effect. The advantage of this approach is its low cost, but the screen resolution is reduced to half the original 2D image resolution, and the overall brightness is also reduced. Color-coded anaglyph 3D glasses can be divided into those using red and cyan, green and red, or blue and yellow filters, as shown in Figure 3. Figure 3. Classification by color-code: (a) red and cyan and (b) red and green anaglyph 3D glasses used to create 3D stereo images. Active 3D glasses, shown in Figure 4, are another type of 3D glasses that are also called shutter glasses. In this process, the 3D screen continually alternates the display between the left- and right-eye images at a frequency of 120–240 Hz, with the right and left eyes being quickly switched and shielded alternately by the shutter glasses. This causes the left and right eyes to see the correct respective images, causing the brain to perceive a 3D image. Appl. Sci. 2015, 5 930 Figure 4. Active 3D glasses: (a) right and (b) left frame displays. The main advantage of this technology is that the picture resolution, color, and stereoscopic 3D effect are not sacrificed. However, some users of active 3D glasses suffer from dizziness and discomfort. The other advantages are that there is less blur, this technology is low cost, and it is applicable to television or computer screens as well as projectors, provided their update frequencies can meet the requirements of the glasses. Therefore, active shutter glasses are being used in the majority of the 3D display systems being introduced onto the market at present, including 3D televisions, 3D glasses, and 3D cinema screens. 2.3. Three-Dimensional Image Brightness Loss The sources of the loss in brightness can be divided into two categories: (a) Those due to the half-mirror 3D camera rig used to film the 3D content; and (b) Those resulting from the 3D glasses during viewing of the 3D device. Note that the degrees of brightness degradation caused by polarization glasses and shutter glasses are almost identical, as both of these devices allow only half of the light to penetrate the lenses, whether in the spatial or temporal domain. The sources of the brightness differences between 2D and 3D images that can have a visual impact on the eye can also be divided into two categories: (a) Control during filming of a scene, with adjustments being made in accordance with the ambient light (for example, the aperture for the right-eye image may need to be increased so that the brightness of the images received by both eyes are within a similar range). This prevents the eyes from becoming rapidly fatigued, while also reducing the time and cost of dimming of the film during post-production. (b) The screen brightness may be too low or too high during 2D or 3D viewing. In this case, the eyes can begin to suffer from significant discomfort within a short period of time. This study focuses on the brightness of 2D and 3D images within the range where a viewer can experience the same level of brightness for both eyes, while not experiencing physical discomfort such as eye soreness, dizziness, vomiting, etc., during the viewing of the 3D images. 3. Experiment Design for Brightness Loss Measurement In general, commercially available 3D LCD televisions are adjusted by standard measurement methods when the products are manufactured to leave factories [12]. 
Moreover, the screen image brightness in average can be obtained by a light meter. The measurement method of local Appl. Sci. 2015, 5 931 block-by-block does not need accurately to acquire precise brightness because human eyes can discern average brightness of the screen image. Hence, the 3D LCD televisions are adopted in the study to compare the brightness difference between 2D and 3D films. When comparing 2D and 3D films with the same base brightness, viewers tend to find that 2D images are brighter than 3D images. The adjustments necessary to ensure similar image brightness perception by both eyes are mainly decided based on experimental results. Then, the increases and decreases in brightness are controlled by tuning the filming aperture setting and the shutter speed. This can reduce the overall filming cost and time. In this study, the ambient brightness was first set for the filming of a scene. Two lighting groups, with 3200 and 7500 lumens (lx) were used as spatial lighting. Then, test images were recorded in an indoor studio, as shown in Figure 5. The experimental and hardware conditions are listed in detail in Table 1. In the experiment, the normal ambient exposure value (EV) was set to ISO 400, which is the most commonly used value. Five aperture values (F4, F5.6, F8, F11, and F16) were selected, which were selected depending on the experiment. The shutter speed (4–1/3200 s) employed in conjunction with each of the five aperture values was used as a cross-aperture experimental variable. Figure 5. Recording environment. Appl. Sci. 2015, 5 932 Table 1. Detailed environmental and hardware parameters. Aperture EV Shutter Speed (s) ISO Scene Recording Brightness (lx) F4 −2–+2 1/50–1/3200 400 3200 and 7500 F5.6 −2–+2 1/6–1/500 400 3200 and 7500 F7.1 −2–+2 1/6–1/500 400 3200 and 7500 F9 −2–+2 3–1/500 400 3200 and 7500 F10 −2–+2 4–1/500 400 3200 and 7500 The experimental equipment included an illuminometer (i.e. CL200A, Konica Minolta, Tokyo, Japan), Canon 400D camera (Canon, Fukushima Prefecture, Japan), Vizio 32- and 47-inch television screens (WUSH, Irvine, CA, USA), polarized glasses, a Sony 52-inch television screen (Sony, Tokyo, Japan), and flash glasses are shown in Table 2, respectively. The illuminometer is a brightness measuring tool, which produces readings in lx. The height of the darkroom was equivalent to three televisions, and the optical axis of the optical test equipment was oriented vertically to the center of the display screen, at a distance of three (high-definition television (HDTV)) and four (standard-definition television (SDTV)) times the display screen height. This was so that all of the light was received as the average light of a single image [13], as shown in Figure 6. A test image is displayed with both 2D and 3D mode on a 3D TV, and luminance measurements are performed for each mode. In this way, the range of luminance for 2D mode and 3D mode can be found. In addition, there are only grayscale images but color image are not need in this experiment to enhance actual measurement images. Furthermore, the grayscale images have been added, as shown in Figures 7 and 8, respectively. Figure 6. Schematic of darkroom for display brightness measurement [13]. Appl. Sci. 2015, 5 933 Table 2. 3D TV vendors. Vendor Model 3D Glasses Used Time (h) Year Vizio VL320M 32-inch Polarized Glasses 50 2012 Vizio M420KD 42-inch Polarized Glasses 45 2012 Sony KDL-52XBR7 52-inch Flash Glasses 70 2011 Figure 7. F5.6 aperture 2D/3D test images. Figure 8. F10 aperture 2D/3D test images. 
Note that the CL200A illuminometer is one of the commonly used models in the industry and is also relatively easy to obtain. According to statistical-method-aided analysis, the results of such experiments can be applied to the industrial sector. Furthermore, the specifications of this device Appl. Sci. 2015, 5 934 conform to Japanese Industrial Standards (JIS) C1609-1:2006 Class AA, and are extremely consistent with the International Commission on Illumination (CIE) standard observer curves [14]. After fixing the ISO and aperture conditions, images were recorded for different shutter conditions. To avoid obtaining different results for the same ambient light, the parameters were typically only adjusted after the last shooting iteration for a given setup. The camera was turned to aperture priority (AV) mode, and the camera function key “*” was pressed after the aperture adjustment. Then, the following steps were performed in order to obtain the optimal shutter value: 1. The camera was turned to M mode and images were recorded at the given shutter value, ensuring that the EV was 0. 2. The images were recorded within the EV −2 and EV +2 ranges. As a result, each group had 19 datasets. 3. The obtained images were presented on the television screen, and the image brightness was measured in the 2D and 3D modes. 4. The data was collected for analysis using statistical product and service solutions (SPSS) software (IBM, Chicago, IL, USA). 4. Experimental Results An image brightness regression model was used to analyze the screen image brightness relationship with the following variables:  Dependent variable (Y): Screen image brightness;  Independent variables (X): (1) Screen size; (2) Screen recording brightness; (3) Mode (2D or 3D); (4) Photographic equipment EV; (5) Interactions between variables, as shown in Table 3. Depending on the variables’ regression standardized residuals, it was determined whether the distribution of the sample was normal, for which the bell curve is called a completely normal distribution curve. Because of sampling errors, there was a gap between the actual observed-value histogram and the normal distribution curve (i.e., Figure 9). However, no extreme values beyond three standard deviations were found in this experiment. As a result, the sample values corresponded naturally with the normal distribution. The study then examined the variables’ standardized regression residual error on the normal P-P diagram, which exhibits a 45° line from lower left to upper right (i.e., Figure 10). Therefore, the sample observations are approximately in line with the basic assumption, as shown in Tables 4–6. Appl. Sci. 2015, 5 935 Table 3. Dependent and independent variables. Variable Type Name Values Because of the nature of the luminance variables, there Dependent is no normal distribution, so a Box-Cox transform is Screen Image Brightness Variable (Y) used to convert the variable (λ = 0.3), so that ε (Note 1) has a normal distribution. (1) Screen Size 32, 47, and 52 inch (2) Field Brightness 3200 and 7500 lx (3) 2D or 3D mode 2D and 3D modes Converted using the camera’s shutter aperture Independent (4) Photographic combination. Variables (X) Equipment EV Conversion Formula: EV = log (N /t), where N is the aperture (F value), and t is the shutter speed (s). (5) Interactions between The interactions between each variable. Variables The risk-free rate, ε, extracts a random sample from a normal distribution with a mean of 0 and a standard deviation of 1. Figure 9. 
Standardized residuals histogram. Appl. Sci. 2015, 5 936 Figure 10. Standardized regression residuals of normal P-P diagram. Table 4 shows the measures used in the brightness regression model. In this model, R is used to illustrate the explanatory power of the entire pattern. However, this measure tends to overestimate phenomena depending on the sample size; the smaller the sample, the more prone the model is towards overestimation. Therefore, the majority of researchers use R , which is the error variance and variable (Y) divided by the degree of freedom. Table 4. Brightness regression model summary. Correlation Coefficient of Adjusted Coefficient of Coefficient (R) Determination (R ) Determination ( R ) Brightness Regression Model 0.995 0.990 0.990 Analysis of variance (ANOVA) is a particular form of statistical hypothesis testing that is widely applied in the analysis of experimental data (i.e., Table 5). Statistical hypothesis testing is a method of decision-making based on data. If the test results (calculated by the null hypothesis) fall within a certain likelihood of not being accidental, they are deemed to be statistically significant. For example, when the “p value”, which is calculated from the data, is less than the defined critical significance level, the original hypothesis can be deemed invalid. A regression coefficient of 0 can indicate that the variable has no effect on the model. Table 5. Analysis of variance (ANOVA). Source Sum of Squares Degrees of Freedom Mean Square F Value Return 5382.379 10 538.238 10200.217 Residual 55.617 1054 0.053 Total 5437.996 1064 Appl. Sci. 2015, 5 937 The statistical coefficients of the linear regression model are presented in Table 6. Note that mode switching between 2D and 3D images has the most significant impact on the screen image brightness. That is, once the screen is switched from the 2D to 3D mode, the image brightness on the screen exhibits a very significant decline. Table 6. Statistical coefficients of linear regression model. Non-Standardized Standardized Coefficients Coefficients Brightness Regression Variables T B Standard Error β (A) Constant 21.259 0.115 184.892 (1) Screen Size 1 1.905 0.145 0.399 13.180 (2) Screen Size 2 0.707 0.145 0.147 4.890 (B) Shooting Scene Brightness 0.690 0.020 0.153 35.235 (C) 2D or 3D Mode −5.749 0.119 −1.270 −48.340 (D) Photographic Equipment EV −1.342 0.008 −1.037 −169.750 (E) Interaction Value of Variables (1) and (C) −0.314 0.030 −0.051 −10.499 (1) and (D) −0.096 0.010 −0.293 −9.753 (2) and (D) −0.045 0.010 −0.137 −4.549 (B) and (C) −0.137 0.029 −0.026 −4.809 (C) and (D) 0.311 0.008 0.991 37.541 Each of the independent variables is explained and analyzed below. (1) Screen size After the 2D/3D mode variable, the screen size is the most important environmental variable when measuring brightness in the darkroom. The linear regression model assumes three screen sizes as separate dummy variables. The statistical results show a significant impact on the brightness, and larger screens have a positive effect on the brightness value, falling within the range of reasonable consideration. Therefore, the screen size and interactions between other factors significantly affect the brightness. (2) Scene recording brightness Only two brightness values were used in this study: 3200 and 7500 lx. The statistical results show that these two variables are within the range of reasonable consideration. 
(3) Mode (2D or 3D) This variable is the core consideration of the experiment, and its significance is apparent in the results of the statistical analyses. Thus, this statistical coefficient affects the screen image brightness very significantly. The experimental measurements and the linear regression model prove that the 3D image brightness is lower than that of the 2D images. To estimate the screen image brightness value, the linear regression model takes the 2D and 3D modes as dummy variables. The expected screen display mode is set in the linear regression model so that the estimated image brightness value can be obtained. Appl. Sci. 2015, 5 938 (4) Photographic equipment EV In the experiment with different aperture and shutter conditions, this setting has a direct impact upon the image brightness. A larger EV indicates less exposure, and the statistical results are also consistent with this finding. (5) Interaction values of variables Each of the variables interacts with the others to a greater or lesser degree, but the coefficients of a given variable have a lesser effect on the screen brightness than interactions with all other major independent variables. Figure 9 is a standardized residuals histogram that shows the regression distribution of the standardized residuals. Figure 10 is a normal probability plot diagram (P-P diagram). In statistical analysis, it is often necessary to determine whether a dataset is from a normal population using regression analysis or multivariate analysis. Of all the analysis methods, the use of statistical graphics to make such a judgment is relatively easy and convenient. With a P-P diagram and a least-squares line, the user can easily determine whether or not the entered data are from a normal population. Another function is to aid researchers in interpreting the meaning of the P-P plot. The least-squares line is obtained from the linear equation derived from the method of least squares, which is a linear equation that obtains the sum of the squared residuals between the least-squares line and the data minimal. The regression equations adopted in this study are primarily based on concepts from reference [15], which is used as a mathematical model for derivation of the basic theory. After transforming the variables of the regression model via Box-Cox to λ = 0.3, ε matches the normal distribution, as does the R model. The linear regression model parameters are expressed in terms of YX =+ββ +ε , ii 0 i i (1) where Y is a random variable, X is a known fixed constant, ε is an unobservable, and i = 1, … , n i i i x (i-th test; Y is the reaction value corresponding to X ). Expressing the main variables in the i i experimental linear model formula yields YX =+ 21.259 1.905 + 0.707X+ 0.69X− 5.749X−1.342X+μ , 12 B C D (2) where X is the size of screen 1, X is the size of screen 2, X is the scene recording brightness, X 1 2 B C indicates the mode (2D or 3D), X is the photographic equipment EV, and μ is the interaction value of the variables. From analysis of the image brightness using this linear regression model, different variables affect the screen brightness by different degrees, although the interactions between variables affect the screen brightness only minimally. However, changing from the 2D to 3D mode has the most significant effect on the brightness. Once the screen changes from the 2D to 3D mode, the screen brightness declines noticeably. 
As detailed above, the linear regression model attempts to account for the effects of the main environmental and hardware factors when estimating the screen brightness value. The brightness values are affected by numerous external factors; however, conversion from the 2D to the 3D mode has the largest impact. In practice, 3D image professionals require a certain amount of time to adjust the 3D image brightness in such scenarios. The linear regression model can therefore help to estimate the image brightness if the other environmental factors and hardware conditions are controlled. The faster the shutter speed, the lower the screen image brightness; images with brightness values so low that they are difficult to observe can even be obtained. The 3D image brightness degradation in response to increased shutter speed is clearly shown in Table 7; further, the 3D image brightness values are significantly lower than the 2D image brightness values for the same shutter conditions.

Table 7. F5.6 aperture experimental results.

Shutter Speed (s)   2D Mode Brightness Value (lx)   RGB Values             3D Mode Brightness Value (lx)
1/125               6.6                             R:125, G:131, B:126    2.2
1/250               2.2                             R:74, G:79, B:74       0.7

The experimental results show that, when the aperture is F5.6 (Table 7), the image brightness value in the 2D mode with a shutter speed of 1/250 s is the same as that in the 3D mode with a 1/125-s shutter speed. Therefore, the 3D display exhibits approximately 50% image brightness degradation. The experimental results for the F10 aperture are listed in Table 8. For this aperture setting, the image brightness value in the 2D mode with a shutter speed of 1/40 s is the same as that in the 3D mode with a 1/10-s shutter speed. Thus, the 3D display delivers only about 50% of the 2D image brightness.

Table 8. F10 aperture experimental results.

Shutter Speed (s)   2D Mode Brightness Value (lx)   RGB Values             3D Mode Brightness Value (lx)
1/20                19.8                            R:193, G:198, B:194    6.6
1/40                9.4                             R:138, G:144, B:139    3.2
1/50                6.6                             R:117, G:123, B:117    2.2
1/80                3.2                             R:81, G:87, B:82       1.1

If the low-brightness (value 0) data are removed from the experimental dataset, the average 3D image brightness rises to approximately 39.2% of the 2D image brightness value. In other words, comparing the polarizing 3D and 2D image brightness values, the 3D image brightness decreases by approximately 60.8%. For a 95% confidence interval, degradation values of 52.4%–69.2% are within the reasonable range of consideration, as shown in Table 9. Because the polarized 3D image brightness is approximately 60% lower than that of the corresponding 2D image, the 3D image brightness must be increased in order to achieve the same image brightness as the 2D image.

Table 9. Comparison of 2D and 3D image experimental results.

Experimental Item                                               Values
3D Image Brightness Degradation                                 60.8%
3D Image Brightness Degradation within 95% Confidence Level     52.4%–69.2%

The above reference data are applicable to the production of 3D stereoscopic displays and show that the 3D image brightness must be increased in order to achieve the same image brightness as a 2D image on a screen. The use of these data can reduce the time required for setting adjustments. Complementary adjustments that take the various environmental factors into consideration are recommended for practical imaging designs, so that the gap in display brightness between 2D and 3D images can be reduced. A short numerical cross-check of the degradation and the implied compensation factor is sketched below.
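The Python sketch below computes the 3D-to-2D brightness ratios for the rows of Tables 7 and 8 and the lighting compensation factor implied by the reported 60.8% degradation. The tabulated rows are only a small subset of the full dataset (which also excludes the zero-brightness samples), so the simple average computed here is not expected to reproduce the 60.8% figure exactly.

```python
# Cross-check using the measured (2D brightness, 3D brightness) pairs, in lx,
# taken from Tables 7 and 8 at identical aperture/shutter settings.
pairs = [
    (6.6, 2.2), (2.2, 0.7),      # Table 7, F5.6 aperture
    (19.8, 6.6), (9.4, 3.2),     # Table 8, F10 aperture
    (6.6, 2.2), (3.2, 1.1),
]

ratios = [b3 / b2 for b2, b3 in pairs]        # fraction of 2D brightness retained in 3D
mean_retained = sum(ratios) / len(ratios)
print(f"mean retained brightness (these rows): {mean_retained:.1%}")
print(f"mean degradation (these rows):         {1 - mean_retained:.1%}")

# Compensation factor implied by the 60.8% degradation reported for the full dataset:
reported_degradation = 0.608
factor = 1.0 / (1.0 - reported_degradation)
print(f"required lighting increase: about {factor:.2f}x")   # roughly 2.5-fold
```

The resulting factor of about 2.55 is the basis for the roughly 2.5-fold lighting increase discussed below.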
In summary, the main aim of this study is to measure the brightness loss (or difference) between filming a 3D movie and watching it. The results show a 60.8% brightness loss, which means that the lighting intensity during filming must be increased by approximately 2.5 times to reach the standard brightness; this modification raises the viewed 3D movie to the proper brightness.

5. Conclusions

In order to improve image brightness degradation in three-dimensional (3D) displays utilizing stereoscopic images in cinemas or on television, this study aimed to quantify the 3D image brightness degradation in such cases. It also aimed to estimate the image brightness relationship between 3D and two-dimensional (2D) images and, hence, to modify the brightness values of the former. The values measured by capturing 2D and 3D images were estimated using the photographic principle. Moreover, image brightness data were collected in the 2D and 3D modes and analyzed using statistical product and service solutions (SPSS) software, so that the image brightness values could be estimated by the statistical regression model for different environmental factors or hardware devices. Finally, a comparison of the polarizing 3D image brightness value with that of a 2D image based on the experimental results indicated that the 3D image brightness is decreased by approximately 60.8%; degradation values of 52.4%–69.2% lie within the 95% confidence interval.

Acknowledgments

This work was supported by the Ministry of Science and Technology of Taiwan (Grant No. MOST 103-2221-E-019-045-MY2).

Author Contributions

Hsing-Cheng Yu conceived and designed the experiment and contributed the main part of the manuscript writing. Xie-Hong Tsai and Ming Wu contributed to implementing and setting up the experiment. An-Chun Luo and Sei-Wang Chen contributed to the corresponding data analysis. All authors contributed to polishing the paper to improve its fluency.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Banno, A.; Ikeuchi, K. Omnidirectional texturing based on robust 3D registration through Euclidean reconstruction from two spherical images. Comput. Vis. Image Underst. 2010, 114, 491–499.
2. Gao, Z.; Zhang, Y.N.; Xia, Y.; Lin, Z.G.; Fan, Y.Y.; Feng, D.D. Multi-pose 3D face recognition based on 2D sparse representation. J. Vis. Commun. Image Represent. 2013, 24, 117–126.
3. Zhang, Y.N.; Guo, Z.; Xia, Y.; Lin, Z.G.; Feng, D.D. 2D representation of facial surfaces for multi-pose 3D face recognition. Pattern Recognit. Lett. 2012, 33, 530–536.
4. Berretti, S.; Bimbo, A.D.; Pala, P. 3D face recognition using iso-geodesic stripes. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 2162–2177.
5. Queirolo, C.C.; Silva, L.; Bellon, O.R.P.; Segundo, M.P. 3D face recognition using simulated annealing and the surface interpenetration measure. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 206–219.
6. Teng, C.H.; Chen, Y.S.; Hsu, W.H. Constructing a 3D trunk model from two images. Graph. Models 2007, 69, 33–56.
7. Lenz, R.K.; Tsai, R.Y. Techniques for calibration of the scale factor and image center for high accuracy 3-D machine vision. IEEE Trans. Pattern Anal. Mach. Intell. 1988, 10, 713–720.
8. Michael, H.L.; Armstrong, T.J. The effect of viewing angle on wrist posture estimation from photographic images using novice raters. Appl. Ergon. 2011, 42, 634–643.
9. Lowe, B.D. Accuracy and validity of observational estimates of wrist and forearm posture. Ergonomics 2004, 47, 527–554.
10. Nawrot, M.; Joyce, L. The pursuit theory of motion parallax. Vision Res. 2006, 46, 4709–4725.
11. David, G.; Woods, V.; Li, G.Y.; Buckle, P. The development of the quick exposure check (QEC) for assessing exposure to risk factors for work-related musculoskeletal disorders. Appl. Ergon. 2008, 39, 57–69.
12. Zhang, J.; Li, S.; Shen, L.; Hou, C. A comparison of testing metrics between 3D LCD TV and 3D PDP TV. Commun. Comput. Inf. Sci. 2012, 331, 125–132.
13. Zhao, X.; Song, H.; Zhang, S.; Huang, Y.; Sun, Q.; Fan, K.; Hu, J.; Fan, G. 3D definition certification technical specifications for digital TV displays. CESI001-2011, 2011, 5–8.
14. CL-200A Chroma Meter. Available online: http://sensing.konicaminolta.asia/products/cl-200a-chroma-meter/ (accessed on 20 February 2014).
15. Li, P. Box-Cox transformations: An overview, 2005. Available online: http://www.ime.usp.br/~abe/lista/pdfm9cJKUmFZp.pdf (accessed on 11 April 2005).

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).
