Day4


contents

  • color normalization & detection for multi-colored objects
  • multiple objects

color normalization & detection for multi-colored objects

illumination color estimation

Many illumination color estimation methods have been proposed.

Gray-World

This is the most famous and basic method, and it is based on the "Gray-World Assumption".

Gray-World Assumption
If a scene is illuminated by an achromatic light source, the average color of all pixels is gray (achromatic). Under this assumption, any deviation of the image's average color from gray is caused by the light source, so the average color can be used as the illumination color.
  • Line 09-10 (averaging color)
    b, g, r = cv2.split(img)
    ave = [b.mean(), g.mean(), r.mean()]
    • “cv2.split” splits the image into its individual channels
      • “b.mean()” returns the mean value of the b channel
  • Line 13-14 (correcting color)
    for c in range(3):
        res[:,:,c] /= ave[c]
    • This process makes the average color of the image achromatic (gray)
  • Line 15 (normalization)
    res *= 255/res.max()
    • “res.max()” returns the maximum value of res
    • the image range becomes 0 to 255
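
Putting the three steps above together, a minimal self-contained sketch of Gray-World correction could look as follows. This is only a sketch: the function name gray_world, the small epsilon guard, and the file names "input.png" / "resultGW.png" are assumptions, not part of the exercise file.

    import cv2
    import numpy as np

    def gray_world(img_bgr):
        # Gray-World white balance: scale each channel so the average color becomes achromatic
        res = img_bgr.astype(np.float64)
        b, g, r = cv2.split(res)
        ave = [b.mean(), g.mean(), r.mean()]   # estimated illumination color (BGR)
        for c in range(3):
            res[:, :, c] /= (ave[c] + 1e-8)    # equalize the channel means
        res *= 255 / res.max()                 # rescale the range to 0..255
        return res.astype(np.uint8)

    # hypothetical usage ("input.png" is a placeholder file name)
    img = cv2.imread("input.png")
    cv2.imwrite("resultGW.png", gray_world(img))
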

but... "Gray-World" often fails to estimate the illumination color in cases like the following!!

  • Yellow wall, green forest, etc.

These scenes have biased colors even under white light sources.


The average color of gray (achromatic) objects equals the illumination color.

If we can find gray (achromatic) objects in the scene, that would be great!!

Gray-Pixels

This method estimates gray-pixel regions with an Illumination Invariant Measure (IIM), and calculates the illumination color by averaging their colors.

  1. calculate Illumination Invariant Measure (IIM)
     The captured image value $I_c(x)$, with $c \in \{R, G, B\}$, can normally be expressed as the product of the illuminant $L_c(x)$ and the surface reflectance $R_c(x)$:
     
     $I_c(x) = L_c(x)\,R_c(x), \quad c \in \{R, G, B\}$
     
     We reasonably assume that the illuminant $L_c$ is uniform within small local patches (at least with a size of 3×3). Taking the logarithm gives
     
     $\log I_c(x) = \log L_c(x) + \log R_c(x)$, where $\log L_c$ is constant within the patch,
     
     so the standard deviation of $\log I_c$ over a small local patch $\Omega(x)$ is
     
     $\sigma_c(x) = \sqrt{\dfrac{1}{|\Omega(x)|}\sum_{y \in \Omega(x)} \bigl(\log I_c(y) - \overline{\log I_c}\bigr)^2} = \sqrt{\dfrac{1}{|\Omega(x)|}\sum_{y \in \Omega(x)} \bigl(\log R_c(y) - \overline{\log R_c}\bigr)^2}$
     
     where $\overline{\log I_c}$ is the mean of $\log I_c$ over the patch. This $\sigma_c(x)$ does not depend on the illuminant component $L_c$, so it is the IIM: the standard deviation of the surface reflectance in local patches. (A small numerical check of this invariance is sketched right after these steps.)
     
     
  2. If a target patch is gray (achromatic), the difference of $\sigma_c(x)$ between the 3 color channels should be small.
     
     $P(x) = \sqrt{\dfrac{1}{3}\sum_{c \in \{R, G, B\}} \dfrac{\bigl(\sigma_c(x) - \bar{\sigma}(x)\bigr)^2}{\bar{\sigma}(x)}}$, where $\bar{\sigma}(x)$ is the mean of $\sigma_c(x)$ over the 3 channels.
     
  3. Bright pixels might have more illuminant component than dark pixels, so $P(x)$ is divided by the luminance and smoothed with a local average.
     
     $GI(x) = \mathrm{mean}_{7 \times 7}\!\left(\dfrac{P(x)}{Y(x)}\right)$, where $Y(x) = \dfrac{1}{3}\bigl(I_R(x) + I_G(x) + I_B(x)\bigr)$.
     
  4. Remove invalid pixels (e.g. saturated colors, blocked-up shadows)
  5. Gray-Pixels regards the pixels with the lowest 1% of GI values as gray pixels.
  6. The average color of these gray pixels is the illumination color.
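
As a quick sanity check of step 1, the following tiny sketch (the reflectance values are made up for illustration) multiplies one reflectance patch by several uniform illuminants and shows that the standard deviation in log space does not change:

    import numpy as np

    # a made-up 3x3 reflectance patch for one channel (values are arbitrary)
    R = np.array([[0.2, 0.5, 0.8],
                  [0.3, 0.6, 0.4],
                  [0.7, 0.1, 0.9]])

    for L in (0.4, 1.0, 2.5):        # three different uniform illuminants
        I = L * R                    # captured intensities: I = L * R
        print(L, np.log(I).std())    # std of log I over the patch
    # all three printed std values are identical: std(log I) = std(log R)
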
  • Line 17 (logarithmic transform)
    img_ln = np.log(img + eps)
    • "eps" avoids taking the logarithm of zero
  • Line 20 (calculate standard deviation in small local patch 3×3)
    sd = stdfilt2d(img_ln)
    • the function "stdfilt2d" is already prepared
  • Line 22-31 (calculate P, Y)
    # average of the 3 channels' standard deviations
    sd_mu = sd.mean(axis=2)
    # duplicate sd_mu into 3 channels ([w,h] -> [w,h,3])
    sd_mu = np.tile(sd_mu[:, :, None], [1, 1, 3])
    # calculate P, Y, and GI
    tmp = (sd - sd_mu) * (sd - sd_mu) / (sd_mu + eps)
    P = np.sqrt(np.average(tmp, axis=2))
    Y = np.average(img, axis=2)
    GI = cv2.boxFilter(P / (Y + eps), ddepth=-1, ksize=(7, 7))
    • cv2.boxFilter is one of the averaging (mean) filters
  • Line 36-43 (calculate GI, and remove invalid pixels)
    # remove invalid pixels (e.g. saturated colors, blocked-up shadows)
    maxGI = np.max(GI)
    bth = 0.95
    GI[sd.sum(axis=2) == 0] = maxGI # SDr == SDg == SDb == 0
    GI[img[:, :, 0] > bth] = maxGI # over-exposure or saturated color
    GI[img[:, :, 1] > bth] = maxGI # over-exposure or saturated color
    GI[img[:, :, 2] > bth] = maxGI # over-exposure or saturated color
    GI[L < 0.1] = maxGI # blocked-up shadows (very dark pixels)
  • Line 45-47 (calculate gray-pixel map)
    # regard the lower num% pixels of GI as gray pixels.
    th = np.nanpercentile(GI, num)
    GPmap = GI < th # thresholding
    • "np.nanpercentile" returns the value at the "num"-th percentile of GI (ignoring NaNs), which is used here as the threshold
  • Line 55-60 (calculate the average color of these pixels)
    # average colors of the gray pixels in each channel
    lighting_color = np.zeros(3)
    b, g, r = cv2.split(img)
    lighting_color[0] = np.mean(b[GPmap == True])
    lighting_color[1] = np.mean(g[GPmap == True])
    lighting_color[2] = np.mean(r[GPmap == True])
    • b[GPmap == True] selects only the pixels that satisfy "GPmap == True"
  • Line 68 (correct color)
    resGP = correctColor(img.copy(), lighting_color)
    • the function "correctColor" is already prepared
    • the color correction is the same as in Gray-World
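
For reference, here is a minimal end-to-end sketch of the whole Gray-Pixels pipeline above, assuming a float BGR image in [0, 1]. This is only a sketch: "stdfilt2d" and "correctColor" are re-implemented as simple stand-ins because the prepared versions from the exercise file are not shown, the per-channel saturation test is written as a single any() over channels, the "L < 0.1" test from the notes is approximated with the luminance Y, and "input.png" / "resultGP.png" are placeholder file names.

    import cv2
    import numpy as np

    def stdfilt2d(img, ksize=3):
        # stand-in for the prepared "stdfilt2d": per-channel standard deviation
        # in a ksize x ksize local patch, computed with box (mean) filters
        mean = cv2.boxFilter(img, ddepth=-1, ksize=(ksize, ksize))
        mean_sq = cv2.boxFilter(img * img, ddepth=-1, ksize=(ksize, ksize))
        return np.sqrt(np.maximum(mean_sq - mean * mean, 0))

    def correctColor(img, lighting_color):
        # stand-in for the prepared "correctColor": same per-channel correction
        # as Gray-World, for a float image in [0, 1]
        res = img.astype(np.float64)
        for c in range(3):
            res[:, :, c] /= (lighting_color[c] + 1e-8)
        return res / res.max()

    def gray_pixels(img, num=1, eps=1e-8):
        # img: float BGR image in [0, 1]; returns the estimated illumination color (BGR)
        img_ln = np.log(img + eps)                    # logarithmic transform
        sd = stdfilt2d(img_ln)                        # IIM: local std in 3x3 patches
        sd_mu = np.tile(sd.mean(axis=2)[:, :, None], [1, 1, 3])
        P = np.sqrt(np.average((sd - sd_mu) ** 2 / (sd_mu + eps), axis=2))
        Y = np.average(img, axis=2)                   # luminance
        GI = cv2.boxFilter(P / (Y + eps), ddepth=-1, ksize=(7, 7))
        # remove invalid pixels (saturated colors, blocked-up shadows)
        maxGI = np.max(GI)
        bth = 0.95
        GI[sd.sum(axis=2) == 0] = maxGI               # SDr == SDg == SDb == 0
        GI[(img > bth).any(axis=2)] = maxGI           # over-exposure or saturated color
        GI[Y < 0.1] = maxGI                           # blocked-up shadows (dark pixels)
        # regard the lowest num% of GI values as gray pixels
        th = np.nanpercentile(GI, num)
        GPmap = GI < th
        # average the colors of the gray pixels in each channel
        b, g, r = cv2.split(img)
        lighting_color = np.zeros(3)
        lighting_color[0] = np.mean(b[GPmap])
        lighting_color[1] = np.mean(g[GPmap])
        lighting_color[2] = np.mean(r[GPmap])
        return lighting_color

    # hypothetical usage ("input.png" is a placeholder file name)
    img = cv2.imread("input.png").astype(np.float64) / 255.0
    lighting_color = gray_pixels(img)
    resGP = correctColor(img.copy(), lighting_color)
    cv2.imwrite("resultGP.png", (resGP * 255).astype(np.uint8))
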

multiple objects
