Many illumination color estimation methods have been proposed.
This is the most famous and basic method, based on the "Gray-World Assumption" (the average color of a scene is achromatic).
import cv2
import numpy as np

b, g, r = cv2.split(img)
ave = [b.mean(), g.mean(), r.mean()]
res = img.astype(np.float64)
for c in range(3):
    res[:, :, c] /= ave[c]
res *= 255 / res.max()
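The Gray-World step above can be run end-to-end on a synthetic image. This is a minimal sketch; the image size, the random content, and the simulated color cast are made up for illustration:

```python
import numpy as np

# Gray-World correction on a synthetic image with a fake color cast.
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, size=(64, 64, 3))
img[:, :, 0] *= 1.5                 # boost channel 0 to simulate a cast

res = img.astype(np.float64)
ave = res.reshape(-1, 3).mean(axis=0)
for c in range(3):
    res[:, :, c] /= ave[c]          # divide each channel by its mean
res *= 255 / res.max()              # rescale into [0, 255]

means = res.reshape(-1, 3).mean(axis=0)   # channel means are now equal
```

After the division, every channel has mean 1, so the per-channel means of the result coincide and the cast is removed.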
but... "Gray-World" often fails to estimate the illumination color in cases like the following!!
These scenes have biased colors even under white light sources.
The average color of gray (achromatic) objects equals the illumination color.
So if we can find gray (achromatic) objects in the scene, that's great!!
This method estimates gray pixel regions with an Illumination Invariant Measure (IIM), and calculates the illumination color by averaging their colors.
img_ln = np.log(img + eps)
sd = stdfilt2d(img_ln)
# average of the 3 channels' standard deviations
sd_mu = sd.mean(axis=2)
# duplicate sd_mu into 3 channels ([w,h] -> [w,h,3])
sd_mu = np.tile(sd_mu[:, :, None], [1, 1, 3])
# calculate GI
tmp = (sd - sd_mu) * (sd - sd_mu) / (sd_mu + eps)
P = np.sqrt(np.average(tmp, axis=2))
Y = np.average(img, axis=2)
GI = cv2.boxFilter(P / (Y + eps), ddepth=-1, ksize=(7, 7))
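`stdfilt2d` is not defined in the text. A minimal pure-NumPy sketch of a per-channel local standard-deviation filter could look like the following; the 3x3 window, reflect padding, and population (rather than sample) standard deviation are all assumptions:

```python
import numpy as np

def stdfilt2d(img, ksize=3):
    """Local standard deviation of each channel over a ksize x ksize
    window (hypothetical helper; window size and padding are assumptions)."""
    pad = ksize // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    h, w = img.shape[:2]
    # stack one shifted copy of the image per window offset
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(ksize)
                        for j in range(ksize)], axis=0)
    return windows.std(axis=0)      # shape [h, w, 3]
```

A flat region gives zero everywhere, while edges and texture give positive values, which is exactly what the GI computation below relies on.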
# remove invalid pixels (e.g. saturated colors, blacked-up shadows)
maxGI = np.max(GI)
bth = 0.95
GI[sd.sum(axis=2) == 0] = maxGI   # SDr == SDg == SDb == 0
GI[img[:, :, 0] > bth] = maxGI    # over-exposure or saturated color
GI[img[:, :, 1] > bth] = maxGI    # over-exposure or saturated color
GI[img[:, :, 2] > bth] = maxGI    # over-exposure or saturated color
GI[Y < 0.1] = maxGI               # blacked-up shadows (Y: luminance from above)
# regard the lowest num% of GI pixels as gray pixels
th = np.nanpercentile(GI, num)
GPmap = GI < th   # thresholding
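As a sanity check, the thresholding step keeps roughly num% of the pixels. This sketch assumes `num` is a percentage (e.g. 10 for the lowest 10%) and uses synthetic GI values:

```python
import numpy as np

# percentile thresholding on synthetic GI values
rng = np.random.default_rng(1)
GI = rng.uniform(0.0, 1.0, size=(100, 100))
num = 10                             # keep the lowest 10% as "gray" pixels
th = np.nanpercentile(GI, num)
GPmap = GI < th
frac = GPmap.mean()                  # close to num / 100
```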
# average the colors of the gray pixels in each channel
lighting_color = np.zeros(3)
b, g, r = cv2.split(img)
lighting_color[0] = np.mean(b[GPmap])
lighting_color[1] = np.mean(g[GPmap])
lighting_color[2] = np.mean(r[GPmap])
resGP = correctColor(img.copy(), lighting_color)
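`correctColor` is not shown in the text. A plausible sketch, assuming it divides each channel by the estimated illumination color and rescales into [0, 255] like the Gray-World code earlier (the function body and the 1e-8 guard are assumptions, not the original implementation):

```python
import numpy as np

def correctColor(img, lighting_color):
    """Hypothetical white-balance step: divide each channel by the
    estimated illumination color, then rescale into [0, 255]."""
    res = img.astype(np.float64)
    for c in range(3):
        res[:, :, c] /= (lighting_color[c] + 1e-8)   # avoid division by zero
    return res * (255.0 / res.max())
```

For example, an image whose pixels all equal the estimated illumination color maps to pure white:

```python
img = np.zeros((4, 4, 3))
img[:, :, 0], img[:, :, 1], img[:, :, 2] = 0.2, 0.5, 0.8
out = correctColor(img, np.array([0.2, 0.5, 0.8]))   # ~255 everywhere
```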