Discussion:
How to rescale an image from 16 bit to 12 bit?
Yuping Liang
2009-05-21 20:28:02 UTC
I have a PNG image saved by LabVIEW 8. The data has 12-bit depth, since my line-scan camera streams out 12-bit data, but it is saved as 16-bit in the standard PNG format.

When I read the data with imtool, MATLAB shows 16-bit pixel values instead of 12-bit. Do you know if there is a way in MATLAB to scale the data back and show 12-bit values instead of 16-bit? Or do you know of any conversion tool that can convert it back to 12-bit before I read it in with imtool?

Thanks,
Yuping
ImageAnalyst
2009-05-21 21:22:04 UTC
Post by Yuping Liang
I have a PNG image saved by LabVIEW 8. The data has 12-bit depth, since my line-scan camera streams out 12-bit data, but it is saved as 16-bit in the standard PNG format.
When I read the data with imtool, MATLAB shows 16-bit pixel values instead of 12-bit. Do you know if there is a way in MATLAB to scale the data back and show 12-bit values instead of 16-bit? Or do you know of any conversion tool that can convert it back to 12-bit before I read it in with imtool?
Thanks,
Yuping
-------------------------------------------------------------
Yuping:
It's probably doing what a lot of cameras do: packing the camera's 12
bits into the upper 12 bits of the 16-bit word and filling the lower 4
bits with zeros. This is so the image appears brighter - otherwise it
would appear only 1/16th as bright when an app scales the pixels
against the full 65535 gray levels that are possible with a 16-bit
pixel.
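For example, under this packing scheme a 12-bit pixel value v is
stored as v * 16, so 4095 (12-bit full white) becomes 65520, which is
close to 16-bit full white.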

You should be able to just do an integer divide by 16, or use a
bit-shifting operator to shift down by 4 bits. That will put your data
in the range 0-4095 with every gray level populated, instead of
0-65535 with values spaced every 16 gray levels. You can verify that
this is the case by taking a histogram: do you have gaps, with
non-zero bins only every 16 bins? If so, then it's like I said. If
not, then something else is going on and you'll have to discover what
it is by inspecting the values.
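Here is a minimal MATLAB sketch of that shift-and-verify approach (the
file name linescan.png is just a placeholder for your own image):

img16 = imread('linescan.png');   % uint16 data, 12 bits packed in the upper bits

% Shift down by 4 bits (equivalent to an integer divide by 16)
img12 = bitshift(img16, -4);      % values now in the range 0-4095

% Verify the packing: with 12-in-16 packing, only every 16th bin
% of the full 16-bit histogram should be non-zero
counts = imhist(img16, 65536);    % bin k holds the count for value k-1
nonzeroBins = find(counts > 0);
isPacked = all(mod(nonzeroBins - 1, 16) == 0)  % true if spaced every 16 levels

imtool(img12, [0 4095]);          % display with the true 12-bit range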
Regards,
ImageAnalyst
Ulf-Erik Walter
2009-05-22 08:31:01 UTC
Hi,
Post by ImageAnalyst
It's probably doing what a lot of cameras do: packing the camera's 12
bits into the upper 12 bits of the 16-bit word and filling the lower 4
bits with zeros. This is so the image appears brighter - otherwise it
would appear only 1/16th as bright when an app scales the pixels
against the full 65535 gray levels that are possible with a 16-bit
pixel.
This issue is discussed quite often here; we often get customers who wish to have the lower n bits used, filling the upper 16-n bits with zeros.

But this causes new problems, since we have many different cameras with 10, 12, or 14 bits (all delivered in 16-bit words), some of them capable of averaging several images. The current solution (filling zeros into the lower bits) means 0 is black, 65535 is white, and the histogram is a kind of comb for all cameras in all 16-bit modes.
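As a small illustration (not our driver code; the frame is made up),
the MSB-alignment that produces this comb for an n-bit camera looks
like this in MATLAB:

n = 12;                                      % camera bit depth: 10, 12, or 14
raw = uint16(randi([0 2^n - 1], 512, 512));  % stand-in for one n-bit frame
packed = bitshift(raw, 16 - n);              % fill the lower 16-n bits with zeros

% For any n, 0 is black and the maximum is 65536 - 2^(16-n), just below
% 65535, so white looks the same for all cameras; the histogram is
% non-zero only every 2^(16-n) levels - the "comb".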

Best regards,

Ulf-Erik Walter
Allied Vision Technologies
