
Working with 16-bit tiff images

asked 2013-11-19 15:10:18 -0500

FatLungs

Hello everyone, I am grabbing images from a camera that outputs 16-bit TIFF. Is there a workflow in SimpleCV that supports this file type?

Currently I am using numpy and matplotlib to convert the TIFFs into arrays to be read, like so:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

# img is a PIL image opened from the 16-bit TIFF
arr = np.asarray(img.getdata()).reshape(img.size[1], img.size[0])
plt.imshow(arr, cmap=cm.gray)
plt.show()

But I need to convert back to a SimpleCV image if I want to use img.getEdgeImage().
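One possible bridge, sketched here under the assumption that SimpleCV's Image constructor accepts 8-bit numpy arrays (as suggested elsewhere on this forum); the helper name to_uint8 is mine, not a SimpleCV API:

```python
import numpy as np

def to_uint8(arr16):
    """Linearly rescale a 16-bit array to the 0-255 uint8 range."""
    arr16 = arr16.astype(np.float64)
    lo, hi = arr16.min(), arr16.max()
    if hi == lo:
        # Flat image: avoid division by zero, return all zeros.
        return np.zeros(arr16.shape, dtype=np.uint8)
    return ((arr16 - lo) * 255.0 / (hi - lo)).astype(np.uint8)

# Hypothetical SimpleCV step, assuming Image() accepts uint8 arrays:
# simg = Image(to_uint8(arr))
# edges = simg.getEdgeImage()
```

Note the rescale is lossy: 65536 gray levels are squeezed into 256 before SimpleCV ever sees the data.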


4 Answers


answered 2013-12-30 20:21:52 -0500

Hi there, that's exactly what I want to do. I tried to convert 16-bit TIFF images so they could be read, but my TIFF conversion tool doesn't handle them well. I'd just like to know whether there is any converter that supports 16-bit TIFF images directly. Thanks for any suggestions.


answered 2013-11-19 18:04:12 -0500

xamox

I'm not sure if we have support for 16-bit images. You can just pass a numpy array into the image constructor, so to update your example:

from SimpleCV import Image

arr = np.asarray(img.getdata()).reshape(img.size[1], img.size[0])
simg = Image(arr)
simg.show()

Comments

I get an error. I do:

img = Image("Bert20120621imamed.tif")
arr = np.asarray(img.getdata()).reshape(img.size[1], img.size[0])

where the TIFF image is Type=Int16 with 6 bands, and get:

AttributeError                            Traceback (most recent call last)
/usr/lib/pymodules/python2.7/SimpleCV/Shell/Shell.pyc in <module>()
----> 1 arr = np.asarray(img.getdata()).reshape(img.size[1], img.size[0])
AttributeError: Image instance has no attribute 'getdata'

alobo ( 2013-12-19 08:29:07 -0500 )
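The error in the comment above arises because img is a SimpleCV Image, which has no getdata() method; getdata() belongs to PIL. A sketch of the PIL path for a 16-bit grayscale TIFF (assumes Pillow is installed; the in-memory round trip just stands in for opening a file from disk):

```python
import io
import numpy as np
from PIL import Image as PILImage

# Build a small 16-bit grayscale image and round-trip it as a TIFF in memory.
pil_img = PILImage.fromarray(np.arange(6, dtype=np.uint16).reshape(2, 3))
buf = io.BytesIO()
pil_img.save(buf, format="TIFF")
buf.seek(0)

# This is the object that actually has getdata(): a PIL image, not a SimpleCV one.
reopened = PILImage.open(buf)
arr = np.asarray(reopened.getdata()).reshape(reopened.size[1], reopened.size[0])
```

For a multi-band Int16 TIFF like the one in the comment, PIL's band handling differs, so a geospatial reader (e.g. GDAL) may be the more appropriate tool.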

answered 2014-02-21 11:21:50 -0500

uranor

I would be very interested in knowing the answer to this question too. A lot of scientific cameras have greater bit depth (usually 10 or 12 bits at the FPA level) and thus output 16-bit grayscale images (e.g. IR cameras). Webcams and commercial cameras are almost invariably 8-bit RGB.

There are several issues with handling 16-bit grayscale images. One is that your computer display is (normally) an 8-bit display, so it cannot show the full dynamic range of such images. As a result you may end up staring at a boring grey field, because either the OS or your video adapter driver makes the choice of which 8 bits are displayed. You therefore need to tell it explicitly which 8 bits you are going to display (i.e. where is your signal? The 8 MSBs, the 8 LSBs, the middle?) and transform the image to 8-bit before displaying it (e.g. with img.show()).
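The bit-selection step described above can be sketched with plain numpy (a sketch, not anything SimpleCV provides; the function name select_8bits and the choice of shift are mine):

```python
import numpy as np

def select_8bits(arr16, shift):
    """Keep the 8 bits starting `shift` bits up from the LSB.

    shift=8 keeps the 8 MSBs of a 16-bit value; shift=0 keeps the 8 LSBs.
    """
    return ((arr16.astype(np.uint32) >> shift) & 0xFF).astype(np.uint8)

# For 12-bit sensor data stored in the low bits of a uint16,
# the top 8 of the 12 signal bits sit at shift=4:
# display = select_8bits(arr, 4)
```

The right shift depends entirely on where the camera puts its signal, which is exactly the "where is your signal?" question above.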

On the other hand, while you process images you usually want to keep the full 16-bit depth, especially when hunting for low-SNR signatures, as one often does in remote sensing.

So, what support does SimpleCV have for 16-bit TIFF images? If none, does it automatically convert everything to 8 bits per channel RGB? In that case information would be lost: for a 16-bit grayscale image, 8 of the 16 bits are chosen and replicated over the three color channels.
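That lossy conversion can be sketched directly (a sketch of the step described above, not SimpleCV's actual internals):

```python
import numpy as np

gray16 = np.array([[0, 255, 65535]], dtype=np.uint16)  # 16-bit grayscale
gray8 = (gray16 >> 8).astype(np.uint8)                 # keep the 8 MSBs
rgb = np.dstack([gray8, gray8, gray8])                 # replicate over 3 channels

# 0 and 255 were distinct 16-bit values, but both map to 0 after the shift:
# 65536 gray levels have collapsed to 256.
```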



Stats


Seen: 706 times

Last updated: Feb 21 '14