
farmerdoug

macrumors 6502a
Original poster
Sep 16, 2008
541
0
This is my next problem. If anybody wants to add their 2 cents...

We have three-dimensional data in a 2-d image. We transform it into a 3-d image cube. The code figures out where in the 2-d image to find the data for a point in the 3-d cube. Now I need to reverse this as follows: from a position in the 2-d image, go x pixels in one direction, then figure out where in the 3-d cube that data goes.

Code:
for (lensx = 0; lensx < CUBEWIDTH; lensx++)
{
	for (lensy = 0; lensy < CUBEWIDTH; lensy++)
	{
		/* Skip lenslets that have no valid entry in the lookup table. */
		if (lookuptable[lensx][lensy][2] > 0)
		{
			/* Reference position of this lenslet's spectrum on the 2-d image. */
			Xref = lookuptable[lensx][lensy][0] + Token.xshift;
			Yref = lookuptable[lensx][lensy][1] + Token.yshift;
			height = heightmap[lensy*CUBEWIDTH + lensx];
			tilt = tiltmap[lensy*CUBEWIDTH + lensx];
			vertscale = VERT_SCALE_DELTA/height;
			/* Walk along the spectrum, one wavelength channel at a time. */
			for (w = 0; w < NCHAN; w++)
			{
				lambda = LAMBDA_MIN + LAMBDA_INC*w;
				crudeXcntr = (int)lroundf(Xref + (LAMBDA_REF - lambda)/vertscale*tilt);
				crudeYcntr = (int)lroundf(Yref + (LAMBDA_REF - lambda)/vertscale);
				/* ... extract the data at (crudeXcntr, crudeYcntr) into the cube ... */
			}
		}
	}
}
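What I need is the reverse of this. One way I could picture it (just a sketch; reversemap and the IMG_* dimensions are made-up names) is to run the forward loop once and scatter the cube indices into a pixel-indexed table:

Code:
/* Hypothetical reverse table: for each 2-d image pixel, record which
   cube voxel (lensx, lensy, w) its data belongs to. */
static int reversemap[IMG_HEIGHT][IMG_WIDTH][3];

/* inside the innermost loop above, after crudeXcntr/crudeYcntr: */
if (crudeXcntr >= 0 && crudeXcntr < IMG_WIDTH &&
    crudeYcntr >= 0 && crudeYcntr < IMG_HEIGHT)
{
	reversemap[crudeYcntr][crudeXcntr][0] = lensx;
	reversemap[crudeYcntr][crudeXcntr][1] = lensy;
	reversemap[crudeYcntr][crudeXcntr][2] = w;
}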
 

mobilehaathi

macrumors G3
Aug 19, 2008
9,368
6,352
The Anthropocene
You need to describe this better, at least to me. You have a point on the surface (or anywhere in the volume?) of a 3-D object which you project into a 2D space. You want to move the 2D point and "reverse project" it back to the 3D object?

You'll have some uniqueness problems as the projection function won't be a bijection. If you constrain the way in which movement in the 2D shape can change the 3D point position, you might be able to make it work.
 

farmerdoug

macrumors 6502a
Original poster
Sep 16, 2008
541
0
It's not an object. I didn't describe it because it's not essential to the answer to know what it is, but it's not a secret. Light is passed through a lenslet array and then through a prism, producing spectra. The focal plane array is not orthogonal but rotated with respect to the lenslet array so that the spectra can be tightly packed into the focal plane.
So 3-d data - x, y, and wavelength - are packed into a 2-d array. The mapping from the focal plane into a cube is determined; now I need a reverse map.
 

mobilehaathi

macrumors G3
Aug 19, 2008
9,368
6,352
The Anthropocene
It's not an object. I didn't describe it because it's not essential to the answer to know what it is, but it's not a secret. Light is passed through a lenslet array and then through a prism, producing spectra. The focal plane array is not orthogonal but rotated with respect to the lenslet array so that the spectra can be tightly packed into the focal plane.
So 3-d data - x, y, and wavelength - are packed into a 2-d array. The mapping from the focal plane into a cube is determined; now I need a reverse map.

I see, so basically you need an efficient way to search your 2d array to find which point contains the data of interest?

Edit: Let me refine my current understanding of your problem.

You have a map (u,v) -> (x,y,w). You want an efficient way to form (x,y,w) -> (u,v), with the intention of mapping a movement in (x,y,w) space into (u,v) space? I guess my first question would be: is this mapping a bijection? I assume from your initial description that the (u,v) coordinates are in a subset of N_0 x N_0. Are (x,y,w) in R^3, or do they map to a countable set? I'm just trying to understand the nature of your data; perhaps I'm making it too complicated.

A (very) naive approach would be to brute-force search through (u,v) to find the reverse map, although you could surely improve on that with some clever algorithm or data structure. However, if there are properties of the mapping that you know a priori, you might be able to exploit them...
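Something like this is what I mean by the naive approach (a sketch only; forward() stands in for whatever your real (u,v) -> (x,y,w) map is, and all the names here are made up):

Code:
#include <float.h>

typedef struct { float x, y, w; } Point3;

Point3 forward(int u, int v);   /* your known map (u,v) -> (x,y,w) */

/* Scan every (u,v) and return the one whose forward image is nearest
   the target point.  O(umax*vmax) per query, hence "very naive". */
int reverse_lookup(Point3 target, int umax, int vmax, int *u_out, int *v_out)
{
	float best = FLT_MAX;
	for (int u = 0; u < umax; u++) {
		for (int v = 0; v < vmax; v++) {
			Point3 p = forward(u, v);
			float du = p.x - target.x;
			float dv = p.y - target.y;
			float dw = p.w - target.w;
			float d2 = du*du + dv*dv + dw*dw;  /* nearest match, since it may not be exact */
			if (d2 < best) { best = d2; *u_out = u; *v_out = v; }
		}
	}
	return best < FLT_MAX;
}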
 
Last edited:

farmerdoug

macrumors 6502a
Original poster
Sep 16, 2008
541
0
I'm terse, which is probably not helpful in situations like these. Here's more.

I see, so basically you need an efficient way to search your 2d array to find which point contains the data of interest?
Yes and no.
Edit: Let me refine my current understanding of your problem.
You have a map (u,v) -> (x,y,w).
Yes
You want an efficient way to form (x,y,w) -> (u,v),
Sort of
with the intention of mapping a movement in (x,y,w) space into (u,v) space?
No


Remember, we are talking about spectra. See the attached image. The spectra from points in the camera's field of view are displayed. The spectra are displaced by about 3-4 pixels in x and 10 in y. They are about 3 pixels wide and, say, 30 long. Using the forward mapping, I extract (x,y,w) from (u,v) using data in a 3x3 square around (u,v).
In fact, the signal from any particular point spreading beyond that 3x3 square is exactly what this problem is about: cross talk.

So I am at x,y,w; I know u,v; from u,v I can move left or right to find adjacent maxima, but I don't know w yet. I can get by without w but would prefer not to.
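For what it's worth, the forward formula in my first post is linear in lambda for a given lenslet, so in principle I can invert it to recover w from the y offset. A sketch (same constant names as my first post; the function itself is made up):

Code:
#include <math.h>

/* Invert  crudeY = Yref + (LAMBDA_REF - lambda)/vertscale  for lambda,
   then map lambda back to a channel index w.  Returns -1 if the pixel
   falls outside the spectrum. */
int channel_from_y(float y, float Yref, float vertscale)
{
	float lambda = LAMBDA_REF - (y - Yref)*vertscale;
	int w = (int)lroundf((lambda - LAMBDA_MIN)/LAMBDA_INC);
	return (w >= 0 && w < NCHAN) ? w : -1;
}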
 

Attachments

  • ds9.jpg (66.2 KB)

jared_kipe

macrumors 68030
Dec 8, 2003
2,967
1
Seattle
In fact, the signal from any particular point spreading beyond that 3x3 square is exactly what this problem is about: cross talk.

So I am at x,y,w; I know u,v; from u,v I can move left or right to find adjacent maxima, but I don't know w yet. I can get by without w but would prefer not to.

So you need to sharpen the image?
 

farmerdoug

macrumors 6502a
Original poster
Sep 16, 2008
541
0
Yes, but not in any visual sense, although the images will look sharper. I need accurate numbers.
 

chown33

Moderator
Staff member
Aug 9, 2009
10,745
8,419
A sea of green
Yes, but not in any visual sense, although the images will look sharper. I need accurate numbers.

To make sure I understand, you're trying to increase the accuracy of the measured spectral values, right?

So for one of your overlapping pixels, that pixel (or area) has some contribution from object A, and some contribution from object B, where A and B are emitters of electromagnetic radiation (light, infrared, microwaves, x-rays, etc.). The net value of the pixel is thus the sum of the contributions from A and B at that particular pixel.

So for some pixel value P, P=A+B. Ideally, when there's no overlap, P=A or P=B, so logically the contribution of other signals is 0. I.e. you could also say P=A+0 or P=0+B.

Because each spectrum is represented spatially, i.e. in the vertical dimension, the contributions of A and B at a pixel don't represent the same spectral line (same frequency) in the original measured signal. That doesn't change the answer, I'm just stating it for completeness.

If that's a reasonably accurate description, then I think you're doomed.

What I've described is basically aliasing: one frequency in the original measured input appearing as another frequency in the sampled output. And one thing I recall from DSP is that aliasing can't be completely removed after sampling.

It doesn't matter that your spectra (frequencies) are represented spatially (i.e. vertically). You still have the problem of P=A+B, where you know neither A nor B exactly. You only know their sum P, the actual measured value at a given pixel.


The reason aliasing can't be removed is that you don't know exactly what to remove, at least not in a calculable DSP sense. If you knew the exact signal and its exact contribution, you could subtract it, but that would require knowing the exact amplitude of either the aliased signal or the original unaliased signal, and you know neither of those.

If you already knew the original unaliased signal, then you wouldn't have sampled the original analog signal; you'd just use the data samples you already had.

Conversely, to know the exact amplitude of the aliased signal, you would have had to measure all of the original analog signal's higher frequency components, which combined to produce the aliased signal. But if you could do that, you'd have used that higher-frequency measurement device, so then you wouldn't have any aliasing to remove.

If you had an accurate prediction algorithm for either the original signal or the aliased contribution, you could run the predictor and subtract its value. But if you have an accurate predictor, you already know what the measured data will be, so you wouldn't need to measure it.

For example, imagine a yellow gradient image running left to right, summed with a violet gradient running right to left. You can accurately predict what each gradient contributes at any pixel. So even if you measure a white or grayish blended pixel near the middle of the image, you can accurately subtract the contribution of one signal to get the contribution of the other. This only works because you have an accurate predictor algorithm.

An inaccurate predictor may or may not produce useful results. It depends on the amplitude of the aliased signal and on the desired S/N ratio. You might decrease aliasing but increase other noise factors.
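To make that gradient example concrete, here's a toy version (made-up names; nothing to do with the actual instrument). Because predict_A is exact, B's contribution falls out of the blend by simple subtraction:

Code:
#include <stdio.h>

/* Two perfectly known signals summed into one measurement: a gradient
   rising left-to-right (A) and a half-amplitude gradient rising
   right-to-left (B). */
float predict_A(int x, int width) { return (float)x / (width - 1); }
float predict_B(int x, int width) { return 0.5f * (width - 1 - x) / (width - 1); }

int main(void)
{
	const int width = 11;
	for (int x = 0; x < width; x++) {
		float measured = predict_A(x, width) + predict_B(x, width); /* P = A + B */
		float recovered_B = measured - predict_A(x, width);         /* exact, because A is known */
		printf("x=%2d  P=%.2f  B=%.2f\n", x, measured, recovered_B);
	}
	return 0;
}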


Or I could be completely wrong, and an unsharp mask would work fine.
 
Last edited:

jared_kipe

macrumors 68030
Dec 8, 2003
2,967
1
Seattle
...
Or I could be completely wrong, and an unsharp mask would work fine.

Oh, I'm not trying to say that he can ever truly recover all of it, but it might make things easier or at least faster. For example, maybe bin those 3x3 pixels into a single pixel, reducing the overall size of the image by a factor of 9 without much information loss. Or keep the current size and sharpen it somehow to try to remove the cross talk.
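A quick sketch of the binning idea, assuming the image is a plain float array (function and parameter names are made up):

Code:
/* Bin each 3x3 block of the focal-plane image into one pixel, shrinking
   the image by a factor of 9.  Assumes width and height are multiples of 3. */
void bin3x3(const float *src, float *dst, int width, int height)
{
	int bw = width/3, bh = height/3;
	for (int by = 0; by < bh; by++) {
		for (int bx = 0; bx < bw; bx++) {
			float sum = 0.0f;
			for (int dy = 0; dy < 3; dy++)
				for (int dx = 0; dx < 3; dx++)
					sum += src[(by*3 + dy)*width + (bx*3 + dx)];
			dst[by*bw + bx] = sum;   /* or sum/9 for an average */
		}
	}
}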


In any case, this problem is highly hardware dependent. I would suggest taking a variety of 'test' pictures of known spectra and/or shapes, like the Balmer series or a laser of known wavelength.

I would think that you could create a 'reasonable' sharpening algorithm with this approach.

EDIT: Oh, and try different brightness/exposure values. Some sensors exhibit almost no crosstalk at certain pixel saturation levels but more at others...
 