I am having an issue with an imaging toolkit that is returning coordinates that are somewhat off. The magnitude of the error in the returned coordinates is a function of how far the image has been rotated and how far the coordinates are from the center of the image.
The toolkit author claims this is because I am retrieving coordinates from legacy (deprecated) functions, and that it is impossible to return accurate coordinates without using newer functions that take into account more factors than just image rotation. Unfortunately, adapting my code to use the newer functions would be more work than it's worth (and the author is clear it's not worth their time to address the legacy functions I am using).
However, I suspect I am not getting the full story about the coordinates from the legacy functions: it seems likely to me that they can be mathematically manipulated to meet my needs. I would guess it is only in usage scenarios more complex than mine that there is no way to get accurate coordinates from the legacy functions.
The reason I say this is that the errors in the coordinates appear very systematic, based on the amount of rotation and the distance from the image center. I strongly suspect straightforward trigonometry can be used to compensate for the error. The x,y coordinates are off in the following directions depending on the location of the coordinate relative to the image center. In the table below, "+" means the returned coordinate is higher than it should be and "-" means it is lower. Coordinates further from the image center are off by a greater magnitude, but in the same direction.
Code: Select all

         left   right
top      +,-    -,-
bottom   +,+    -,+
I started with good intentions, applying trigonometry to the best of my ability to find an equation that converts the returned coordinates into accurate ones. However, that quickly devolved into thoughtless trial-and-error after my first attempts didn't pan out. This led to a couple of formulas that get much better coordinates, but I'm failing to come up with any reasoned mathematical theory that might help me get exactly correct coordinates. Can you come up with the explanation I cannot, and suggest a formula that might produce the exact result I am looking for?
Where:
x1 = x coordinate on non-rotated image relative to image center
x2 = x coordinate on rotated version of image relative to image center
y1 = y coordinate on non-rotated image relative to image center
y2 = y coordinate on rotated version of image relative to image center
costheta = the cosine of the angle of rotation in the rotated version
sintheta = the sine of the angle of rotation in the rotated version
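For concreteness, here is the plain 2-D rotation relationship I have been assuming ties these quantities together (my own naming, not the toolkit's). If the returned coordinates really were just a pure rotation of the originals, the inverse rotation would recover x1, y1 exactly; the fact that it doesn't is the whole problem:

```python
import math

def rotate(x1, y1, theta):
    """Forward rotation of a point about the image center by theta radians."""
    costheta, sintheta = math.cos(theta), math.sin(theta)
    x2 = x1 * costheta - y1 * sintheta
    y2 = x1 * sintheta + y1 * costheta
    return x2, y2

def unrotate(x2, y2, theta):
    """Exact inverse: rotate back by -theta."""
    costheta, sintheta = math.cos(theta), math.sin(theta)
    x1 = x2 * costheta + y2 * sintheta
    y1 = -x2 * sintheta + y2 * costheta
    return x1, y1
```

(This assumes the usual mathematical convention for axis directions; an image coordinate system with y growing downward flips the sense of "clockwise" but not the round-trip property.)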
I have found the following simple compensation gets me pretty close (the remaining error is roughly 25% of the magnitude of the original):
x2/costheta;
y2*costheta;
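Here is the harness I have been using to evaluate that compensation, written out as a sketch. It is purely illustrative: it assumes the toolkit's returned coordinates are a plain rotation of the true ones, which is exactly the part in question, and the sample point is hypothetical:

```python
import math

def first_compensation(x2, y2, theta):
    """First ad-hoc compensation from the post: x2/cos(theta), y2*cos(theta).
    Coordinates are relative to the image center; theta is the rotation angle."""
    costheta = math.cos(theta)
    return x2 / costheta, y2 * costheta

# Demo under the (unverified) assumption that the toolkit applied a pure
# rotation to the true point: see how far the compensation lands from truth.
theta = math.radians(10)
x1, y1 = 100.0, 40.0                                # hypothetical true coordinates
x2 = x1 * math.cos(theta) - y1 * math.sin(theta)    # assumed "returned" values
y2 = x1 * math.sin(theta) + y1 * math.cos(theta)
cx, cy = first_compensation(x2, y2, theta)
print(f"residual: ({cx - x1:+.3f}, {cy - y1:+.3f})")
```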
In this case the direction of error changes and depends on the direction of rotation, such that in images rotated clockwise (negative theta) the direction of error is:
Code: Select all

         left   right
top      +,+    +,-
bottom   -,+    -,-
...and in images rotated counterclockwise (positive theta) it is:

Code: Select all

         left   right
top      -,-    -,+
bottom   +,-    +,+
A second compensation gets me even closer:
x2*costheta + (y2 - y1) * sintheta;
y2/costheta + (x2 - x1) * sintheta;
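For clarity, that second compensation written out as code. Note the awkward part: it references the true coordinates x1, y1 on the right-hand side, which would not be known in practice; they are only available here because this is a synthetic check:

```python
import math

def second_compensation(x2, y2, x1, y1, theta):
    """Second ad-hoc compensation from the post. It references the true
    coordinates (x1, y1), which makes it circular for real use."""
    costheta, sintheta = math.cos(theta), math.sin(theta)
    cx = x2 * costheta + (y2 - y1) * sintheta
    cy = y2 / costheta + (x2 - x1) * sintheta
    return cx, cy
```

At theta = 0 it reduces to the identity, as expected, since cos(0) = 1 and sin(0) = 0.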
While the resulting coordinates of the second formula are more accurate, the direction of error is the same as in the first compensation attempt. Again, there was little theory behind how I arrived at this; I just kept plugging things in until I found something that (almost) worked, in hopes of finding a theory to explain it afterwards. I haven't.
Anyone think they might be able to explain the math behind the error in the coordinates I am receiving?