
Need help with some calcs, RE: sensors

Posted: 2008-05-15 01:57pm
by Ender
Right then. In the process of trying to figure out what you need for a minimum interdiction net for planetary defense, I was reminded of the bit with the Imperial patrol frigate from Shield of Lies. The ship has a highly advanced sensor suite covering everything from EM sensing (Dedicated Energy Receptors is the SW name, apparently) to gravity sensors (like the CGT array for cloaked ships) to FTL sensors that pick up ships in hyperspace. So this represents the best they have, which should give us an upper limit on sensors. It has a tow line 100x its own length, meaning it operates like a phased-array telescope with a baseline of 30,000 m.

Now, it was able to detect the Quella ship (which is 1500 m long) from 3.8 light-hours out. At that range they were unable to gather enough information to make a positive ID of the ship, and they stated it was at the edge of detection range. So while they can likely detect things from further out, at that distance they would not be able to distinguish between various ships, only tell that something was there along with a few rudimentary characteristics.

Now, 3.8 light-hours is 4.1*10^12 meters. Knowing the length of the Quella ship, we work backwards from the subtended angle: (360*1500) / (2*pi*4.1*10^12) = 2.09*10^-8 degrees is our angular resolution.
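As a sanity check, that arithmetic can be reproduced with a short script (figures as given in the post):

```python
import math

distance_m = 3.8 * 3600 * 2.998e8  # 3.8 light-hours in meters (~4.1e12 m)
ship_length_m = 1500.0             # overall length of the Quella ship

# The ship's length as a fraction of the full 360-degree circle
# whose circumference is 2*pi*distance:
angular_res_deg = (360 * ship_length_m) / (2 * math.pi * distance_m)
print(f"angular resolution: {angular_res_deg:.3g} degrees")
```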

Since we know the baseline, we rearrange the equation for angular resolution and get (30000 * 2.09*10^-8) = 6.28*10^-4 for a "wavelength". "Wavelength" is in quotes because, given the variety of sensors used, it represents more of an average used in detection than a specific value; we don't know which sensor provided what information or how it was combined. That number does, however, fall in the far-infrared range, and IR is what you would use to search the skies for ships, which I suspect is intentional (hate for him aside, K-Mac's books have some of the best science in them).

Ok, then we look at a separate example: the Venator star destroyer being able to target something at 10 light-minutes. This is useful because it gives us the standard by which sensors are measured. Realistically, your ability to pick something up depends on each target's characteristics, so you need a standard object to be able to compare sensor quality. Just going by angular resolution won't do, because in an array the baseline determines it; if you used that, sensor quality would be determined by placement rather than the quality of the sensor package.

We also know that this is related to the sensors rather than the gun because of other examples. Your ability to hit a target depends on your distance to the target, the target's possible acceleration, the speed of your weapon, how coherent your weapon is at the target, relative velocities, and how good your sensors are. Given those variables and the known constants of TL speed and coherency, the best rating for weapon range is the quality of your sensors.

So then, the Venator is able to pick up enough information to achieve a target lock at 10 light-minutes, or 1.8*10^11 m. The Venator is 1137 m long, which will serve as our maximum baseline to establish the upper limit, since we do not know the actual placement of the sensors and so cannot determine the true baseline.

Taking the earlier derived "wavelength" and dividing it by the length of the Venator gives an angular resolution of 5.53*10^-7 degrees. Rearranging to subtend the angle to get the distance, you get (5.53*10^-7*2*pi*1.8*10^11)/360 = 1735.87 m for the size of the standard object.
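The same arithmetic, reproduced as written (note that this "wavelength" carries the degree-based units from the earlier step, an issue Wyrm raises later in the thread):

```python
import math

wavelength = 6.28e-4  # the "wavelength" figure derived above (degrees * meters)
baseline_m = 1137.0   # Venator length, taken as the maximum baseline
distance_m = 1.8e11   # 10 light-minutes in meters

angular_res_deg = wavelength / baseline_m  # ~5.5e-7 degrees
# Arc length subtended by that angle at the target-lock range:
standard_object_m = angular_res_deg * 2 * math.pi * distance_m / 360
print(f"standard object size: {standard_object_m:.1f} m")
```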

A few points here: we know they have hit other objects from further away; the planned bombardment of Hoth and the example in Rebel Dawn both spring to mind. However, in both cases those targets were in orbit, which makes their paths predictable. In those instances it is not a case of getting a target lock; it is a relatively simple calculation to know where the target will be at a given time and how long your shot will take to get there. As their targeting computers can hit other ships, this would not present a problem. Further, any other examples would need sufficient data to be calculated: you only need two of the three quantities (range, size, or angular resolution) to determine the third.


Now, I've done this, but I am not sure I did it right. It has been a very long time since I reviewed my optics theory and I never received any special training in it; this is HS level coupled with some Google research. Frankly, until the other day I never gave this much thought. A few things I am unsure about include:

*array baseline - I think you only need two sensors to figure this, based on how our eyes work, but I'm not sure. Does the baseline actually need to be the diameter? E.g. if I had a rectangular sheet of telescopes, would I need to divide by pi and take the square root to get the equivalent radius, which I then double and plug in as the baseline, or just use the long sides of the rectangle? I am pretty sure it is the latter, as you are doing triangulation, but I am not positive.

*size of the target - I am using one dimension (length) to determine visibility; shouldn't I be looking at area? And if so, how do I adjust: treat the presented surface area as a circle, find the diameter, and plug that into the above equations? Or am I doing this right? I'm pretty positive I am not; it seems strange to me that a TIE would not be easier to see with its wing panels presented to you than seen head-on. So if I am wrong, what is right?

*any other flaws I missed?

I realize this is written a bit haphazardly; I hope it is sufficiently clear. Any help and commentary would be appreciated.

Posted: 2008-05-15 02:00pm
by Ender
Quick note: before this gets parroted in some VS debate, in addition to the fact that I have questions about this method, it also gives sensor range to an arbitrary, in-universe standard target. Actual detection range will vary with the size of the target; an ISD will be able to see the Executor from a lot further off than it will a TIE fighter.

Posted: 2008-05-15 04:01pm
by Wyrm
Ender wrote:Now, 3.8 light-hours is 4.1*10^12 meters. Knowing the length of the Quella ship, we work backwards from the subtended angle: (360*1500) / (2*pi*4.1*10^12) = 2.09*10^-8 degrees is our angular resolution.
... Ho-kay. Why is 1500 m being used here as the resolving length to calculate angular resolution, Ender? That's the overall length of the Quella ship, so if you use this angular resolution, the Quella ship is going to look like a 1500 m-wide blob. Is the length of the Quella ship that unique?

You have to think of your angular resolution as your "pixel size". How many pixels do you need to identify a 1500 m long ship as a Quella ship with reasonable accuracy? That will set your angular resolution, and we can proceed to the next step.
Ender wrote:Since we know the baseline, we rearrange the equation for angular resolution and get (30000 * 2.09*10^-8) = 6.28*10^-4 for a "wavelength". "Wavelength" is in quotes because, given the variety of sensors used, it represents more of an average used in detection than a specific value; we don't know which sensor provided what information or how it was combined. That number does, however, fall in the far-infrared range, and IR is what you would use to search the skies for ships, which I suspect is intentional (hate for him aside, K-Mac's books have some of the best science in them).
Almost right. You have to include a factor of 1.220 for the size of the Airy disk in there somewhere, as per the Rayleigh criterion (what you're using):

sin θ = 1.220 λ/D

For small θ, sin θ ≈ θ, so you're okay there. Rearranging, you get:

θ D / 1.220 ≈ λ

Note that shorter wavelengths will have a better resolution for the same base length, so this λ will be the longest wavelength useful in identifying the Quesadilla.

Edit: The approximation sin θ ≈ θ is only when θ is in radians, while your figure is in degrees. Bad Ender!

Let's say you need about 100 px on a side to identify a Quesadilla by its rough shape. This makes the length you need to distinguish on the craft 15 m, so the angular resolution needed at 4.1e12 m is θ = 15 m/4.1e12 m = 3.7e-12 radians. Multiply by our aperture diameter (30,000 m), divide by our constant (1.220), and get our wavelength: λ = (3.7e-12)(30,000 m)/(1.220) = 9.0e-8 m, which carries you into the ultraviolet.
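That corrected calculation, sketched in code (the 100 px figure is illustrative only, as noted):

```python
# Rayleigh criterion: sin(theta) = 1.220 * lambda / D
# For angles this tiny, theta (in radians) ≈ 1.220 * lambda / D
pixels = 100                 # illustrative pixel count, not a canon figure
feature_m = 1500.0 / pixels  # 15 m resolved per "pixel" on the ship
distance_m = 4.1e12          # 3.8 light-hours
baseline_m = 30000.0         # tow-line array baseline

theta_rad = feature_m / distance_m             # required angular resolution
wavelength_m = theta_rad * baseline_m / 1.220  # longest usable wavelength
print(f"lambda = {wavelength_m:.2g} m")        # falls in the ultraviolet
```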

Edit Part Deux: Grrr... Answering the rest of Ender's questions:
Ender wrote:*array baseline - I think you only need two sensors to figure this, based on how our eyes work, but I'm not sure. Does the baseline actually need to be the diameter? E.g. if I had a rectangular sheet of telescopes, would I need to divide by pi and take the square root to get the equivalent radius, which I then double and plug in as the baseline, or just use the long sides of the rectangle? I am pretty sure it is the latter, as you are doing triangulation, but I am not positive.
The D in the Rayleigh criterion is the distance between two sensors. Two sensors sensing radiation of wavelength λ situated D apart can distinguish between two sources that have an angular separation of θ radians. That is, if you take your two sensors and scan across the sky, two sources separated by θ will give two distinct blips on your readout.

The catch is that the sensor outputs have to be able to interfere in some way. This has to be either a pipe, or some sort of specially prepared recording (qv. "very long baseline interferometry" for more). Otherwise, the baseline is the width of the sensor window itself, which is going to be much less than 30 km. (Single optical telescopes are able to get away with this easily, because you can't tell which part of the aperture the photon actually came into the telescope from.)
Ender wrote:*size of the target - I am using one dimension (length) to determine visibility; shouldn't I be looking at area? And if so, how do I adjust: treat the presented surface area as a circle, find the diameter, and plug that into the above equations? Or am I doing this right? I'm pretty positive I am not; it seems strange to me that a TIE would not be easier to see with its wing panels presented to you than seen head-on. So if I am wrong, what is right?
No. This comes from the Rayleigh formula: the sine function spits out a dimensionless number, therefore the right-hand side must also be dimensionless. Think about it.
Ender wrote:*any other flaws I missed?
The above considerations are only useful if your sensor has to form an image to work, such as for shape recognition. Other kinds of sensors don't need this. For instance, if your scanner looks for the infrared spectrum of the Quesadilla's engines, then you don't need any sort of angular resolution. We've been taking spectra of stars for more than a century, but we can't see a star's disk from down here.

Posted: 2008-05-15 06:35pm
by Ender
Wyrm wrote:
Ender wrote:Now, 3.8 light-hours is 4.1*10^12 meters. Knowing the length of the Quella ship, we work backwards from the subtended angle: (360*1500) / (2*pi*4.1*10^12) = 2.09*10^-8 degrees is our angular resolution.
... Ho-kay. Why is 1500 m being used here as the resolving length to calculate angular resolution, Ender? That's the overall length of the Quella ship, so if you use this angular resolution, the Quella ship is going to look like a 1500 m-wide blob. Is the length of the Quella ship that unique?

You have to think of your angular resolution as your "pixel size". How many pixels do you need to identify a 1500 m long ship as a Quella ship with reasonable accuracy? That will set your angular resolution, and we can proceed to the next step.
Ok, what kind of resolution do I need then? Below you say 100 px, what is the basis for that, and how do I determine if that is what they used or something more specific?
Almost right. You have to include a factor of 1.220 for the size of the Airy disk in there somewhere, as per the Rayleigh criterion (what you're using):

sin θ = 1.220 λ/D

For small θ, sin θ ≈ θ, so you're okay there. Rearranging, you get:

θ D / 1.220 ≈ λ

Note that shorter wavelengths will have a better resolution for the same base length, so this λ will be the longest wavelength useful in identifying the Quesadilla.
This still goes for telescopes? I ask because the structure here makes it seem like it doesn't. I know wiki is a shitty source, but as I said, this is outside my training.

Edit: The approximation sin θ ≈ θ is only when θ is in radians, while your figure is in degrees. Bad Ender!

Let's say you need about 100 px on a side to identify a Quesadilla by its rough shape.
What do you base that off?
This makes the length you need to distinguish on the craft 15 m, so the angular resolution needed at 4.1e12 m is θ = 15 m/4.1e12 m = 3.7e-12 radians. Multiply by our aperture diameter (30,000 m), divide by our constant (1.220), and get our wavelength: λ = (3.7e-12)(30,000 m)/(1.220) = 9.0e-8 m, which carries you into the ultraviolet.
ok

Posted: 2008-05-15 09:04pm
by Wyrm
Ender wrote:Ok, what kind of resolution do I need then? Below you say 100 px, what is the basis for that, and how do I determine if that is what they used or something more specific?
100 px is an illustrative example. I'm afraid you'll have to go elsewhere for what kind of resolution you'll need. It mainly depends on how many pixels you need to "visually" distinguish a Quesadilla from another type of ship of approximately the same size. ("Visually" used very loosely here.) You may need only a few, or a shitload of them. But 1 px = 1500 m ain't gonna cut it.
Ender wrote:This still goes for telescopes? I ask because the structure here makes it seem like it doesn't. I know wiki is a shitty source, but as I said, this is outside my training.
The Wikipedia telescope eqn is an approximation, based on the rule that sin θ ≈ θ for small angles θ (in radians — which for optical telescopes is just about always the case) and dropping the constant 1.220. You'll note that adding in the constant only makes a 22% difference in the figure. It's bad form to use an approximation to tell if two objects are separated unless by the approximation they are clearly separated.
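The 22% point can be checked directly: at angles this small, the full Rayleigh form and the approximation θ = λ/D differ only by the 1.220 constant.

```python
import math

wavelength_m = 9.0e-8  # UV figure from earlier in the thread
baseline_m = 30000.0

approx_rad = wavelength_m / baseline_m                       # theta ≈ lambda/D
rayleigh_rad = math.asin(1.220 * wavelength_m / baseline_m)  # full criterion
print(f"ratio: {rayleigh_rad / approx_rad:.3f}")             # ~1.220, i.e. 22% larger
```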
Ender wrote:
Let's say you need about 100 px on a side to identify a Quesadilla by its rough shape.
What do you base that off?
Nothing substantial, and used as an illustrative example only, as the bolded phrase "Let's say" tends to suggest.