by onimoni
Last Updated March 25, 2017 20:18

**Base Case.** Assume a well-lit, plain white background (no depth and *really white*), and a *magically* levitating black micro ball, such that it would yield a very small object of only a few pixels in the final picture, with almost no 3D quality.

Here, regardless of focal length, you would not be able to reverse engineer what focal length was used by just looking at the picture.

**Familiar Case I.** Suppose we have a standard portrait photoshoot in a studio, with only the face on the image, and again a well-lit white background.

Here, one could easily differentiate between a wide-angle lens and a telephoto, based on the distortion of the face.

**Familiar Case II.** Suppose we have an object with a landscape in the background. If we want the object to stay the same size across different focal lengths, we change perspective (i.e. the position of the camera) to compensate, and we end up with different relative distances between the object and the background.
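That compensation can be sketched numerically. A minimal pinhole-model sketch (Python; the 24 mm full-frame sensor height, the 1.8 m subject, and the 90% framing are made-up assumptions):

```python
def distance_for_same_framing(focal_mm, subject_height_m, fill_fraction,
                              sensor_height_mm=24.0):
    """Camera-to-subject distance (in metres) that keeps the subject at
    the same size in the frame, using the pinhole approximation
    image_height = focal * subject_height / distance."""
    image_height_mm = fill_fraction * sensor_height_mm
    return focal_mm * subject_height_m / image_height_mm

# A 1.8 m subject filling 90% of the frame height:
for f_mm in (24, 50, 100):
    print(f_mm, round(distance_for_same_framing(f_mm, 1.8, 0.9), 2))
```

Doubling the focal length doubles the required camera distance, which is exactly what changes the relative distance between object and background.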

**Early Hypothesis.** I guess the determination of focal length is based on

1. the width and height of the object (regardless of background); you would be able to notice/measure distortion,
2. the depth of the object or of the picture in general; a change in perspective (to compensate for the focal length) would show an elongated depth, e.g. a square box, or a person with an urban landscape as the background.

**Question.** Is there any theory on what the relationship is between 1., 2. (and possibly others) and focal length?

ps: any help reformulating this question is more than welcome :) (I'll remove this later)

What is needed in determining focal lengths in pictures?

What's needed is information about what the scene actually looks like. Your "base case" intentionally removes all the information about the distances between camera, subject, and background, making it impossible to make any real determination about the geometry of the scene.

In "Familiar Case I" it's easier to make a guess at the focal length used, because we all have some experience with how faces look and how they tend to be distorted by lenses. At the extreme wide end we also get a bit of the background included in the shot, which helps as well.

In "Familiar Case II" we have both subject and background information to go on: we can make a guess as to the size of the car, and also the real-world width of the background in the shot. We still have to estimate the distances between camera, subject, and background, but there are a lot of clues that make that possible.

Is there any theory on what the relationship is between 1., 2. -and possible others- and focal length?

I'm not sure what you mean here, but the angle of view determines what's visible in the shot. Angle of view is determined by focal length and sensor size.
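For a rectilinear lens that relationship has a simple closed form. A sketch (Python; the 36 mm full-frame sensor width is an assumed default, not from the original):

```python
import math

def angle_of_view_deg(focal_mm, sensor_width_mm=36.0):
    """Horizontal angle of view of a rectilinear lens focused at
    infinity: 2 * atan(sensor_width / (2 * focal_length))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

# e.g. a 50 mm lens on full frame covers roughly 39.6 degrees horizontally
```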

Ordinary optical analysis is based in Newtonian physics and hence upon Euclidean geometry.

The standard method for determining unknown values in *applied* Euclidean geometry is trigonometry. To simplify the problem, we can treat

`sensor size + focal length + cropping`

as an angle (the angle of view).

To calculate an angle we need two distances (e.g. the opposite and adjacent sides of a right triangle), or an already-known angle plus a distance.
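As a trivial sketch of the two-distances case (Python):

```python
import math

def angle_deg(opposite, adjacent):
    """Angle of a right triangle from its opposite and adjacent sides."""
    return math.degrees(math.atan2(opposite, adjacent))

# Equal opposite and adjacent sides give a 45-degree angle.
```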

If there is sufficient data, the focal length can be *derived*. If the data is incomplete, it may still be possible to estimate the focal length based on a set of reasonable assumptions. If there is not even enough information for that, no estimate is possible.

Consider an object distant enough from the camera to cover a single pixel.

Crop an image of that object down to the single pixel.

Consider a square patch of known size, placed parallel to the sensor plane at a known distance from a sensor of known size, filling 50% of the vertical dimension of the resulting image without cropping.
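Under those assumptions the focal length follows from similar triangles. A sketch (Python; the 24 mm sensor height and the example numbers are assumptions, not from the original):

```python
def focal_from_patch_mm(patch_size_m, distance_m, fill_fraction=0.5,
                        sensor_height_mm=24.0):
    """Focal length from a known patch parallel to the sensor. The patch
    fills `fill_fraction` of the frame vertically, so the full scene
    height at its distance is patch_size / fill_fraction, and by similar
    triangles focal / sensor_height = distance / scene_height."""
    scene_height_m = patch_size_m / fill_fraction
    return sensor_height_mm * distance_m / scene_height_m
```

For example, a 0.5 m patch at 2 m filling half the frame height would imply a 48 mm lens on this hypothetical sensor.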

In practice, distances, sizes, and crops are estimated; an estimate of the accuracy of those estimates should be made and should accompany the calculated focal length.

Since we know the sensor size, there is a direct relation between angle of view and focal length. Therefore, if we want to know the focal length we need to find the angle of view.
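Going the other way, a sketch of that direct relation (Python; same rectilinear thin-lens assumption, with an assumed 36 mm sensor width):

```python
import math

def focal_length_mm(angle_of_view_deg, sensor_width_mm=36.0):
    """Focal length from the horizontal angle of view, inverting
    aov = 2 * atan(sensor_width / (2 * focal))."""
    half_angle = math.radians(angle_of_view_deg) / 2
    return sensor_width_mm / (2 * math.tan(half_angle))
```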

Base case: In the base case, if we know the diameter of the levitating ball, we can extrapolate the width of the scene (parallel to the sensor) at the ball's distance. If we also knew the ball's distance from the sensor, we could calculate the angle of view.
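That calculation, as a sketch (Python; the pixel counts and distances in the example are invented):

```python
import math

def aov_from_ball_deg(ball_diameter_m, ball_px, image_width_px, distance_m):
    """Angle of view from one object of known size and known distance.
    The object's size in pixels extrapolates to the full scene width
    (parallel to the sensor) at the object's distance."""
    scene_width_m = ball_diameter_m * image_width_px / ball_px
    return math.degrees(2 * math.atan(scene_width_m / (2 * distance_m)))
```

A 1 cm ball covering 10 of 6000 pixels at 3 m implies a 6 m scene width, and therefore a 90-degree angle of view.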

Base case 2: Consider two levitating balls with the same diameter, one located at the left of the frame and one at the right. Let the left ball be 2 m further from the sensor than the right ball.

Solution base case 2: Let the left ball appear half the size of the right ball in the picture. Then the left ball must be twice as far from the sensor. Since it is also 2 m further away, the right ball must be 2 m from the sensor (and the left ball 4 m). Since we now have the width of an object and the distance of that object, we can calculate the angle of view and the focal length.

Recap base case 2: We started with three lengths: two object widths in the picture and one known distance between the objects. We did not need any extra data to calculate the angle of view.
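The whole chain for base case 2 can be sketched end-to-end (Python; the sensor width, ball size, and pixel measurements are invented numbers):

```python
import math

def focal_from_two_balls_mm(diameter_m, left_px, right_px, extra_distance_m,
                            image_width_px, sensor_width_mm=36.0):
    """Focal length from two identical balls, where the left ball is a
    known extra distance further from the sensor than the right one.
    Apparent size is inversely proportional to distance, so
    d_left / d_right = right_px / left_px."""
    ratio = right_px / left_px                  # = d_left / d_right
    d_right = extra_distance_m / (ratio - 1)    # ratio * d_right = d_right + extra
    scene_width_m = diameter_m * image_width_px / right_px
    aov = 2 * math.atan(scene_width_m / (2 * d_right))
    return sensor_width_mm / (2 * math.tan(aov / 2))
```

With 0.1 m balls measuring 250 px and 500 px, 2 m apart in depth, in a 6000 px wide image, this works out to a 60 mm lens on the assumed sensor.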

Familiar case 1: We could apply the same method if we knew the width of the tip of the nose, the width of an eye, and the distance from the tip of the nose to the eye. The difficulty is that this would have to be the nose-to-eye distance as seen from the sensor's point of view: a slight turn of the face changes it.

Familiar case 2: Let's say we know the length of the windshield and the length of the hood of the car (from the front to the windshield). We have three lengths in the picture, since we have the close end and the far end of the hood, and the length of the windshield gives us the length between them. Unfortunately, this is not enough data to calculate the angle of view: we do not have the distance from the close to the far end of the hood relative to the sensor, and the hood length is not depicted parallel to the sensor. We are able to calculate the angle of view if we use the fact that the hood makes a 90-degree angle with the windshield. With some matrix computation we can then convert the hood lengths to lengths parallel to the sensor, and the windshield length to a distance relative to the sensor.
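The foreshortening that makes this hard can be illustrated with a bare pinhole-projection sketch (Python; the 50 mm focal length and all coordinates are invented):

```python
def project(point, focal_mm):
    """Pinhole projection: a point (X, Y, Z) in metres, with the camera
    at the origin looking down +Z, maps to image coordinates in mm."""
    X, Y, Z = point
    return (focal_mm * X / Z, focal_mm * Y / Z)

F = 50  # hypothetical focal length in mm

# A 1 m segment parallel to the sensor, 3 m away:
parallel_mm = project((0, 0.5, 3), F)[1] - project((0, -0.5, 3), F)[1]
# The same 1 m length receding along the viewing direction (like the
# hood), 0.5 m below the optical axis:
receding_mm = project((0, -0.5, 4), F)[1] - project((0, -0.5, 3), F)[1]
```

The parallel segment images at about 16.7 mm, the receding one at only about 2.1 mm, which is why the hood lengths must first be converted into sensor-parallel lengths before the trigonometry applies.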

Conclusion: In the first three examples we could calculate the angle of view directly, because all distances were either parallel or perpendicular to the sensor. In the last example we needed to "translate" the lengths we had into parallel and perpendicular lengths, using extra data.
