An undistorted image can, for instance, be used to map a scene directly when the image was taken vertically and no relief was present. However, besides the lens distortions, geometric errors are also induced by the topographic relief and the tilt of the camera axis. To understand the latter, imagine a camera placed at a certain location in space (in the air or on the ground) and pointed in a certain direction. The location defines the projection centre O with three coordinates (X, Y, Z), while the direction is defined by three rotation angles: roll, pitch and yaw (omega, phi, kappa). Together, these six parameters establish the so-called exterior/outer orientation; other terms for this are camera extrinsics or simply pose. When the principal distance (which is part of the interior orientation) is included as well, the position of the image is unequivocally defined.
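The six exterior-orientation parameters can be sketched in a few lines of code. This is a minimal illustration with hypothetical values; the omega-phi-kappa rotation order used here is a common photogrammetric convention, but individual software packages differ in axis order and sign:

```python
import numpy as np

def rotation_from_opk(omega, phi, kappa):
    """Rotation matrix from the angles omega, phi, kappa (in radians).

    Assumes the common photogrammetric order R = R_x(omega) @ R_y(phi) @ R_z(kappa);
    other packages may use a different order or sign convention.
    """
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

# The six exterior-orientation parameters (hypothetical values):
X, Y, Z = 10.0, 20.0, 500.0              # projection centre O
omega, phi, kappa = 0.01, -0.02, 1.55    # rotation angles in radians
O = np.array([X, Y, Z])
R = rotation_from_opk(omega, phi, kappa)

# Express a world point in the camera frame using this pose.
X_world = np.array([15.0, 18.0, 0.0])
X_cam = R.T @ (X_world - O)
```

Since R is built from pure axis rotations, it is a proper rotation matrix (orthonormal, determinant 1) for any choice of angles.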
In a Structure from Motion (SfM) approach such as PhotoScan, a self-calibration/auto-calibration is run to automatically define the camera's interior orientation, which is stored for each image in the intrinsic parameter matrix K. Since PhotoScan can solve for four radial lens distortion parameters (k1, k2, k3, k4) and two decentring lens distortion parameters (p1, p2), the total lens distortion can be modelled very accurately and much better than with most tools such as PTlens. In addition to the abovementioned parameters, several other camera characteristics can be calibrated, such as affinity in the image plane, consisting of aspect ratio (or squeeze) and skew (or shear). However, zero skew (i.e. perpendicular sensor axes) and a unit aspect ratio (i.e. a photodetector width-to-height ratio of 1) can be assumed for any digital frame camera, which is why I would never ask PhotoScan to optimise for aspect and skew. Undistort is a function which uses the interior parameters calculated by SfM to create a new photograph that is free of lens distortion.
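The intrinsic matrix K and the distortion parameters mentioned above can be sketched as follows. This is a generic Brown-Conrady-style model with hypothetical values, not PhotoScan's exact formulation; in particular, the roles of p1 and p2 in the decentring terms differ between software packages:

```python
import numpy as np

def distort(x, y, k1, k2, k3, p1, p2):
    """Apply radial (k1..k3) and decentring (p1, p2) distortion to
    normalised image coordinates, Brown-Conrady style."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

# Intrinsic matrix K: principal distance f (in pixels) and principal
# point (cx, cy); zero skew and unit aspect ratio are assumed, so the
# two diagonal entries are equal. Values are hypothetical.
f, cx, cy = 3000.0, 2000.0, 1500.0
K = np.array([[f, 0, cx],
              [0, f, cy],
              [0, 0, 1]])

# Map a distorted normalised point into pixel coordinates via K.
xd, yd = distort(0.1, -0.05, k1=-0.12, k2=0.03, k3=0.0, p1=1e-4, p2=-2e-4)
u, v = K[0, 0] * xd + K[0, 2], K[1, 1] * yd + K[1, 2]
```

An undistort step is the inverse of this mapping: for each output pixel, the distorted source location is computed with the calibrated coefficients and the image is resampled there.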
In photogrammetry and computer vision, the geometry of perspective projection is used to model the formation of an image mathematically. In the field of photogrammetry, this is expressed by the collinearity equation, which states that the object point, the camera's projection centre and the image point are located on a straight line and that the image is formed on an exact plane. Lens distortions (radial and decentring), atmospheric effects (mainly refraction) and a non-planar image sensor are factors which prevent this. Since digital image sensors are by default treated as perfectly planar surfaces, and refraction is a very specific topic that is only of major importance when imaging from rather high altitudes and at off-nadir angles, only lens distortions are generally considered. In the case of an ideal camera, which would be a perfect central projection system in which projection implies a transformation of the higher-dimensional 3D object space into the lower-dimensional 2D image space, the lens imaging system would be geometrically distortionless. The mathematical parameters describing this ideal situation are the principal distance and the principal point, forming the so-called interior/inner orientation. However, since optical distortions are always present in real cameras, image points are imaged slightly off the location they should occupy according to the central projection. To work metrically with images, every image point must be reconstructed to its location under this ideal projective camera. Therefore, the deviations from the perfect situation are modelled by suitable distortion parameters, which complete the interior orientation. All the parameters of the interior orientation (also called camera intrinsics) are determined by a geometric camera calibration procedure. After this geometric camera calibration, all parameters that allow for building a model that reconstructs every image point at its ideal position are obtained, thereby fulfilling the basic assumption used in the collinearity condition.