Accurate 3D Mapping Algorithm for Flexible Antennas

This work addresses the problem of performing an accurate 3D mapping of a flexible antenna surface. Consider a high-gain flexible satellite antenna; even a submillimeter change in the antenna surface may lead to a considerable loss in antenna gain. Using a robotic subreflector, such changes can be compensated for. Yet, in order to perform such tuning, an accurate 3D mapping of the main antenna is required. This paper presents a general method for performing an accurate 3D mapping of marked surfaces such as satellite dish antennas. Motivated by the novel technology of nanosatellites with flexible high-gain antennas, we propose a new accurate mapping framework which requires a small monocamera and known patterns on the antenna surface. The experimental results show that the presented mapping method can detect changes with up to 0.1 mm accuracy while the camera is located 1 meter away from the dish, allowing RF antenna optimization for the Ka and Ku bands. Such an optimization process can improve the gain of flexible antennas and allow adaptive beam shaping. The presented method is currently being implemented on a nanosatellite which is scheduled to be launched at the end of 2018.


Introduction
The vision of having a reliable and affordable global network which can be accessed from any point on the globe at any time is a huge scientific challenge which has attracted many researchers during the last few decades. Most proposed solutions are based on a network of hundreds or thousands of LEO nanosatellites which would constitute a global network connected to the earth via RF communication. These new-space projects are of interest to major companies such as Google, Qualcomm, Facebook, and SpaceX. OneWeb is an example of such a project involving a large constellation of LEO satellites. Other projects such as Google's Project Loon [1] or Facebook's Aquila drone are not directly focused on satellite constellations but generally assume that such a global network already exists. The new-space industry includes many small- or medium-size companies which develop products for the new-space market (e.g., Planet Labs and Spire are focusing on global imaging [2] and global IoT). One of the most famous LEO satellite constellations is the Iridium network, developed in the '90s; this global network is still operational, and the second-generation network, named Iridium Next, is currently being deployed. Optimizing a global network in terms of coverage, deployment, and services involves extremely complicated problems from the computational point of view. In order to reduce the cost of deploying such a network, many new-space companies are working on miniaturizing their satellites, as launching 100 LEO nanosatellites often costs less than launching a single large satellite into a geosynchronous orbit. In order to allow long-range, wide-band RF communication between a satellite and a ground station, high-gain directional antennas are used. Having such a dish antenna on board significantly increases a satellite's size and weight; therefore, almost all current nanosatellites have a limited bandwidth, as they use small low-gain antennas allowing a bandwidth of sub-Mbps. NSLComm has developed a concept of a nanosatellite with a relatively large expandable antenna, allowing a significantly better link budget from a nanosatellite [3]. Nevertheless, flexible antennas are sensitive to surface distortion, especially in space, where significant temperature changes are common. In this paper, we present a generic method to accurately map the surface of a flexible antenna located on a satellite. The presented framework requires very limited space and computing power, allowing it to be implemented even on small nanosatellites.

Related Works
Mapping a 3D surface is an important problem which is of interest to many researchers. The available literature suggests a wide range of mapping techniques, including time of flight [4], triangulation [5], structured light [6], RGBD [7], stereo vision [8], and image-based modeling [9].
In this work, we focus on the challenging task of mapping a flexible satellite antenna, which is not suitable for common 3D scanning techniques due to space limitations and the need to perform a 3D scan from a single fixed angle (i.e., a single image). The ability to infer a 3D model of an object from a single image is necessary for human-level scene understanding. Tatarchenko et al. [10] have presented a convolutional network capable of inferring a 3D representation of a previously unseen object given a single image of this object, while in the work of Williams et al. [11], graph theory and dynamic programming techniques over shape constraints were presented to compute the anterior and posterior surfaces in individual 2D images. Tanskanen et al. [12] have proposed a complete on-device 3D reconstruction pipeline for mobile monocular hand-held devices, which generates dense 3D models with an absolute scale on-site while simultaneously supplying the user with real-time interactive feedback. Medina et al. [13] suggest a resistor-based 2D shape sensor, and Shvalb et al. [3] show that, using a robotic flexible subreflector, even relatively significant changes in a dish surface can be corrected; naturally, having a 3D model of the current surface of the main dish antenna can improve the accuracy and the run time of such systems.

Our Contribution.
In this work, we present a novel method which can robustly recover a surface shape from a single image with markers of known shape. The suggested method uses a set of visual markers in order to compute a pointcloud. To the best of our knowledge, this is the first work which presents a framework for performing 3D reconstruction of smooth surfaces with submillimeter accuracy that is applicable to an on-board satellite flexible antenna.

Flexible Antenna for Nanosatellites
The general concept of the flexible antenna with an adjustable robotic subreflector was presented recently [3]. It is based on a flexible expandable main reflector and an adjustable robotic subreflector which can compensate for minor changes in the main reflector surface. Mechanical mechanisms for manipulating the robotic subreflector may be based on linear servo or piezoelectric motors [3] but can also be based on bioinspired manipulators (see [14]). In order to optimize high-frequency RF communication (e.g., the Ka band), the main antenna should be mapped with an accuracy level which is 25-50 times finer than the typical communication wavelength (about 1 cm in Ka), leading to a challenging mapping accuracy requirement of about 0.1-0.2 mm on average (see [15]). A nanosatellite with such a flexible antenna should also be equipped with the following components: (i) a global position receiver (e.g., GPS), (ii) a star tracker in order to determine its orientation, and (iii) an attitude control mechanism (based on both reaction/momentum wheels and a magnetorquer). Using the above components, the satellite on-board computer can aim the antenna at a specific region on earth; in general, this process resembles the task of an imaging satellite that needs to aim its camera at a given region. Note that for a LEO satellite, this is a continuous (always-on) process, unlike the case of geosynchronous satellites, which only need to maintain a fixed orientation.
The use of flexible antennas for space applications is a relatively new concept. Having an accurate on-board mapping system for the flexible antenna provides two major benefits: (1) A fast and accurate tuning of the robotic subreflector to compensate for the distortion of the main reflector and an adaptive beam-shaping capability of the transmitted pattern.
(2) A study of the changes in the flexible surface with respect to temperature and time.
Due to space and weight limitations, the on-board 3D mapping system should be as compact as possible. Moreover, the method should use limited computing power for on-board algorithms or limited bandwidth for ground-based algorithms. Following these requirements, we use a monocamera and targets of known shape for the mapping task.

Monocamera Mapping Algorithm
In order to map the 3-dimensional pointcloud of the satellite antenna, we first embed a set of targets (or markers) of known shape and size. A single camera is assumed to be located near the antenna focal point. We now present the general algorithm which analyzes the acquired image to compute the 3D surface of the dish. This process consists of the following stages: (i) camera calibration, (ii) initial pointcloud computation, and (iii) global adjustment.
4.1. Camera Calibration. We start by calibrating the camera using the algorithm proposed by Zhang in [16]. Camera calibration is the process of estimating intrinsic and/or extrinsic parameters. Intrinsic parameters deal with the camera's internal characteristics, such as its focal length, skewness, distortion, and image center. The camera calibration step is essential for 3D computer vision, as it allows one to estimate the scene's structure in Euclidean space and to remove lens distortions, which degrade accuracy. Figure 1 depicts an image taken after the calibration process. Figure 2 illustrates the position of the camera on the satellite, which allows a view of the whole antenna span.
Accordingly, the camera's FoV (field of view) should be chosen to be in the range of 60° to 90°. Such a relatively large FoV imposes a nonnegligible camera distortion. Thus, the calibration process is necessary to allow an accurate angular transformation between the camera coordinate system (i.e., pixel positions) and the satellite global coordinate system.
Often, one would also like to express the position of points (x, y, z) given in the camera coordinate system in a world (satellite) coordinate system. This may be done by simply rotating the set of points P by the angle of inclination of the camera (see Figure 2).
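As a concrete sketch of this transformation, the following minimal illustration (with our own function names, assuming a pinhole model with the intrinsic matrix K estimated during calibration) back-projects a pixel to a viewing ray and rotates camera-frame points into the satellite frame:

```python
import numpy as np

def pixel_to_camera_ray(u, v, K):
    """Back-project a pixel (u, v) to a unit viewing ray in the camera
    frame, using the intrinsic matrix K from the calibration step."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return ray / np.linalg.norm(ray)

def camera_to_satellite(points, inclination_deg):
    """Rotate camera-frame points (N, 3) into the satellite frame by the
    camera's angle of inclination (modeled as a rotation about x)."""
    t = np.radians(inclination_deg)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, np.cos(t), -np.sin(t)],
                  [0.0, np.sin(t), np.cos(t)]])
    return points @ R.T  # row vectors: p_satellite = R @ p_camera
```

For example, with the principal point at (320, 240), `pixel_to_camera_ray(320, 240, K)` returns the optical axis direction (0, 0, 1).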

Initial Pointcloud Generator Algorithm
We start our discussion considering circular targets. Algorithm 1 produces the initial pointcloud which we use throughout this paper. Here, the function T(im) uses the information from the calibration step to remove lens distortion from the image, and C_camera is the angular resolution (taken from the camera parameters). We denote by Segment(F) the function that segments the acquired image, detects the targets T, and computes the triplet (center, area, geometry) for each target. In order to compute Δα for each target T, consider two of its vertices w1 and w2 and do the following: (1) Calculate the normal of the surface that the target lies on; the manner in which the normal is calculated differs for each pattern (we discuss this below).
(2) For each pair w1, w2 ∈ T, do the following: (a) Consider the plane that passes through the camera point (the origin) and the points w1 and w2. Define line l1 as the intersection between this plane and the plane that the target lies on.
(b) Let l2 be the line connecting the camera and the midpoint between w1 and w2.
(3) Set α to be the angle between l1 and l2 (see Figure 3).
(4) Define the angular difference Δα as 90° − avg(αi).

Algorithm 1: Initial 3D mapping using circular targets.
Input: undistorted image (frame) F. Output: 3D pointcloud.
1: Let im_t = T(im).
2: T ← Segment(F).
3: for each triplet t_i ∈ T, compute a 3D point p_i ∈ P as follows:
   (1) the x, y coordinates of p_i are the center values of t_i;
   (2) n_i ← the normal of the target;
   (3) Δα_i ← the angular difference between n_i and the vector to t_i;
   (4) let p_i.z = t_i.area · C_camera / cos(Δα_i).
4: end for

We implement the above algorithm for multicircular targets. The use of such targets is motivated by two abilities: preventing the pixel-snapping problem (allowing subpixel accuracy in locating the center of the target) and improving the accuracy of the computed normal of the plane the target lies on. The advantages of using circles therefore contribute to the overall accuracy of the computed Z dimension.
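One consistent reading of the depth step of Algorithm 1 is the usual area-to-distance relation: a planar target of known physical area A, tilted by Δα relative to the view ray, subtends a solid angle A·cos(Δα)/z², and the measured solid angle is the segmented pixel area times the per-pixel angular resolution C_camera. A minimal sketch under that reading (variable names are our own):

```python
import numpy as np

def depth_from_area(pixel_area, true_area, delta_alpha, c_camera):
    """Estimate a target's distance z from its apparent (pixel) area.
    pixel_area  : segmented target area in pixels
    true_area   : known physical area of the printed target
    delta_alpha : angle between the target normal and the view ray (rad)
    c_camera    : solid angle covered by a single pixel (sr/pixel)
    The target subtends true_area*cos(delta_alpha)/z**2 steradians."""
    omega = pixel_area * c_camera  # measured solid angle of the target
    return np.sqrt(true_area * np.cos(delta_alpha) / omega)
```

Note how the cos(Δα) correction from steps (2)-(3) enters: a tilted target looks smaller, so ignoring the tilt would overestimate the distance.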
Figure 4 depicts a multicircular target cropped from an acquired image; note that each circle is distorted into an ellipse as a result of the varying orientations and distances. The following explains how one computes the pointcloud using multicircular targets: (1) We apply an ellipse detector algorithm which uses a nonlinear pattern (connected component) in the binarized image. Next, the estimate is refined using a subpixel-resolution algorithm on the grayscale image. We detect both outer and inner ellipses; then, for each pair of ellipses, we find the average center, which is more robust to varying light-intensity conditions that could cause a pixel-snapping problem (i.e., a feature detected at one pixel in one image can deviate by a single pixel in another image taken under the same conditions). Figure 5 shows an example of the ellipse detector algorithm's result.
(2) For each target, calculate the center C_p by running the K-means clustering algorithm on its ellipse centers (Figure 6 shows an example of the K-means result).
(3) Let the x, y coordinates of the target in the pointcloud be the x, y coordinates of C_p.
(4) Find the normal of the ellipse as follows: consider the largest ellipse in the target, with major axis a and minor axis b, and let p1 and p2 be the intersections of the major and minor axes with the ellipse, respectively.
(a) Assume that the camera view is on the yz plane, and write p1 = (x1, u1, v1), where x1 is unknown and u1 and v1 are the known coordinates of the intersection point in the yz plane; in the same way, let p2 = (x2, u2, v2).
(b) The center point is p_c = (0, u_c, v_c) (since we have no depth information, we arbitrarily place the yz plane at the center of the circle).
(c) Since p1 and p2 lie on the original circle of radius r (where r = a/2, as the major axis is preserved under the projection of the circle), we must have x1² + (u1 − u_c)² + (v1 − v_c)² = r², from which we can calculate the absolute value of x1, and similarly x2² + (u2 − u_c)² + (v2 − v_c)² = r², from which we can calculate the absolute value of x2.
(d) Note that one needs to determine the signs of x1 and x2. If we choose p1 and p2 so that they are opposite (in the sense that they have inverse coordinates about the center), then x1 and x2 should have the same absolute value with opposite signs. Since we can correctly determine whether x1 and x2 are closer to the camera than the center or further away, the signs are easily set.
(e) Let v1 = p1 − p_c and v2 = p2 − p_c; the vector n = v1 × v2 is then normal to the ellipse.
(f) Then, compute the angular difference Δα_i and the Z value as described above.
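A useful sanity check on the normal computation: a circle viewed at tilt t projects to an ellipse whose minor-to-major axis ratio is cos(t), so the tilt magnitude (though not the full normal direction) can be read off directly. The sketch below is our own simplification, not the per-axis procedure above; it shows that relation together with the robust center averaging of steps (1)-(2):

```python
import numpy as np

def tilt_from_ellipse(major, minor):
    """A circle tilted by angle t projects to an ellipse with
    minor/major = cos(t); invert the ratio to recover t (radians)."""
    return np.arccos(np.clip(minor / major, -1.0, 1.0))

def robust_center(ellipse_centers):
    """Average the inner/outer ellipse centers of one multicircular
    target; the mean is far less sensitive to pixel snapping than any
    single detected center."""
    return np.mean(np.asarray(ellipse_centers, dtype=float), axis=0)
```

For instance, an ellipse with axis ratio 1:2 corresponds to a 60° tilt, since cos(60°) = 0.5.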
As will be exemplified below, using circular targets enables an average accuracy level below 0.1 mm. Yet, such a method requires high-quality printing of curved lines on a flexible antenna made of Kapton foil. Such printing is hard to perform with space-qualified ink. Therefore, we needed to adjust the algorithm to work with targets which are composed of straight lines only.
4.3. Mapping Using a Uniform Grid. After testing a wide range of possible straight-line patterns, we concluded that a simple uniform grid (see Figure 7) is the most suitable target realizable on the actual surface of the flexible antenna. At first, we tested algorithms for detecting lines using edge detection methods. Such an approach leads to relatively poor results, as the edges on the antenna as captured by the camera (see Figure 7) are not straight lines but rather complicated curves. Performing regression on such curves introduced significant errors. Thus, we examined an alternative methodology where we first detect all the inner corners of each square (using regression to a square). We then define Level 0 to be the set of all corner points of each square and Level 1 to be the set of the centers of the unit squares implied by the points in Level 0 (see Figure 8). Algorithm 2 computes a 3D pointcloud from an image of a grid-based target using the notion of the Level 1 point set.
In order to implement the above algorithm, the following properties should be defined: (i) Let S be the set of all small squares in L1, including unit squares, two-unit squares, and up to some relatively small number of units (usually squares subtending less than a 10-degree angle).
(ii) Given a square s_i, its area, and its normal, one can approximate the distance of s_i from the camera, where the center of s_i gives the angular coordinate of p_i. This is the same method used for the circular targets of Algorithm 1.
(iii) In some implementations, p_i can be generalized to a weighted point associated with a confidence of the distance approximation based on s_i, a_i, and n_i; that is, the expected distance accuracy for a two-unit square is higher than for a unit square.
(iv) Computing the normal of s_i can be performed using the EPnP algorithm [17] (see Figure 9).
The grid-based algorithm is relatively robust and simple to implement. Yet, in most cases, the average error level was too large, about 0.3-0.5 mm. Moreover, the manufacturing limitations of the flexible antenna require us to design an "on-board" algorithm which is both accurate and feasible.
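For concreteness, the Level 0 to Level 1 averaging used by the grid method can be sketched directly; the assumed array layout of the detected corners (rows × columns × 2) is our own convention:

```python
import numpy as np

def level1_from_corners(corners):
    """corners: (rows, cols, 2) array of detected Level 0 grid corners.
    Returns the (rows-1, cols-1, 2) Level 1 points: each is the centroid
    of one unit square's four corners, which averages out per-corner
    detection noise."""
    c = np.asarray(corners, dtype=float)
    return 0.25 * (c[:-1, :-1] + c[1:, :-1] + c[:-1, 1:] + c[1:, 1:])
```

The same 2×2 averaging can be iterated on the Level 1 points to obtain coarser, lower-noise levels, mirroring the level_i construction used later in Section 5.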

On-Board Satellite Implementation
In this section, we define the actual "space algorithm" for computing the 3D surface of a flexible antenna. Algorithm 3 compares two images: a reference image (P) and a current image (I). P is the optimal ("perfect") lab image of the flexible antenna from the satellite camera. This image is taken during an RF test of the complete satellite. I is the "space" image which is compared with P. Algorithm 3 computes the 3D difference map between P and I instead of the actual 3D pointcloud, as the 3D surface of P was mapped at a high accuracy level during the final testing stage. Once the satellite is launched, its flexible antenna is unfolded, and the surface of the main (flexible) antenna may suffer from global distortions due to the flexible nature of the antenna. In order to overcome such global distortions, we decided to use two different coordinate systems: a satellite coordinate system and an antenna coordinate system. For each target center (a 2D point in the image), we consider it in the antenna coordinate system, and then we consider its relative position in the satellite coordinate system. In order to determine the place of the target in the antenna coordinate system, we detect the contour of the antenna; then, for each point, we consider its relative position to the contour (edge). Figure 10 depicts the contour detection step flowchart. Having the 2D points in the antenna coordinate system, we use Algorithm 1 to map the antenna surface.
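The per-target comparison of the reference image P and the space image I reduces to differencing depth estimates at matched antenna-frame positions. A schematic sketch (matching by nearest antenna-plane position is our simplification of the contour-based registration described above):

```python
import numpy as np

def difference_map(ref_points, cur_points):
    """ref_points, cur_points: (N, 3) pointclouds derived from the
    reference (P) and current (I) images, already expressed in the
    antenna coordinate system. Each current target is matched to the
    nearest reference target in the antenna plane (x, y); returns the
    per-target z displacement and its RMS, i.e. a 3D difference map."""
    ref = np.asarray(ref_points, dtype=float)
    cur = np.asarray(cur_points, dtype=float)
    d2 = ((cur[:, None, :2] - ref[None, :, :2]) ** 2).sum(axis=-1)
    nearest = d2.argmin(axis=1)          # index of the matched target
    dz = cur[:, 2] - ref[nearest, 2]     # signed surface displacement
    return dz, float(np.sqrt(np.mean(dz ** 2)))
```

Because only displacements relative to the lab-mapped reference are needed, small systematic errors common to both images cancel, which is what makes the sub-0.1 mm target realistic.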
5.1. 3D Optimized Surface Generation Algorithm. We define two levels of noise filtering: level 0, which uses direct position measurements for estimation (e.g., the estimation of a target's center), and level 1, which averages out level 0 estimations (e.g., the center of neighboring targets' centers). We define level i in the same iterative manner.
5.1.1. Minimum LAR Algorithm. Having found the 3D pointcloud using the algorithm above, we now find the minimum of the residual function, which yields a best-fitting surface for the 3D points.
Fitting requires a parametric model that relates the response data to the predictor data with one or more coefficients. The result of the fitting process is an estimate of the model coefficients.
Algorithm 2: Initial 3D mapping using a grid.
Input: undistorted image (frame) F. Output: 3D pointcloud.
1: Let P* ← ∅.
2: Let C0 be the set of all points on the corners of the grid.
3: Compute Level 1 (L1) from F and C0.
4: Let S ← all small squares in L1.
5: for each s_i ∈ S do
   (1) let a_i ← the area of s_i;
   (2) let n_i ← the normal of s_i;
   (3) let p_i ← the 3D point associated with s_i w.r.t. a_i and n_i;
   (4) add p_i to P*.
6: end for
7: Return P*.

The following algorithm returns the best-fitting plane for a given 3D pointcloud with least absolute residual (LAR) surface optimization, in order to increase the expected z-accuracy to below 0.1 mm.
Here, r_i are the usual least-squares residuals and h_i are the leverages, which adjust the residuals by reducing the weight of high-leverage data points that have a large effect on the least-squares fit. The standardized adjusted residuals are given by u = r_adj/(K s), where K is a tuning constant and s is the robust standard deviation given by MAD/c, in which c is a constant and MAD is the median absolute deviation of the residuals.
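The reweighting scheme described here is the standard robust-regression recipe (iteratively reweighted least squares with bisquare weights on leverage-adjusted residuals). A sketch, assuming the common choices K = 4.685 and c = 0.6745, both of which the text leaves unspecified:

```python
import numpy as np

def robust_plane_fit(points, K=4.685, iters=30):
    """Fit z = b0*x + b1*y + b2 to a 3D pointcloud by iteratively
    reweighted least squares with bisquare weights, following the
    residual adjustment described in the text. K = 4.685 and
    c = 0.6745 are assumed, conventional tuning values."""
    P = np.asarray(points, dtype=float)
    X = np.c_[P[:, 0], P[:, 1], np.ones(len(P))]
    z = P[:, 2]
    # leverages h_i: diagonal of the hat matrix of the design
    H = X @ np.linalg.pinv(X.T @ X) @ X.T
    h = np.clip(np.diag(H), 0.0, 1.0 - 1e-9)
    w = np.ones(len(P))
    beta = np.zeros(3)
    for _ in range(iters):
        Xw = X * w[:, None]              # weighted normal equations
        beta = np.linalg.lstsq(Xw.T @ X, Xw.T @ z, rcond=None)[0]
        r = z - X @ beta
        r_adj = r / np.sqrt(1.0 - h)     # leverage-adjusted residuals
        s = np.median(np.abs(r_adj)) / 0.6745   # robust scale, MAD/c
        u = r_adj / (K * max(s, 1e-12))  # standardized residuals
        w = np.where(np.abs(u) < 1.0, (1.0 - u ** 2) ** 2, 0.0)
    return beta
```

Points whose standardized residual exceeds 1 receive zero weight, so gross outliers (e.g., a misdetected target) do not bias the fitted surface at all.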

Experimental Results
In this section, we show the experimental results of each step of the proposed algorithm.

Camera Calibration Step.
For the setup, we positioned the calibration target (a chessboard) on a 3D-printer plate that has a vertical movement accuracy better than 0.1 mm. A camera was fixed at a distance of 80 cm from the plate, as shown in Figure 11. The plate was then translated up and down in 0.1 mm steps. By comparing the commanded translation with a naive distance calculation of the movement, the calibration was verified.

Normal Detection Accuracy Test.
In this step, we gave the normal detector (Algorithm 4) points with a known ground-truth normal and ran it. Table 1 lists some numeric results of the algorithm for Figure 12.
The results above show an average angular error of about 0.4°. Assuming that the monocamera is located close to the subreflector, such angular "noise" induces only minor errors, which are commonly smaller than a 10⁻⁴ ratio.
We built a practical setting that provides accurate movement of a rigid body (plate) onto which a page with printed shapes (targets) can be mounted. In Figure 13, we show the setting with an explanation of its components. Two types of cameras were used: (1) an embedded 14-megapixel sensor with a FoV of about 75° and (2) a 16-megapixel Android phone camera (Galaxy S6). We attached various types of paper and Kapton targets to the panel and tested the algorithm's ability to detect fine changes. Figure 14 shows the expected noise level from comparing two images of the same targets (without moving the panel), which is usually lower than 0.04 pixel. Figure 15 presents the proposed algorithm "in action": the panel with 30 targets was moved 1 mm on average ((a) almost no movement, (b) 0.2 mm, (c) about 1.8 mm, and (d) about 2 mm; see Figure 16), and the presented graph shows the linearity as well as the accuracy of the level 0 dataset, demonstrating an accuracy level better than 0.2 mm on average.
In Figure 17, we show a test on the Kapton foil, which introduced reflection and lighting problems.

Discussion and Future Work
We introduced a novel methodology for mapping flexible adaptive aerospace antennas. The proposed method can detect submillimeter distortions even on relatively large reflectors. The presented methodology allows an autonomous (i.e., on-board) computation of the surface, which can be used to continuously investigate the not-yet-well-understood behavior of Kapton foil flexible antennas in the extreme temperatures of space. Using the 3D surface mapping, the robotic subreflector can overcome minor distortions in the main reflector, allowing a typical gain improvement of 3-7 dB [3] and a new capability of dynamic beam shaping. The presented method was implemented and tested on a laboratory prototype of a nanosatellite with a two-foot flexible main reflector. The presented method reached an accuracy level of 0.1 mm on circular targets and a 0.3-0.5 mm accuracy on grid-based targets (with low-quality printed grids). Using the "on-board" algorithm, which uses a reference image, the expected accuracy reached the required level of 0.1 mm. We plan to implement the current method on a real nanosatellite with a flexible antenna, which is scheduled to be launched at the end of 2018. We hope that, using the presented framework, the vision of a large-scale LEO satellite constellation which is both affordable and globally accessible can get one step closer to reality.

Figure 18: A difference map between two images taken consecutively with no movement. On the scale of the difference, we can see that the maximum difference between the two images is 0.00035 pixel, corresponding to significantly less than 0.1 mm; this result demonstrates the algorithm's accuracy.

Figure 1: An example of an image (after the calibration process). The intrinsic parameters are marked in red, and the expected accuracy (error) is marked with yellow circles (a larger circle implies a larger expected error).

Figure 2: The camera location on board; here, θ is the field of view, which is considered to be in [60°, 90°]. Note the difference between the coordinate systems of the satellite and the camera.


Figure 4: The multicircular targets on a solid main reflector as acquired by the monocamera.

Figure 3: An illustration of the information needed to find Δα, where α is the green angle.

Figure 6: An example of the K-means clustering (KMC) result; the dotted points on the right are the centers of the ellipses, and the starred (blue) ones are the KMC result.

Figure 7: The grid pattern printed on paper. Note that the blue lines are slightly curved.

Figure 8: Computing Level 0 and Level 1 from a grid-based image: Level 0 is presented as the corners of the green grid, and Level 1 is marked as blue dots in the center of each green square.

Figure 9: An example of normal detection results on some printed squares; note that target 10 and target 29 are oriented differently, with different computed normal vectors.

Figure 11: The camera calibration test setup. Using a calibrated 3D printer with a robotic moving panel, we were able to calibrate the Z approximation up to the 0.4 mm accuracy level.

Figure 13: The practical setting of the following experiments, which establishes the accuracy level. The setting is divided into 4 main components: 1 is a dynamic plate holding the surface to be mapped, which can move forward or backward; 2 is a stabilized stand that holds the camera; 3 is a light source used to overcome the pixel-snapping problem; and 4 is a submillimeter spacer with which we can measure the movement of the plate (1), giving a reference for the actual movement.

Figure 14: The typical noise of level 0 targets, comparing two images of the same targets shot with the same lighting and camera parameters. The typical noise level is about 0.03-0.04 pixel. In almost all tested cases, the level 0 noise is below 0.1 pixel.

Figure 16: The testing setup: a paper with 30 (5 × 6) targets was attached to a flat panel. The panel can be moved using a set of accurate spacers (at a, b, c, and d), allowing us to test global movements with a high level of accuracy.

Figure 17: Kapton foil test. Note the black reflection (marked by a blue circle) at the lower part; such reflection issues can be solved with controlled lighting. Yet, any real implementation should be able to overcome reflection problems by detecting them and treating them as outliers.

Figure 15: An example of a surface movement observed by 30 (5 × 6) targets (see Figure 16), comparing two images with an average tilt change of about 1 mm. A change of 0.1 pixel at the targets is detectable (well above the noise level), allowing a z-accuracy of about 0.2 mm at level 1.

Figure 19: The minimum LAR surface optimization algorithm running on two images of the setting taken consecutively with no movement. The upper part of the figure shows the surface from the ZY-plane viewpoint, and the lower part shows the XY-plane viewpoint.

Table 1: Movement is detected with an overall accuracy better than 0.1 mm.