
This work addresses the problem of performing an accurate 3D mapping of a flexible antenna surface. Consider a high-gain flexible satellite antenna: even a submillimeter change in the antenna surface may lead to a considerable loss in antenna gain. Using a robotic subreflector, such changes can be compensated for; yet, in order to perform such tuning, an accurate 3D mapping of the main antenna is required. This paper presents a general method for performing an accurate 3D mapping of marked surfaces such as satellite dish antennas. Motivated by the novel technology of nanosatellites with flexible high-gain antennas, we propose a new accurate mapping framework which requires only a small monocamera and known patterns on the antenna surface. Experimental results show that the presented mapping method can detect changes with up to 0.1-millimeter accuracy while the camera is located 1 meter away from the dish, allowing RF antenna optimization for the Ka and Ku frequency bands. Such an optimization process can improve the gain of flexible antennas and allow adaptive beam shaping. The presented method is currently being implemented on a nanosatellite scheduled to be launched at the end of 2018.

The vision of having a reliable and affordable global network which can be accessed from any point on the globe at any time is a huge scientific challenge which has attracted many researchers during the last few decades. Most proposed solutions are based on a network of hundreds or thousands of LEO nanosatellites, which together constitute a global network communicating with the earth via RF. These

Mapping a 3D surface is an important problem which is of interest to many researchers. The available literature suggests solutions covering a wide range of mapping techniques, including time of flight [

In this work, we focus on the challenging task of mapping a flexible satellite antenna, which is not suitable for common 3D scanning techniques due to space limitations and the need to perform a 3D scan from a single, fixed angle (i.e., a single image). The ability to infer a 3D model of an object from a single image is necessary for human-level scene understanding. Tatarchenko et al. [

In this work, we present a novel method which can robustly recover a surface shape from a single image with markers of known shape. The suggested method uses a set of visual markers in order to compute a pointcloud. To the best of our knowledge, this is the first work which presents a framework for performing 3D reconstruction of smooth surfaces with submillimeter accuracy that is applicable to an on-board flexible satellite antenna.

The general concept of the flexible antenna with an adjustable robotic subreflector was presented recently [

The use of flexible antennas for space applications is a relatively new concept. Having an on-board accurate mapping system for the flexible antenna will allow two major benefits:

A fast and accurate tuning of the robotic subreflector to compensate for the distortion of the main reflector and an adaptive beam-shaping capability of the transmitted pattern.

A study of the changes in the flexible surface with respect to temperature and time.

Due to space and weight limitations, the on-board 3D mapping system should be as compact as possible. Moreover, the method should use limited computing power for on-board algorithms or limited bandwidth methods for ground-based algorithms. Following these requirements, we shall use a monocamera and known shape targets for the mapping task.

In order to map the 3-dimensional pointcloud of the satellite antenna, we first embed a set of visual markers (targets) on the antenna surface.

We start by calibrating the camera using an algorithm proposed by Zhang in [
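As a concrete illustration of the calibrated camera model, the sketch below projects 3D points expressed in the camera frame to pixel coordinates using pinhole intrinsics of the kind recovered by such a calibration. This is an illustrative helper, not the flight code; the intrinsic values in the example are placeholders, and lens distortion is omitted for brevity.

```python
import numpy as np

def project_points(points_3d, fx, fy, cx, cy):
    """Project 3D camera-frame points to pixel coordinates using the
    pinhole model: u = fx * (x / z) + cx, v = fy * (y / z) + cy."""
    pts = np.asarray(points_3d, dtype=float)
    x = pts[:, 0] / pts[:, 2]
    y = pts[:, 1] / pts[:, 2]
    u = fx * x + cx
    v = fy * y + cy
    return np.column_stack([u, v])

# e.g., a point on the optical axis maps to the principal point (cx, cy)
uv = project_points([[0.0, 0.0, 1.0]], fx=800, fy=800, cx=320, cy=240)
```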

An example of an image (after the calibration process). The intrinsic parameters are marked in red, and the expected accuracy (error) is marked in yellow circles (a larger circle implies a larger expected error).

Figure

The camera location on-board; here,

Often, one would also like to express the position of points (

We start our discussion considering circular targets. Algorithm 1 receives the camera parameters and a pair of detected target points, p_1 and p_2, and does the following:

Calculate the normal of the surface that the target lies on—for each pattern, the manner in which the normal is calculated is different (we shall discuss this below).

For each pair

Consider the plane that passes through the camera point (the origin point) and the points p_1 and p_2; define l_1 as the intersection between this plane and the plane that the target lies on.

Let l_2 be the line connecting the camera and the midpoint between p_1 and p_2.

Set the angle between l_1 and l_2 (see Figure

Define the angular difference as 90° minus the average of these angles.

1: Let

2:

3:

(1) The _{i}_{i}

(2)

(3) ∆_{i}_{i}.

(4) Let

4:

An illustration of the information needed to find ∆
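The angular computation described in the steps above can be sketched as follows. This is a minimal illustration assuming the two lines are given as 3D direction vectors; the function names are ours, not those of the original implementation.

```python
import numpy as np

def angle_between(v1, v2):
    """Angle in degrees between two 3D direction vectors."""
    v1 = np.asarray(v1, dtype=float)
    v2 = np.asarray(v2, dtype=float)
    c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip to guard against floating-point values slightly outside [-1, 1].
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def angular_difference(angles):
    """Deviation of the averaged angle from 90 degrees, as in the text."""
    return 90.0 - float(np.mean(angles))
```

For two perpendicular lines the angle is 90° and the angular difference is 0, which corresponds to an undistorted target.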

We implement the algorithm above for multicircular targets. The use of such targets is motivated by two properties: they prevent the pixel-snapping problem, allowing subpixel accuracy in locating the center of the target, and they improve the accuracy of the normal of the plane the target is lying on. The advantages of using circles therefore contribute to the overall calculated accuracy of the resulting pointcloud.

Figure

We apply an ellipse detector algorithm which uses a nonlinear pattern (connected component) in the binarized image. Next, the estimate is refined using a subpixel-resolution algorithm on the grayscale image. We detect both the outer and inner ellipses; then, for each pair of ellipses, we find the average center, which is more robust to varying light-intensity conditions that could otherwise cause a pixel-snapping problem (i.e., the detected pixel in one image can deviate by a single pixel in another image taken under the same conditions). In Figure
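The two refinement ideas above, subpixel center estimation on the grayscale image and averaging the inner and outer ellipse centers, can be sketched as follows (illustrative helpers under our own naming, not the actual detector):

```python
import numpy as np

def subpixel_centroid(patch):
    """Intensity-weighted centroid of a grayscale patch, giving a
    subpixel center estimate instead of a snapped integer pixel."""
    patch = np.asarray(patch, dtype=float)
    total = patch.sum()
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    return (xs * patch).sum() / total, (ys * patch).sum() / total

def robust_center(inner_center, outer_center):
    """Average the inner and outer ellipse centers; a single-pixel
    deviation in either estimate is halved in the combined one."""
    inner = np.asarray(inner_center, dtype=float)
    outer = np.asarray(outer_center, dtype=float)
    return (inner + outer) / 2.0
```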

For each target, calculate the center _{p} by running the

Let the

Find the normal of the ellipse as follows: Consider the largest ellipse in the target, find its max(

The multicircular targets on a solid main reflector as acquired by the monocamera.

The result of the ellipse detector is the target itself (a), shows the inner ellipses detected (b), and shows the outer ellipses detected (c).

An example of the

Assume that the camera view is on the

_{1} is unknown and

Then, the center point

Now, we must have

Note that one needs to determine the signs of the two solutions; by checking whether the axis endpoints p_1 and p_2 are closer to the camera than the center or further away, this can easily be set.

Let the vectors

Then, compute the angular difference ∆_{i}
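A geometric fact underlying this computation: under a weak-perspective approximation, a circular target tilted by angle θ relative to the image plane projects to an ellipse whose minor-to-major axis ratio equals cos θ, so the tilt can be recovered from the detected axis lengths. A minimal sketch (our naming, not the paper's code):

```python
import numpy as np

def tilt_from_ellipse(major_axis, minor_axis):
    """Tilt angle in degrees of a circular target whose (weak-perspective)
    projection is an ellipse with the given axis lengths."""
    ratio = np.clip(minor_axis / major_axis, 0.0, 1.0)
    return np.degrees(np.arccos(ratio))
```

An undistorted (fronto-parallel) circle projects to a circle, giving a tilt of 0°; a 2:1 axis ratio corresponds to a 60° tilt. The sign ambiguity of the normal direction remains, as noted above.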

As will be exemplified below, using circular targets enables an average accuracy level of below 0.1 mm. Yet, such a method requires high-quality printing of curved lines over a flexible antenna made of Kapton foil. Such printing is hard to perform with space-qualified ink. Therefore, we needed to adjust the algorithm to work with targets which are composed of just straight lines.

After testing a wide range of possible straight-line patterns, we concluded that a simple uniform grid (see Figure) works best. We define Level_0 to be the set of all corner points of each square and Level_1 to be the set of the centers of the unit squares implied by the points in Level_0 (see Figure). The algorithm below operates on the Level_1 point set.

1: Let

2: Let P_0 be the set of all points on the corners of the grid.

3: Compute Level_1 (P_1) from P_0.

4: Let _{1}

5: _{i}

(1) let _{i}_{i}

(2) let _{i}_{i}

(3) let _{i}_{i}_{i}, n_{i}

(4) add _{i}

6:

7: Return

The grid pattern printed on paper. Note that the blue lines are slightly curved.

Computing Level_{0} and Level_{1} from a grid-based image: Level_{0} is presented as the corners of the green grid. Level_{1} is marked as blue dots in the center of each green square.
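The Level_0 to Level_1 construction can be sketched as follows, assuming the detected corners are arranged in an H × W grid (an illustrative helper, not the on-board implementation):

```python
import numpy as np

def level1_from_level0(corners):
    """Given Level_0 grid corners as an (H, W, 2) array of pixel
    coordinates, return the Level_1 set: the center of every unit
    square, i.e., the mean of its four surrounding corners."""
    c = np.asarray(corners, dtype=float)
    return (c[:-1, :-1] + c[:-1, 1:] + c[1:, :-1] + c[1:, 1:]) / 4.0
```

Because each Level_1 point averages four Level_0 measurements, its position estimate is less sensitive to single-corner localization noise.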

In order to implement the above algorithm, the following properties should be defined:

Let _{1}: including unit squares, two-unit squares, and up to some relatively small number of units—usually smaller than the 10-degree angle.

Given a square _{i}_{i}_{i}_{i}

In some implementations, _{i}_{i}_{i}_{i}

Computing the normal of _{i}
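The plane normal of a square target can be obtained from one corner and its two adjacent corners via the cross product of the edge vectors; a minimal sketch (our helper, not the paper's implementation):

```python
import numpy as np

def square_normal(p0, p1, p2):
    """Unit normal of the plane spanned by corner p0 and its two
    adjacent corners p1 and p2 of a (near-planar) square target."""
    e1 = np.asarray(p1, dtype=float) - np.asarray(p0, dtype=float)
    e2 = np.asarray(p2, dtype=float) - np.asarray(p0, dtype=float)
    n = np.cross(e1, e2)
    return n / np.linalg.norm(n)
```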

An example of normal detection results on some printed squares; note that target 10 and target 29 are oriented differently with different computed normal vectors.

The grid-based algorithm is relatively robust and simple to implement. Yet, in most cases, the average error level was too large, about 0.3–0

In this section, we define the actual “space algorithm” for computing the 3D surface of a flexible antenna. Algorithm

1: Compute

2: Perform a 2

3:

(1) Compute its ratio,

(2) Associate

4:

5: Given

6: Call Minimum

An example of the flowchart of the dish edge detection process. (a) The acquired image is binarized. (b) The intersections between the ribs of the dish with the dish are “cleaned.” (c) Detecting the connected components in the binarized image is exemplified. (d) The edge of the dish is then located by finding the resultant convex hull.
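The final step of the edge-detection flowchart, locating the dish edge as the convex hull of the detected connected component, can be sketched with a standard monotone-chain hull (a generic implementation, not the one used on-board):

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull; `points` is an iterable of
    (x, y) pixel coordinates from the dish's connected component.
    Returns the hull vertices in counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:  # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):  # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Concatenate, dropping the duplicated endpoints.
    return lower[:-1] + upper[:-1]
```

Interior points (e.g., the dish ribs remaining after cleaning) are discarded automatically, leaving only the outer edge.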

We define two levels of noise filtering: level_0, which uses direct position measurements for estimation (e.g., estimation of a target's center), and level_1, which averages out level_0 estimations (e.g., the center of neighboring targets' centers). We define level_i

After we have found the 3D pointcloud by using the algorithm above, we now find the minimum RMS function which returns a best fitting surface for the 3D points.

Fitting requires a parametric model that relates the response data to the predictor data with one or more coefficients. The result of the fitting process is an estimate of the model coefficients.

The following algorithm returns the best fitting plane for a given 3D pointcloud with a least absolute residual (LAR) surface optimization in order to increase the expected

Here, _{i}_{i}

In this section, we show the experimental results of each step of the proposed algorithm.

For the setup step, we positioned the calibration targets (chessboard) on a 3D printer plate that has a movement accuracy of sub-0

The camera calibration test setup. Using a calibrated 3D printer with a robotic moving panel, we were able to calibrate the

In this step, we gave the normal detector Algorithm

1: Fit the model by weighted least squares.

2:

(1) Let

(2) Let

3: If the fit converges, exit. Otherwise, repeat.

4:
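The LAR fit described above can be approximated by iteratively reweighted least squares, where the weights drive the squared-error objective toward the absolute-residual objective. The sketch below fits a plane z = ax + by + c; this is a simplified stand-in for the more general surface model used in the paper, with our own naming throughout:

```python
import numpy as np

def fit_plane_lar(points, iters=20):
    """Fit z = a*x + b*y + c to a 3D pointcloud, approximately
    minimizing the sum of absolute residuals (LAR) via
    iteratively reweighted least squares."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    z = pts[:, 2]
    w = np.ones(len(pts))
    coef = np.zeros(3)
    for _ in range(iters):
        # Weighted least-squares step on the reweighted system.
        Aw = A * w[:, None]
        coef, *_ = np.linalg.lstsq(Aw, z * w, rcond=None)
        # L1 reweighting: w_i^2 ~ 1 / |r_i|, with a floor for stability.
        r = np.abs(z - A @ coef)
        w = 1.0 / np.sqrt(np.maximum(r, 1e-9))
    return coef  # (a, b, c)
```

Down-weighting large residuals in this way makes the fit far less sensitive to outliers (e.g., reflection artifacts) than plain least squares.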

The normal detection algorithm tester: shown in blue are the real corner points of segments orthogonal to the normal, and shown in red is the algorithm's approximation.

| Real normal | | | Approximated normal | | |
|---|---|---|---|---|---|
| 0.1808 | 0.6601 | 0.7291 | 0.1716 | 0.6743 | 0.7182 |
| 0.0417 | 0.2682 | 0.9625 | 0.0386 | 0.2720 | 0.9615 |
| 0.0682 | 0.6660 | 0.7428 | 0.0695 | 0.6716 | 0.7377 |
| 0.0594 | 0.0738 | 0.9955 | 0.0515 | 0.0694 | 0.9963 |

The result above shows an average angular error corresponding to a ∼10^{−4} ratio.

We have built a practical setting that provides accurate movement of a rigid body (plate) that can hold a page with printed shapes (targets). In Figure

The practical setting of the following experiments, which establish the accuracy level. The setting is divided into 4 main components: (1) a dynamic plate holding the surface to be mapped, which can move forward or backward; (2) a stabilized stand that holds the camera; (3) a light source to overcome the pixel-snapping problem; and (4) a submillimeter spacer with which we can measure the movement of the plate (1), giving a reference for the actual movement.

We attached various types of paper and Kapton targets to the panel and tested the algorithm's ability to detect fine changes. Figure shows the results on the level_0 dataset, presenting an accuracy level which is better than 0.2 mm on average.

The typical noise of level_0 targets, comparing two images of the same targets shot with the same lighting and camera parameters. The typical noise level is about 0.03–0.04 pixel. In almost all tested cases, the level_0 noise is below 0

An example of a surface movement observed by 30 (5 × 6) targets (see Figure _{1}.

A testing setup: a paper with 30 (5 × 6) targets was attached to a flat panel. This panel can be moved using a set of accurate spacers (at a, b, c, and d), allowing us to test global movements with a high level of accuracy.

In Figure

Kapton foil test. Note the black reflection (marked by a blue circle) at the lower part; such reflection issues can be solved with controlled lighting. Yet, any real implementation should be able to overcome reflection problems by detecting them and treating them as outliers.

In this subsection, we present two typical examples of computing the difference map between two pairs of images. In the first example, we compared two images with no movement—this example is needed in order to test the expected noise level of the suggested method. Figures

A difference map between two images taken continuously with no movement. From the scale of the difference map, we can see that the maximum difference between the two images is 0.00035 pixel, which corresponds to significantly less than 0.1 millimeter; this result demonstrates the algorithm's accuracy.

The minimum LAR surface optimization algorithm on the setting, running on two images taken continuously with no movement. The upper part of the figure shows the surface from the

The result of the difference map with a 1.2 mm movement of the top of the plate (and a movement of about 0.3 mm of the lower part of the plate). This result demonstrates the ability of the algorithm to compute a smooth and continuous difference map.

In this figure, we show the result of the minimum LAR surface optimization algorithm with a 1.2 mm movement of the top of the plate (and a movement of about 0.3 mm of the lower part of the plate). The upper part of the figure shows the surface from the

We introduced a novel methodology for mapping flexible adaptive aerospace antennas. The proposed method can detect submillimeter distortions even on relatively large reflectors. The presented methodology allows autonomous (i.e., on-board) computation of the surface, which can be used to continuously investigate the unknown behavior of Kapton foil flexible antennas in the extreme temperatures of space. Using the 3D surface mapping, the robotic subreflector can overcome minor distortions in the main reflector, allowing a typical gain improvement of 3–7 dB [

We plan to implement the current method on a real nanosatellite with a flexible antenna, which is scheduled to be launched at the end of 2018. We hope that using the presented framework, the vision of having a large-scale LEO satellite constellation which is both affordable and globally accessible can get one step closer to reality.

The authors declare that they have no conflicts of interest.

This research was partially supported by NSLComm. NSLComm (