Lessons from a Space Lab - An Image Acquisition Perspective

The use of Deep Learning (DL) algorithms has improved the performance of vision-based space applications in recent years. However, generating large amounts of annotated data for training these DL algorithms has proven challenging. While synthetically generated images can be used, DL models trained on synthetic data are often susceptible to performance degradation when tested in real-world environments. In this context, the Interdisciplinary Center of Security, Reliability and Trust (SnT) at the University of Luxembourg has developed the ‘SnT Zero-G Lab’ for training and validating vision-based space algorithms in conditions emulating real-world space environments. An important aspect of the SnT Zero-G Lab development was the equipment selection. Drawing on the lessons learned during the lab development, this article presents a systematic approach combining a market survey and experimental analyses for equipment selection. In particular, the article focuses on the image acquisition equipment in a space lab: background materials, cameras and illumination lamps. The results show that a market survey complemented by experimental analyses is required for effective equipment selection in a space lab development project.


INTRODUCTION
In the last few years, Deep Learning (DL) techniques have proven successful in vision-based space applications such as satellite pose estimation (Dung et al., 2021) (Kisantal et al., 2020) and spacecraft navigation (Song et al., 2022). However, DL models require a vast amount of annotated data to learn data patterns and achieve high performance. Due to the difficulty of obtaining large real space datasets with correct labels, robust space-related DL solutions are currently missing. For that reason, previous research has mostly relied on synthetic data for training DL models (Musallam et al., 2021) (Kisantal et al., 2020) (Proenca and Gao, 2020). However, while synthetic images are easy to generate and annotate for training DL-based solutions, they are prone to performance degradation when the model is tested in a real-world environment, as DL solutions tend to over-fit the features from the synthetic domain (Peng et al., 2017). This phenomenon is known as the Domain Gap problem (Zhang et al., 2022) (Chen et al., 2022).
To address the domain gap problem, several research institutions around the world, such as the European Proximity Operations Simulator (EPOS) (Boge and Ma, 2011a) and the Autonomous Spacecraft Testing of Robotic Operations in Space (ASTROS) (Dor and Tsiotras, 2018), are developing laboratory facilities that mimic space conditions with the objective of obtaining more reliable datasets. Research suggests that real-world space-like image datasets (Spacecraft PosE Estimation Dataset+ (SPEED+) (Park et al., 2021a), SPAcecraft Recognition leveraging Knowledge of space environment 2022 (SPARK-2022) (Rathinam et al., 2022)) collected in these facilities can be used to train and evaluate the robustness of vision-based space algorithms, mitigate the domain gap, and provide higher confidence in the performance of the DL models when deployed in space (Park et al., 2021a). However, the construction of such a facility entails a plethora of uncertainties, as it is neither a standardized nor a well-documented process. Consequently, research centers undertaking this endeavour face many challenges, including a lack of support in the form of guides, manuals, or templates (Beierle, 2019) (Boge and Ma, 2011b) (Cassinis et al., 2022a). As in any other development project, the major drawback of these uncertainties is the increased probability of cost overruns, project delays, and even project failure (Raz et al., 2002).
In 2019, the Interdisciplinary Center of Security, Reliability and Trust (SnT) at the University of Luxembourg undertook the project of developing the 'SnT Zero-G Lab', a facility for mimicking the space environment and simulating rendezvous-related processes. During the development, the SnT Zero-G Lab faced cost and schedule challenges related to the lack of literature documenting the development of such facilities. This article belongs to a series of articles that SnT is producing with the objective of bringing forward the lessons learned during the development of the SnT Zero-G Lab and supporting research institutions around the world in developing their own space facilities.
An important aspect of building a space facility like the SnT Zero-G Lab is the equipment selection. Selecting the right equipment to emulate a space-like environment and capture images of acceptable quality in such conditions is crucial. Hence, it is important to have details on the options available in the market, the selection metrics to be used, and experimental analysis methods for equipment comparison that would support purchase decisions. In this context, the goal of this article is to provide a systematic approach to support equipment selection and decision making when developing a space lab for vision-based applications. In particular, the article focuses on the equipment required in the image acquisition process: laboratory background materials, cameras and illumination lamps.

The contributions are summarised below:
• A detailed survey of market-available choices for image acquisition equipment. Background materials, cameras and illumination lamps were surveyed and compared based on selection metrics. The selection metrics were chosen based on their relevance to the image capturing process (for example, focal length and shutter speed) as well as the lab development project objectives (for example, cost and size).
• Experimental analyses comparing the background materials and the cameras.
• Together, the survey and experimental analyses provide a systematic approach for equipment selection in the development of similar space labs. The presented framework can be extended to include more products as they become available and to make a selection based on different project objectives, such as budget constraints and intended applications.
The remainder of the paper is organised as follows. First, a literature review of existing space facilities with a focus on image acquisition components is summarized in Section 2. Then, a market survey of laboratory background materials, cameras and illumination equipment is presented in Section 3. The survey is then complemented with different experiments to analyse the suitability and performance of commercially available equipment. In Section 4, the laboratory setup is described, and in Section 5 and Section 6 experimental analyses of laboratory backgrounds and cameras are presented. Section 7 discusses the results and Section 8 concludes the paper.

RELATED WORK
Several facilities providing testbeds for training and validating vision-based space applications exist at different research institutions around the world. In this section, a review of these facilities with a focus on image acquisition components (background materials, cameras and illumination) is presented.

TRON, USA
The Robotic Testbed for Rendezvous and Optical Navigation (TRON) facility at Stanford's Space Rendezvous Laboratory (SLAB) is the first of its kind developed for testing machine learning based space-borne optical navigation algorithms (Park et al., 2021b). The TRON facility can accurately reproduce a wide range of lighting scenarios representative of the space environment. To mimic the diffused light of Earth's albedo, ten light boxes are installed around the walls. Each light box consists of a diffuser plate covered with hundreds of Light Emitting Diodes (LEDs) organized in strips and can adjust its colour and intensity. The light boxes are calibrated to produce radiance across the diffuser plates that is as uniform as possible and compatible with Earth's albedo in Low Earth Orbit (LEO). A metal halide arc lamp is also used at the facility to simulate direct sunlight. As the background material, light-absorbing black commando curtains are placed over all ambient light sources, including the windows and the deactivated light boxes, to enhance the impact of diffused and direct light (Park et al., 2021a).

GRALS, Netherlands
The GNC Rendezvous, Approach and Landing Simulator (GRALS) testbed is situated in the Orbital Robotics and GNC Laboratory (ORGL) at the European Space Research and Technology Centre (ESTEC) (Cassinis et al., 2022b). A Prosilica GC2450 camera mounted on a KUKA robotic arm is used for capturing images at the facility. To recreate a realistic space environment from an illumination standpoint, a movable lamp is mounted on a UR-5 robot and directed towards the target mockup during image acquisition. The lamp is a dimmable, uniform, and collimated light source with a spectral response close to 6000 K and an optical lens that provides highly uniform (±5%), shadow-free backlight illumination. In addition, black background curtains are placed around the robots' workspace to mask most of the background noise, such as unwanted reflections from the robots' rails.

EPOS, Germany
The European Proximity Operations Simulator (EPOS) test facility was developed by the German Aerospace Center (DLR) to study rendezvous and docking scenarios (Boge and Ma, 2011a). Two different types of cameras are used at the facility: two Charge-Coupled Device (CCD) Prosilica GC-655M cameras for capturing intensity images (in the visible spectrum) and two Photonic Mixer Device (PMD) cameras (PMDtec Camcube 3.0 and Bluetechnix Argos3D-IRS1020 DLR Prototype) for capturing depth images. For simulating realistic illumination, an ARRI Max 18/12 theater spotlight is used (Benninghoff et al., 2017). This daylight spotlight is equipped with a hydrargyrum medium-arc iodide (HMI) light source and can generate spectrally realistic irradiation, resembling that in the Earth's orbit around the Sun. The spotlight is mounted on a 2-DOF yoke that is electrically steerable for easy fine-tuning of the illumination direction. To capture images of the target satellite with a space-realistic background, black curtains are used as the background and the robotic arm carrying the satellite mockup is wrapped with black molton material.

ASTROS, USA
The Autonomous Spacecraft Testing of Robotic Operations in Space (ASTROS) platform at the Georgia Institute of Technology supports experiments in vision-based autonomous rendezvous and docking, with a focus on on-orbit servicing of spacecraft (Dor and Tsiotras, 2018). The platform is equipped with a monocular PointGrey Flea3 camera to capture images (and videos) at different resolutions. The facility is capable of producing realistic images through various configurations of lighting and dark environments and can replicate the harsh contrasts of imaging highly reflective surfaces against a dark background, as seen in space. However, the technical details of the equipment used for creating these different illumination conditions are not publicly available. An overhead projector on the ceiling projects virtual images from Earth orbit onto a projection screen on the wall (Tsiotras, 2014), which serves as the background for the captured images.

ORION, USA
The Florida Institute of Technology developed the Orbital Robotic Interaction, On-orbit servicing, and Navigation (ORION) laboratory to test spacecraft GNC systems for proximity manoeuvres and autonomous or telerobotic capture (Wilde et al., 2016). The ORION simulator uses the commercial off-the-shelf Litepanels Hilio D12 LED panel to generate a light source sufficiently bright to exceed the dynamic range of common optical sensors while providing a narrow beam angle. The panel generates light with a color temperature of 5600 K (daylight balanced) at 350 W of power. The intensity can be continuously dimmed from 100% to 0%, and the beam angle can be varied between 10° and 60° using lens inserts. The light can be used to simulate not only solar illumination but also the weaker, diffused Earth's albedo. The background walls, floor, and ceiling of the testbed are painted with a low-reflectivity black paint, and all windows are covered with black-out blinds to fully control the lighting conditions and reproduce orbital conditions.

INVERITAS, Germany
The Innovative Technologies for Relative Navigation and Capture of Mobile Autonomous Systems (INVERITAS) facility at the Robotics Innovation Center of the German Research Center for Artificial Intelligence (RIC DFKI), designed and constructed under the INVERITAS project, models rendezvous and capture manoeuvres between a client satellite and a servicer satellite in Earth orbit (Paul et al., 2015). The facility is equipped with six mobile spotlights to reproduce space-like illumination conditions. Each spotlight is motorized, allowing pan and tilt rotations, and its field of view can be varied between 12° and 30°. The spotlights can be moved up and down between 1 m and 6 m. The 575 W gas discharge lamps used at the facility deliver 6000 K light, with a maximal intensity of 14500 lux at a 10 m distance and a 12° field of view. Special light-absorbing paints are used on the background walls, the ceiling, and all visible components of the system, providing a space-like non-reflective background.
The reviewed literature about vision-based laboratories for space applications presents a summary of the facilities and the image acquisition equipment used. However, details about the different materials and equipment options considered during laboratory development, and the logic behind the final equipment selection, are missing from the literature. Moreover, the fact that every reviewed facility has been developed with different materials and equipment suggests that many commercially available alternatives can be considered to attain similar or equivalent objectives. In addition, each lab development project will probably have a different scope and limitations in terms of budget, available space, intended applications, and other resources. Hence, it is important to understand the benefits and drawbacks of the different market-available equipment options in terms of cost, ability to emulate a space-like environment, quality of the captured images and other relevant factors. The following section presents a detailed market survey with comparison metrics for commercially available materials and equipment for a vision-based space applications lab.

SURVEY OF MATERIALS AND EQUIPMENT
This section presents a review of commercially available equipment required for image acquisition at a space lab, as illustrated schematically in Figure 1. The review includes background materials (Section 3.1), cameras (Section 3.2) and illumination lamps (Section 3.3) to recreate high-fidelity space conditions. The reference links for each reviewed item are given in supplementary material Section A.

Background materials
The surveyed background materials are presented in Table 1. They were selected to offer a wide variety of prices and reflectivity values. These backgrounds are divided into three categories, fabric, paint and paper, as offered by different manufacturers in the market. The reflectivity values suggest that the best commercially available option for recreating space conditions is the Black Velvet fabric from KOYO, which has the lowest reflectivity of all surveyed materials, 0.1%. However, there are also materials available in the market whose reflectivity values are not readily available from manufacturers, making a direct comparison difficult. Hence, in Section 5 we propose an experimental analysis to complement the market survey in evaluating the suitability of background materials for use in a space lab.

Cameras
For image capturing at a space lab facility, eight different cameras were surveyed. These cameras were selected to represent a wide cost range (from 25 to 3200 EUR) with different image capture capabilities. The results from the survey are presented in Table 2. However, the cameras need to be further compared in space-representative situations (varying illumination and exposure conditions). Hence, we propose additional experiments in Section 6 for comparing the cameras in such conditions. The market survey, along with the experiments, provides a comprehensive comparison of camera systems for a space lab.

Illumination lamps
In Table 3, a survey of commercially available illumination lamps is presented, selected to represent low, medium and high-cost alternatives. The lamps are assessed regarding their light source temperature/wavelength, luminous flux, power and efficiency, emission angle, total lamp dimensions, cost and brand. The low-cost alternatives include aluminum reflectors of small (10-20 cm) and medium sizes, of around 15 W, that can be bought at regular home goods stores. A medium-cost alternative is proposed with an omnidirectional growth lamp (Low glare downlight, 580-690 EUR) with power up to 276 W. The most expensive alternative included in the survey is a large-area solar simulator (Sunbrick, up to 30810 EUR), with programmable spectra and power up to 625 W.
This section presented a detailed survey of different background materials, cameras and illumination lamps available in the market. However, this market survey alone is inadequate for making the equipment selection. For instance, the manufacturer may not always provide the needed metrics (as in the case of the background materials), or the equipment may need to operate in a different (harsh, space-like) environment than its normal operating conditions (as for the cameras). Hence, to further reduce uncertainty, the survey is complemented with experimental analyses of the background materials and cameras. The laboratory setup for the experiments is described in the next section, followed by experimental analyses of background materials (Section 5) and cameras (Section 6), respectively. The market survey, along with these supporting experimental analyses, provides information to guide the selection of suitable image acquisition equipment in a space lab.

LABORATORY SETUP
The data collection activities for the experiments presented in this article were conducted at the SnT Zero-G Lab facility. The SnT Zero-G Lab is a multipurpose facility capable of emulating a large variety of in-orbit operations in different orbital scenarios. The facility has two UR10e robotic arms mounted on rails, providing 6+1 DoF. The robotic arms are capable of mimicking the orbital trajectories of spacecraft, other orbital objects or light sources. To recreate the challenging lighting conditions in space, the Zero-G Lab uses a Godox SL-60 LED Video Light. The walls and ceiling are painted black, and black epoxy flooring is used to remove reflections and create a space-like environment. The general setup of the laboratory environment for the experiments is shown in Figure 2 and consists of the following components:
• Cameras: The cameras were mounted on a tripod directly facing the object of interest, i.e., the spacecraft.
• Spacecraft: The spacecraft was mounted on a UR10e robotic arm. A 1U CubeSat was used in the experiments presented in this article. However, the experiments can also be conducted using other types of available spacecraft mock-ups.
• Background: A dark background was placed behind the CubeSat, either mounted on a tripod-like structure or placed independently.
• Light source: A single continuous light source was mounted on a second UR10e robotic arm using a custom-designed stainless steel bracket.
The experiment setup was designed taking into consideration constraints including the size of the room, the visual light (VL) reflective surfaces present (such as the robotic rails), camera capability limitations and the physical restrictions on the range of motion of the robotic arms. For all of the experiments, the camera position remained static while the robotic arms controlled the light source and CubeSat positions. The trajectories/positions for the robotic arms were provided as a set of manually defined waypoints. Python scripts were used to automate the image capture process with different camera settings, to determine the appropriate white balance gain parameters and to avoid unwanted color shifts within images for each camera used. Black background materials were placed at a distance of ∼56 cm behind the vertical center of the CubeSat, and the cameras were placed at a distance of ∼140 cm directly in front of the vertical center of the CubeSat.
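The scripted enumeration of capture configurations described above can be sketched as below. The lamp and intensity labels follow the paper; the camera names and white-balance gain values are hypothetical placeholders for illustration, not the values from Table 4:

```python
from itertools import product

# Sketch of the capture-automation logic: fixed white-balance gains per
# camera, swept over every lamp configuration and light intensity.
LAMPS = ("LAMP0", "LAMP1", "LAMP2")              # collimator, reflector, bare
INTENSITIES = {"LIL": 0.10, "LIH": 1.00}         # fraction of maximum output
WB_GAINS = {"LQ": (1.6, 1.4), "HQ": (2.1, 1.9)}  # (red, blue) gains, assumed values

def capture_plan(cameras):
    """Enumerate every (camera, lamp, intensity) combination with fixed
    white-balance gains, so each image is taken under a reproducible,
    scriptable configuration."""
    plan = []
    for cam, lamp, (label, fraction) in product(cameras, LAMPS, INTENSITIES.items()):
        red, blue = WB_GAINS[cam]
        plan.append({"camera": cam, "lamp": lamp, "intensity": label,
                     "fraction": fraction, "red_gain": red, "blue_gain": blue})
    return plan

print(len(capture_plan(["LQ", "HQ"])))  # 2 cameras x 3 lamps x 2 intensities = 12
```

Fixing the gains per camera, rather than relying on auto white balance, is what keeps the colour rendition comparable across all configurations in the sweep.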

Cameras
In the experiments, two different cameras were used: the Raspberry Pi Low Quality (LQ) and High Quality (HQ) cameras. Technical specifications of the cameras are given in Table 4. These cameras were selected to represent both low-quality DIY cameras and high-quality consumer-grade cameras. The LQ camera relies on an inbuilt lens, whereas the HQ camera uses a 12 mm Edmund Optics lens. For all of the experiments conducted, the reference camera position remained fixed and is denoted as CP0. In Figure 3, CP0 is illustrated with a horizontal angle of 0° and the vertical position labelled s, indicating that the camera was looking straight at the CubeSat at its central height.

Lighting
All experiments were conducted with a single light source (Godox SL-60 LED Video Light) available at the SnT Zero-G Lab facility. The technical specifications are given in Table 3. Three lamp configurations were investigated, mimicking various illumination conditions from a space environment. For example, collimators, producing parallel light beams that create hard shadows and large differences in light intensity between illuminated and dark regions, are typically chosen for mimicking objects in space illuminated by the sun without an atmosphere (Paul et al., 2015). The lamp configurations used in the experiments are defined below:
• LAMP0: Light source with a collimator as light modifier
• LAMP1: Light source with a reflector as light modifier
• LAMP2: Bare light source without any modifiers
Additionally, two different light intensities were used for each of the lamp configurations:
• Light Intensity Low (LIL): 10% of the total light intensity available from the source.
• Light Intensity High (LIH): 100% of the total light intensity available from the source.
The combination provided six (2 light intensities × 3 lamp configurations) different illumination conditions used in the experiments. The reference lighting position (LP0) was defined as 30° to the right of the camera horizontally and above the camera height (angle a, pointing down), as shown in Figure 4. The LP0 position enables the visualization of both shadows and highlights of the CubeSat and, as such, best represents the 3D structure of the object when projected onto a 2D image plane. Details of other possible lighting positions, along with a qualitative comparison of the corresponding captured images, are provided in supplementary material Section B.

BACKGROUND ANALYSIS
The objective of the background materials experiment is to determine the background with the highest light absorption, such that it appears featureless in the captured images. The backgrounds analysed in this experiment are: Black Velvet fabric (BG0), Moussu paint (BG1), Black 3.0 paint (BG2), Leitz paper (BG3) and Neewer Background fabric (BG4); refer to Table 1 for more details. In the context of the performed experiments, a "featureless background" is one that adds no discernible information to an image and most closely resembles pure black, as represented by the RGB pixel value (0,0,0).

Data Collection
For the data collection, the camera was set to position CP0 and the background to be tested was placed ∼196 cm behind CP0. In this case, the CubeSat was not used and was, hence, removed from the camera's field of view. Images were captured for each of the background materials (BG0-4) under different light intensities (LIL and LIH), lamp configurations (LAMP0-2) and five angles of illumination (LA0-4), as illustrated in Figure 5. A set of sample images is shown in Figure 6. The captured images were cropped manually to include only the background region before the experimental analysis.

Experiment and results
To evaluate the backgrounds, all the acquired images of each background were compared with a reference image. The chosen reference image was a pure black image synthetically generated by setting all RGB pixel values to (0,0,0). The comparison was quantified with an image similarity index, the Universal Image Quality Index (UQI) (Wang and Bovik, 2002). The UQI was preferred because, thanks to its well-defined mathematical framework, it provides a significantly better comparison than the widely used distortion metric mean-squared error (MSE). Unlike traditional error summation methods, UQI considers the following three factors for modelling any image distortion: loss of correlation, luminance distortion and contrast distortion. Given x and y as the input and reference image signals respectively, UQI can be formulated as

Q = (4 σ_xy x̄ ȳ) / ((σ_x² + σ_y²)(x̄² + ȳ²)),

where x̄, σ_x, ȳ and σ_y represent the means and standard deviations of the input and reference samples respectively, and σ_xy their covariance. Moreover, UQI provides error measurements independent of the viewing conditions and individual observers (subjective analysis by humans).
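As a concrete illustration, the UQI above can be computed directly from image statistics. The sketch below is a minimal global implementation on flat pixel lists; the original formulation, and most library implementations, apply the index over sliding windows and average the local scores:

```python
import random

def uqi(x, y):
    """Universal Image Quality Index (Wang and Bovik, 2002), computed
    globally over flat pixel lists; practical implementations average
    the index over sliding windows instead."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    var_x = sum((a - mean_x) ** 2 for a in x) / n
    var_y = sum((b - mean_y) ** 2 for b in y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / n
    # Single expression combining correlation, luminance and contrast terms.
    return 4 * cov * mean_x * mean_y / ((var_x + var_y) * (mean_x ** 2 + mean_y ** 2))

random.seed(0)
img = [random.randint(0, 255) for _ in range(64 * 64)]
print(uqi(img, img))                          # identical images score 1.0
print(uqi(img, [255 - p for p in img]) < 0)   # an inverted image scores negative
```

The combined score lies in [-1, 1], with 1 indicating a perfect match to the reference.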
Figures 7 and 8 present the UQI scores of all the acquired images (under different illumination conditions) with respect to the reference image, for the LQ and HQ cameras respectively. It is evident that the black velvet fabric (BG0) had the highest UQI scores of all backgrounds, both in the high intensity (LIH) and low intensity (LIL) light conditions. These results indicate that BG0 is the most featureless background with the highest light absorption, which is in agreement with the market survey (Section 3.1). This performance makes it the best choice for image acquisition in a space lab. For the rest of the experiments detailed in this article, BG0 was used.

CAMERA ANALYSIS
Experiments were performed with the two Raspberry Pi cameras (LQ and HQ) with the objective of performing:
• A qualitative comparison of the ideally exposed images under varying illumination conditions for the LQ and HQ cameras.
• A quantitative study of the image quality degradation with different exposure settings (over-exposure and under-exposure) for both cameras.
Together, these analyses provide an experimental framework for comparing camera capabilities. Furthermore, along with the market survey presented in Section 3.2, they can support purchase decisions when selecting cameras for a space lab.

Ideal Exposure Analysis
Camera exposure settings are defined by three parameters, Aperture, Shutter Speed, and ISO (Gain), known as the exposure tuple (A:SS:ISO). The exposure tuple determines a given exposure value (EV), and multiple exposure tuples can result in the same EV (Präkel, 2009). Images captured with small aperture values (i.e., small f-numbers), for example f/2, allow more light to reach the image sensor for fixed shutter speed and ISO settings, which can be a useful property when capturing images in low-light conditions. However, the choice of aperture also impacts the Depth Of Field (DOF), which determines which portions of a 3D object, relative to the focal plane, appear in focus when projected onto a 2D image plane. The smaller the aperture value, the shallower the DOF. Therefore, a trade-off exists between image sharpness and brightness when choosing an aperture setting. Another consideration is the focal length of the lens employed: the greater the focal length, the shallower the DOF for a fixed aperture value. The shutter speed parameter dictates how long the camera's image sensor receives light; whether an image is properly exposed is thus also a function of the shutter speed. The ISO parameter impacts how sensitive the image sensor is to light; however, more noise is introduced into an image as the ISO sensitivity increases. In the performed experiments, to avoid introducing unwanted color shifts into images taken with the same camera, "auto white balance" was disabled and the Red and Blue Gain settings listed in Table 4 were applied for each camera.
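The relationship between the exposure tuple and the EV can be made concrete with a small helper. This is a sketch using the common APEX-style convention (assumed here, since the text does not fix a formula), showing how two different tuples collapse to the same EV:

```python
from math import log2, isclose

def exposure_value(aperture, shutter, iso=100):
    """EV from an (A:SS:ISO) tuple: EV = log2(N^2 / t) at ISO 100,
    shifted by log2(ISO/100) for other sensitivities (APEX-style
    convention, assumed for illustration)."""
    return log2(aperture ** 2 / shutter) + log2(iso / 100)

# Opening the aperture by one stop (f/2.8 -> f/2) while halving the
# exposure time (1/15 s -> 1/30 s) leaves the EV unchanged.
ev_a = exposure_value(2.0, 1 / 30)
ev_b = exposure_value(2.0 * 2 ** 0.5, 1 / 15)
print(isclose(ev_a, ev_b))  # True
```

Each one-stop change in any single parameter shifts the EV by exactly 1, which is why the same EV can be reached by many different tuples.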

Ideal exposure
A careful choice of the exposure tuple is required to generate an "ideally exposed" image within the context of the given illumination conditions and the mechanical limitations of the image capture device (camera). The concept of ideal exposure is application dependent. In the case of the performed experiments, ideally exposed images were those in which the CubeSat was well illuminated, with all features (such as edges, corners and surface panels) clear and distinguishable. To obtain the initial ideal exposure settings under different illumination conditions, a Sekonic L-558R DualMaster light meter was placed directly in front of the spacecraft object. Then, careful visual inspection of images captured with further fine-tuned exposure settings was used to define the ideal exposure setting for the experiment.

Data collection
The cameras and the lamp were mounted at their reference positions CP0 and LP0, and the CubeSat was positioned between the background and the camera, at distances of ∼56 cm and ∼140 cm respectively, as shown in Figure 2. The images were captured under different light intensities (LIL and LIH) and lamp configurations (LAMP0-2).

Experiment and Results
Ideal exposure settings for images captured under different illumination conditions were obtained with a light meter and by visual inspection, as described above. The selected ideal exposure settings with the corresponding images are shown in Figure 9. The qualitative analysis of these images showed that, by carefully selecting the exposure settings, good quality, well-exposed images can be obtained with cameras of different quality under varying illumination conditions. This is particularly relevant in a space-like environment, where well-exposed images with clear and distinguishable features need to be captured under considerable variations in illumination conditions. The results also suggest that even with a LQ camera, images of good quality can be captured if the exposure values are well calibrated.

Exposure and Image Quality Analysis
A reference exposure (EX0) was defined to study the effect of overexposure and underexposure on image quality. An aperture value of f/2.0 was selected as it allowed an acceptable DOF and was achievable with all camera lenses tested. In addition, an aperture of f/2.0 made it possible to capture images in a low-light setting at a shutter speed that would not introduce motion blur in the established laboratory setting. The reference shutter speed was set at 1/30th of a second. Finally, an ISO value of 100 was chosen so as not to introduce unwanted noise into the captured images. Thus, the reference exposure for this experiment is defined by the exposure tuple (f/2:1/30:100).
The reference light intensity (LI0) corresponds to the light intensity required to make the reference exposure EX0 the ideal exposure for each of the lamp configurations tested. To establish LI0, the light source was placed at the reference light position LP0, as shown in Figure 4. The light intensity was then adjusted until the reference exposure provided ideal exposure. The LI0 values for the LAMP0, LAMP1 and LAMP2 configurations were 75%, 25% and 30% respectively.

Data Collection
Images were collected with the same positional setup described in Section 6.1. The images were captured only at the reference light intensities (LI0) and for the lamp configurations (LAMP0-2), with each of the exposure settings defined in Table 5. In this experiment, the underexposed and overexposed (EO1-EEO) conditions were obtained by changing only the camera shutter speed; the aperture and ISO values were kept the same. The HQ camera captured images with a 12 mm lens, while the LQ camera had an inbuilt 6 mm lens, which affects the field of view in the captured images. Also, because the cameras were mounted side-by-side, their fields of view were slightly horizontally translated. Therefore, the images were cropped to centrally align the CubeSat prior to analysis.
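The shutter-only bracketing used to produce under- and over-exposed conditions can be sketched as below. The stop offsets are illustrative; the exact settings of Table 5 are not reproduced here:

```python
def bracketed_shutters(reference=1 / 30, stops=(-2, -1, 0, 1, 2)):
    """Shutter speeds n stops away from the reference exposure: each
    positive stop doubles the exposure time (overexposure), each
    negative stop halves it (underexposure), while aperture and ISO
    stay fixed. Illustrative sketch; the paper's Table 5 settings
    may differ."""
    return {n: reference * 2 ** n for n in stops}

for n, t in sorted(bracketed_shutters().items()):
    print(f"{n:+d} stop(s): {t:.5f} s")
```

Varying only the shutter speed isolates exposure time as the single variable, so image degradation can be attributed to exposure alone rather than to DOF or sensor-noise changes.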

Experiment and Results
A quantitative analysis of the image quality degradation with varying exposure settings for a given illumination condition (light intensity and lamp configuration) was performed. Since the images contained a single CubeSat object with a fixed background, the structural similarity with respect to the reference image provided a relevant measure of image quality degradation. The Multi-scale Structural Similarity Index (MS-SSIM) (Wang et al., 2003) was used to measure structural similarity. MS-SSIM was derived from the Structural Similarity Index (SSIM) (Wang et al., 2004), extending it to incorporate multi-scale measures using image details at different resolutions. MS-SSIM separates out the influence of illumination (average luminance and contrast) to explore the structural information in an image.
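As an illustration of the multi-scale idea, the sketch below computes a global (whole-image) SSIM at several dyadic scales and combines the scores with equal weights. This is a deliberate simplification, not the paper's implementation: the published MS-SSIM uses Gaussian-windowed local statistics, fixed per-scale weights, and applies the luminance term only at the coarsest scale. Libraries such as `pytorch-msssim` provide faithful implementations.

```python
import numpy as np

def global_ssim(x: np.ndarray, y: np.ndarray, L: float = 255.0) -> float:
    """SSIM computed from whole-image statistics (no local windows) - a simplification."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def downsample2(img: np.ndarray) -> np.ndarray:
    """2x2 average pooling (odd edges cropped), as used between MS-SSIM scales."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2])

def ms_ssim_sketch(x: np.ndarray, y: np.ndarray, scales: int = 3) -> float:
    """Equal-weight geometric mean of global SSIM over successively halved resolutions."""
    score = 1.0
    for _ in range(scales):
        score *= max(global_ssim(x, y), 0.0)
        x, y = downsample2(x), downsample2(y)
    return score ** (1.0 / scales)
```

Identical images score 1.0, and a simulated over-exposure (e.g. scaling then clipping the pixel values) drives the score below 1, mirroring the degradation curves in Figure 10.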
Figure 10 presents the MS-SSIM score of the acquired images under different exposure settings with respect to the reference image. For the camera with the higher sensor quality (dynamic range), the image degradation was slower than for the one with the lower sensor quality. For both cameras, the structural integrity of the images dropped identically on both sides of the curve (overexposed and underexposed) with respect to the reference image. This behaviour indicates that the relative drop in image quality for both cameras is similar under extreme exposure settings.

DISCUSSION
The results of the performed experiments suggest that laboratory equipment selection is not a straightforward procedure in which the most expensive options provide the best results. This was evidenced in the experiments performed with the Raspberry Pi cameras, where the most expensive cameras, for example those featuring the highest image resolutions, are not always significantly better than cheaper alternatives. As shown in Section 6.1, careful calibration of the exposure settings can produce good-quality, ideally exposed images with both the LQ and HQ cameras. Similarly, Section 6.2 indicated that the relative image degradation under extreme exposure settings is identical for both cameras; hence, camera selection is highly application dependent and needs to be analysed experimentally on a case-by-case basis. Market surveys, such as the one presented in Section 3.2, serve as a starting point and need to be followed by experimental analyses.

The same criteria apply to the selection of the background material. The experimental results indicated that the highest UQI values were obtained with the black velvet fabric, an expected result as this was the material with the lowest VL reflectivity (Section 3.1). However, as different manufacturers might use different methods to determine the reflectivity of their products (or might not provide a reflectivity value at all), the UQI (or even the VL reflectivity) of dark background materials should, where possible, also be tested before making a major purchase.

The SnT Zero-G Lab facility is still under development; further experiments will be conducted for camera analysis with images captured in scenarios where the space object moves along a trajectory relative to the camera. The objective of these experiments is to introduce motion blur and other effects common in applications such as vision-based navigation. Future work will also focus on an experimental analysis of different lighting sources to support the market survey in this article. Spectral analysis will provide a more accurate comparison of light sources and help to analyse their similarity to real space conditions.

CONCLUSION
Facilities simulating real-world space environments are an integral part of training and validating vision-based space applications. High-fidelity space-like images with annotations can be collected from these facilities to train and test the algorithms. However, the development of such a space lab is challenging, and the current literature lacks support in the form of manuals or templates. In this context, this article focused on a key aspect of space lab facility development: equipment selection. It presented a systematic approach to equipment selection for the image acquisition process, based on the lessons learned during the development of the SnT Zero-G Lab at the University of Luxembourg. The approach combines a market survey of equipment with subsequent experimental analysis. Background materials, cameras and illumination lamps were surveyed. The background materials were first compared based on the VL reflectivity values obtained from the manufacturers. For comparing materials with unknown reflectivity values, we presented an experimental analysis method that calculates UQI scores with reference to a synthetically generated black image to identify suitable options. For camera selection, the experiments suggest that a market survey alone will not provide sufficient information to make a purchase decision. The results demonstrate that comparably good-quality images can be obtained even from a lower-quality (less expensive) camera by carefully calibrating the exposure settings. Hence, for selecting camera systems, the market survey and experimental analysis should be used in tandem to gather the required information. Future work is planned to study image quality when motion blur and other phenomena are introduced, and to study the performance of different light sources for simulating the space environment.

B.2 Discussion
A set of sample images captured for positions LP0 to LP8 is shown in Figure 2. The results show the LP0 position to be the ideal lighting position for capturing images of the highest quality. The LP0 position shows both the shadows and highlights of the CubeSat and, as such, best represents the 3D structure of the object when projected onto a 2D image plane.

Figure 1. Image acquisition process in a space lab facility, illustrated schematically.

Figure 2. General laboratory setup for data collection using the SnT Zero-G Lab. The spacecraft (a CubeSat, in this case) and the light source were mounted on the movable UR10e robotic arms, the cameras were mounted on a fixed tripod and the backgrounds were placed behind the CubeSat.

Figure 3. An illustration of the reference camera position, denoted CP0, with respect to the CubeSat. (A) Top view and (B) side view. Dotted arrows indicate the viewing direction.

Figure 4. An illustration of the reference light position, denoted LP0, with respect to the camera and the CubeSat. (A) Top view and (B) side view. The dotted arrows indicate the viewing and lighting directions respectively.

Figure 5. Background analysis data collection setup (top view). The green arrows indicate the direction of lighting.

Figure 6. Sample images of BG2 collected for the background analysis experiment. The images were captured with the HQ camera at the reference exposure with LAMP2 and the light intensity set to LIH. From top-left to bottom-right, the angles of incidence are: 90°, 70°, 50°, 30° and 10°.

Figure 7. Background analysis results using the LQ camera. The black velvet tissue (BG0) had the highest UQI values compared to all the other materials tested, under different illumination conditions.

Figure 8. Background analysis results using the HQ camera. The black velvet tissue (BG0) had the highest UQI values compared to all the other materials tested, under different illumination conditions.

Figure 9. Ideally exposed images under different illumination conditions with the corresponding exposure settings. The results suggest that careful adjustment of the exposure settings can produce well-exposed images with clear and distinguishable features for the object of interest.

Figure 10. Image degradation plots for the LQ and HQ cameras with changes in exposure settings. EEU-EU1 are underexposed, EO1-EEO are overexposed and EX0 is the reference image. The results suggest that both cameras show similar relative image degradation under extreme exposure conditions.

Figure 1. Camera and lighting positions with respect to the spacecraft object for the comparison of lighting positions.

Figure 2. Sample images collected for the lighting tests. The images shown were captured with the HQ camera at the LI0 light intensity setting for the LAMP1 configuration.

Table 1. Survey of commercially available background materials, selected to represent different price and reflectivity options.
**As of June 2022

Table 2. Survey of cameras available on the market, selected to represent a wide price range and various exposure and resolution capabilities.

Table 3. Survey of illumination lamps, selected to represent low-, medium- and high-cost alternatives.
*1-sun denotes light that reproduces sunlight as specified in the ASTM E927 or IEC 60904-9 standards. **As of June 2022

Table 4. Technical specifications of the two cameras used in the data collection process.

Table 5. Details of the different exposure settings used. EEU-EU1 denote the under-exposure settings, EX0 the reference exposure, and EO1-EEO the over-exposure settings.