DAGAN: A Domain-Aware Method for Image-to-Image Translations

The image-to-image translation method aims to learn inter-domain mappings from paired or unpaired data. Although this technique has been widely used for visual prediction tasks such as classification and image segmentation, and has achieved great results, existing methods still fail to perform flexible translations when learning different mappings, especially for images containing multiple instances. To tackle this problem, we propose a generative framework, DAGAN (Domain-Aware Generative Adversarial Network), that enables different domains to learn diverse mapping relationships. We assume that an image is composed of a background domain and an instance domain, and we feed each into its own translation network. Lastly, we integrate the translated domains into a complete image with smoothed labels to maintain realism. We examined the instance-aware framework on datasets preprocessed with YOLO and confirmed that it is capable of generating images of equal or better diversity compared to current translation models.


Introduction
Image-to-image translation methods [1,2] have received increased attention in recent years. This type of generative model can be applied to many vision-related tasks, such as art restoration, image synthesis, and resolution enhancement [3,4]. With the development of deep learning techniques, many interesting problems in this area have been posed and solved [5], such as noise reduction [6] and brightness enhancement [7]; multiple-output generation [8][9][10] and image-realism improvements are further examples. However, almost all research focuses on the translation of full images instead of domains. In this work, using presaved identity matrices, we propose a generative framework, DAGAN, which can flexibly translate the instance domain of the original image. As shown in Figure 1, we use identity matrices to segregate instances from the original images and translate each part respectively. The results demonstrate that our model can make domain-aware translations and produce diverse, realistic generations.
Motivated by research on variational autoencoders (VAEs) and generative adversarial networks (GANs), existing translation models [10][11][12] can compare multiple maps of one image to produce several possible translation outputs. Among these solutions, especially under the unpaired setting in recent research, a common approach is to view the image-to-image (I2I) problem as a process of learning the joint distribution of the original and target domains [9]. By using VAEs and a weight-sharing scheme, an image can be represented with common, low-dimensional latent codes. In this way, neural networks can be trained to produce images that contain the styles or specificities of both domains.
One major limitation of current models is that we cannot fully control image translations: we still fail to manipulate the level and area of a translation. For example, if an image contains multiple instances, is it possible to translate a certain instance into a style that differs from the other instances? This sounds like what is routinely done with commercial software such as Photoshop, but until now, making a domain-aware translation has remained challenging.
To tackle this problem, we assume that images are composed of different domains and build a subtranslation model for each domain. We follow a "segregation ⟶ integration" pipeline, whereby the generations from the domain-translation models are finally integrated into a full image. Generally speaking, we use the UNIT [11] (Unsupervised Image-to-Image Translation) framework as the basis of the overall translation and treat a full image as a combination of an instance domain and a background domain. First, identity matrices are used to record the location information of the instance domains that need to be translated individually; we segregate the instances from the given image and fill the original instance areas with the mean pixel value of the remaining image [13]. Next, we translate the segregated background and instance domains, respectively. Last, we utilize the identity matrices to integrate the output of the background network with the reconstructed/translated instances.
Our contributions in this work are summarized as follows: (1) We built a domain-aware I2I framework, DAGAN, which applies a background network and an instance network to facilitate translations for both domains. We also designed two different modes for the instance part, which help users flexibly control translations during training.
(2) With label-smoothing training, we made the reintegrated images more realistic and natural-looking.

Related Works
Since their introduction in [14,15], GAN models have achieved encouraging results in many vision tasks [5,6]. The most common use of a GAN [16,17] is to enforce the mapping of generated images to target domains through an adversarial process. This kind of generative model can be trained to produce realistic images from random noise vectors.

Figure 1: Domain-aware translation of unpaired images. We perform translations of the background and instance domains, respectively, and then integrate both domains with smoothed labels. First, each image is segregated into background and instance domains (the "background" and "instance" rows). After translating each, we integrate both translated parts into a full image (the "integration" row). (a) Integration. (b) Instance. (c) Background.

Complexity
Furthermore, research studies [18][19][20] have explored combining VAEs with GANs. A common VAE architecture consists of an encoder and a decoder network, where the encoder learns an interpretable representation z (the latent code) from given images x. With a reliable representation of the input, we can control the direction of image processing by choosing the added visual attribute vectors; this kind of operation helps the decoder produce better reconstructions or transformations. The core of the VAE is that it regularizes the encoder by enforcing the variational posterior q(z | x) to be as close to the true posterior p(z | x) as possible. The methods presented in this work are built on conditional VAE and GAN models in both the background and instance parts, and we aim to learn visual attributes from the target domains. By jointly optimizing the objectives of the instance network and the background network, we learn a shared latent space C, where a trade-off occurs between the source and target domains when transforming.
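The KL regularization term used to pull the variational posterior toward the prior can be sketched numerically; this is a minimal illustration for a diagonal-Gaussian encoder (the array shapes and parameterization are our assumptions, not taken from the paper):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, exp(log_var)) || N(0, I) ), summed over latent dimensions
    and averaged over the batch -- the VAE regularization term."""
    kl_per_dim = 0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var)
    return kl_per_dim.sum(axis=-1).mean()

# A latent code already matching the prior incurs no penalty.
mu = np.zeros((4, 8))
log_var = np.zeros((4, 8))
print(kl_to_standard_normal(mu, log_var))  # -> 0.0
```

Any deviation of the posterior mean or variance from the standard normal prior makes this term strictly positive, which is what regularizes the encoder.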
As for practical applications in this field, these general translation problems aim to learn the mapping relationship from a given image to target domains, but retaining content attributes and semantic consistency during training poses great challenges. In Pix2Pix [1], the authors constructed models using paired data to enforce the mapping. Although the transformation results from Pix2Pix are very realistic, given the scarcity of training pairs, solutions using unpaired data are more general and applicable in industrial settings. Zhu et al.'s work [2] used CycleGAN under the same conditions (unpaired data) to produce high-quality images; notably, CycleGAN generators have been shown to hide information in high frequencies, imperceptible to humans, so that samples can be recovered later [21]. Through cycle-consistency losses, which encourage inverse (bidirectional) translations, Zhu et al. enforced the mapping during training. In UNIT [11], Liu et al. provided another perspective on the translation problem: translation can be seen as a process of learning joint distributions, so they created a common latent space for both domains. The idea of UNIT inspired many further research efforts [9,10], but looking back on that research, although many problems have been solved and the generations fit their new distributions well, we still cannot perform flexible translations wherever we want. For example, we cannot change the background while retaining the instance area's specifics, or perform a translation of a single instance. In short, we still cannot manipulate the extent and direction of transformations.
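The cycle-consistency idea can be illustrated with a minimal sketch; the toy callables `g_ab` and `g_ba` below are hypothetical stand-ins for CycleGAN's two generators, not the actual networks:

```python
import numpy as np

def cycle_loss(x, g_ab, g_ba):
    """L1 cycle-consistency: || G_BA(G_AB(x)) - x ||_1 averaged over pixels.
    Encourages the pair of mappings to be mutually inverse."""
    return np.abs(g_ba(g_ab(x)) - x).mean()

x = np.random.rand(2, 3, 8, 8)
# If the two mappings invert each other, the cycle loss vanishes
# (up to floating-point rounding).
loss = cycle_loss(x, g_ab=lambda t: t + 1.0, g_ba=lambda t: t - 1.0)
```

In training, this term is added to the adversarial losses of both directions so that content is preserved through a round trip.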
In this work, with the help of identity matrices, we separate the background and instance parts from the source images. Overall, we follow the assumptions of UNIT and set up a common latent space to maintain the stability of the models. With improvements to the framework and training strategies, the proposed model can flexibly translate different domains from unpaired inputs.

Proposed Models
The proposed framework aims to perform an instance-aware translation between a given image A and a target B. If we assume that each image is composed of background (bgr) and instance (ins) areas, then images A and B can be represented as A: {A_bgr, A_ins} and B: {B_bgr, B_ins}. As illustrated in Figure 2, before training the entire network, we crop the instance areas A_ins and B_ins from A and B and save both areas' location values as identity matrices called labels. The cropped areas are then replaced by the mean pixel value of the rest of the image. After training both the background and instance networks (the labels are not used during training), we use the saved labels to recover the translated images (this step is called integration).
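The crop-and-fill preprocessing and the later integration step can be sketched as follows; this is a minimal NumPy illustration in which the box format and helper names are our assumptions, with a rectangular box standing in for the saved identity matrix:

```python
import numpy as np

def segregate(img, box):
    """Cut an instance patch out of `img` and fill the hole with the mean
    pixel value of the remaining background. box = (y0, y1, x0, x1) plays
    the role of the saved 'label'."""
    y0, y1, x0, x1 = box
    instance = img[y0:y1, x0:x1].copy()
    background = img.copy()
    mask = np.ones(img.shape[:2], dtype=bool)
    mask[y0:y1, x0:x1] = False
    background[y0:y1, x0:x1] = img[mask].mean(axis=0)  # mean-fill the hole
    return background, instance

def integrate(background, instance, box):
    """Paste the (translated) instance back using the saved label."""
    y0, y1, x0, x1 = box
    out = background.copy()
    out[y0:y1, x0:x1] = instance
    return out

img = np.random.rand(32, 32, 3)
box = (8, 16, 8, 16)
bgr, ins = segregate(img, box)
restored = integrate(bgr, ins, box)
assert np.allclose(restored, img)  # crop -> integrate round-trips exactly
```

In the actual pipeline, `bgr` and `ins` would each pass through their own translation network before `integrate` is applied.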
In Section 3.1, we discuss the background and instance parts of images, and in Section 3.2, we further modify the instance domain to produce diverse translated outputs.

Background Network.
Having separated the instance area from the input image, the remaining part (called the background area) is processed by padding it with mean pixel values. The background model can be seen as an independent part of the entire translation framework; this model learns a visual translation between the two domains. Following this assumption, we set an encoder (denoted E) and a generator (denoted G) for each side (throughout this work, each parameter's subscript marks its domain type). Similar to UNIT, we assume that, with encoders E_A_bgr and E_B_bgr, we can map a given background into a common latent space C_bgr: C_A_bgr = E_A_bgr(A_bgr) and C_B_bgr = E_B_bgr(B_bgr). Then C_A_bgr and C_B_bgr represent the latent codes of domains A_bgr and B_bgr, respectively. In our work, we share the weights of the last two layers of E_A_bgr and E_B_bgr and of the first layer of G_A_bgr and G_B_bgr. At the same time, we add two discriminators, D_A_bgr and D_B_bgr, which ensure that the translation between the two background domains is learned in an adversarial process: G_A_bgr(C_B_bgr) and A_bgr are distinguished by D_A_bgr, while G_B_bgr(C_A_bgr) and B_bgr are distinguished by D_B_bgr. Figure 3 illustrates the background model visually.
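The weight-sharing idea can be illustrated with a toy sketch; here only the final projection into the common space is shared (the paper shares the last two encoder layers), and all layer sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Domain-specific first layers, one shared projection into the common space.
W_a = rng.normal(size=(16, 32))      # private layer of E_A_bgr (toy sizes)
W_b = rng.normal(size=(16, 32))      # private layer of E_B_bgr
W_shared = rng.normal(size=(32, 8))  # shared final layer -> latent space C_bgr

def encode(x, W_private):
    h = np.maximum(x @ W_private, 0.0)  # private layer + ReLU
    return h @ W_shared                 # shared layer: same weights for A and B

c_a = encode(rng.normal(size=(4, 16)), W_a)
c_b = encode(rng.normal(size=(4, 16)), W_b)
assert c_a.shape == c_b.shape == (4, 8)  # both domains land in the same space
```

Because `W_shared` is literally the same array for both encoders, gradients from either domain update the same parameters, which is what ties the two latent distributions together.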

Instance Network.
We designed two modes for the instance network: reconstruction and multioutput. Overall, the goal of the instance model is to keep the instance area relatively independent of the background area, so that the background transformation does not influence the instance area.
First, we adopt a DCGAN-like (Deep Convolutional Generative Adversarial Network) [22] architecture for simple instance reconstruction, drawing noise vectors from the Gaussian distribution N(0, 1). This enables the entire framework (including the background) to translate the background area into another style while leaving the instance unchanged. Then, inspired by [23] and based on the assumption that images are composed of style and content codes, we adapt the instance part into a multioutput model.

Reconstruction Mode.
This mode aims to keep the instance part unchanged and the final integrated images as realistic as possible (the loss functions used are discussed in later sections). Considering the size of the instance, we use three convolutional layers in both the generator and the discriminator; the generator takes normally distributed noise Z ∼ N(0, 1) as its input vector. The process is illustrated in Figure 4.
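A minimal PyTorch sketch of such a three-block generator/discriminator pair for 64 × 64 instance patches is given below; the channel widths and the spatial layout of the noise are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

# Three transposed-conv blocks map z ~ N(0, 1) up to a 64x64 patch;
# the discriminator mirrors this with three strided-conv blocks.
generator = nn.Sequential(
    nn.ConvTranspose2d(100, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, 16), nn.Sigmoid(),  # 16x16 feature map -> single score
)

z = torch.randn(2, 100, 8, 8)  # spatial noise so three upsamplings reach 64x64
fake = generator(z)            # (2, 3, 64, 64)
score = discriminator(fake)    # (2, 1, 1, 1)
```

Each generator block follows the "transposed convolution + batch normalization + ReLU" pattern and each discriminator block the "convolution + batch normalization + LeakyReLU" pattern described for Figure 4.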

Multioutput Mode.
Having been cropped from the original image, the instance area can be considered an independent image. Under such conditions, we can perform any desired translation on this domain. A common approach to achieving diverse outputs is to treat images as a combination of style and content information; in principle, we can translate images into any style if we add suitable attribute/style codes. Similar to the setting in MUNIT [9] (Multimodal Unsupervised Image-to-Image Translation), we produce multiple instance generations by randomly sampling style codes drawn from the target instance and then recombining them with content codes. Assume we are given the instance domain A_ins and the target instance B_ins. First, we map A_ins and B_ins to the style and content spaces, respectively (yielding the style and content codes S_A_ins, C_A_ins, S_B_ins, and C_B_ins), where the corresponding encoders are E_S_A_ins, E_C_A_ins, E_S_B_ins, and E_C_B_ins: S_A_ins = E_S_A_ins(A_ins), C_A_ins = E_C_A_ins(A_ins), and likewise for B_ins. As with the common-space setting in the background part, the content space is shared by both instances.
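The recombination of fixed content with sampled style can be illustrated with an AdaIN-style sketch; this is a stand-in for the learned encoders and generator (which the paper does not specify at this level), showing only how several style samples turn one content code into several outputs:

```python
import numpy as np

def stylize(content_feat, style_mu, style_sigma, eps=1e-5):
    """Normalize content features, then re-scale/shift with sampled style
    statistics -- an AdaIN-style stand-in for recombining a content code
    with a style code."""
    mu = content_feat.mean(axis=(-2, -1), keepdims=True)
    sigma = content_feat.std(axis=(-2, -1), keepdims=True) + eps
    normalized = (content_feat - mu) / sigma
    return style_sigma * normalized + style_mu

content = np.random.rand(3, 16, 16)  # one fixed "content code"
# Four style samples -> four different translations of the same content.
outputs = [stylize(content, *np.random.rand(2)) for _ in range(4)]
```

Keeping `content` fixed while resampling the style parameters is the mechanism behind the multioutput mode's diversity.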

Loss Functions and Training
Considering the model differences between the background and instance parts, the loss functions, especially for the different modes, are adjusted accordingly. We jointly solve the background and instance translation problems through the full objective L = L_bgr + L_ins, and below we discuss the loss functions used for each part (denoted L_bgr and L_ins).

Instance Loss.
We designed two modes that make the instance part a flexible transformation. The first mode focuses on ensuring that instances remain unchanged while the background is translated, and the second mode concentrates on producing a range of instance translations. Theoretically, once the instances have been cropped from the given images, we can flexibly add any code to them to manipulate their style.

4.1.1. Reconstruction Mode. As shown in Figure 4, the instance network in this mode consists of noise vectors Z_A and Z_B, two generators, and two discriminators, each with three layers (denoted G_A_ins, G_B_ins, D_A_ins, and D_B_ins). The adversarial loss is expressed as follows, where M denotes the batch size during training: L_ins = (1/M) Σ_i [log D_A_ins(A_ins^(i)) + log(1 − D_A_ins(G_A_ins(Z_A^(i))))], and symmetrically for the B side.
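As a numerical sketch of the batch-averaged adversarial objective described above (the discriminator scores below are toy values, not outputs of trained networks):

```python
import numpy as np

def adversarial_loss(d_real, d_fake):
    """Standard GAN objective averaged over a batch of size M:
    (1/M) * sum[ log D(x_i) + log(1 - D(G(z_i))) ]."""
    M = len(d_real)
    return (np.log(d_real) + np.log(1.0 - d_fake)).sum() / M

# Toy discriminator scores: fairly confident on real, unsure on fake.
d_real = np.array([0.9, 0.95, 0.85])
d_fake = np.array([0.4, 0.5, 0.45])
loss = adversarial_loss(d_real, d_fake)
```

The discriminator ascends this objective while the generator descends it, which drives the generated instances toward the real instance distribution.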
4.1.2. Multioutput Mode. As illustrated in Figure 5, the encoder-generator collection {E_S_A_ins, E_C_A_ins, G_A_ins} constitutes the translation path for A_ins. The L_ins in this mode comprises a reconstruction loss and an adversarial loss.

Reconstruction Loss. Generally speaking, instances move through the translation pipeline from images to latent codes to target directions, as shown in Figure 5. However, unlike [14,24], the style codes are not drawn from the normal distribution N(0, 1): we use two encoders, style and content, which causes the translated instances to possess, to a greater or lesser degree, the target domain's attributes. The goal of this loss is to ensure that images retain the ability to be recovered, in terms of both latent codes and semantic consistency, after training. Then, L_A_ins^s_re and L_A_ins^c_re can be represented as follows: L_A_ins^s_re = ||E_S_A_ins(G_A_ins(C_A_ins, S_A_ins)) − S_A_ins||_1 and L_A_ins^c_re = ||E_C_A_ins(G_A_ins(C_A_ins, S_A_ins)) − C_A_ins||_1.

Figure 4: Reconstruction mode. Three-block discriminators aim to distinguish the original input from the generator output G(z). Each generator block consists of "Transposed convolution + Batch normalization + ReLU activation," while each discriminator block is composed of "Convolution + Batch normalization + Leaky ReLU activation."
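The two latent reconstruction terms can be sketched with stand-in callables for the encoders and generator (all hypothetical; a real implementation would use the learned networks):

```python
import numpy as np

def latent_recon_losses(s, c, encode_s, encode_c, generate):
    """L1 reconstruction of style and content codes: after generating an
    image from (c, s) and re-encoding it, both codes should be recovered
    (the L_s_re and L_c_re terms)."""
    x = generate(c, s)
    l_s = np.abs(encode_s(x) - s).mean()
    l_c = np.abs(encode_c(x) - c).mean()
    return l_s, l_c

# With a toy generator that concatenates the codes and encoders that split
# them back apart, both reconstruction losses are exactly zero.
s, c = np.random.rand(4), np.random.rand(6)
l_s, l_c = latent_recon_losses(
    s, c,
    encode_s=lambda x: x[6:], encode_c=lambda x: x[:6],
    generate=lambda c, s: np.concatenate([c, s]),
)
assert l_s == 0.0 and l_c == 0.0
```

In training, nonzero values of these terms penalize any generator/encoder pair that fails to round-trip the latent codes.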

Adversarial Loss.
The main use of GANs lies in their ability to match the target distribution as closely as possible through the adversarial process. We use the discriminators D_A_ins and D_B_ins to distinguish between translations and real instances.

Background Loss. As discussed in the previous sections, the background and instance parts are independent of each other before their integration. Visual domains in this part still follow the reconstruction and translation stream. According to the components of the background part, we use L_VAE, L_GAN, and L_SC to represent the VAE, GAN, and semantic-consistency losses. Three weight parameters, λ_v, λ_G, and λ_sc, measure the impact of each component. For example, L_A_bgr can be formulated as L_A_bgr = λ_v L_VAE + λ_G L_GAN + λ_sc L_SC. The VAE architecture aims to learn a latent model by approximating the marginal log-likelihood of the training data via the ELBO [19] (the evidence lower bound). Its objective function is L_VAE = λ_1 KL(q_A_bgr(C_A_bgr | A_bgr) || p_C(C_bgr)) − λ_2 E[log p_G_A_bgr(A_bgr | C_A_bgr)]. In this function, the weight parameters λ_1 and λ_2 control the impact of each term, and KL (the Kullback-Leibler divergence) measures how well q_A_bgr(C_A_bgr | A_bgr) matches the prior distribution p_C(C_bgr), which denotes the distribution of the common content space. To better perform sampling from the spaces, we model {q_A_bgr(C_A_bgr | A_bgr), p_C(C_bgr)} with normal distributions and p_G_A_bgr with Laplacian distributions, respectively. The GAN objective function aims to translate and reconstruct images in the adversarial process. The semantic-consistency objective function ensures that images can be mapped back to the original latent space while possessing characteristics of the target domains; the significant modification to this function is using the ℓ1 norm to directly compare semantic differences instead of using KL terms to measure the distance in latent spaces (as in UNIT [11]).
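The weighted combination of the three background terms can be sketched directly; the default weights shown are illustrative placeholders, not the paper's values:

```python
def background_objective(l_vae, l_gan, l_sc, lam_v=1.0, lam_g=1.0, lam_sc=0.1):
    """L_A_bgr = lambda_v * L_VAE + lambda_G * L_GAN + lambda_sc * L_SC.
    Each l_* is the already-computed scalar loss for its component."""
    return lam_v * l_vae + lam_g * l_gan + lam_sc * l_sc

total = background_objective(l_vae=2.0, l_gan=0.5, l_sc=3.0)
```

Tuning the three λ weights trades off reconstruction fidelity (VAE term), realism (GAN term), and semantic preservation (ℓ1 semantic-consistency term).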

Training Techniques
The proposed framework decomposes an image into its background and instance constituents and then feeds these parts into independent translation networks. Finally, using the saved labels, we integrate both translated parts. Such a solution raises another question: how can the integrations be made to look real? If we simply crop, translate each part, and integrate, the integration looks strange and unnatural, since the background and instance parts have been translated in totally different directions (as shown in Figure 6).
To maintain the realism of the images after integration and make them more natural to look at, a technique called "label smoothing" [25] is used to further improve the proposed framework.

Methods.
We know that GANs work effectively when the discriminator can estimate the ratio of the data and model distributions at any point x: D(x) = p_data(x) / (p_data(x) + p_model(x)). Under the previous conditions, we train the discriminators to estimate this ratio in both the (instance) reconstruction mode and the (instance) multioutput mode. Let model_bgr and model_ins represent the background and instance networks; the outputs produced by the models are then integrated via the saved labels into full images, represented by I_A and I_B. We add another pair of discriminators, {D_A, D_B}, to distinguish B from I_A and A from I_B after integration. Since the integrations are not exactly the same as the originals A and B, we add a parameter α to smooth the training data: instead of the hard real-label target of 1, the integrated images are assigned the smoothed target α < 1, and {D_A, D_B} are trained to estimate the correspondingly smoothed ratio. Although the smoothing parameter may encourage less confident outputs (compared to the original images) and influence the style of the outputs, this adjustment makes the integrations much more realistic, so an image does not obviously resemble the combination of two completely different images. The detailed discrimination method is illustrated in Figure 7. A comparison of results from before and after applying the smoothing technique is presented in the results section.
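The smoothed discriminator target can be illustrated with a small sketch of binary cross-entropy computed against α instead of a hard 1; the α value shown is illustrative, not the paper's setting:

```python
import numpy as np

def smoothed_bce(d_out, alpha):
    """Binary cross-entropy against a smoothed 'real' target alpha <= 1:
    integrated images are near-real, so the discriminator is trained
    toward alpha instead of a hard 1."""
    return -(alpha * np.log(d_out) + (1.0 - alpha) * np.log(1.0 - d_out)).mean()

d_out = np.array([0.7, 0.8, 0.9])        # discriminator scores on integrations
hard = smoothed_bce(d_out, alpha=1.0)    # hard real labels
soft = smoothed_bce(d_out, alpha=0.9)    # smoothed labels for integrations
```

With α < 1 the loss is minimized when the discriminator outputs α rather than 1, which prevents it from punishing the slight mismatch between an integration and a genuine photograph too harshly.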

Datasets.
The framework is tested on a pair of benchmarks: Cityscape [26] ↔ GTA [27] (a bidirectional translation). Before feeding the training images into the proposed framework, we crop the instances from the original images. Limited by memory resources, all instances are resized to 64 × 64, while the background parts are resized to 256 × 256.

Car Translation (Cityscape ↔ GTA).
Using a YOLO [28] detector, we obtained all cars' location values and saved this information as bounding boxes. For clearer visuals, we cropped instances of size 300 × 300 (these instances were later resized to 64 × 64).
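The bounding-box cropping and 64 × 64 resizing can be sketched as follows; nearest-neighbor resizing and the (y0, y1, x0, x1) box format are our simplifying assumptions (a real pipeline would take the boxes from the YOLO detector's output):

```python
import numpy as np

def crop_and_resize(img, boxes, size=64):
    """Crop each detected instance and nearest-neighbor resize the patch
    to size x size."""
    patches = []
    for y0, y1, x0, x1 in boxes:
        patch = img[y0:y1, x0:x1]
        # Index grids that subsample the patch down to the target size.
        ys = np.arange(size) * patch.shape[0] // size
        xs = np.arange(size) * patch.shape[1] // size
        patches.append(patch[np.ix_(ys, xs)])
    return patches

img = np.random.rand(512, 512, 3)
patches = crop_and_resize(img, boxes=[(10, 310, 20, 320)])  # one 300x300 crop
assert patches[0].shape == (64, 64, 3)
```

Each resized patch then enters the instance network, while the mean-filled remainder enters the background network.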
As shown in Figure 8, we clearly see that during reconstruction, the instance domains are reconstructed well, being almost identical to their inputs. In Figure 9, we observe that the translated background domains take on the style and attributes of the target domains. In Figure 10, style codes sampled from the other space allow the instance part to produce four different translations, which demonstrates that by matching fixed content with different style information, we can produce diverse translations.
Looking at the final integrations consisting of both the background and instance domains, we clearly observe that although both parts produce high-quality translations, the unsmoothed integrations (the second and fourth rows in Figure 11) look mismatched, as if a car of a totally different style had been attached to the background. When we apply smoothed labels at the end of training, the smoothed translations fit well with each other and look like a full image. Although several results still preserve boundaries, the boundaries are very close to the translated content in color (see the 3rd, 4th, and 5th maps in the city integrations presented in Figure 12). Based on the comparison with Figure 11, we conclude that applying smoothed labels is a feasible and useful technique for integrating translations.
For comparison, we also experimented with MUNIT [9] and DRIT [8]. As demonstrated in Figures 13 and 14, although both models achieve good translation performance on Cityscape ↔ GTA, they fail to manipulate instance-level translations: they cannot segregate an instance (such as an object or area) from the others and can only treat all elements as a whole.
As for our generations, with prerecorded location information and the smoothing technique, we make domain/instance-level translation possible. Instances of "car" were selected as objects and the remaining domains were translated to the cityscape/GTA style, whereby the cars can be kept unchanged or translated into another style.

Qualitative Evaluation
7.1. Questionnaire. A widely used method to evaluate the realism of generated images is to distribute a questionnaire. We selected two translation methods, DRIT [8] and MUNIT [9], as baselines and randomly sampled 100 images as inputs to compare the generations from each method. Each participant was shown five images: the real one with the target style and the generations (from the source inputs) produced by DRIT, MUNIT, and our method (including the smoothed version). Compared to the real image, participants had to select the most realistic generation from the four groups.
We gathered 20 students from the Department of Computer Science as users. Each person answered the question "Compared to the real one, among the four outputs, which image do you think is the most realistic?" We counted the number of people who chose each method and summarize the results in Figure 15.

Learned Perceptual Image Patch Similarity Distance (LPIPS Distance).
Since our research focuses on instance-aware and flexible translations, we used the LPIPS distance [9,11] to measure the diversity of the generations; it has been shown to correlate well with human perception. Similar to MUNIT [9], we use a trained AlexNet [25] as the feature extractor.
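The diversity protocol, averaging pairwise distances over the translations of one input, can be sketched as follows; a plain L2 distance stands in for the learned LPIPS metric here, and the sampling sizes are illustrative:

```python
import numpy as np

def diversity(outputs, dist):
    """Average pairwise distance over a set of translations of one input --
    the LPIPS-style diversity score. `dist` should be the learned perceptual
    metric; an L2 stand-in is injected below for illustration."""
    total, count = 0.0, 0
    for i in range(len(outputs)):
        for j in range(i + 1, len(outputs)):
            total += dist(outputs[i], outputs[j])
            count += 1
    return total / count

outs = [np.random.rand(3, 32, 32) for _ in range(4)]
score = diversity(outs, dist=lambda a, b: np.sqrt(((a - b) ** 2).sum()))
```

A higher score means the sampled style codes produce more visually distinct translations of the same input.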

Conditional Inception Score (CIS).
As proposed in [9], this modified metric better measures the diversity of outputs conditioned on a single input. We used an Inception-v3 [29] network fine-tuned on our datasets as the classifier and calculated the CIS based on 200 input pairs and 400 translations per pair.
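The CIS computation can be sketched as follows; random probability vectors stand in for the fine-tuned Inception-v3 class posteriors, and the sample counts are scaled down from the 200 × 400 setting for illustration:

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """IS = exp( E_x KL( p(y|x) || p(y) ) ) over one set of samples,
    where probs has shape (num_samples, num_classes)."""
    p_y = probs.mean(axis=0)
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return np.exp(kl.mean())

def conditional_inception_score(probs_per_input):
    """CIS: the inception score computed separately over the translations
    of each conditioning input, then averaged -- so only diversity *within*
    one input's outputs counts, not diversity across inputs."""
    return np.mean([inception_score(p) for p in probs_per_input])

# 5 inputs x 20 translations each, over 10 classes (toy sizes).
probs = [np.random.dirichlet(np.ones(10), size=20) for _ in range(5)]
cis = conditional_inception_score(probs)
```

Because the marginal p(y) is recomputed per conditioning input, a model that is diverse across inputs but deterministic per input scores low, which is exactly the failure CIS is designed to expose.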
We summarize feature comparisons with other methods in Table 1. It can be clearly seen that, unlike existing works, our method enables flexible translations with diverse outputs under unpaired training settings. Figure 15 and Table 2 show the results of the realism and diversity comparisons with other methods. For realism, our method does not outperform previous research, but we note that the realism of the outputs improved greatly once the smoothing technique was applied, and there is little difference in realism between MUNIT [9] and the smoothed method.
Furthermore, the diversity comparisons show that the proposed framework (with smoothing) obtains second place in terms of both LPIPS and CIS. DRIT [8] achieves the best diversity, but it is not domain-aware. Overall, our method achieves strong diversity and produces realistic results.

Conclusions and Future Work
In this work, we presented a flexible framework for domain-aware image-to-image translations. With smoothed training, we achieved strong translations and better integrations in terms of diversity and realism. Current research focuses on full-image translation but overlooks domain-level translations and the models' flexibility.
With the proposed framework, we enabled the networks to learn instance-level mappings. Compared with the generations from other baselines, our method did not perform poorly, and it allows users to choose the areas and styles they want to change.
At the same time, when observing the intermediate results during training, we noticed that the pretreatment step that replaces the cropped instance area with the mean pixel value has a great influence on the final translation. Although this operation does not totally change the distribution of the images and blends well with the background areas, it still presents a great challenge for the later translation steps.
From our experimental results, we clearly see that the intermediate outputs for the given and target domains also learn the replaced area's distribution from the other domain, and this badly influences later generations, especially during the integration operation. Under such conditions, we need a smoother replacement or restoration when cropping instances.
On the other hand, our framework achieves a 1:n translation on the background and instance domains. However, as previously mentioned, images are integrations of content and style information, which are separable by deep-learning techniques. We could also perform multioutput translations on the background domains as we did on the instances, achieving m:n generations that would be more applicable to industrial use.

Data Availability
The code used to support the findings of this study was supplied by the Korean government (MSIT) under license and therefore cannot be made freely available. Requests for access should be made to Xu Yin, Department of Computer Engineering, Inha University, Incheon, 082, South Korea.

Conflicts of Interest

The authors declare that they have no conflicts of interest.