In a typical super-resolution algorithm, the model adopted for the fusion error, which includes both registration error and additive noise, strongly influences performance. In this letter, we show that the quality of the reconstructed high-resolution image can be improved by exploiting a proper model for the fusion error. To model the fusion error properly, we propose to minimize a cost function that consists of locally and adaptively weighted L1- and L2-norm terms.

Super-resolution (SR) is an approach to obtaining a high-resolution (HR) image (or images) from a set of low-resolution (LR) images. In recent SR algorithms [

Utilizing

For these reasons, we propose a mixed-norm-based SR algorithm that minimizes a cost function consisting of locally and adaptively weighted L1- and L2-norms of the fusion error.
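Written out, a cost of this general form looks as follows (the operators, weights, and regularizer below are generic placeholders from the SR literature, not necessarily the letter's exact definitions):

```latex
\[
  J(\mathbf{x}) \;=\; \sum_{k=1}^{K} \Big[
      \boldsymbol{\alpha}_k^{T} \big|\, \mathbf{y}_k - \mathbf{A}_k \mathbf{x} \,\big|
    \;+\; (\mathbf{1}-\boldsymbol{\alpha}_k)^{T}
      \big( \mathbf{y}_k - \mathbf{A}_k \mathbf{x} \big)^{2}
  \Big] \;+\; \lambda\, C(\mathbf{x}),
\]
```

where the absolute value and square are applied elementwise, \(\mathbf{A}_k\) maps the HR image \(\mathbf{x}\) to the k-th LR observation \(\mathbf{y}_k\), \(C(\cdot)\) is a regularizer with parameter \(\lambda\), and \(\boldsymbol{\alpha}_k \in [0,1]\) holds the per-pixel mixing weights, so the L1/L2 balance is set locally and adaptively.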

Assume that
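The LR observation model typically assumed at this point (the symbols follow the common SR literature and are an assumption about the omitted equation) is:

```latex
\[
  \mathbf{y}_k \;=\; \mathbf{D}\,\mathbf{H}\,\mathbf{F}_k\,\mathbf{x} \;+\; \mathbf{v}_k,
  \qquad k = 1, \dots, K,
\]
```

where \(\mathbf{y}_k\) is the k-th LR frame, \(\mathbf{F}_k\) is the motion (warp) operator, \(\mathbf{H}\) is the blur operator, \(\mathbf{D}\) is the decimation operator, and \(\mathbf{v}_k\) is additive noise.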

Because the motion operator is not known,

As the Gaussian and Laplacian distributions are the two main candidates for modeling this mixed noise, we propose to use an adaptive mixture of the L2- and L1-norms.
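The link between norms and noise statistics can be seen in a toy estimation example (all numbers here are illustrative assumptions): the L2-optimal estimate of a constant is the sample mean, the L1-optimal estimate is the sample median, and the median is far more robust to the large, sparse errors that registration failures produce.

```python
import numpy as np

rng = np.random.default_rng(0)
true_val = 5.0

# 200 measurements: mostly small Gaussian noise, plus a few gross
# outliers standing in for registration failures (heavy-tailed errors).
samples = true_val + rng.normal(0.0, 0.1, size=200)
samples[:10] += 100.0

est_l2 = samples.mean()      # minimizes the L2-norm of the residual
est_l1 = np.median(samples)  # minimizes the L1-norm of the residual
print(est_l2, est_l1)
```

The mean is dragged far from the true value by ten outliers, while the median barely moves, which is why an L1-leaning weight is preferable where the fusion error is heavy-tailed and an L2-leaning weight where it is Gaussian.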

Define

The choice of

The regularization parameter,

The cost function,
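A 1-D toy sketch of minimizing such a mixed-norm cost by gradient descent is given below. Everything here is an illustrative assumption, not the letter's exact algorithm: the warp is an integer circular shift, the blur is omitted, the L1 term is smoothed for differentiability, the mixing weight is a fixed scalar rather than locally adaptive, and a simple Tikhonov regularizer is used.

```python
import numpy as np

def A(x, shift, factor):
    """Toy observation operator: integer shift, then keep every factor-th sample."""
    return np.roll(x, shift)[::factor]

def At(r, shift, factor, n):
    """Adjoint of A: zero-fill upsampling, then inverse shift."""
    z = np.zeros(n)
    z[::factor] = r
    return np.roll(z, -shift)

rng = np.random.default_rng(1)
n, factor = 64, 2
x_true = np.sin(np.linspace(0.0, 4.0 * np.pi, n))
shifts = [0, 1]
obs = [A(x_true, s, factor) + rng.normal(0.0, 0.05, n // factor) for s in shifts]

alpha, lam, eps, step = 0.5, 1e-3, 0.1, 0.05  # illustrative settings
x = np.zeros(n)
for _ in range(1000):
    grad = 2.0 * lam * x                      # Tikhonov regularizer gradient
    for s, y in zip(shifts, obs):
        r = A(x, s, factor) - y
        # Mixed-norm residual gradient: smoothed-L1 part + L2 part.
        g = alpha * r / np.sqrt(r**2 + eps**2) + (1.0 - alpha) * 2.0 * r
        grad += At(g, s, factor, n)
    x -= step * grad

print(np.mean((x - x_true) ** 2))
```

With the two shifted LR observations jointly covering every HR sample, the iteration converges to a reconstruction whose error is on the order of the observation noise.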

To demonstrate the efficiency of the proposed algorithm, simulation results for the Car sequence [

In the experiments, the motion operators are estimated from the LR images using a gradient-based registration technique based on an affine motion model [
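The idea behind gradient-based registration can be sketched in 1-D with a purely translational model (the affine case in the letter generalizes this; the signal, shift, and first-order estimator below are illustrative assumptions). Linearizing g(t) = f(t - d) as f(t) - d f'(t) turns shift estimation into least squares on the image gradient:

```python
import numpy as np

n = 256
t = np.arange(n)
f = np.sin(2 * np.pi * t / 64) + 0.5 * np.sin(2 * np.pi * t / 17)

# Synthesize a subpixel-shifted copy via the Fourier shift theorem: g(t) = f(t - d).
d_true = 0.3
phase = np.exp(-2j * np.pi * np.fft.fftfreq(n) * d_true)
g = np.fft.ifft(np.fft.fft(f) * phase).real

# Gradient-based (first-order Taylor) shift estimate:
#   g ≈ f - d * f'   =>   d ≈ sum((f - g) * f') / sum(f'^2)
fp = np.gradient(f)
d_hat = np.sum((f - g) * fp) / np.sum(fp ** 2)
print(d_hat)
```

The least-squares step recovers the subpixel shift to within a few percent; in practice the linearization is iterated, often in a coarse-to-fine pyramid, and extended to the full affine parameter vector.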

The results of the Car sequence are shown in Figure

Car sequence: (a) the 1st LR image, (b) mixing parameter

Car sequence: reconstructed HR image using (a)

Finally, a plot of the normalized cost function,

Convergence of the cost function for Car sequence.

An algorithm for image SR based on adaptive mixed norms has been presented. The proposed algorithm adaptively and locally weights the L1- and L2-norms of the fusion error.

The nonlinear function,

Since