An Enhanced Triadic Color Scheme for Content-Based Image Retrieval

The complexity of multimedia content, particularly images, has risen dramatically in recent years, and millions of images are shared on social media every day. Finding or retrieving an appropriate image is becoming more difficult due to the increase in the volume of shared and archived multimedia data. Any image retrieval model must, at a bare minimum, locate and classify images that are visually related to the user's query. The vast majority of Internet search engines employ text algorithms that fetch images using captions as input. Even though much work has been done to increase the effectiveness of automatic image annotation, retrieval errors can occur due to differences in visual perception. Content-based image retrieval (CBIR) addresses this issue because visual analysis of the content is included in the query image. On the other hand, feature extraction is significantly challenging because of the semantic gap. This work proposes a strategy for effective retrieval of similar images using the triadic color scheme of RGB, YCbCr, and L*a*b* combined with reranking. We aim to increase image similarity and encourage more relevant reranking. The findings show that the triadic color scheme improves precision by 5% over existing schemes and also efficiently improves retrieved results while reducing user effort.


Introduction
Recent developments in big data technology have produced a sizable number of image databases. A capable visual search tool is required for all of these image libraries. There are two methods for conducting a search. In text-based image retrieval, keywords are used to annotate images [1].
This approach has a number of drawbacks: (1) it is impossible to manually annotate large databases; (2) the images must be annotated by the end user, making the technique sensitive to human perception; (3) only one language is covered by these annotations. To address these challenges in multimedia research, CBIR analyses large-scale images employing local and global features from digital image processing [2].
There are several ways in which this method differs from text-based image retrieval systems. The CBIR system's most crucial element is feature extraction [3]. Because color extraction is usually straightforward and its retrieval performance is quite high, it is frequently used in CBIR systems [4]. A complete and accurate definition of shape features has yet to be published. The Fourier descriptor, aspect ratio, and circularity are all common ways of acquiring geometric shape information. Furthermore, three texture-description approaches are used: statistical, structural, and spectral methods [5, 6].
Image processing applications including image compression, edge detection, and image retrieval have all exploited wavelet features, a type of texture feature. Feature selection is a technique for locating and eliminating pointless and redundant feature components [7, 8]. Low temporal complexity and excellent system accuracy can be attained as a result. A number of feature selection algorithms have been developed for a variety of applications, including face identification, data mining and pattern recognition, automatic speaker verification systems, and image processing [9]. Finding similar images using visual cues including shape, color, texture, and edge detection is the primary objective of CBIR [10, 11]. The primary flaw is that images matched on low-level attributes may differ from the queried image in the user's semantic perception. The major purpose of this research is to integrate color features in order to obtain images that are more similar. Color images can be represented as RGB or indexed images. The RGB information of an image is found using color moments. RGB, YCbCr, and L*a*b* are some of the color methods utilized for a color histogram. Color approaches employed in image processing are quite sensitive to these strategies [12]. YCbCr is a fundamental color scheme, expressed as one luminance component and two color-difference components in photos and movies. The letters Y, Cb, and Cr represent brightness (luma), blue minus luma (B−Y), and red minus luma (R−Y), respectively [13]. Cb and Cr, the two color-difference components, are used to hold the color data: the difference between the blue component and a reference value is denoted by Cb, while the difference between the red component and a reference value is denoted by Cr [14]. The L*a*b* method for color space translation separates grayscale data from color data such as red, green, and blue [15, 16]. Table 1 shows various conventional methods along with their performance metrics.
L*a*b* has two main properties: (1) The color channels are clearly separated from the gray-scale information: L* denotes the gray-scale (lightness) information, while a* and b* carry the color information.
(2) L*a*b* is perceptually uniform: the perceived difference between two colors corresponds to their Euclidean distance.
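This perceptual-uniformity property can be illustrated with a minimal sketch (assuming NumPy is available; the sample colors are made up): the CIE76 difference between two L*a*b* colors is simply their Euclidean distance.

```python
import numpy as np

def delta_e(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    lab1 = np.asarray(lab1, dtype=float)
    lab2 = np.asarray(lab2, dtype=float)
    return float(np.sqrt(np.sum((lab1 - lab2) ** 2)))

# Two colors that differ only in lightness L* by 10 units.
print(delta_e([50, 10, 10], [60, 10, 10]))  # 10.0
```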
The main contributions of this study are as follows: (1) we construct a triadic color scheme, a mixture of low-level features, namely RGB, YCbCr, and L*a*b*, defined as visual information content; (2) we propose a new framework that can be utilized with a variety of primary color approaches to capture and classify integrated heterogeneous image data.

System Model
The proposed method extracts hybrid features from the individual RGB, YCbCr, and L*a*b* spaces. To decrease the significant feature vector length of the recommended descriptor, the hybrid features are extracted from color, using gray level and color information from a color map to compute numerous color properties. A color histogram is built using RGB, YCbCr, and L*a*b*, among other digital color approaches. These strategies match the very high sensitivity of the color recognition cells in human vision. RGB-to-YCbCr and RGB-to-L*a*b* conversions are used in digital image processing. For RGB images of type uint8 and uint16, the ranges of values are [0, 255] and [0, 65535], respectively. We present a set of integrated color attributes in this paper that can be used to generate more relevant images. Feature extraction methods identify the most important features and simplify image retrieval computations. The combined color channel includes RGB, YCbCr, and L*a*b* features, which are grouped by K-means classification and then reranked. Certain images have stronger color traits, while others have more sensitive image features. The RGB, YCbCr, L*a*b*, and optimal color attributes are integrated to make image retrieval easier.
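As a rough sketch of how such a hybrid descriptor could be assembled (the function names and the random 8×8 image are illustrative, not the authors' implementation), per-channel color moments from each representation can be concatenated into one feature vector:

```python
import numpy as np

def channel_moments(img):
    """Per-channel mean and standard deviation of an H x W x 3 image."""
    flat = img.reshape(-1, 3).astype(float)
    return np.concatenate([flat.mean(axis=0), flat.std(axis=0)])

def hybrid_feature(rgb, ycbcr, lab):
    """Concatenate the moments of all three representations (6 values each)."""
    return np.concatenate([channel_moments(x) for x in (rgb, ycbcr, lab)])

# The same random array stands in for the three real color-space conversions.
rgb = np.random.randint(0, 256, (8, 8, 3))
feat = hybrid_feature(rgb, rgb, rgb)
print(feat.shape)  # (18,)
```

The resulting 18-dimensional vector is what a distance metric would later compare between the query and database images.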
In order to increase the image retrieval rate and simplify the computation of image retrieval methods, we conducted a number of analyses and comparisons on integrated color information. By deriving ideal combination features from the original features, the retrieval rate is enhanced. RGB, YCbCr, and L*a*b* form a triadic color scheme that uses gray level and color information from a color map to compute numerous color properties. Our goal is to make reranked image results more relevant by maximizing image retrieval over mixed color features. The proposed color algorithms can be applied to medical imaging, face recognition, and shape matching, to name a few. Figure 1 depicts the framework for the combined color approaches.

Feature Extraction.
Extracting visual features in order to improve categorization is known as feature extraction. CBIR uses a number of different feature extraction approaches. Our analysis is based on a mix of prominent color and frequency properties extracted from the input images.

Table 1: Conventional CBIR methods and their performance.

Reference  CBIR classifier         Method                      Accuracy (%)
[17]       Random forest           Multikey image hashing      72
[18]       Support vector machine  Two-step strategy           81
[19]       Naive Bayes             Statistical approach        65
[20]       K nearest neighbor      Discrete wavelet transform  68.7
[21]       Fuzzy rule              Discrete wavelet transform  84

RGB Color Space.
In the RGB space, the secondary colors are formed by mixing the primaries: (1) magenta = red + blue; (2) yellow = red + green; (3) cyan = blue + green. Color auto-correlogram features give the likelihood of pixels of color Qi in a query image, calculated over various sections of the image. The auto-correlogram locates spatial information, which is then utilized to integrate color and histogram data. The autocorrelogram of image I with distance d for color Qi is defined as

γ_Qi^(d)(I) = Pr[ p2 ∈ I_Qi | p1 ∈ I_Qi, |p1 − p2| = d ].          (1)

Equation (1) gives the autocorrelation of a color with respect to spatial distance; color information and spatial information are integrated in the autocorrelogram. In the same way, each pixel in the image may be compared with all of its neighbors. The computational complexity is O(nd), where n is the number of neighbor pixels and d is the pixel distance.
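A minimal sketch of the auto-correlogram of equation (1), assuming NumPy and an already color-quantized image; restricting the neighborhood to the four horizontal/vertical offsets at distance d is a simplification of the full neighbor set described above:

```python
import numpy as np

def autocorrelogram(img, colors, d):
    """Estimate P(neighbor at distance d has color c | pixel has color c)
    over the four axis-aligned offsets of magnitude d."""
    h, w = img.shape
    offsets = [(-d, 0), (d, 0), (0, -d), (0, d)]
    result = {}
    for c in colors:
        same = total = 0
        ys, xs = np.nonzero(img == c)          # positions of color c
        for dy, dx in offsets:
            ny, nx = ys + dy, xs + dx
            valid = (ny >= 0) & (ny < h) & (nx >= 0) & (nx < w)
            total += int(valid.sum())
            same += int((img[ny[valid], nx[valid]] == c).sum())
        result[c] = same / total if total else 0.0
    return result

img = np.array([[0, 0],
                [1, 1]])
print(autocorrelogram(img, [0, 1], 1))  # {0: 0.5, 1: 0.5}
```

For each quantized color, half of the valid distance-1 neighbors in this tiny image share the pixel's color, hence the 0.5 entries.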

YCbCr Color Space.
YCbCr is a basic color approach used in images and videos, expressed as one luminance component and two color-difference signals. The letters Y, Cb, and Cr stand for brightness (luma), blue minus luma (B−Y), and red minus luma (R−Y), respectively. The two color-difference components, Cb and Cr, are used to store the color information: Cb indicates the difference between the blue component and a reference value, while Cr indicates the difference between the red component and a reference value.
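The conversion itself is standard; the sketch below uses the full-range BT.601 coefficients (the common JPEG variant, one plausible choice among several YCbCr definitions, since the paper does not state which it uses):

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr, as used in JPEG (values in 0-255)."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b           # luma
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b  # blue-difference
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b  # red-difference
    return y, cb, cr

print(rgb_to_ycbcr(0, 0, 0))  # (0.0, 128.0, 128.0) -- black is neutral chroma
```

Note how a neutral gray maps to Cb = Cr = 128, matching the "difference from a reference value" description above.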
L*a*b* Color Space.
Gray-scale information is represented using the L*a*b* approach. More gray-scale information is indicated by L*, while a* and b* indicate the color information. The L*a*b* formula is termed perceptually uniform since the difference between two colors is given by the Euclidean distance

ΔE = √((ΔL*)² + (Δa*)² + (Δb*)²).

Color feature extraction and concatenation were conducted on each of the original color image channels, such as the R, G, and B channels. RGB, YCbCr, and L*a*b* are applied to the color image channels. Each channel's distance is calculated separately, and the best values are used. To match a full RGB, YCbCr, and L*a*b* image, the Euclidean distance is employed. This method is applied to all color-characteristic channels. Single-feature extraction does not provide optimal performance, which is one of the main reasons for missing the proper feature values. The combined RGB, YCbCr, and L*a*b* three-channel characteristics are a great compromise for long-range tasks because they are exceptionally fast to retrieve.
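The RGB-to-L*a*b* translation goes through an intermediate XYZ step. The sketch below assumes sRGB input and a D65 white point (a common convention, not necessarily the authors' exact pipeline):

```python
def srgb_to_lab(r, g, b):
    """sRGB (0-255) -> CIE L*a*b* under a D65 white point."""
    # 1. Undo the sRGB gamma to get linear RGB in [0, 1].
    def linear(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linear(r), linear(g), linear(b)
    # 2. Linear RGB -> XYZ (sRGB matrix), normalized by the D65 white.
    x = (0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl) / 0.95047
    y =  0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = (0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl) / 1.08883
    # 3. XYZ -> L*a*b* via the cube-root compression function.
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x), f(y), f(z)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

L, a, b_ = srgb_to_lab(255, 255, 255)  # white: L* ~ 100, a* and b* ~ 0
```

The separation claimed in the text is visible here: lightness L* depends only on fy, while a* and b* are pure channel differences.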

Distance Metrics.
The Euclidean distance is computed between two points q and t, where q denotes the query image and t a database image. For t = (t1, t2, t3, ..., tn) and q = (q1, q2, q3, ..., qn), the Euclidean distance is

d(q, t) = √( Σ_{i=1}^{n} (qi − ti)² ).

The reranking algorithm is applied after the Euclidean distance is computed. The datasets used are listed in Table 2 and are taken from the Kaggle website. The performance of the proposed technique is measured in terms of precision, recall, and accuracy. When compared to other similar color approaches that are already in use, our combined color feature approach outperforms them.
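The distance computation and top-n selection can be sketched as follows (a minimal illustration assuming NumPy; the toy 2-D vectors stand in for the real color descriptors):

```python
import numpy as np

def top_n(query, database, n=3):
    """Rank database feature vectors by Euclidean distance to the query
    and return the indices of the n closest images."""
    query = np.asarray(query, dtype=float)
    dists = [float(np.linalg.norm(query - np.asarray(t, dtype=float)))
             for t in database]
    return sorted(range(len(database)), key=dists.__getitem__)[:n]

db = [[0, 0], [3, 4], [1, 1], [10, 10]]
print(top_n([0, 0], db, n=2))  # [0, 2]
```

A subsequent reranking pass would reorder this shortlist using the combined color features rather than a single descriptor.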

Experimental Evaluation and Analysis
The databases are entirely populated with color images; humans, animals, flowers, food, and other objects are included. CBIR approaches rely heavily on collections such as the WANG and Holidays datasets. When compared across several database images, the retrieval result was high. More than 50 query images are used to test our proposed approaches; these tests help us determine the outcome and level of achievement. Finally, we used 80% of the dataset for training and 20% for testing in our Corel database experiment.
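The 80/20 split mentioned above might be performed as in this small sketch (the file names and the fixed seed are illustrative assumptions):

```python
import random

def split_dataset(paths, train_frac=0.8, seed=42):
    """Shuffle a list of image paths and split it into train/test parts."""
    paths = list(paths)
    random.Random(seed).shuffle(paths)  # fixed seed for reproducibility
    cut = int(len(paths) * train_frac)
    return paths[:cut], paths[cut:]

train, test = split_dataset([f"img_{i}.jpg" for i in range(100)])
print(len(train), len(test))  # 80 20
```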

Performance Analysis.
Three important metrics are analyzed for performance evaluation. Efficiency and accuracy are evaluated over the Corel Database using precision (Pr), recall (Re), and F-score (Fs), computed with equations (5) to (7):

Precision = relevant images fetched / total images fetched,              (5)
Recall = relevant images fetched / total relevant images,                (6)
F-score = 2 · (Precision · Recall) / (Precision + Recall).               (7)

The proposed method's overall performance evaluation has proven satisfactory. Figure 2 displays the outcomes of finding the top images. The triadic color method, also known as the hybrid method, shows how to obtain the upgraded top similar images in the correct order. The procedure is summarized in Algorithm 1.

Algorithm 1: Triadic color procedure.
(1) Choose the most important query images.
(2) The input image is in RGB color space.
(3) Detect the important objects.
(4) Apply the segmentation methods.
(5) Look for the detected object (R, G, and B).
(6) Calculate the correlogram for each RGB-based color feature extraction frame using equation (1).
(7) Apply the YCbCr techniques: for each YCbCr-based color feature extraction image channel, calculate the mean and standard deviation.
(8) Apply the L*a*b* methods.
(9) Construct the final feature vector (auto-correlogram, L*a*b* color moments, and YCbCr color moments) that will represent the image with numerical values.
(10) Use the C-means algorithm to cluster the database's image content.
(11) Using the Euclidean distance, calculate the distance between the input image and each cluster's centroid, and then apply reranking methods to find the shortest distance.
(12) Return the top n images.

Figure 3 depicts a per-image performance metric comparison, whereas Figure 4 depicts a per-dataset performance metric comparison.
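The precision, recall, and F-score metrics above can be checked with a short sketch (the image IDs in the example are made up):

```python
def precision_recall_f(relevant, retrieved):
    """Precision, recall, and F-score for a single query."""
    relevant, retrieved = set(relevant), set(retrieved)
    hits = len(relevant & retrieved)          # relevant images fetched
    pr = hits / len(retrieved)                # fraction of fetched that are relevant
    re = hits / len(relevant)                 # fraction of relevant that were fetched
    fs = 2 * pr * re / (pr + re) if pr + re else 0.0
    return pr, re, fs

# 4 of the 5 retrieved images are relevant; 8 relevant images exist in total.
print(precision_recall_f(range(8), [0, 1, 2, 3, 99]))  # (0.8, 0.5, ~0.615)
```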
When compared to the current methods, the suggested method significantly increases the average value of the performance metrics, as shown in Figure 5. From Figure 3, it can be observed that the tire image is recognized with 85% accuracy, which is higher than the other images. The guitar is the least well identified image, with 78% accuracy. The chair and tire recognition accuracies differ by at most about 1%. The F-score for the airplane and chair equals 70%. The comparison analysis for the various datasets is depicted in Figure 4. The accuracy on the Corel Dataset is 88%, the highest among the compared datasets. The Coil-100 dataset gives the lowest F-score, 86%, which is 7-8% lower than that of the Corel dataset. From Figure 5, image partitioning has the lowest accuracy (83%), and the highest accuracy observed in comparison with the other methods is 93%.

Conclusion
In this study, a novel model of combined color features is presented that computes distinct color features from color information with a limited number of chosen pixels, making it computationally appealing. The suggested method performed better than earlier detectors and was also compared to other color-characteristic techniques, such as RGB, YCbCr, and L*a*b* alone; it is less noise-sensitive and results in more similar retrieved images. The Corel Dataset was used to test this method, and the results showed that most of the images had issues. The results of the evaluation revealed that the suggested triadic color features method outperforms the existing methods in terms of accuracy.
Data Availability

The article contains the data that were utilized to support the study's findings.

Conflicts of Interest
The authors declare that they have no conflicts of interest.