Application of K-Nearest Neighbor Classification for Static Webcams Visibility Observation

Visibility observations and accurate forecasts are essential in meteorology, requiring a dense network of observation stations. This paper investigates image processing techniques for object detection and visibility determination using static cameras. It proposes a comprehensive method that includes image preprocessing, landmark identification, and visibility estimation, mirroring the observation process of professional meteorological observers. This study validates the visibility observation procedure using the k-nearest neighbors machine learning method across six locations, including four in the Czech Republic, one in the USA, and one in Germany. By comparing our results with professional observations, the paper demonstrates the suitability of the proposed method for operational application, particularly in foggy and low visibility conditions. This versatile method holds potential for adoption by meteorological services worldwide.


Introduction
Observing visibility is a critical component of aviation meteorology, representing a key safety concern that is closely monitored both by human observers and sophisticated instruments. The World Meteorological Organization [1] and the International Civil Aviation Organization [2] have standardized the observation of visibility. Visibility is influenced by factors such as the number of particles in the air, observed precipitation, the angle of the sun, or a combination of these factors. Forecasting visibility is challenging, making it one of the most difficult variables to predict accurately, and it heavily relies on an extensive and precise observation network. However, visibility can exhibit significant local variations, particularly in areas such as valleys near watercourses, vegetation, or human settlements. Therefore, it is essential to maintain a dense network of observation points to ensure comprehensive coverage. Operational meteorologists often leverage various sources, such as webcams, to identify significant local variations in visibility.
The significance of this topic is underscored by the considerable number of studies addressing the issue.
These studies explore the analysis of moving camera images or provide a comprehensive assessment of image conditions, including clouds, phenomena, and surface conditions. For instance, a research paper conducted by Minnan Normal University [3] addressed the technical aspect of assessing the degree of image fogging, which could later be employed to identify boundary scenarios where only the contour of an object is visible. Another study [4] focused on extracting the fog effect from images to recognize objects such as road signs or lane markings in traffic. The authors aimed to develop a real-time processing algorithm applicable to moving camera-based traffic scenarios. Although these studies do not directly determine visibility values, they offer intriguing technical insights into handling foggy images.
In their work, the same authors [5] primarily concentrated on measuring visibility in fog using cameras installed in moving vehicles, applying the principles of Koschmieder's law. They divided the problem into two or three subproblems to be addressed progressively, with a particular emphasis on daytime fog conditions. Previous studies [3, 5] have utilized images captured under specific lighting conditions, predominantly during daylight hours. However, in practical situations, we encountered varying lighting conditions, including daylight, nighttime scenes, and transitional periods such as dawn and dusk, where visibility can be perceived differently.
The work conducted by Palvanov and Cho [6] goes beyond previous studies by proposing a visibility forecasting procedure that leverages deep integrated convolutional neural networks. They trained their model using an extensive dataset comprising 3 million outdoor images. The authors claim that their approach achieves superior performance compared to classical visibility prediction methods. However, it is important to note that their work is restricted to daytime imagery, which imposes significant limitations on the scope of the research problem.
A study closely related to the research presented here has been published focusing on the Alaska region [7]. The authors aimed to determine the prevailing visibility and assimilation possibilities using a method that involves processing images captured by 360-degree range cameras. Their approach closely resembles the work conducted by professional aviation observers. The authors also concluded that camera observations are particularly effective in monitoring very low visibilities, specifically under conditions of low instrument flight rules and instrument flight rules.
The research question formulated for this study aims to investigate the feasibility of using ordinary cameras, whose outputs are accessible on the Internet, for reliable visibility observations in order to increase the density of the station network. The objective is to identify suitable image processing methods for determining visibility during both day and night and to establish a methodology for quantifying visibility. The proposed method must provide clear and interpretable results, ensuring its potential presentation to the meteorological community and operational usage.
The purpose of this research is to utilize these image data in a manner consistent with the practices of professional meteorologists and to propose an automated method that can replace their efforts across a dense network of webcams. The proposed method should be applicable in practice, have low computational requirements, and allow for easy modification if necessary.

Data and Processing
The reliable assessment of visibility for an object in a camera image necessitates the fulfillment of specific requirements, which include the following: (i) Static camera: The camera should be stationary, ensuring that the object being observed does not change its position within the frame. (ii) Reliable provider: The camera's provider should be dependable, guaranteeing that the acquisition date of the image is accurate, and any malfunctions that occur should be reported and resolved promptly.
(iii) Object stability: The object being observed should not undergo significant changes over time. This includes the object not moving, remaining consistent throughout different seasons (e.g., trees or seasonal installations), and not being obscured by other objects (e.g., traffic signs or lower portions of buildings). (iv) Night visibility: The object should be visible at night, either due to illumination or the presence of lights, enabling observation during nighttime conditions. (v) Object location on the map: The ability to locate the object accurately on a map is necessary to obtain the distance between the camera and the object.
These conditions are often fulfilled by high-rise buildings, as they typically carry aviation safety lights and undergo fewer alterations compared to residential buildings. Images from static cameras are commonly provided by national meteorological services such as the Czech Hydrometeorological Institute (CHMI) or the German Meteorological Service (Deutscher Wetterdienst, DWD).
By adhering to these requirements, the research aims to establish a reliable method for assessing visibility using static camera images from selected locations.

Imagery Processing
Options. From a meteorological perspective, several objectives can be set for analyzing visibility in camera images. These objectives include the following: (i) Determination of phenomena reducing visibility: Identify and analyze specific weather phenomena that contribute to reduced visibility, such as fog, haze, rain, or snow. This involves detecting and quantifying these phenomena within the camera images. (ii) Precise visibility determination: Develop a method to accurately measure and quantify visibility in the camera images. This requires comparison with real, professionally verified observations. (iii) Shading of landmarks: Assess the impact of shading on the visibility of landmarks or prominent objects within the camera images. (iv) Determination of visibility limits: Determine the maximum range of visibility in the camera images, taking into account factors such as atmospheric conditions, lighting, and object characteristics.

Advances in Meteorology
From a broader image analysis perspective, the tasks can be divided into three basic situations: (i) Unreduced visibility: Analyze images where visibility is not significantly reduced. These images serve as reference points for evaluating visibility conditions and establishing baseline characteristics of the scene. (ii) Reduced visibility: Analyze images where visibility is visibly reduced due to atmospheric conditions. This involves quantifying the extent and severity of the visibility reduction, providing insights into the presence and intensity of weather phenomena. (iii) Night: Analyze images captured during nighttime conditions. This entails developing techniques to handle low-light conditions, ensuring proper visibility assessment and analysis during nighttime hours, when the scene is completely different and landmarks might be invisible.
In line with the observer's approach when visually assessing visibility, three parameters play a crucial role. These parameters form the basis for exploratory analysis and preprocessing of the data, as they help uncover relevant patterns and variations in the camera images.

Color
Histograms. From a general perspective, certain assumptions can be made about the appearance of camera images under different visibility conditions: (i) Unreduced visibility: In images with unreduced visibility, colors are expected to be vibrant, diverse, and exhibit a wide spectrum. The image will contain a range of colors representing various elements within the scene. (ii) Reduced visibility: In images with reduced visibility, colors tend to be predominantly grey. In specific cases, such as sand obscuration, the dominant color may lean towards ochre. The overall color palette will be limited and subdued due to the presence of atmospheric particles or weather conditions that hinder visibility. (iii) Night: Nighttime images will primarily consist of dark or black scenes with prominent bright areas. The contrast between dark and bright regions may be more pronounced. The limited amount of available light during nighttime conditions can lead to a different distribution of colors in the scene.
In order to provide a basic overview of the color representation under different visibility conditions, histograms of the color distribution in RGB space have been created. These histograms can offer insights into the prevalence and distribution of colors within the image (Figure 2).
The histograms exhibit certain readable characteristics, but they can also be ambiguous and challenging to interpret on their own. Thus, it is important to acknowledge that histograms are an essential part of the analysis but cannot solely serve as a classification parameter. They provide a general understanding of the image's distinct characteristics before preprocessing. In more intricate procedures, histogram analysis can serve as an initial classification method, enabling the application of different procedures tailored to specific situations such as day, dawn, night, or other scenarios that can be effectively captured by the histogram data [24].
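The histogram step above can be sketched in plain NumPy. This is a minimal illustration assuming 8-bit RGB frames loaded as arrays; the synthetic night-like frame below is illustrative, not an image from the paper's dataset.

```python
import numpy as np

def rgb_histograms(image, bins=256):
    """Per-channel histograms over the 0-255 range for an (H, W, 3) uint8 image."""
    return {
        channel: np.histogram(image[..., i], bins=bins, range=(0, 255))[0]
        for i, channel in enumerate(("red", "green", "blue"))
    }

# Synthetic "night" frame: mostly black with one small bright light source.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[40:42, 40:42] = 255
hists = rgb_histograms(frame)
print(hists["red"][0], hists["red"][-1])  # dark pixels dominate the first bin
```

A night scene concentrates counts in the lowest bins with a small spike at the bright end, which is exactly the signature discussed above.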

Color Scale Operations.
In general, visibility observation relies more on the contrast of objects than on their specific colors. Therefore, it is important to explore techniques that can enhance contrast, such as isolating single-channel values, inverting colors, filtering raster values, or adjusting the image's color properties [22]. In addition, the method should be applicable to both daytime and nighttime conditions.
As previously mentioned, visibility observations primarily involve identifying objects with optimal contrast against the background sky. Therefore, methods that maximize contrast regardless of sky conditions are preferable.
One approach to enhancing contrast is contrast stretching, which scales the image values from a limited interval to the entire scale of the color band [8]. Several contrast-accentuating methods were tested and evaluated (see Figure 3).
Overall, these techniques aim to improve the visibility and distinguishability of objects by enhancing their contrast against the sky background.
The purpose of using these methods was to identify suitable techniques for separating the sky and the horizon or objects on the horizon. From the analysis, it appears that the second and third methods (contrast stretching and histogram equalization) exhibit promising potential. However, it should be noted that these two methods are quite similar in their effects. On the other hand, adaptive equalization did not demonstrate significant contrast differentiation. Moreover, histogram equalization amplified noise in the image, which could be problematic in scenarios involving precipitation, mist, or reduced lighting conditions.
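Contrast stretching can be sketched in plain NumPy as a percentile-based rescale. The 2nd-98th percentile bounds here are a common illustrative choice, not values taken from the paper; scikit-image's exposure module offers equivalent routines (rescale_intensity, equalize_hist).

```python
import numpy as np

def contrast_stretch(gray, low_pct=2, high_pct=98):
    """Map the [low_pct, high_pct] percentile interval onto the full 0-255 scale."""
    lo, hi = np.percentile(gray, (low_pct, high_pct))
    stretched = (gray.astype(float) - lo) / (hi - lo)
    return (np.clip(stretched, 0.0, 1.0) * 255).astype(np.uint8)

# A low-contrast ramp occupying only the narrow 100-139 intensity range.
img = np.arange(100, 140).reshape(5, 8).astype(np.uint8)
out = contrast_stretch(img)
print(out.min(), out.max())  # the limited interval now spans 0-255
```

The relative ordering of pixel intensities is preserved; only the dynamic range is expanded, which is why the technique accentuates an object's contrast against the sky without altering the scene structure.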
Another method examined was image decolorizing, which involves converting the image to grayscale. This technique can be useful for enhancing contrast between elements such as the sky, buildings, or light, as it eliminates differences between color bands. Consequently, the resulting image primarily conveys brightness and contrast.
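Decolorizing reduces to a weighted sum of the three channels. As a minimal sketch, the weights below are the luminance coefficients used by scikit-image's rgb2gray, applied here with plain NumPy; the test images are synthetic.

```python
import numpy as np

# Luma weights as used by skimage.color.rgb2gray.
LUMA = np.array([0.2125, 0.7154, 0.0721])

def to_gray(rgb):
    """Collapse an (H, W, 3) image to brightness, discarding color information."""
    return rgb.astype(float) @ LUMA

white = np.full((2, 2, 3), 255, dtype=np.uint8)
red = np.zeros((2, 2, 3), dtype=np.uint8)
red[..., 0] = 255
print(to_gray(white)[0, 0], to_gray(red)[0, 0])
```

A pure red and a pure green patch of equal intensity map to different grey levels, so the conversion keeps brightness differences while discarding hue, as described above.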
To effectively address contrasting surfaces, the power law transformation method was used. This technique adjusts the image to align more closely with human visual perception, resembling the observations made by human observers [9]. This is due to the nonlinear relationship inherent in human eye perception, contrasting with the linear relationship associated with camera lenses. In this case, two specific methods were tested: logarithmic and gamma transformations. Gamma correction is a nonlinear operation that is used to encode and decode luminance in a manner that aligns with human visual perception [10]. The transformation function of gamma correction can be expressed as V_out = V_max (V_in / V_max)^gamma, where V denotes the pixel intensity values (the output, maximal, and initial values, respectively) and the gamma power parameter is chosen by the user.
Gamma correction has been shown to help separate objects on the horizon and increase their contrast (Figure 2). This is especially useful when viewing an object against the sky. Thus, the method appears to be suitable for extracting landmarks.
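The gamma transformation above can be sketched directly in NumPy. The gamma value 0.5 is an illustrative choice (gamma < 1 lifts midtones); skimage.exposure.adjust_gamma provides the same operation.

```python
import numpy as np

def gamma_correct(gray, gamma):
    """Apply V_out = V_max * (V_in / V_max) ** gamma to an 8-bit image."""
    v_max = 255.0
    return (v_max * (gray.astype(float) / v_max) ** gamma).astype(np.uint8)

img = np.array([[0, 64, 255]], dtype=np.uint8)
bright = gamma_correct(img, 0.5)  # gamma < 1 brightens midtones
print(bright)
```

The endpoints 0 and 255 are fixed points of the transform, while intermediate values are raised, which is what pulls dimly lit landmarks out of a dark sky.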

Edge Detection.
To complement the image decolorizing and histogram modification methods, it is important to incorporate techniques for detecting visibility edges and areas with contrast variations. In this regard, several readily available methods in Python packages can be utilized.
One effective method for object detection is contour detection based on the marching squares algorithm, the two-dimensional analogue of marching cubes [11]. For testing purposes, the skimage.measure.find_contours method [8] was used, and the results were visualized using separate colors in Figure 4. The results obtained using skimage.measure.find_contours show promise, even during dawn hours when the nature of objects transitions from being visible to being detectable by their lights. The method effectively separates contrasting objects, as evident from the picture. However, it is important to note that the performance of this method is highly dependent on lighting conditions and may yield different results at varying angles of the sun or moon [23].
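A minimal example of this contour step with skimage.measure.find_contours follows. The synthetic bright square stands in for a lit landmark, and the 0.5 iso-level is an illustrative choice.

```python
import numpy as np
from skimage import measure

# Binary scene: one bright blob on a dark background.
scene = np.zeros((12, 12))
scene[4:8, 4:8] = 1.0

contours = measure.find_contours(scene, level=0.5)
print(len(contours))  # one closed contour around the blob
```

Contours that do not touch the image border are returned closed (the first and last points coincide), which makes it straightforward to count distinct contrasting objects in a frame.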
Considering the previously mentioned preprocessing methods, where the image values were converted to greyscale, the next step is to test the edge detection procedure. This approach capitalizes on the characteristics of the preprocessed image and identifies areas with high contrast. The Sobel (also known as Sobel-Feldman) and Roberts operators appear to be suitable choices for this purpose. These operators enhance the detection of edges and provide valuable information for further analysis [12].
Although the results of the Sobel and Roberts operators appear similar (Figures 5(a) and 5(b)), it is evident that the Sobel-Feldman operator effectively distinguishes edges within the image. However, it is important to highlight that without prior preprocessing steps such as greyscale conversion or histogram equalization, the Sobel edge detection method would be challenging to apply to night imagery. Figure 5(c) specifically showcases the results obtained solely through the Sobel edge detection technique. Figure 5 demonstrates the effectiveness of the Sobel operator in highlighting edges and detecting areas of contrast. By incorporating appropriate preprocessing techniques, the Sobel-Feldman method can be utilized to enhance edge detection and facilitate further analysis, particularly in daylight or well-lit conditions.
Testing the filter on unadjusted images shows that the procedure is useful for object detection both day and night. Edges are detected better in black-and-white images with increased contrast.
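The Sobel-Feldman step can be sketched directly in NumPy as shown below (skimage.filters.sobel is the library equivalent); the step-edge image is a synthetic stand-in for a building silhouetted against bright sky.

```python
import numpy as np

SOBEL_X = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
SOBEL_Y = SOBEL_X.T  # the transposed kernel responds to horizontal edges

def sobel_magnitude(gray):
    """Gradient magnitude via the Sobel-Feldman operator (valid region only)."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3].astype(float)
            out[i, j] = np.hypot((SOBEL_X * patch).sum(), (SOBEL_Y * patch).sum())
    return out

# A sharp vertical edge: dark object on the left, bright sky on the right.
img = np.zeros((10, 10))
img[:, 5:] = 255
edges = sobel_magnitude(img)
print(edges[0, 3], edges[0, 0])  # strong response at the edge, zero elsewhere
```

The response is nonzero only where the 3 x 3 window straddles the intensity step, which is why the operator isolates object boundaries so cleanly after greyscale conversion.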

Object Detection
The primary objective of this research is not limited solely to image processing but rather focuses on determining visibility conditions and identifying the current situation depicted in the image. This includes detecting phenomena such as fog, mist, or nighttime fog. A significant challenge in classifying the entire image dataset arises from the potential variability of objects present. Factors such as changing facades of buildings, moving vehicles, and temporary obstructions necessitate the identification and tracking of reference points within the image. The proposed preprocessing method will be evaluated on these landmarks to assess its effectiveness in detecting them both during the day and at night.
This approach aligns with the workflow of a weather observer and offers advantages in terms of intervention and classification. Unlike neural networks, we can actively participate in the classification process by identifying multiple landmarks and determining whether any changes or disappearances have occurred. This level of control is possible due to our precise knowledge of the specific landmarks being tracked.

Object Determination.
For the test image from the Brno camera, the building on the left side was chosen as a visible landmark (Figure 6). The chosen location meets the specified criteria of visibility, being well lit and unlikely to be obstructed by temporary objects. However, the measured horizontal distance of 1500 m falls below the desired threshold from a meteorological standpoint. Nonetheless, it still provides valuable information regarding visibility reduction.
In Figure 7, the effectiveness of the preprocessing techniques can be observed. The image is transformed through greyscaling (using the viridis colormap for improved contrast), gamma adjustment, and the Sobel-Feldman filter for edge detection. These methods successfully detect object features in conditions of both good and poor visibility. In addition, the image texture undergoes significant changes in situations of reduced visibility, resulting in a distinct texture that can be effectively captured by the algorithms.
By employing this method, points will be identified on each tested image using a consistent procedure. As the reference images exhibit relatively clear characteristics, they can be utilized for classifying the tested images. However, this approach requires the supervision of an individual who must determine the appropriate coordinates of the object within the image. This step is crucial, as the object needs to be consistently cropped with the same margins in order to achieve the highest possible accuracy.
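The consistent cropping can be sketched with NumPy slicing, using the (u, d, l, r) index convention shown in Figure 10; the coordinate values below are illustrative operator-chosen numbers, not the paper's actual landmark coordinates.

```python
import numpy as np

def crop_landmark(image, u, d, l, r):
    """Cut the landmark segment [u:d, l:r] out of the full frame."""
    return image[u:d, l:r]

# A stand-in frame; a real frame would be the loaded greyscale webcam image.
frame = np.arange(100 * 100).reshape(100, 100)
segment = crop_landmark(frame, u=10, d=30, l=40, r=80)
print(segment.shape)  # (20, 40)
```

Because the same (u, d, l, r) values are applied to every frame from a static camera, each cropped segment is pixel-aligned with the reference segment, which is the precondition for the subtraction and nearest-neighbor comparisons that follow.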

Conditions Classification.
A fully supervised classification approach could be employed using the image subtraction method [8]. In this method, the resulting histogram or image should ideally be an empty array or consist solely of black color values. This indicates that there is no significant difference between the tested image and the reference image, allowing for accurate classification (Figure 6).
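A minimal sketch of this subtraction check follows; the tolerance value is an assumed illustrative threshold, not a value from the paper.

```python
import numpy as np

def matches_reference(test_seg, ref_seg, tol=10):
    """True if the absolute pixel difference stays within tol everywhere."""
    diff = np.abs(test_seg.astype(int) - ref_seg.astype(int))
    return bool((diff <= tol).all())

ref = np.full((20, 20), 128, dtype=np.uint8)    # clear-conditions reference
foggy = np.full((20, 20), 200, dtype=np.uint8)  # washed-out test segment
print(matches_reference(ref.copy(), ref), matches_reference(foggy, ref))
```

Casting to int before subtracting avoids uint8 wraparound; an identical segment yields an all-zero difference, while a fog-brightened segment exceeds the tolerance everywhere.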
The method demonstrated is generally effective under normal conditions. However, variations in lighting or other changes in appearance could lead to higher values in the histograms. Therefore, using image subtraction alone, it becomes necessary to establish threshold values to distinguish between random influences and changes in weather conditions.

Figure 4: Contour detection using the scikit-image package [8] during dawn. The figure showcases the results of the contour detection algorithm, highlighting the detected contours within the image.

K-Nearest Neighbor Classification.
By selecting a segment from the images, we obtained representative reference images. This allows us to use the K-nearest neighbor machine learning classification method. The K-nearest neighbor method is highly efficient, with a short training time, and is easily interpretable by all users [25]. It is widely utilized in various domains, including monument care [13, 14], medical science [15], and financial analysis [16]. The method aims to identify the most similar set of points (objects; in our case vectors, since the matrix of pixels is flattened into one dimension) from a training set of labeled objects [17]. Based on a specified parameter, a certain number of nearest points (nearest neighbors) are selected for consideration (Figure 8). A distance calculation is performed to determine the nearest neighbors. When the parameter k is set to 1, the method classifies based on only the single nearest neighbor.
Due to the significant differences among individual image segments, the parameter k was set to k = 1. In situations where we have more controlled labeled data, a higher value of k could be utilized. This would be beneficial when the object's appearance varies under similar conditions, such as changes in illumination during nighttime. However, it should be noted that there is no assumption that these highly distinct images (e.g., clear, foggy, or nighttime) would be misclassified.
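The classification step can be sketched with scikit-learn's KNeighborsClassifier, the classifier named in the paper's workflow (Figure 10). The flattened reference segments below are synthetic stand-ins for the labeled landmark crops, and the class names are illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)

# Flatten each 20x20 reference segment into a 400-dimensional feature vector.
clear = np.full((20, 20), 60) + rng.integers(0, 20, (20, 20))   # textured scene
fog = np.full((20, 20), 180) + rng.integers(0, 5, (20, 20))     # washed out
night = np.full((20, 20), 10) + rng.integers(0, 5, (20, 20))    # dark
X_train = np.stack([clear.ravel(), fog.ravel(), night.ravel()])
y_train = ["clear", "fog", "night"]

knn = KNeighborsClassifier(n_neighbors=1)  # k = 1, as in the study
knn.fit(X_train, y_train)

# A new frame close in pixel space to the foggy reference.
test = (np.full((20, 20), 178) + rng.integers(0, 5, (20, 20))).ravel()
print(knn.predict([test])[0])  # fog
```

With k = 1 the prediction is simply the label of the nearest reference vector in Euclidean distance, which keeps the decision fully interpretable: one can always inspect which labeled segment a test image was matched to.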

Proposed Method
In accordance with WMO regulation number 8 [1], each station is required to create a plan detailing the objects used for observation, including their distances and bearings from the observer. This requirement was crucial and led to the establishment of a comprehensive workflow procedure consisting of four fundamental steps (Figure 9): data acquisition, preprocessing, object segmentation, and result determination.
It is important to highlight that the manual aspect of the work does not have to be laborious. For instance, Python provides the plotly package [18] for interactive image visualization, allowing the display of frame coordinates as tooltips when hovering the cursor. This facilitates the swift determination of object coordinates within the image. After thorough review, testing, and careful consideration of the strengths and limitations of each annotated method, the subsequent processing procedure was established (Figure 10).
The entire process is characterized by its transparency and simplicity. The code is concise and easily adaptable, allowing for convenient modifications as needed. In addition, the process is highly efficient and can be further optimized through pretraining and model saving techniques. These enhancements contribute to a streamlined workflow and improved time efficiency.

Results
The proposed method underwent testing on various objects located in different areas. In addition to the Brno campus building, which was previously used as an example, three other images from Skalky, Kobylí, and Klínovec were utilized for testing purposes. These images had a resolution of 1600 × 1200 pixels. Furthermore, two foreign cameras, in Salt Lake City, USA, and Offenbach am Main, Germany, were employed for special testing and validation in conjunction with regular meteorological reports. The individual images of the first selected landmark, a radio tower situated 1500 m away from the Skalky radar station, are displayed in Figure 11.
The third station, located in the village of Kobylí in the South Moravian region, did not have any lights. However, the image preprocessing method demonstrated remarkable success under all conditions. The power line post and the cycling path were selected as orientation points (Figure 13(a)). Their visibility under nighttime conditions is illustrated in Figure 12. It is worth noting that, as the site is situated at the edge of the village, some residual light may have aided in the identification of individual objects.
In the event of classification failure during the testing period, an alternative preprocessing filtering method can be applied for this specific case. Since the power line post is vertical, the Prewitt vertical filter (Figure 13(c)) can be utilized, which is capable of detecting objects that are particularly challenging to observe. The Prewitt filter is specifically designed for extracting the vertical gradient using a 3 × 3 kernel. Its results indicate how the image changes at a specific point, providing insights into the presence and orientation of edges [20].
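The Prewitt vertical filter can be sketched analogously to the Sobel step (skimage.filters.prewitt_v is the library equivalent); the synthetic image below mimics a thin vertical post against a dark sky.

```python
import numpy as np

# 3x3 Prewitt kernel for vertical structures (horizontal intensity gradient).
PREWITT_V = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=float)

def prewitt_vertical(gray):
    """Vertical-edge response of the Prewitt filter (valid region only)."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = abs((PREWITT_V * gray[i:i + 3, j:j + 3]).sum())
    return out

# A thin bright vertical line on a dark background, like a lit post.
img = np.zeros((8, 8))
img[:, 4] = 255
resp = prewitt_vertical(img)
print(resp[0].tolist())
```

The filter responds strongly on both sides of the bright column and ignores horizontal structure entirely, which is what makes it a useful fallback for hard-to-see vertical objects such as the power line post.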
In Figure 12(d), there are visible lights; however, these lights are not consistently turned on, making them unsuitable for classification. They can serve as a backup point when the lights are on.
If the classification process is successful, there is no need to incorporate an additional branch of the algorithm to apply the Prewitt filter. However, it is always important to have contingency options available, particularly in the field of meteorology.
Moving on to the fourth station, the Klínovec ski resort, the landmark chosen for observation was the cable car column (Figure 13). Despite being located at a distance of only 200 m, which limits its usefulness for visibility assessment, this case effectively demonstrates that image preprocessing can successfully resolve objects against the sky background when residual light is present.
The classification testing was conducted on the available landmarks, with an additional labeled training image featuring a cable car seat hanging in front of the column to account for changes in its shape. However, it is expected that the column without the seats would be the most similar nearest neighbor.

Testing Classifcation.
Given the high resolution and visibility of the landmarks in the classified images (from the training set), extensive testing of similar situations was not necessary. It is assumed that the classification will accurately evaluate most common scenarios but may incorrectly assess situations with significant changes in the appearance of the landmarks. To simplify the classification process, categories were established that take into account the varying distances to the landmarks. While the use of meteorological terminology has been partially abandoned, with fog traditionally defined as visibility below 1 km, it was assumed that full obscuration of a landmark at a distance of 1500 m indicates low visibility. The classification output primarily indicates the presence or absence of the landmark. The test set imagery was carefully selected to encompass all the different situations for thorough evaluation. The classification categories depend on the quality of the observed points or the absence thereof. The results of the KNN classification for k = 1 are presented in Table 1.
It is indeed a notable achievement that the classifier attained a 100% success rate. However, it is crucial to recognize that the effectiveness of a classifier relies heavily on the quality of its training set. Situations such as building reconstruction, camera malfunction, or the absence of lighting (which is uncommon in high-rise buildings for safety reasons) can potentially pose challenges for the classifier's performance.
One significant advantage of the current approach is that the objects are selected by a human operator. This human involvement ensures accuracy and helps mitigate potential inaccuracies. While the introduction of automatic detection or increased autonomy may reduce the amount of human work required, it could also introduce more inaccuracies.

Figure 9: Basic workflow incorporating initial human preprocessing, involving the identification of object positions and data retrieval. The stages are image acquisition and object location (manual), followed by image processing and object segment classification (automatic).

Figure 10: Image loading, preprocessing (reading the file and converting to greyscale), and classification process using Python. The image is preprocessed and classified using various methods in Python, including slicing indexes (u, d, l, and r) to specify the region of interest. The KNN classifier from the sklearn package [19] is used for the classification task.
Hence, the current level of human intervention provides a valuable advantage in maintaining the reliability of the classification process.

Complex Situation Testing.
The next image tested was obtained from a south-facing camera operated by the Department of Atmospheric Sciences at the University of Utah, Salt Lake City, as part of the MesoWest project [21]. This camera captures images with a resolution of 1280 × 960 pixels and features both illuminated objects and a relatively rugged horizon.
Two main landmarks were selected for testing: the Social and Behavioral Sciences building, located 500 meters away, and the Rice-Eccles Stadium, situated 650 meters away (Figure 14). In addition, a third landmark, the suburb and ridge located 5 kilometers away, was chosen to assess potential mist conditions. These objects were tested to determine the three categories of fog, mist, and good visibility, which do not differentiate between day and night, as all three segments contain illuminated objects.
Each frame in the image was assigned a number and name, along with visibility values for night and day conditions. The visibility status indicates whether the object is visible or not and the corresponding phenomenon it represents, such as mountains obscured (HOBSC), mist (BR), or fog (FG). In addition, the position of the boundary pixels was recorded (Table 2).
In the classification process, mountain ridges were assigned a visibility value of zero, as this represents the lowest guaranteed visibility. Therefore, when the mountain peaks are visible, the correct statement is that the visibility is greater than or equal to 0 m.
Despite the addition of this information, the classification process remained unchanged. All points were classified in all images, but the results were not considered if the landmark visibility was not reliable.
The algorithm produced only two misclassifications out of 80 tested images (from November 2022 to February 2023) that covered all the target classes. In these cases, the algorithm failed to recognize the building as visible, despite it being manually labeled as such. However, these two errors are subject to debate and require further analysis (refer to Figure 15 for visual reference). These classification errors can be considered marginal situations, since the absence of illumination at the landmark can be attributed to the low quality of the observation point.
In the latter case, where the visibility is borderline, even a human observer might estimate the visibility marked by the building to be less than 500 m. This highlights the need for a coherent labeling methodology.
Considering the distance between the observation site and Salt Lake City International Airport (12 km), a manual comparison revealed significant differences in the observed visibility due to varying local conditions. In addition, the airport's location to the west of the south-facing webcam contributed to the contrasting results.
Although the automatic comparison of observed visibility and webcam results was deemed unrepresentative, the relatively promising outcome encouraged further research to enhance the observations by introducing new classes of objects.
Given the captivating nature of the scene, the observer naturally contemplated other orientation points (frames 1, 2, and 5 in Figure 14). Consequently, the possibility of observing additional phenomena and situations was explored. Examining the image, it becomes evident that there may be instances where the mountains are obscured while visibility remains high, or where the base of the mountains is  obscured but the peaks remain visible, among other complexities. Tis introduces additional characteristics to consider in more intricate scenarios. Out of the 80 images used, only one new error occurred. One image from early night was mistakenly classifed as a good visibility situation, although it is more likely to be a mist (Figure 16).
The low quality of the landmark area should be taken into consideration in the analysis. To determine mist at night, the lights in the suburbs were used as a reference. Therefore, it would be more appropriate to use this landmark only during the day. Alternatively, the classification model should be refined and trained more accurately for comparable situations.
Figure 14: Landmark frames (see Table 2). Frames 1 and 2: tops of the mountains for "hills obscured" determination. Frame 3: suburbs and mountain ridge base, including Jacks Peak and Pencil Point, situated 5 km away. Frame 4: social and behavioral sciences building, located 500 meters away. Frame 5: daylight visibility assessment. Frame 6: Rice-Eccles Stadium, situated 650 meters away. The minus sign indicates a lower intensity of the phenomenon. The boundary pixels of each frame in the image are indicated by U, D, L, and R.
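Defining a landmark by its boundary pixels U, D, L, and R amounts to a simple rectangular crop of the webcam image. A minimal sketch in Python (the function name and the toy intensity grid are illustrative, not taken from the paper's code):

```python
def crop_frame(image, u, d, l, r):
    """Extract a landmark frame from an image given its boundary
    pixel rows (u = up, d = down) and columns (l = left, r = right)."""
    return [row[l:r] for row in image[u:d]]

# Toy 6x6 grid of pixel intensities standing in for a webcam image.
img = [[10 * y + x for x in range(6)] for y in range(6)]
frame = crop_frame(img, u=1, d=4, l=2, r=5)  # 3 rows x 3 columns
```

Each landmark is then classified independently on its cropped frame, which keeps the feature vectors small and the classifier fast.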
Figure 15: Two errors made by the KNN algorithm: both images were labeled as having visibility of at least 500 m but were classified as having lower visibility. The left image (a) could be attributed to a possible electricity outage in the buildings. In the second image (b), the visibility of edges is disputable, leading to uncertainty in labeling.

Advances in Meteorology
Regarding this camera, it is important to note that the image quality is not high, and the quality of the observed points, especially at night, is also limited. However, the diverse nature of the scene contributes to the identification of different phenomena in daylight. It can be considered a success that out of the 80 cases tested, there were only three errors in the full testing scenario, and only one error when using only high-quality landmarks.

Standard Reports Visibility Comparison.
A camera located in Offenbach am Main, Germany, was selected for testing the visibility classification against standard weather reports. This camera is operated by the German Weather Service (DWD) and is oriented towards the west, capturing views of the Frankfurt am Main city center and the International Airport (ICAO code: EDDF). The horizon in the camera's field of view offers prominent landmarks in the form of high-rise buildings, allowing for direct comparison with official observation reports issued at the airport. In this evaluation, METAR reports were used for comparison; these are issued twice an hour (at the 20th and 50th minute), providing a test set comprising approximately 190 measurements.
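For such a comparison, the prevailing visibility has to be read out of each METAR. In European-style reports it appears as a standalone four-digit group in metres (9999 meaning 10 km or more). A simplified extractor, assuming that convention (it deliberately ignores variant forms such as CAVOK, statute miles, and directional visibility):

```python
import re

def metar_visibility_m(report):
    """Return the prevailing visibility in metres from a METAR that
    encodes it as a standalone four-digit group, or None if absent.
    Word boundaries keep wind (24008KT), time (160050Z), and QNH
    (Q1013) groups from matching."""
    match = re.search(r'\b(\d{4})\b', report)
    return int(match.group(1)) if match else None

vis = metar_visibility_m("EDDF 160050Z 24008KT 3400 BR OVC003 03/02 Q1013")
```

The hypothetical report above yields 3400 m, matching the discrepant case discussed below.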
To serve as orientation points, the European Central Bank building (distance = 3 km; positions in the original imagery: 1502, 1705, 1997, and 2099) and the Commerzbank building (distance = 5 km; positions in the original imagery: 1549, 1699, 1935, and 1985) were chosen (Figure 17). It should be noted that one potential complication when examining the nighttime images is the slight blurring caused by pollution. However, this blurring should not significantly affect the differentiation between fog and good visibility.
In the initial testing phase, where only one image was used for training and each situation (visible/obscured; day/night) had four corresponding images, errors were observed in the classification of boundary situations in the test set from February 15th to 18th, 2023 (Figure 18). For the second test, the model was retrained with the erroneous image added to the training set to correct the misclassification. After this adjustment, only one mismatched value remained. This discrepancy occurred on 16th February 2023 at 0:50 UTC (METAR time code 202302160050), where the reported visibility from the airport was 3400 meters, but it was manually verified to be lower in the city. This observation further highlights the algorithm's potential benefit in recognizing different visibility values at different locations.
Figure 19: Misclassification of the visibility of the Commerzbank building (a), due to its low contrast against the background of other illuminated buildings (b).
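The correction step described above — returning a misclassified but manually verified image to the training set and refitting — is cheap, because KNN "training" is just storing labeled samples. A self-contained sketch (the two-value feature vectors stand in for hypothetical edge-intensity summaries, not the features actually used):

```python
import math
from collections import Counter

def knn_predict(train, x, k=1):
    """Majority vote of the k nearest training samples;
    train is a list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [([0.9, 0.8], "visible"), ([0.1, 0.2], "obscured")]
borderline = [0.45, 0.5]
first = knn_predict(train, borderline)   # boundary case, misclassified
train.append((borderline, "visible"))    # add the verified sample
second = knn_predict(train, borderline)  # retrained model is correct
```

Because no model parameters are optimized, "retraining" costs one list append, which is why boundary cases can be folded back in operationally.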
Regarding the second building (Commerzbank), which serves as the 5 km threshold, the same testing approach was applied. There were more instances of discrepant values, primarily on 16th February during the day, but the majority of them were correctly classified. Hence, the algorithm successfully detected the difference in visibility compared to the airport. The only notable error occurred in the night image of 17th February at 20:20 UTC, where the character of the lights may have changed due to heavy pollution or alterations in the city center's lighting (Figure 19).
It is interesting to note that the algorithm rated the image as more fog-like due to the lack of contrast between the building and its surroundings. Although the Commerzbank is the tallest building in Frankfurt, choosing a landmark that contrasts more with its background, such as the sky, would have yielded better results.
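The contrast argument can be made concrete with a simple Michelson-style measure between the mean intensity of the landmark frame and that of its backdrop; a low value warns that the landmark is a poor choice. (This metric is an illustration for landmark screening, not part of the paper's algorithm.)

```python
def contrast(frame_mean, backdrop_mean):
    """Michelson-style contrast between a landmark's mean intensity
    and its backdrop's; ranges from 0 (indistinguishable) to 1."""
    total = frame_mean + backdrop_mean
    return abs(frame_mean - backdrop_mean) / total if total else 0.0

# A bright tower against dark sky vs. a lit tower among lit buildings.
good = contrast(200, 50)   # distinct landmark
poor = contrast(180, 170)  # blends into shining surroundings
```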
The near error-free determination of object visibility can be considered a success, as there was only one error in three days of observation across three tests (two for the European Central Bank building and one for the Commerzbank building). It is worth mentioning that this error can be rectified through model retraining or by selecting a different landmark for classification.

Discussion and Conclusions
The proposed research aimed to explore the use of static cameras for visibility determination in meteorology. The methodology involved several technical steps, including image decolorization, gamma correction, the Sobel-Feldman filter, and the KNN classifier. In addition, several other preprocessing methods were demonstrated to explore additional possibilities and enhance the applicability of the proposed approach across various conditions. The preprocessing and object identification steps were carefully selected to mimic the process of professional human visibility observation and to achieve accurate results.
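The preprocessing chain named above can be sketched step by step in pure Python on a toy intensity grid. This is a minimal illustration of each operation, not the paper's exact implementation (border handling, scaling, and thresholds are omitted):

```python
def to_gray(pixel):
    """Luminosity-weighted decolorization of one RGB pixel (0-255)."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def gamma_correct(value, gamma=0.5):
    """Gamma correction of a 0-255 intensity; gamma < 1 brightens
    dark (e.g. night) scenes before edge detection."""
    return 255.0 * (value / 255.0) ** gamma

def sobel_magnitude(img, y, x):
    """Sobel-Feldman gradient magnitude at interior pixel (y, x)."""
    gx = (img[y - 1][x + 1] + 2 * img[y][x + 1] + img[y + 1][x + 1]
          - img[y - 1][x - 1] - 2 * img[y][x - 1] - img[y + 1][x - 1])
    gy = (img[y + 1][x - 1] + 2 * img[y + 1][x] + img[y + 1][x + 1]
          - img[y - 1][x - 1] - 2 * img[y - 1][x] - img[y - 1][x + 1])
    return (gx * gx + gy * gy) ** 0.5

# A vertical dark-to-bright edge, as a visible landmark outline might give.
gray = [[0, 0, 0, 255, 255] for _ in range(3)]
flat = sobel_magnitude(gray, 1, 1)  # uniform area: no edge response
edge = sobel_magnitude(gray, 1, 2)  # strong response on the edge
```

The per-frame edge responses then form the features handed to the KNN classifier.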
Evaluation of the method was conducted at various locations using cameras provided by different institutions, such as the Czech Hydrometeorological Institute (CHMI) in the Czech Republic and the German Weather Service (DWD) in Offenbach am Main. The results demonstrated a high success rate in scene determination, indicating the suitability of the method for visibility determination using static cameras.
Alternative approaches, as discussed in the relevant literature, include the use of neural networks, intensity evaluation of detected edges, and contrast assessment, among others. These approaches offer potential avenues for further research and may yield alternative insights and improvements to visibility determination using static cameras. It is worth noting that while the proposed method requires initial human effort in data labeling and object selection, this manual supervision enhances the algorithm's reliability, making it particularly suitable for aviation purposes.
In Table 3, an overview of the advantages of the proposed method is provided. These advantages include its close resemblance to the process of professional human visibility observation, applicability in day and night conditions, the possibility of using multiple objects, distinct categories with low error probability, a highly understandable process, adjustability for each individual station, low memory and computing demands, and short training and prediction (classification) times. However, the proposed method does have limitations, as summarized in Table 4.
To address the aforementioned disadvantages, potential solutions can be explored in further research. While certain limitations related to landmark quality, light conditions, and positioning may be inherent, other issues can be mitigated.
Dealing with varying threshold values can be partially resolved through improved visualization and interpretation techniques. For instance, certain points may be disregarded during nighttime when they are not discernible. In addition, users could have the flexibility to set a threshold distance according to their specific needs; once visibility falls below this threshold, the output would signal it.
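The per-user threshold idea reduces to a small decision rule over the classified landmarks. A sketch under the assumption that each landmark yields a (distance, visible) verdict (the function names are illustrative):

```python
def visibility_estimate(landmarks):
    """Lower-bound visibility estimate: the distance of the farthest
    landmark classified as visible, or 0 if none is visible."""
    return max((dist for dist, visible in landmarks if visible), default=0)

def below_threshold(landmarks, threshold_m):
    """Flag whether the estimate falls below a user-set threshold."""
    return visibility_estimate(landmarks) < threshold_m

# Verdicts such as a multi-landmark scene might produce in mist.
obs = [(500, True), (650, True), (5000, False), (12000, False)]
```

Because the estimate is a lower bound (true visibility lies between the farthest visible and nearest obscured landmark), the threshold flag is conservative, which suits aviation use.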
To alleviate processing-related drawbacks, an automatic algorithm could be developed to suggest landmarks based on image recognition or geographical methods. This would reduce the need for manual landmark selection and improve efficiency.
Furthermore, the utilization of moving cameras could enable the detection of extreme situations such as dense fog. While the reliability level may be limited, the algorithm could also identify areas with notably high grey color content in the image, indicating potential visibility challenges.
These proposed measures aim to address the mentioned disadvantages and enhance the overall performance and versatility of the method. Further research and development in these areas can lead to improved outcomes and expanded capabilities.
In conclusion, the proposed algorithm offers a practical solution for visibility determination using static cameras. The research has demonstrated the method's applicability in day and night conditions, its ability to use multiple objects, and its low memory and computing requirements. The advantages outlined in Table 3 confirm its potential as a reliable and efficient visibility determination tool. However, certain limitations identified in Table 4, such as directionality and threshold variations, highlight areas for further improvement.
To enhance the method, future work could focus on incorporating a larger number of quality control points in the data, expanding the training dataset for improved accuracy, and addressing challenges related to result visualization and interpretation. The algorithm will be deployed in the application phase, where efforts will be made to optimize the visualization process and overcome any interpretational challenges. Overall, the proposed method makes a valuable contribution to visibility determination in meteorology, providing practical utility and opportunities for further enhancements in the field.

Data Availability
The imagery data used to support the findings of this study have been retrieved from CHMI: https://www.chmi.cz/files/portal/docs/meteo/kam/, last access: 20

Conflicts of Interest
The author declares that there are no conflicts of interest.