Urban Lawn Monitoring in Smart City Environments

Universidad Politécnica de Madrid, Calle Ramiro de Maeztu 7, 28040 Madrid, Spain
Instituto de Investigación para la Gestión Integrada de Zonas Costeras, Universitat Politècnica de València, C/ Paraninf 1, 46730 Grao de Gandia, Valencia, Spain
Departamento de Teoría de la Señal, Telemática y Comunicaciones, ETS Ingenierías Informática y de Telecomunicación, Universidad de Granada, C/ Periodista Daniel Saucedo Aranda, s/n, 18071 Granada, Spain
IMIDRA, Finca “El Encin”, A-2, Km 38, 2, 28800 Alcalá de Henares, Madrid, Spain


I. INTRODUCTION
Water is a scarce resource; less than 3% of the world's water is freshwater, and only 1% of that is available in rivers, lakes, and aquifers [1] for irrigation, industry, and human use. However, the growing number of water consumers, the floods and droughts driven by climate change, and water pollution endanger the continuity of current water use models. Efficient water use is now crucial. The Food and Agriculture Organization of the United Nations (FAO) estimates that in 2050 there will be enough water to ensure food production for the world's population [2]. Nevertheless, water availability may diminish in some areas. For these reasons, it is necessary to promote new techniques that ensure water efficiency in as many areas as possible. The optimization of irrigation techniques is a vital process to improve sustainability and the rational use of water in agriculture. Many techniques have been developed for different crops [3], and most of them are applied in agricultural areas. Nevertheless, urban lawns demand large amounts of water, and no technique has been designed for this particular case.
We can define urban lawns as the group of green areas in a city. These green areas include domestic or private gardens, public gardens, recreational green areas dedicated to sports, and roundabouts. Some urban lawns are composed only of grass, while others may include shrubs or trees. In this paper, we focus our efforts on grass classification for irrigation purposes; shrubs and trees are irrigated by other methods. It is therefore necessary to promote precision gardening in order to improve the sustainability of water usage. Precision gardening implies the use of Information and Communication Technology (ICT) for monitoring the plots and achieving a more sustainable culture [4]. In smart cities, the monitoring of the water requirements of urban lawns can be used to define the irrigation process. Since many other processes are monitored in smart cities, it is possible to identify the best moment to irrigate according to the water and energy use in the grid. Different technologies can be used for monitoring the grass. The main ones are based on remote sensing, drones, and wireless sensor networks (WSNs). The use of satellite images for remote sensing is useful for monitoring changes in land cover [5]. Nevertheless, the spatial resolution of the currently available images is too low. The highest spatial resolution nowadays is offered by the WorldView-4 satellite sensor, which has a resolution of 1.24 m and a revisit period of 4.5 days [6]. These characteristics do not meet our needs: for continuous grass monitoring, we need to observe the state of the grass at least once per day. In addition, we should consider that on some days cloudy conditions may render the images useless. Mulla [7] indicates that remote sensing based on satellite images is not useful for precision gardening. The second option is the use of drones with cameras to take pictures of the whole garden. The spatial resolution will depend on the camera characteristics and on the flight height. In this case, the main disadvantages are that the drone cannot fly on windy days and that legislation in some countries may limit flights over urban and inhabited areas. The use of drones for monitoring purposes is increasing, and they can be used even for emergency rescue systems [8]. The last option for grass monitoring is the use of WSNs with RGB sensors [9]. That system is based on a small automated vehicle (SAW) that moves along the garden collecting data about the grass coverage with RGB sensors. However, it is necessary to evaluate the time consumed to cover the garden, estimate the required time in different gardens, and assess its viability. WSNs are widely used for environmental monitoring, and many examples can be found in [10]. For our objectives, satellite remote sensing cannot be used because we need daily control and more precision than current satellites offer. Soil moisture sensors alone will not indicate when the grass needs to be replanted; to know when planting is necessary, we must use sensors that measure electromagnetic radiation (cameras). Using fixed cameras would require many units, which implies a high cost. For this reason, we need to place the sensor on a vehicle. An airplane is discarded because of the high cost of daily flights. Another option is the use of a SAW, but this may damage the grass. Therefore, the only remaining option is a drone. This paper presents a smart system able to monitor the state of the grass and decide the irrigation and planting needs. The system is capable of classifying the grass into different categories, that is, high coverage, low coverage, and very low coverage. The proposed system is composed of an Arduino node with a CMOS sensor. Our system is based on the idea developed in [9]; we will verify this system and compare it with the proposed drone-based solution. This proposal is part of a larger study in which the images will be processed locally by the drones, which will only send the tag assigned to each specific area. Thus, this paper presents the design, implementation, and verification of the drone operation and how it collects the pictures. After the images are collected, they are processed to analyze their color composition, and finally our algorithm classifies them. In the next step of our study, we plan to add soil moisture sensors to help decide the irrigation regime. Our proposal will include the deployment of two moisture sensors placed 5 meters to the east and west of each sprinkler. The number of moisture sensors used will depend on the size of the monitored area. Each pair of moisture sensors will be connected to a wireless node, which will be in charge of sending the gathered data to the base station over a WiFi connection. In order to ensure that all the nodes can reach the base station, or sink, a multi-hop protocol is proposed. With the soil moisture sensors, it is possible to monitor the remaining water in the soil, and with the CMOS sensor it is possible to identify the grass coverage using the green histograms of the captured pictures. Further studies will integrate these functions.
The main beneficiaries of the system proposed in this paper are cities, which can use this proposal to plan the irrigation of their urban lawns.
The rest of the paper is structured as follows: Section II reviews works related to our system. Section III presents the materials and methods and describes the scenarios. In Section IV, the entire proposal is detailed, and the proposed classification of the grass coverage from the camera pictures is verified. Section V presents the results of our proposal applied to different garden sizes. Finally, the conclusion and future work are shown in Section VI.

II. RELATED WORKS
This section presents some of the current systems focused on monitoring gardens and crops.
Many authors have proposed different solutions for monitoring the needs of gardens and crops. First, we discuss WSNs. Tripathy et al. [11] proposed a system with temperature, light, and water sensors for urban gardens. This system required the deployment of different sets of sensors to monitor big areas, e.g., to detect a small area in need of replantation inside a big area. Thus, a large number of sensors would be required.
Some other systems include the use of cameras together with other sensors. Macedo-Cruz et al. [12] used pictures taken with a CCD-based camera. The authors used a combination of three thresholding strategies (the Otsu method, the isodata algorithm, and the fuzzy thresholding method) to determine frost damage. Lloret et al. [13] designed a WSN based on cameras for detecting unusual status in the leaves of vineyards. The camera took images, and the sensor node processed them to detect anomalies and reported them to the farmer. These studies are based on cameras placed at ground level; therefore, they present the same problem we explained for WSNs. For monitoring a big area, we require aerial pictures. There are three alternatives in remote sensing: Unmanned Aerial Vehicles (UAVs), aircraft, and satellite images. Matese et al. [14] compared the use of UAVs, aircraft, and satellite images in vineyards. They concluded that the economic break-even between UAVs and the other studied systems lies between 5 and 50 ha. Moreover, the different systems provided comparable results for coarse vegetation gradients and large vegetation clusters, but in the opposite situation the satellite images fail. Torres et al. [15] analyzed satellite pictures captured by QuickBird. Images at different wavelengths are used to obtain vegetation indexes: green, Near-Infrared (NIR), Normalized Difference Vegetation Index (NDVI), panchromatic, and Ratio Vegetation Index (RVI). These indexes are used to characterize the size and potential of each olive tree. Xu et al. [16] used the Moderate Resolution Imaging Spectroradiometer (MODIS) to determine the production of grasslands in China, measuring the NDVI of different areas of the country. Satellite imagery, however, has several gaps. Satellite images are expensive and, as mentioned before, only economically viable for large areas. Another problem is the revisit periodicity, which prevents daily control of the area we want to monitor. Finally, satellite images have low resolution.
To solve the problem of monitoring big areas and the need for better resolution than satellite remote sensing offers, different authors proposed the use of UAVs and aircraft. Yang [17] designed an airborne multispectral digital imaging system based on 4 cameras that capture images in the blue (430-470 nm), green (530-570 nm), red (630-670 nm), and near-infrared (NIR, 810-850 nm) bands. The results confirmed that this system is suitable for monitoring crop pests and growing conditions, mapping invasive weeds, and assessing wetland ecosystems. Mutanga and Skidmore [18] studied the variation of nitrogen (N) in the grass of Kruger National Park, South Africa. They used images obtained from the HYMAP MKII scanner (a type of spectrometer mounted on an aircraft) and a neural network to classify the images. They concluded that 60% of the variation can be explained by the images of their system.
The use of drones is currently a very popular method for obtaining aerial images because it is an economical option (in areas smaller than 5 ha [14]) and easier to manage than an aircraft. Candiago et al. [19] used a drone equipped with a Tetracam ADC Micro camera to acquire images in the red (R), green (G), and near-infrared (NIR) bands, allowing the calculation of the NDVI, the Green Normalized Difference Vegetation Index (GNDVI), and the Soil Adjusted Vegetation Index (SAVI). Cambra et al. [20] proposed another drone-based system: a network consisting of a drone and a pressure sprayer. The videos captured by the drone are transferred to a PC, which analyzes them through the OpenCV library. The system enables a set of sprayers in areas where weeds are detected. In these cases, we can observe that UAVs can be used for monitoring areas smaller than 5 ha.
Finally, Kumar et al. [21] presented a Smart Autonomous Gardening Rover able to identify and classify different species of plants using feature extraction algorithms and a neural network. Once the plant is identified, the rover introduces its sensor-carrying arm and, according to the measurements, sprays water and fertilizers from this arm. In this case, the authors do not use aerial vehicles. For our proposal, we cannot use terrestrial vehicles because they could damage the grass and the flowers of gardens, which are even more sensitive than grass.
As a summary, the use of WSNs is not the best option for monitoring big areas, since many sets of sensors are required to identify problems in small areas. The use of cameras at ground level presents the same problem, because it is not possible to take pictures of big areas with enough resolution. To solve this problem, we can use remote sensing. Satellite images have important gaps, and it is not possible to use this technology in our case (low precision and long revisit time to the same point [7]). Aircraft provide the best resolution, and the periodicity of picture taking is better than with satellites; however, the cost of this alternative is very high for monitoring small areas. Finally, in this paper we present a drone that carries a camera to measure the reflectance of grass, together with an algorithm that identifies the areas with low grass coverage or water requirements. Additionally, our system stores the information in a database for statistical analysis and further use.

III. SCENARIO AND SYSTEM DESCRIPTION
This section details the employed materials, including the vegetal species and the electronic elements that compose our designed and developed sensor. The methodology followed to process the data is also presented.

A. Vegetal material to verify the grass coverage classification
In this subsection, we describe the vegetal material used to verify the classification system proposed in previous work [9].
The vegetal material was obtained from a country estate called El Encín. This space belongs to the IMIDRA research center, where the agri-food and agro-environmental research projects of the Community of Madrid (Spain) are carried out. It is located in Alcalá de Henares, Madrid, Spain (see Figure 1). Currently, IMIDRA is developing a study of the water demand of different grass species. The plots of these experiments are employed to find a relation between the coverage and the response of our developed device. Different combinations of grass species are used in the plots, each of which has an extension of 1.5 m².
Figure 1. Plots from which the vegetal material was obtained, in the IMIDRA facilities.

B. Scenarios used to test the developed system
This subsection describes the gardens used to test the system. The aim of using gardens of different sizes is to evaluate the feasibility of using one type of system or another to monitor each garden, as well as the energy consumption and bandwidth required for each scenario.
In order to test our system, five different gardens have been used. The selected gardens do not have any inclination or irregularities in the terrain. The smallest garden has a surface of 180 m² and the biggest one has a surface of 160,000 m²; the rest of the gardens have surfaces of 900, 4,600, and 7,000 m², respectively. All of them are covered only with grass, i.e., there are no trees or shrubs. The gardens of 900, 4,600, and 7,000 m² have a rectangular shape, and the other two gardens have an "L" shape. All the selected gardens have good grass coverage over their entire area.

C. System for image capturing
In order to gather the different images of grass, we have developed a camera-based system to be installed on the drone. The image-capturing system is composed of an Arduino module and an OV7670 camera able to take pictures with a VGA resolution of 640×480. It presents high sensitivity for low-light operation and requires a low operating voltage, which makes the OV7670 camera module suitable for embedded portable applications.
Figure 2 shows a basic schematic of the camera connection. The camera module works with a single +3.3 V power supply and needs an external oscillator to generate its clock signal (XCLK pin). Different communication protocols can be selected, although the use of the I2C protocol is recommended. Through the I2C bus we can control and update both the pixel clock signal (PCLK) and the camera data (data[9:0]). If MCUs with integrated camera interfaces, such as the STM32F2 or STM32F4 series, are selected, no additional module is required. For hosts that do not have a camera interface, additional hardware is needed to store a complete frame before reading it with a low-speed MCU.
The system for capturing the images of grass must be installed on a drone, so we should choose modules of small size that are still capable of performing the image capture and processing tasks; the final goal of our system is to locally perform the image processing on the drone while it covers the route. There are different devices specially designed for the development of embedded systems and Internet of Things (IoT) deployments. In our case, we are going to use an Arduino model. Arduino is an open-source platform that provides both hardware solutions and its own Integrated Development Environment (IDE). Arduino modules are characterized by their simplicity in programming and system management. Table 1 shows a comparison of the characteristics of some of the simplest and most used modules that would suit our needs. In our case, we selected an Arduino UNO Rev. 3 module. The Arduino Uno is an electronic platform based on the ATmega328 processor. It has 14 digital input/output pins, 6 analog inputs, and a 16 MHz crystal oscillator. It can be programmed through its USB connection and can be powered through USB, from a PC, or using a Li-ion battery. The reasons for selecting this module are its price, which is the cheapest one, and its weight: it is the second lightest module, at 25 g. When working with drones, the total weight of the system is important, since this factor influences the flight autonomy.
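The need for external frame storage mentioned above can be illustrated with a quick calculation. The RGB565 output format (2 bytes per pixel) is an assumption about the configured camera mode, and the 2 KB SRAM figure is the ATmega328 specification; neither number appears in the text:

```python
# Why a full OV7670 frame cannot be buffered in the Arduino UNO's RAM.
WIDTH, HEIGHT = 640, 480         # VGA resolution of the OV7670
BYTES_PER_PIXEL = 2              # assumed RGB565 output format
SRAM_BYTES = 2 * 1024            # ATmega328 SRAM on the Arduino UNO

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
print(frame_bytes)               # 614400 bytes per frame
print(frame_bytes / SRAM_BYTES)  # 300x the available SRAM
```

Hence an external buffer (or an MCU with a camera interface) is unavoidable when a complete frame must be held before processing.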
Additionally, we will provide our system with an ESP-01 wireless module, which can be deactivated when not needed, and a microSD memory module that allows us to save data and even images. Figure 3a shows the complete system and the main connections among its components, while Figure 3b shows the 3D design of the support used to fix the camera to the bottom part of the drone, with the camera pointing towards the ground.

IV. PROPOSAL DESCRIPTION
This section presents the proposed system to gather the information about the grass described above. First, the general architecture is described; then, the drone and the flight planning are presented. Finally, the operation process of our system is detailed.

A. General description of the architecture
The proposed architecture is based on a drone programmed to make a crossing along the field to be analysed (see Figure 4). The crossing must be previously designed using software compatible with the chosen drone model.
While the drone moves, it periodically takes pictures of the lawn. For each image taken, the capture system processes the image and decomposes it into its 3 RGB components. From this process, 3 data matrices are obtained, one per component, with the red, green, and blue values of each pixel that forms the picture. From each matrix, we can extract the histogram, from which we can determine the status of that parcel. Finally, after applying our classification algorithm, each picture will be labelled as a parcel of high coverage, low coverage, or very low coverage. We could have a single base from which the drone takes off and lands. However, to optimize the battery lifetime, we opted for a 2-base system: the first is the base from which the drone takes off, and the second allows the drone to land.
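The per-picture processing just described, splitting the image into its three component matrices and extracting the green histogram, can be sketched as follows. This is an illustrative NumPy sketch, not the code running on the node:

```python
import numpy as np

def split_and_histogram(rgb):
    """rgb: H x W x 3 array of 8-bit pixel values.
    Returns the three component matrices and the 256-bin histogram
    of the green component, as described in the architecture."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]   # one matrix per component
    # Count how many pixels fall into each of the 256 brightness levels.
    his_g = np.bincount(g.ravel(), minlength=256)
    return r, g, b, his_g
```

The histogram of the green matrix is the input to the classification step described in Section V.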
On the other hand, to reduce the battery consumption due to data transmission and the possible packet losses due to the drone movement, the data regarding the parcel information will be transmitted on arrival at the landing base. That is, the system will take the images and process them locally, and after arriving at the landing base, the data will be transmitted wirelessly through a WiFi connection.
The information collected by each base will be sent to a central server located in the cloud. Finally, the owners will be able to see the status of their fields in real time.

B. Drone and Flight planning
To implement our system, we have selected a commercial drone with the capacity to carry our small electronic device for collecting the images. Table 2 summarizes the main features of some commercial models that could be used to implement our proposal. We have selected the DJI Phantom 4 Pro, which is considered one of the most widely used devices for taking aerial images for semi-professional purposes. This model incorporates an advanced Vision Positioning System (VPS) that allows the drone to perform a precise stationary flight, even without satellite positioning, making flying easier and safer.
Although the drone can be manually controlled, we have used flight planning software to monitor the surfaces and collect the pictures. To plan the flight of a drone, there are several applications with support for different operating systems. In our case, we have selected a free one specially designed for Android devices: DroneDeploy [22], a software platform for drone flight planning. The DroneDeploy application provides a simple interface for data capture and automated flights that allows exploring and sharing high-quality interactive maps directly from a mobile device. DroneDeploy also allows generating high-resolution maps and 3D models.
DroneDeploy is compatible with several commercial drone models, such as the Matrice 600, among others. For drones equipped with cameras, the application allows exploring interactive maps; measuring distance, area, and volume; analyzing elevation and NDVI images; and sharing maps and annotations through instant messaging applications. Figure 5 shows an example of a planned flight in a real scenario, and Figure 6 shows the drone during a flight.

C. Control algorithm
To start taking measurements with the drone, we must take into account that the device moves from coordinator node 1 to coordinator node 2, which is the one able to transmit the data to the cloud or to a server. It is also important to consider that the drone has to have sufficient autonomy to cover the entire route. Therefore, these checks must be carried out autonomously before starting the flight.
As Figure 7 shows, before starting the flight, the drone receives the data related to the field under study and checks whether its battery allows full field coverage. If its energy autonomy allows it, the drone takes off and starts taking pictures. For each image taken, the drone analyses it and decomposes it into its RGB components. After that, the drone keeps the green component and saves the results together with the relative position extracted from the flight plan. After taking the image, the drone checks whether it has reached the end of the route and, if not, keeps moving forward to the next measurement. When the drone completes its flight, it lands on the base of coordinator node 2. Once at the base, the drone connects wirelessly to coordinator node 2 and transmits all the data obtained from the field. After finishing its function, the drone switches to standby mode.
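The autonomy check and capture loop of this control algorithm can be sketched with a simplified, distance-based battery model; the function, its parameters, and the battery model are our own illustrative assumptions, not the drone's real API:

```python
def plan_mission(battery_m, waypoints, reach_base_m):
    """Sketch of the capture loop in Figure 7 (simplified battery model).
    battery_m: remaining autonomy expressed as flyable distance (m).
    waypoints: list of (position, leg_length_m) pairs along the flight plan.
    reach_base_m: distance needed to reach coordinator node 2 at any time.
    Returns (positions where pictures were taken, abort position or None)."""
    flown = 0.0
    captured = []
    for pos, leg_m in waypoints:
        # Check, before each leg, that one more measurement still leaves
        # enough autonomy to reach the landing base (coordinator node 2).
        if flown + leg_m + reach_base_m > battery_m:
            return captured, pos          # leave the plan, report abort position
        flown += leg_m
        captured.append(pos)              # picture taken at this waypoint
    return captured, None                 # full route completed
```

With enough battery the function returns every waypoint and `None`; otherwise it returns the position where the plan was abandoned, which the drone reports to coordinator node 2 together with the data.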
On the other hand, after receiving the data of the flight plan and the size of the field to be analysed, the drone determines whether it has enough energy to complete the route. If the battery level is not sufficient, the drone sends a message to the user asking for flight acceptance. If the user does not accept the flight, the drone remains in standby mode at the base of coordinator node 1. However, if the user accepts the flight, the drone starts its flight and the image capturing. After each measurement, the drone checks whether its autonomy is sufficient to take one more measurement and still reach coordinator node 2. As long as this condition holds, the path is followed. When the condition no longer holds, the drone leaves the flight plan and goes directly to the base of coordinator node 2. After that, the drone connects wirelessly to coordinator node 2 and transmits all the data obtained from the field, as well as the position where it stopped taking measurements. After finishing its operation, the drone switches to standby mode.

D. Verification of the classification system
In this subsection, we present the verification process applied to the system developed in [9]. In order to verify it, we use new grass plots and, using the pictures obtained with the Arduino camera, we extract the values used to perform the comparison.

Different pictures were taken of the grass plots (see Figure 8a). After obtaining a picture, it is cropped in order to extract the part related to the grass, ensuring that the size of the picture is 1500x1000 pixels (see Figure 8b). Then, the resolution of the picture is reduced to 10%; the picture consequently has 150x100 pixels (see Figure 8c).
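The cropping and downscaling steps can be sketched as follows; the crop box is a user-supplied assumption, and plain subsampling stands in for the resizing method, which the text does not specify:

```python
import numpy as np

def prepare_picture(rgb, crop, scale=10):
    """Crop the grass region from an H x W x 3 pixel array and reduce the
    resolution to 1/scale per axis, reproducing the 1500x1000 -> 150x100 step.
    crop: (top, left, height, width) of the grass region (user-supplied)."""
    top, left, height, width = crop
    patch = rgb[top:top + height, left:left + width]
    return patch[::scale, ::scale]        # keep every scale-th pixel per axis
```

A library resampling filter could replace the subsampling without changing the rest of the pipeline.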
Once we have the picture with a size of 150x100 pixels, we can obtain the brightness value of each pixel. To obtain it, we use the Matlab software (see Algorithm 1).
An image can be understood as a matrix of Rows x Columns pixels. In order to analyze each pixel, we go through each row, accessing each cell that represents a column. There are several ways to perform this task, but the simplest one is to use 2 nested "FOR" loops, so that the outer loop places the cursor at the beginning of a row and the inner loop moves the cursor through all the cells of that row until reaching the last column. Finally, we created a vector of 256 positions that correspond to the brightness levels of each color and, for each level of brightness, we counted how many pixels contain that brightness. The result is saved in the variable His_G, which stores the histogram. Once we have the matrix of the green component with the brightness values, it is possible to apply the methodology described in [9]. So, firstly, we obtain the green histograms shown in Figure 8. As we can see, all the new histograms follow the trend of the mean histograms of the different grass coverages. After that, we can obtain the number of pixels with brightness values between 40 and 60; this range was selected based on the results shown in [9]. Finally, because the flight height of the drone is fixed with respect to the ground, the focus of the camera is manually set before the flight.
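The nested-loop histogram described above can be transcribed from Matlab to Python almost literally; `his_g` mirrors the His_G variable of Algorithm 1:

```python
def brightness_histogram(green):
    """green: matrix (list of rows, each a list of 0-255 brightness values)
    of the green component. Returns a 256-position vector where position b
    counts how many pixels have brightness b, like His_G in Algorithm 1."""
    his_g = [0] * 256
    for row in green:              # outer loop: move to the start of a row
        for brightness in row:     # inner loop: walk the columns of that row
            his_g[brightness] += 1
    return his_g
```

For a 150x100 matrix the vector entries sum to 15,000, the total number of pixels.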

V. RESULTS AND DISCUSSION
This section shows the results and the discussion of the extracted values. First, the grassland classification method for analysing pictures instead of RGB sensor data is presented. Then, the results of the simulations applying the proposed system (with the drone) or our previous system (with the SAW) in 5 gardens of different sizes are evaluated. Finally, a comparison between our system and the current proposals is discussed.

A. Grassland classification
In order to carry out our classification, we only need to sum the number of pixels with brightness values between 40 and 60 in the green component of the picture. Then, we analyse the classification assigned to each picture to check whether the classification process assigned the tags correctly.
After processing the pictures, the matrix with the green brightness data is used. The pictures were not previously tagged according to their type of coverage; they are just named New Sample (NS) 1 to 12, ordered by the summation of their pixels with brightness values between 40 and 60.
In the previous work, the plots were assigned to three categories: High Coverage (HC), Low Coverage (LC), and Very Low Coverage (VLC). Figure 9 represents the histograms obtained for NS 1 to NS 12 and the average histograms of HC, LC, and VLC obtained in [9]. In solid colours, we can see the average values of the tagged histograms: HC in green, LC in orange, and VLC in red. The data from NS 1 to NS 12 are shown in black dashes. It is possible to see that all the histograms follow the behaviour of one of the average histograms from the previous work. The summation of pixels with brightness values between 40 and 60 is compared with the results obtained in the previous work [9], where the ranges of values used to tag the different pictures were set. Results can be seen in Figure 10. The HC plots, with good grass coverage, have a summation lower than 500; thus, the plots named NS 1 to NS 4 are classified as HC. NS 5 to 9 have a summation lower than 1500 but higher than 500 and are classified as LC. Finally, NS 10 to 12, which have a summation higher than 1500, are classified as VLC. Taking into account the 12 pictures under study, 4 of them were tagged as HC, 5 as LC, and 3 as VLC.
The next step is to verify whether the classification was correctly done. Figure 11 shows the pictures and their classification according to our proposed algorithm. The results show that the classifications are correct. The plots tagged as HC present a grass coverage of 100%, see Figure 11 a) to d). On the other hand, the plots classified as LC present lower grass coverage, and most of the grass has a yellowish colour, which indicates a poor grass state. Those plots (see Figure 11 e) to i)) present an irrigation deficit. Finally, the plots tagged as VLC (see Figure 11 j) to l)) present very low coverage; most of the plot has no grass, and only brown soil is observed. In those plots, irrigation is not immediately required; however, a seeding process will be necessary to restore the grass coverage. Thus, we can state that the methodology presented in [9] with the RGB sensor can be used to evaluate the grass state in pictures. This is because the operation of the sensors inside the cameras and the image post-processing is similar to the operation of the RGB sensors.
The only limitation is that the system must operate with matrices of 150 x 100 brightness values. However, we can divide the summation of pixels in the 40-60 range by the total number of pixels. If the result is lower than 0.03, the assigned category will be HC. Plots with values between 0.03 and 0.1 will be tagged as LC. Finally, plots with values higher than 0.1 will be classified as VLC. In this way, it is possible to apply this method to pictures of different sizes.
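Combining the 40-60 brightness range with these size-independent thresholds, the classifier reduces to a few lines. Note that 500/15,000 ≈ 0.033 and 1,500/15,000 = 0.1, so the ratio thresholds are approximately consistent with the absolute ones for 150x100 pictures:

```python
def classify_plot(his_g):
    """his_g: 256-bin green-brightness histogram of a picture (any size).
    Returns the coverage tag from the ratio of pixels with brightness
    in [40, 60] over the total number of pixels."""
    low_bright = sum(his_g[40:61])        # pixels with brightness 40 to 60
    ratio = low_bright / sum(his_g)
    if ratio < 0.03:
        return "HC"                       # high coverage
    if ratio <= 0.1:
        return "LC"                       # low coverage
    return "VLC"                          # very low coverage
```

The same function therefore serves both the 150x100 verification pictures and pictures of other sizes.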

B. Study of the feasibility of using this method in different garden sizes
In this subsection, we detail the simulations of using our proposal (with a drone) in the gardens of different sizes presented in Section III. The results are compared with the simulation results of using a SAW. The parameters evaluated are the time required to gather the data from the entire garden and the volume of information generated. The amount of gathered data, the number of turns, and the total distance travelled are also considered in these simulations.
To calculate the number of turns (P), it is necessary to divide the shorter side (SS) of the field by the width of each turn (WP) (Eq. 1). On the one hand, the width of each turn with the SAW (WP_SAW) is the SAW width (WI_SAW), since the sensors are located covering the width of the vehicle (Eq. 2). On the other hand, the width of each turn in the case of the drone (WP_DRON) depends on the flight height (FH) and on the focal aperture of the camera (FA) (Eq. 3). In our examples, WP_SAW is 0.5 m and WP_DRON is 6.6 m. The area contained in each picture gathered with the drone is 4.95 x 6.6 m. The FH must be set according to the resolution needed in the pictures; in our case, the FH was 15 m. The values of P for each garden are shown in Table 3. The P value for the drone is much lower than the P value for the SAW because of their different WP.
P = SS / WP (Eq. 1)

WP_SAW = WI_SAW (Eq. 2)

WP_DRONE = f(FH, FA) (Eq. 3)

Once the number of turns is calculated, the next step is to calculate the total distance travelled to cover the field. In order to simplify the simulation, the travelled distance (TD) is calculated as the distance travelled along the turns (the number of turns multiplied by the longer side (LS)) plus the distance travelled to change from one turn to the next (the number of turns minus 1, multiplied by the width of each turn):

TD = P x LS + (P - 1) x WP (Eq. 4)

The TD for each garden can be seen in Table 3. The TD is lower when using the drone than when using a SAW; the TD with the drone is less than a tenth of the TD with the SAW. To complete the comparison, we need to calculate the time consumed (TC) to collect the data from each garden. The time consumed is calculated as the travelled distance divided by the mean velocity (MV), plus the time lost (LT) in the deceleration and acceleration at the end and beginning of each turn multiplied by the number of turns:

TC = TD / MV + LT x P (Eq. 5)
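Equations 1, 4 and 5 can be sketched in a few lines of Python. The 100 m x 50 m garden below is a hypothetical example; the turn widths (0.5 m for the SAW, 6.6 m for the drone) are the values given in the text.

```python
import math

def turns(ss: float, wp: float) -> int:
    """Eq. 1: number of turns, P = SS / WP (rounded up so the field is covered)."""
    return math.ceil(ss / wp)

def travelled_distance(p: int, ls: float, wp: float) -> float:
    """Eq. 4: TD = P * LS + (P - 1) * WP."""
    return p * ls + (p - 1) * wp

def time_consumed(td: float, mv: float, lt: float, p: int) -> float:
    """Eq. 5: TC = TD / MV + LT * P."""
    return td / mv + lt * p

# Hypothetical 100 m x 50 m garden (LS = 100, SS = 50)
p_saw, p_drone = turns(50, 0.5), turns(50, 6.6)
print(p_saw, p_drone)  # 100 8
print(travelled_distance(p_saw, 100, 0.5))              # 10049.5 m with the SAW
print(round(travelled_distance(p_drone, 100, 6.6), 1))  # 846.2 m with the drone
```

Even in this small example, the much wider drone turn reduces the travelled distance by more than an order of magnitude, matching the trend reported in Table 3.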
There are some considerations that must be taken into account to select the mean velocity. The time that the SAW takes to gather and process each record (TGD) and the area covered in each record (CA) must be considered to calculate the mean velocity of the SAW (MV_SAW) (Eq. 6). To calculate the mean velocity of the drone (MV_DRONE), we should consider the pictures per second (PPS) that the camera can take and the ground distance of the shortest side of each picture (SSP):

MV_DRONE = PPS x SSP (Eq. 7)

The shortest side of the picture is defined as the number of pixels of the shortest side of the picture (NP_SSP) multiplied by the width of each turn and divided by the number of pixels of the longest side of the picture (NP_LSP):

SSP = NP_SSP x WP_DRONE / NP_LSP (Eq. 8)

The PPS must be set by the user according to the camera features. The time consumed for each garden is shown in Figure 11. It can be seen that the TCs with the SAW are much higher than the TCs with the drone. In the biggest garden, the TC for the SAW is up to 180 h, while for the drone it is 25 min and 30 s. The SAW is only useful for small gardens such as garden 1 and garden 2, with TCs of 0.22 h and 1.08 h, respectively. For gardens with more than 1,000 m2, the SAW is not recommendable due to the TC. The mean flying time of the employed drone is 30 minutes; the largest area that can be monitored by a single drone depends on the shape of the area and the number of turns that have to be done. To give an example, a fully charged drone can cover an area of 200,000 m2 with one side of 400 m and the other of 500 m.

From this point on, we will only continue with the simulation for the case of using a drone. The total number of pictures (TP) is the number of pictures per second multiplied by the total distance and divided by the mean velocity:

TP = PPS x TD / MV (Eq. 9)

The TP in the selected gardens is 5, 27, 139, 212 and 4,909 for gardens 1 to 5, respectively. To calculate the volume of information generated if we want to send all the pictures (VI_PIC), we should take into account the number of pictures and the weight (in bytes) of each picture (WPi):

VI_PIC = TP x WPi (Eq. 10)

However, if we want to send only the green band of each picture (VIG_PIC), we will transmit the matrix with the values of the green band, i.e., the volume of useful data will be a third of the total volume:

VIG_PIC = VI_PIC / 3 (Eq. 11)

Moreover, it is possible to send only the classification label of each picture (VIC_PIC); that is, we will only consider the number of pictures and the weight of each category (WC):

VIC_PIC = TP x WC (Eq. 12)

Figure 12 shows the comparison between VI_PIC, VIG_PIC and VIC_PIC in each garden. As expected, the transmission of VIC_PIC is the lightest. Sending VIG_PIC supposes a reduction of two-thirds (66.7%) of the total data volume compared with sending VI_PIC, while sending VIC_PIC supposes a reduction of 99.8%. So, taking into account our results, the best option for data transmission is to send only the label of the plot characteristics together with the plot identification or position. This label is locally calculated by our system and stored in the SD card in order to be wirelessly transmitted to the landing base. Thus, the only information transmitted from the drone to the base station is one label per gathered picture. In this way, we reduce the energy consumption, since the wireless connection does not need to be continuously enabled. In order to know the position of each picture, we have included the planned route of each drone in the database. It is then possible to relate the label of each picture to the position of the drone according to the picture number. The GPS is not useful in this case to identify the pictures due to the small distance between consecutive drone positions.
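The volume comparison of Eqs. 10-12 is easy to reproduce. The 45 kB picture weight and the 2-byte label used below are illustrative assumptions; the exact reduction percentage depends on the actual picture weight used in the paper.

```python
def volumes(tp: int, wpi_bytes: float, wc_bytes: float):
    """Eq. 10: VI_PIC  = TP * WPi    (full pictures)
       Eq. 11: VIG_PIC = VI_PIC / 3  (green band only, one of three RGB bands)
       Eq. 12: VIC_PIC = TP * WC     (one category label per picture)"""
    vi = tp * wpi_bytes
    return vi, vi / 3, tp * wc_bytes

# Garden 5: TP = 4,909 pictures; assumed 45 kB per picture and a 2-byte label
vi, vig, vic = volumes(4909, 45_000, 2)
print(f"full pictures : {vi / 1e6:.1f} MB")   # 220.9 MB
print(f"green band    : {vig / 1e6:.1f} MB")  # 73.6 MB (a 66.7% reduction)
print(f"labels only   : {vic:.0f} B")         # 9818 B
```

Whatever the picture weight, the green band is always exactly one third of the full volume, and the label-only option shrinks the payload by several orders of magnitude.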
In this section, we analyze the limitations of our system and explain why our alternative is better than the existing ones.
Drone-based systems have three important issues: I) drones cannot fly in windy conditions; II) some countries have very restrictive legislation on the use of drones; III) changes in environmental illumination.
Regarding the first issue, the number of windy days is usually small compared to the number of sunny days, although this depends on the geographical region. In addition, changes in the grass are not usually abrupt; therefore, occasionally missing the daily monitoring is not significant. Regarding the second issue, drone legislation has been very restrictive because most countries did not have previous legislation and wanted to avoid problems by limiting the use of drones. However, laws are currently being adapted to the drones' evolution. Finally, illumination can have negative effects on the classification of the grass. The illumination can change because I) the sky is covered with clouds; II) there are shadows of buildings, trees, etc.; III) of the time of day when the monitoring tasks are performed; and IV) of the season of the year when the monitoring tasks are performed [23]. To reduce the problem with shadows, we fly the drone at noon on sunny days to reduce the size of the shadows; in future work, we would like to include a lux meter in the drone to use this parameter in the classification algorithm as a correction factor.
Finally, we compared our system with other systems (see Table 4). The irrigation needs can be monitored with remote sensing (satellite or airplane [24]), SAWs, smart sprinklers (WSNs with weather information for calculating the evapotranspiration), and our system. Some existing solutions include sensors that detect electromagnetic radiation to determine the vegetation coverage. As we saw in Section II, the NDVI, NIR, and other infrared-related indicators can be used for monitoring vegetation and are very common in remote sensing. In this paper, we demonstrated that visible light can be used without the need to pay for an infrared camera.
All systems that use electromagnetic sensors are affected by shadows and changes in environmental light. In the case of satellite sensing, clouds can cover the image, so it cannot be used to monitor the urban lawns. This does not happen with airplanes and drones because they fly below the clouds. Moreover, remote sensing cannot be used for daily monitoring due to the revisit time, and we cannot schedule daily pictures of an urban garden. So, we only have the option of SAWs or drones for monitoring the grass. As we have previously seen, the SAW requires a lot of time to cover a large surface and it is not recommended for urban lawns greater than 1,000 m2.
To monitor the irrigation needs, we can use smart sprinklers (the use of remote sensing for managing the irrigation is not very common). Smart sprinklers are programmed according to the soil moisture and the calculation of the evapotranspiration of the plants by means of meteorological data. We decided to use soil moisture sensors because they are cheaper than smart sprinklers. Table 4 shows a summary of this discussion.

In this paper, the use of a drone equipped with an Arduino module and a camera for urban lawn monitoring has been evaluated. Prior to evaluating our proposal, we used the methodology proposed in our previous work to classify the grass quality based on RGB sensors. The algorithm proposed in the previous paper [9] obtained 100% of hits. Besides, we have evaluated the performance of employing a drone or a SAW to cover gardens of different sizes. The results show that for gardens bigger than 1,000 m2 the use of a SAW is not recommended. Finally, we compared the possibilities of sending the entire picture to be processed in a remote server, sending only the green band of the picture, or sending just the category of each picture. By sending only the category of each picture instead of the entire picture, we obtain a reduction in the volume of information of 99.8%. The total cost of our system is €30 without including the price of the drone. The same system could be installed in cheaper drones with lower flight autonomy, but with similar results. This proposal is part of a bigger study where the images will be locally processed by the drones, which will only send the tag for a specific area. Thus, this paper has presented the design, implementation and verification of the drone operation and how it collects the pictures. After collecting the images, they are processed to analyze the colour composition, and finally our designed algorithm classifies them. As future work, further studies will integrate this function in the drone in order to process the images locally. It is also planned to add soil moisture sensors to control the irrigation regime. The moisture sensors will be connected to a wireless node, which will be in charge of sending the gathered data to the base station. With the soil moisture sensors, it is possible to monitor the remaining water in the soil, and with the CMOS sensor it is possible to identify the grass coverage using the green histograms of the obtained pictures. Moreover, it will be interesting to test the possibilities of detecting and classifying different plant diseases. In addition, we intend to extend this work by including the analysis of pictures of other plant species. Finally, to solve the problems related to different light conditions, we will include a light sensor in the drone and perform several tests under different conditions in order to define different ranges for different light conditions.

This work has been partially supported by the "Conselleria de Educación, Investigación, Cultura y Deporte", through the "Subvenciones para la contratación de personal investigador de carácter (Convocatoria 2017)". Grant number ACIF/2017/069. Finally, the research leading to these results has received funding from "la Caixa" Foundation and Triptolemos Foundation.

Figure 2. Basic schematic of the camera connection.

Figure 3. Camera system: a) complete system and its connections; b) support for the camera.

Figure 6. Our drone during a flight with the developed system for gathering pictures.

Figure 11. The TC for different gardens with the SAW and the drone.

Figure 12. Volume of information (VI_PIC, VIG_PIC and VIC_PIC) for each garden.

This work has been partially supported by the pre-doctoral student grant "Ayudas para contratos predoctorales de Formación del Profesorado Universitario FPU (Convocatoria 2014)" with reference FPU14/02953 by the "Ministerio de Educación, Cultura y Deporte".

Table 1. Characteristics of different nodes.

Table 3. P and TD with the drone and with the SAW.

Table 4. Summary of the different techniques.