Compilation of Smart Cities Attributes and Quantitative Identification of Mismatch in Rankings

One practical way to define a "smart city" is to examine the specific qualities listed in ranking studies of smart cities. This method yields de facto guidelines for classifying a city as smart or not. Building on this rationale, the first objective of the present work is to compile the features used to evaluate the "smartness" of cities in seven ranking evaluations, and the second objective is to arrange them in a suggested structure of six scopes with forty-three keywords. Through these two objectives, the study serves as a single collection point for the evaluation criteria of multiple ranking studies. Four of the considered studies are the 2018 and 2019 annual editions from two sources; comparing their criteria reveals some changes over one year, and these updates are highlighted. The third objective is to analyze the assigned ranks and a normalized score (with a maximum of unity), derived from the raw scores given in six of the seven ranking studies (one study is excluded because it does not give raw numerical scores) to the six cities that appear in all of them. This part demonstrates in detail the existence of mismatch not only in a one-time ranking but also in the year-to-year trend, where a city appears to be improving according to one evaluator while degrading according to another. The fourth objective is a statistical analysis of the evaluation results, with a quantitative assessment of rankings mismatch.

The term was introduced in 2008, referring to adopting technology and managing data effectively in an integrated way to solve the challenges of a modern urban community [4]. A smart city may be defined as an urban society whose members collaborate using information and communication technology (ICT) to better reach performance targets, improve the quality of life, and have more open governance [5]. Measuring outputs is an important stage of improvement [6]. In this regard, evaluation studies attempt to assess a number of cities for their attained level of smartness. Such evaluations give feedback to the administrative body of the city, to its inhabitants, and to the global public at large. They also provide valuable data and case studies to those interested in understanding the characteristics of a smart city. However, a number of gaps have been identified in the assessment tools for smart cities, such as the lack of temporal change (as compared to a one-time static evaluation), the inability to adapt to city size when comparing small and large cities, and missing stakeholder engagement during both the development phase and the implementation phase [7].
While contradiction in smart cities rankings was previously reported for two studies of the same year, 2019 [8], the present work takes a deeper look into this issue, considering not two but seven ranking studies. It does not merely report qualitatively an instantaneous mismatch of smart cities rankings, but also contributes proposed quantitative analysis methods that help reach a fairer assessment of cities' performance when comparing their levels of smartness to each other, as well as when interpreting the evolution of their status over time. The present work utilizes a normalized score concept that alleviates the impact of the pool of cities included in a particular ranking study. It also groups the various criteria of city smartness into six smartness scopes.
In the present work, we analyze seven publicly available evaluations of smart cities, examining both the criteria used in judging how smart a city is and the coherence among these evaluations, which bears on the reliability of published rankings of smart cities. For two sources, the 2018 and 2019 editions of the evaluation are compared. This work is motivated by a desire to develop an attribute-based definition of smart cities, as opposed to a traditional textual description. The former reflects multiple views of what makes a city smart, contributed by independent third parties specialized in assessing the level of smartness demonstrated by a city through measurable qualities, whereas the latter may suffer from an overly subjective, narrow view and a strong projection onto a local region or one set of national norms.
The proposed characterization of the smart city presented here is based on smart city qualities compiled by entities from Sweden (1st and 2nd ranking studies in Table 1), Singapore (3rd ranking study in Table 1), Spain (4th and 5th ranking studies in Table 1), the UK (6th ranking study in Table 1), and Russia and the USA (7th ranking study in Table 1). The problem statement of the present work can be formulated as a number of questions that the study addresses: What should one look for when classifying a city as smart or as "smarter"? Should the rankings of smart cities given by a single independent party be assumed to be roughly compatible with those of other parties? In case of discrepancy in the ranking of the same city by two ranking parties, how big can the gap be, quantitatively? Which metric is more important in smart cities ranking, the positional rank or the absolute score? How does a normalized score (a processed numerical value whose maximum attained value is always 1.0, achieved by the best-performing city in any pool of compared cities) behave in comparison with an absolute raw score when interpreting ranking data of smart cities? Can the ranking contradiction between different parties extend to the trend of change over two years, or is it likely to be limited to same-year evaluations? If two cities are ranked consecutively (such as 9th and 10th) in terms of smartness, does this imply a big performance difference between them?
The present work can be viewed as having four objectives: compiling in one place the various attributes used to measure the smartness of a city, processing these attributes and proposing a structure of indicators for a smart city, performing statistical analysis of a normalized assessment score assigned to all cities that appear commonly in different ranking studies (a score that ideally should be equal across ranking studies), and finally attempting to explain the reasons for the observed discrepancy in the ranks received by the same city in more than one ranking study. Table 1 lists some key properties of the seven ranking evaluations considered here, such as the year and the number of ranked cities in each study. The 2019 version of the Smart Cities Index evaluation is the 3rd annual edition, while the 2018 version is the 2nd annual edition. The 2019 version of the Cities in Motion Index (CIMI) evaluation is the 6th annual edition, while the 2018 version is the 5th annual edition. The other evaluations are not regularly published. We point out that for the Smart Cities Index 2018 evaluation, the publisher refers to 24 ranking factors; however, counting the actual factors given in the detailed scoring table yields only 22. The number of factors is 24 in the 2019 edition of that evaluation source. The number of distinct ranks is not necessarily the number of ranked cities, due to the occurrence of repeated scores. There are other related ranking studies that are not included in the analysis here, such as the Digital Economy and Society Index (DESI) [16], which is limited to the European Union (EU) member states and is not strictly targeting smart cities but is focused on digitized performance in five areas: connectivity, human capital, Internet services, technology integration, and public services.

Evaluations Considered
Another related study is the United Nations (UN) E-Government Survey [17], which is focused on the digital government development of the UN member states. It is not at the level of cities and is not well oriented toward smart cities. Through its TRACK RECORD factor, the (Smart City Governments) evaluation pays attention to the past performance of a city's government in terms of successful initiatives related to city smartness. The (Cities in Motion Index) evaluation is distinguished from other evaluation studies by its INTERNATIONAL OUTREACH category (having, for example, an indicator about the number of McDonald's chain restaurants). On the other hand, the BASIC INDICATORS category in the (Smart Cities-What's in it for citizens?) evaluation includes special factors such as the city's whole population (as compared to the size of a specific segment) and gross value added (a city-level counterpart of the GDP, reflecting the potential for economic advancement and also quality of life).

Suggested Scopes of a Smart City
This section provides a suggested 6-scope structure with 43 suggested attributes that define smart cities and can be used in benchmarking them. This suggested attribute-based definition is summarized in Table 2. The list is adapted from the criteria collected from the various evaluation studies considered in the present work, based on the authors' view and guided by recent articles in the smart cities literature [18][19][20][21][22][23][24][25][26].
A list of the assessment factors (indicators) used in each evaluation study is provided in Appendix A. Any grouping of factors under categories in that appendix is done as per the evaluation study itself.
Governance orientation (determination and commitment for the transition to a smart city) plays a key role in driving a city toward smartness. Also, a smart city is not just about intensive use of high-technology devices (although this is an expected feature of a smart city); the term extends to and overlaps with other socially desirable features, such as satisfaction of the public [27], leading to a true passion for the city. This reflects an emphasis on the (Human Capital) scope.
Equipping members of the city with awareness and training programs, so that they appreciate the benefits of the transition to a smart (or smarter) city, is important for collective collaboration. Electric bicycles (e-bikes) are added explicitly under the Transport scope as an alternative transportation option for intercity commuting with a favorable environmental impact over private vehicles powered by gasoline or diesel (while not practical for daily round-trip distances beyond 40 km). A city that caters to bicycling (electric or not) and promotes it as an alternative environment-friendly means of transportation (by having a network of bicycle lanes, for example) helps construction projects earn one credit point (out of 110 total attainable points for new projects or projects with major renovation) under the LEED (Leadership in Energy and Environmental Design) rating system for green buildings, through fulfilling the credit (Bicycle Facilities) under the credits category (Location and Transportation) [28] in its currently active v4 (fourth version). LEED is managed by the U.S. Green Building Council (USGBC), which describes LEED as the most common system for rating green buildings worldwide [29].

Cities in Common
Among the seven evaluations of smart cities considered here, there are six cities that appear in all of them. These cities (in alphabetical order) are as follows: Having such shared cities in different evaluations oriented to the same scope helps in assessing the coherence of these evaluations. One may expect a similar rating, or a similar trend over time, across the various evaluations. Table 3 compares the rankings given to the common cities by the different evaluations. These values neglect the effect of duplicate scores, so repeated scores are counted as different ranks. This is a minor issue because the number of repeated scores is relatively small compared to the number of ranked cities, with the exception of the last evaluation (Smart Cities Prospects), where the number of repetitions (8) is comparable to the number of ranked cities (20). Therefore, two ranking values are given for that evaluation: one where repeated scores are counted as different ranks and another where repeated scores are counted as a single rank.
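The two tie-handling conventions can be sketched as follows, using hypothetical cities and scores (the actual scores are in the published evaluations):

```python
# Sketch: two ways to rank cities when scores repeat, assuming a
# higher score means "smarter". Cities and scores are hypothetical.
scores = {"A": 0.89, "B": 0.85, "C": 0.85, "D": 0.80}

ordered = sorted(scores.items(), key=lambda kv: -kv[1])

# Convention 1: repeated scores still get distinct consecutive ranks.
distinct_ranks = {city: i + 1 for i, (city, _) in enumerate(ordered)}

# Convention 2: repeated scores share a single rank (competition ranking).
shared_ranks = {}
for i, (city, score) in enumerate(ordered):
    if i > 0 and score == ordered[i - 1][1]:
        shared_ranks[city] = shared_ranks[ordered[i - 1][0]]
    else:
        shared_ranks[city] = i + 1

print(distinct_ranks)  # {'A': 1, 'B': 2, 'C': 3, 'D': 4}
print(shared_ranks)    # {'A': 1, 'B': 2, 'C': 2, 'D': 4}
```

Under the second convention, the cities tied at 0.85 share rank 2, and the next city drops to rank 4, which is why the number of distinct ranks can be smaller than the number of ranked cities.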
These adjusted rankings are more proper than those that neglect the occurrence of repetitions. The ranking of a city depends heavily on the pool of cities assessed in the respective evaluation; therefore, it is not easy to use when analyzing the agreement across different evaluation studies. Instead, the scores given to the common cities provide a better, pool-independent measure of coherence across evaluations. The scores were normalized to have a maximum possible value of unity by dividing the published scores by the maximum attainable score, and the values are presented in Table 4. Similarity of these normalized scores for the same city would be an indication of coherence among evaluations. The evaluation (Smart Cities-What's in it for citizens?) does not publish scores but only ranks cities relative to each other.
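A minimal sketch of this normalization, using hypothetical raw scores and following the description elsewhere in the paper that the best-performing city in the pool receives exactly 1.0:

```python
# Sketch: normalize raw scores so the top city in the pool gets 1.0.
# City names and raw scores below are hypothetical.
raw = {"Berlin": 72.4, "Dubai": 65.1, "New York": 81.9}

best = max(raw.values())
normalized = {city: s / best for city, s in raw.items()}

# The best city attains exactly 1.0; all others fall in (0, 1).
assert normalized["New York"] == 1.0
```

Because the divisor is taken from the pool itself, the normalized score of a city changes less than its positional rank does when the pool of compared cities changes.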
There is notable scatter among the values of the normalized scores. For example, New York received a normalized score of 1 (in the Cities in Motion Index, 2018) but also received a score of 0.626 (in Smart City Governments, also dated 2018).
However, a fair comparison of the normalized scores should be made among evaluations belonging to the same year. Accordingly, the normalized scores of the three evaluations sharing a common year (2018) are repeated in Table 5.
The same table also shows the mean and the sample standard deviation for each city. The arithmetic mean, or simply the mean, $\bar{x}$, of the 6 normalized scores for each city is calculated as [30]:

$$\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$$

The standard deviation $s$ of the 6 normalized scores for each city is calculated using the sample formula [31]:

$$s = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2}$$

where $n = 6$ is the sample size, as mentioned before. It is worth clarifying that the division by $n - 1 = 5$, not by $n = 6$, in the standard deviation formula is intentional. The division by $n - 1$ applies when the calculation is for a sample, while the division by $n$ would apply if all cities in the world (referred to as "the population") were included [32], which is not the case here. The standard deviation measures the spread (scatter) of the normalized scores, and it is zero in the very special case of identical scores. However, it goes as high as 0.2042 for New York, which is 26.7% of the mean value for that city. The smallest standard deviation (0.0684) corresponds to Dubai, at 13.9% of the mean normalized score for that city.
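The mean and sample standard deviation computations can be sketched as follows; the six normalized scores below are hypothetical, standing in for one city's scores across the six evaluations:

```python
# Sketch: arithmetic mean and *sample* standard deviation (divide by
# n - 1, not n) of one city's n = 6 normalized scores (hypothetical).
import math
import statistics

scores = [0.95, 0.74, 0.88, 0.69, 0.81, 0.77]
n = len(scores)

mean = sum(scores) / n
s = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))

# statistics.stdev uses the same sample (n - 1) formula.
assert abs(s - statistics.stdev(scores)) < 1e-12
```

Using `statistics.pstdev` instead would divide by `n` and give the population formula, which the paper deliberately avoids here.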
Comparing the changes in the rankings and in the normalized scores for a city by two different evaluation sources over the same period helps reveal the coherence or mismatch between them. This is done for the two sources with annual evaluations, and the change from 2018 to 2019 is examined in Table 6 for the rankings and in Table 7 for the normalized scores. The two evaluating sources are the Smart Cities Index (SCI) and the Cities in Motion Index (CIMI). The normalized score is the more appropriate measure, as the ranking position can be highly influenced by the other peer cities in the evaluation. The case of Dubai appears surprising: according to the SCI evaluation, its ranking improved by 26 positions, while the CIMI evaluation indicates the opposite trend.

Table 6: Comparison of the change in ranking position between 2018 and 2019 for 6 common cities as given by 2 ranking publishers.

From the qualitative view, the direction of change for a city (improving, by an increased normalized score, or degrading, by a decreased normalized score) should ideally be the same for both evaluation sources (SCI and CIMI). However, this is not the case for Chicago and Dubai. For the other four cities, the trends of change are consistent between the two evaluation sources. Even so, the changes quantitatively differ by two orders of magnitude in the case of Berlin (an improvement) and by one order of magnitude in the case of New York (a degradation).
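A minimal sketch of this direction-of-change check, with hypothetical score changes (the actual year-to-year changes are given in Table 7):

```python
# Sketch: do two evaluators agree on the *direction* of a city's
# 2018 -> 2019 change in normalized score? Values are hypothetical.
changes = {  # city: (change per SCI, change per CIMI)
    "Berlin": (+0.002, +0.200),
    "Dubai":  (+0.150, -0.030),
}

# True when both sources report the same sign of change.
agreement = {city: (d_sci > 0) == (d_cimi > 0)
             for city, (d_sci, d_cimi) in changes.items()}

print(agreement)  # {'Berlin': True, 'Dubai': False}
```

Note that even when the signs agree (as for the hypothetical Berlin entry), the magnitudes can still differ by orders of magnitude, which is the quantitative mismatch discussed above.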

Statistical Analysis
This section gives a summary of some statistical features of the seven evaluations of smart cities considered in our work. The analysis excludes the (Smart Cities-What's in it for citizens?) evaluation because it does not report numerical scores. Table 8 presents the range, maximum, minimum, mean, and median of the normalized scores for all cities assessed in each evaluation (not just the six common ones). The range is the difference between the maximum value and the minimum value. The median is the value separating the upper half of the data from the lower half. In calculating the median, a pair of duplicate scores is counted as two different ranks. The mean and median values are close to each other in all studies, which indicates high symmetry of the data around the mean value. Moreover, these mean and median values are similar across all evaluations except for the (Smart Cities Prospects) evaluation, where they are noticeably higher. This is related to the high value of the minimum normalized score in that study, being 0.74.
This is not very far from the maximum normalized score in that study, which is 0.89. The evaluations Smart City Governments and Smart Cities Prospects have relatively narrow ranges of 0.27 and 0.15, respectively. One should keep in mind that these two studies have the smallest numbers of cities assessed, being 50 and 20, respectively. Between 2018 and 2019, the mean value of the normalized score increased in both the Smart Cities Index and the Cities in Motion Index evaluations. This does not necessarily mean an overall improvement, because the cities assessed in the two editions are not the same.
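The Table 8 style summary (maximum, minimum, range, mean, and median) can be sketched as follows, using a hypothetical pool of normalized scores:

```python
# Sketch: summary statistics for one evaluation's pool of normalized
# scores. The scores below are hypothetical.
import statistics

scores = [1.00, 0.93, 0.91, 0.88, 0.84, 0.79, 0.74]

summary = {
    "max": max(scores),
    "min": min(scores),
    "range": max(scores) - min(scores),          # max minus min
    "mean": statistics.mean(scores),
    "median": statistics.median(scores),          # middle of sorted data
}
print(summary)
```

With roughly symmetric data like this hypothetical pool, the mean and the median come out close to each other, mirroring the observation made for the real evaluations.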
In addition to the range, Table 9 presents two additional measures of the spread of the normalized scores: the distance between the median and the maximum, and the distance between the median and the minimum. The sum of these two distances is the range. In perfectly symmetric data, the two distances are equal, which is roughly the case here except for the Smart Cities Index (2018) evaluation, where the minimum normalized score is nearly twice as far from the median as the maximum normalized score. Table 10 presents the average increment in the normalized scores in each evaluation, calculated as the range of the normalized scores divided by the number of ranked cities minus one. With the exception of the (Smart Cities Prospects) evaluation, the average increments are at a similar level of about 0.004. This is very small and indicates that the difference in normalized scores between consecutively ranked cities is typically tiny. On the other hand, the difference when the cities are ordered in ranks is always unity (no matter how small the score difference is). This observation may help justify the existence of mismatch across evaluations and calls for attention when interpreting published rankings of smart cities. It is thus suggested to consult normalized scores and not rely on the ranking alone. For the (Smart Cities Prospects) evaluation, the average increment is about twice those of the other evaluations. Despite the small range of that particular study (which favors a smaller average increment), the number of ranked cities is also small (which favors a larger average increment). The influence of the few cities is stronger than the influence of the narrow range.
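As a sketch, assuming the average increment is the range of the normalized scores divided by the number of gaps between consecutively ranked cities (the number of cities minus one), using hypothetical scores:

```python
# Sketch of the average increment, under the assumption that it equals
# range / (number of ranked cities - 1). Scores are hypothetical.
scores = [1.00, 0.93, 0.91, 0.88, 0.84, 0.79, 0.74]

avg_increment = (max(scores) - min(scores)) / (len(scores) - 1)

# Equivalently: the mean gap between consecutive sorted scores,
# since the gaps telescope to the range.
desc = sorted(scores, reverse=True)
gaps = [a - b for a, b in zip(desc[:-1], desc[1:])]
assert abs(avg_increment - sum(gaps) / len(gaps)) < 1e-9
```

This makes the two competing effects explicit: a narrower range shrinks the increment, while a smaller pool of cities enlarges it.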

Conclusions
Published rankings of smart cities worldwide are valuable studies that not only assess the smartness level of various cities in the world but also give guidelines about the features of a smart city and help reshape its definition over time. For someone curious about the reliability of a ranking report, this work can be useful. It considered 7 evaluations of smart cities from 5 sources, spanning the years 2017-2019, and examined their consistency when assessing a common set of 6 cities. The work also provides fundamental statistical analysis of the scores given by these evaluations. There is some mismatch in rankings: for example, one evaluation may suggest that a city has improved and become smarter, while another suggests that the same city has become less smart. Cities can have tiny differences in their scores, and this can explain the lack of robustness of the ranking results. This work also compares the criteria used by the considered evaluations and groups them into six scopes.
Based on the present study, the following recommendations can be made. Smart cities should excel in six scopes (groups of attributes): Municipality Orientation, Human Capital, Transport, Outdoor Environment, Internet and Technology, and Infrastructure. When comparing the smartness of several cities, the comparison criteria should be clearly communicated along with the ranking results. The positional rank of a city relative to other cities is not a good metric for judging the smartness of that city, due to its heavy dependence on the pool of cities in the comparison. A normalized numerical score should be used instead to examine the level of smartness of a city; it is computed from the raw score by assigning a score of 1.0 to the best city in the comparison list. Looking at two ranking studies rather than only one, and taking the average normalized scores of cities, gives a more reliable view of how well different cities are performing relative to each other.
Possible directions for extending the present work include regular monitoring of the features of smart cities as considered in independent evaluation studies, with attention paid to those features that appear or disappear over time. Another extension is to conduct an expert survey in which personnel with immediate and ongoing involvement in smart city planning, development, or operation give their priority list of smart city attributes, as well as some smart city challenges. A third possible extension is to derive a numerical index, based on analysis of several ranking studies, that shows more consistency across these studies. It may, for example, be weighted by factors such as the specific criteria included in the ranking evaluation, the number of cities included, or the ranking score in the previous year (the trend, rather than the one-time status).

4G LTE: Fourth-generation long-term evolution
μm: Micrometer (one millionth of a meter)
CIMI: Cities in Motion Index
CO2: Carbon dioxide
DESI: Digital Economy and Society Index
DG: Distributed generation (small-scale electrical energy production near consumers, using wind and solar stations for example)
E-charge: Electric

A. Factors Defining a Smart City from Analyzed Ranking Studies
A list of the assessment factors or criteria used in each evaluation study is provided here. When the factors are grouped in categories by the evaluation itself, this is also indicated.
For the Smart Cities Index evaluation, and the Cities in Motion Index evaluation, where two editions are considered (2018 and 2019), the changes in the evaluation criteria are highlighted with underlined bold text between parentheses.

A.1. Assessment Factors for the Smart Cities Index Ranking Studies.

A.2. Assessment Factors for the (Smart City Governments) Ranking Study.
(i) Vision: a clear and well-defined strategy to develop a "smart city"
(ii) Leadership: dedicated city leadership that steers smart city projects
(iii) Budget: sufficient funding for smart city projects
(iv) Financial incentives: financial incentives to effectively encourage private sector participation (e.g., grants, rebates, subsidies, and competitions)
(v) Support programs: in-kind programs to encourage private actors to participate (e.g., incubators, events, and networks)
(vi) Talent readiness: programs to equip the city's talent with smart skills
(vii) People centricity: a sincere, people-first design of the future city
(viii) Innovation ecosystems: a comprehensive range of engaged stakeholders to sustain innovation
(ix) Smart policies: a conducive policy environment for smart city development (e.g., data governance, IP protection, and urban design)
(x) Track record: the government's experience in catalyzing successful smart city initiatives

A.3. Assessment Factors for the (Cities in Motion Index) Ranking Studies.

… the EGDI reflects how a country is using information technology to promote access and inclusion for its citizens
(11) Democracy ranking: a ranking in which the countries in the highest positions are those considered more democratic
(12) Employment in the public administration: percentage of the population employed in public administration and defense; education; health; community, social, and personal service activities; and other activities (new in 2019)

(v) Environment
(1) CO2 emissions: CO2 emissions from the burning of fossil fuels and the manufacture of cement, measured in kilotons (kt)
(2) CO2 emission index: CO2 emission index, on a scale from 0 (best) to 100 (worst)
(3) Methane emissions: methane emissions arising from human activities such as agriculture and the industrial production of methane, measured in kt of CO2 equivalent
(4) Access to the water supply: percentage of the population with reasonable access to an appropriate quantity of water resulting from an improvement in the supply
(5) PM2.5: the number of particles in the air whose diameter is less than 2.5 micrometers (μm), annual mean
(6) PM10: the amount of particles in the air whose diameter is less than 10 μm, annual mean
(7) Pollution index: a number on a scale from 0 (best) to 100 (worst). It accounts for the overall pollution in a city, with the largest weight given to air pollution, then water pollution/accessibility; other pollution types (like noise) contribute with a small weight
(8) Environmental Performance Index (EPI): measures environmental health and ecosystem vitality, on a scale from 1 (poor) to 100 (good)
(9) Renewable water resources: total renewable water sources per capita
(10) Future climate: percentage rise in temperature in the city during the summer, forecast for 2100, if pollution caused by carbon emissions continues to increase
(11) Solid waste: average amount of municipal solid waste (garbage) generated annually per person (kg/year)

(vi) Mobility and transportation
(1) Traffic index: consideration of the time spent in traffic, the dissatisfaction this generates, CO2 consumption, and other inefficiencies of the traffic system
(2) Inefficiency index: estimation of traffic inefficiencies (such as long journey times)
(3) Index of traffic for commuting to work: a time index that takes into account how many minutes it takes to commute to work
(4) Bike sharing: depends on the level of development of a bike sharing system (if one exists) with automated services for the public use of shared bicycles that provide transport from one location to another within a city
(5) Length of the metro system: length of the metro system per city
(6) Metro stations: number of metro stations per city
(7) Flights: number of arrival flights (air routes) in a city
(8) High-speed train: binary variable showing whether the city has a high-speed train or not
(9) Gas stations: number of gas stations per city (was in 2018; but removed in 2019)

(1) Average vehicle speed: peak-time congestion and time-benefit potential indicator
(2) Private vehicles per capita: congestion driver
(3) Cycle scheme roll-out: congestion reduction and health improvement driver
(4) Mobility-as-a-service: congestion reduction driver
(5) Congestion charge: air quality improvement and congestion reduction driver
(6) Road accident injuries per capita: public health reduction driver
(7) Air quality: public health reduction driver
(8) Electric vehicle charging stations: next-generation transport preparedness
(9) Public transport journeys per capita: network performance, availability, and uptake
(10) E/M-payment infrastructure transport: transport payment convenience and time-benefit indicator
(11) Autonomous vehicle testing: next-generation transport preparedness
(12) Smart transport initiatives: smart traffic light phasing, smart parking, open data for transport, strategy to reduce motor vehicle use, strategy to increase public transport use, citizen information dissemination solutions, interagency collaboration strategy, and road safety strategy

(iii) Health care
(1) Hospital beds per capita: bed availability and time-benefit indicator
(2) Hospital bed occupancy rate: bed availability and time-benefit indicator
(3) Congestion charge: air quality improvement and congestion reduction driver
(4) Cycle scheme roll-out: congestion reduction and health improvement driver
(5) Public transport journeys per capita: network performance, availability, and uptake
(6) Road accident injuries per capita: public health reduction driver
(7) Violent crime rate: public health and safety reduction driver
(8) Police force size: public health and safety improvement driver
(9) Higher education: public health and safety improvement driver
(10) City terrorist attacks since 2013, domestic and foreign initiated: public health and safety reduction driver
(11) Public Safety Index: general safety and health indicator
(12) Air quality: public health reduction driver
(13) Electric vehicle charging stations: public health improvement driver
(14) Autonomous vehicle testing: public health improvement driver
(15) Smart healthcare initiatives: telehealth/remote healthcare services, digital health portals, chatbot services, digital health care for the elderly strategy, transparent healthcare KPIs, active lifestyle strategy, and road safety strategy

(iv) Public safety
(1) Smart street lighting: public safety improvement indicator
(2) Intelligent video surveillance: public safety improvement and time-benefit indicator
(3) Congestion charge: public safety/road traffic safety improvement indicator
(4) Cycle scheme roll-out: public safety reduction indicator
(5) Emergency services response co-ordination: public safety improvement and time-benefit indicator
(6) Violent crime rate law enforcement: public health and safety reduction driver
(7) Police force size: public health and safety improvement driver
(8) Predictive crime software: public safety improvement and time-benefit indicator
(9) Fire/flood prediction software: public safety improvement and time-benefit indicator
(10) Higher education: public health and safety improvement driver
(11) City terrorist attacks since 2013, domestic and foreign initiated: public health and safety reduction driver
(12) Public Safety Index: general safety and health indicator
(13) Smart public safety initiatives: emergency services integration, road safety strategy, disaster plan, crime reduction strategy, and cybersecurity strategy

(v) Productivity
(1) Project funding sources: service expansion and productivity improvement indicator
(2) Public-private partnership incentives: service expansion and productivity improvement indicator
(3) Talent acquisition incentives: service expansion and productivity improvement indicator
(4) Ease of doing business: time-benefit potential
(5) Digital education policies: productivity improvement indicator
(6) City governance: regulatory complexity and time-benefit indicator
(7) City chief technology office/equivalent: service expansion and productivity improvement indicator
(8) Smart city conference hosting: engagement and productivity improvement indicator
(9) Smart city hackathons: engagement and productivity improvement indicator
(10) Smart productivity initiative: digital services access, smart education projects, cybersecurity and privacy strategy, equality strategy, and retail and city services cashless payments

A.5. Assessment Factors for the (Smart Cities Prospects) Ranking Study.

Data Availability
Previously reported ranking data were used to support this study and are available publicly online. These prior studies (and datasets) are cited at the relevant places within the text, as references [9][10][11][12][13][14][15].

Journal of Engineering