Methodology

What does the Conference Board’s report card measure?

The report card measures how well Canada and its provinces and territories are meeting the goal of creating a high and sustainable quality of life for all Canadians.

What is meant by “a high and sustainable quality of life for all Canadians”?

The Conference Board considers a high and sustainable quality of life for all Canadians as being achieved if Canada and the provinces and territories record high and sustainable performances in six performance categories:

  • Economy
  • Society
  • Innovation
  • Environment
  • Health
  • Education and Skills

The word “sustainable” is a critical qualifier. It is not enough for Canada to boost economic growth if it is done at the expense of the environment or social cohesion. For example, to take advantage of high commodity prices by mining and exporting all our natural resources may make the country rich in the short term, but this wealth will not be sustainable in the long or even medium term. The Conference Board has consistently argued that economic growth and sustainability of the physical environment need to be integrated into a single concept of sustainable national prosperity—what we call here a “high and sustainable quality of life for all Canadians.”

While there are many definitions and approaches to the notion of sustainability, the most widely used is from the United Nations’ Brundtland Commission: “Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs.”1

How does the Conference Board choose indicators?

How Canada Performs: A Report Card on Canada attempts to grade only “outcome” indicators—indicators that tell us what Canada is achieving, rather than what efforts it is making. In the Innovation category, however, we have used several input indicators as proxies for outputs.

How Canada Performs also focuses on indicators that can be influenced by public policy—that is, factors contributing to quality of life that can be modified by individual, organizational, or public efforts. Policy may influence outcome indicators directly (such as a law requiring everyone to vote) or indirectly—by influencing inputs, which in turn affect output (such as a policy that attempts to change smoking rates in order to reduce mortality rates). Some indicators emphasize a gap in performance (i.e., differences in levels among countries); others emphasize progress toward closing a gap (i.e., differences in growth rates among countries).

All indicators used to measure performance meet the following criteria:

  • The indicator provides valuable information on the performance or status of a particular category.
  • The indicator can be affected by policy and is relevant to policy.
  • The indicator data are reliable and have timely availability.
  • The data are sufficiently consistent to permit benchmarking over time and across countries.
  • There is general agreement that a movement in the indicator in one direction is better than movement in the other.

Where does the Conference Board obtain the data?

About 80 per cent of the data used for the international benchmarking report are supplied by the Organisation for Economic Co-operation and Development (OECD). The rest come from other reliable sources, such as the United Nations, the World Bank, and the Yale Center for Environmental Law and Policy.

Statistics Canada is the source of almost all of the provincial and territorial data. For the complete list of data sources, please see the “Data Sources” pages.

The most recent year of data is used for each indicator.

How does the Conference Board choose the 16 peer countries?

We begin with the countries deemed “high income” by the World Bank; this is the group of countries likely to have achieved a high and sustainable quality of life, and would therefore serve as a worthy peer group. We use three filters to determine which of these 38 countries would stay in our analysis:

  • Population: We eliminate countries with populations of less than 1 million. Nine countries drop from the list—Luxembourg and Iceland, for example.
  • Geographic land mass: We eliminate countries of less than 10,000 square kilometres to restrict our analysis to countries that are more than city-states. Singapore drops from the list.
  • Income (gross domestic product) per capita: We rank the remaining countries using a five-year average of real income per capita and include only countries that rank above the average. Thirteen countries drop off the list—Spain, New Zealand, and Israel, for example.
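The three filters above can be sketched in code. The sample rows below are illustrative values only, not actual World Bank figures, and the thresholds are taken directly from the filters described:

```python
# Hypothetical sketch of the three peer-selection filters.
# The country rows are illustrative values, not real World Bank data.

countries = [
    # (name, population, area in km2, five-year avg. real income per capita)
    ("Canada",     38_000_000, 9_984_670,  52_000),
    ("Luxembourg",    640_000,     2_586, 115_000),
    ("Singapore",   5_900_000,       728,  72_000),
    ("Spain",      47_000_000,   505_990,  30_000),
]

# Filter 1: population of at least 1 million.
stage1 = [c for c in countries if c[1] >= 1_000_000]

# Filter 2: land mass of at least 10,000 square kilometres.
stage2 = [c for c in stage1 if c[2] >= 10_000]

# Filter 3: above-average real income per capita within the remaining group.
avg_income = sum(c[3] for c in stage2) / len(stage2)
peers = [c[0] for c in stage2 if c[3] > avg_income]

print(peers)  # with these illustrative rows: ['Canada']
```

With these sample rows, Luxembourg drops on population, Singapore on land mass, and Spain on below-average income, mirroring the examples given above.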

Using these criteria, 16 countries, including Canada, form our comparator group:

  • Australia
  • Austria
  • Belgium
  • Canada
  • Denmark
  • Finland
  • France
  • Germany
  • Ireland
  • Japan
  • Netherlands
  • Norway
  • Sweden
  • Switzerland
  • United Kingdom
  • United States

The comparator group remains constant across performance categories. This ensures that the best-performing country (in each category and on each indicator) is included in all analyses and allows us to compare performance over time.

When timely data are not available for a country for a particular indicator, we do not include that country in that particular report card.

How does the Conference Board assign a grade to each indicator?

We use a report card–style ranking of A–B–C–D, to tie in with the title A Report Card on Canada. We assign a grade level to each indicator using the following method:

  • For each output indicator, we calculate the difference between the top and bottom performers and divide this figure by 4 to create four quartiles.
  • A country receives a report card rating of “A” on a given indicator if its score is in the top quartile, a “B” if its score is in the second quartile, a “C” if its score is in the third quartile, and a “D” if its score is in the bottom quartile.

For example, on the Innovation indicator “Scientific articles per million population,” the top performer (Switzerland) produced 1,176 scientific articles per million population in 2005 and the bottom performer (Japan) produced 424 articles. Using our method for ranking, the ranges for A–B–C–D are:

A: 988–1,176 scientific articles
B: 800–987 scientific articles
C: 612–799 scientific articles
D: 424–611 scientific articles

(Note: In this example, a high score indicates a high level of performance. For indicators where a low score signifies a high level of performance—such as scores on poverty in the Society category—the ranking levels are reversed.)
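The quartile grading described above can be sketched as a small helper function. This is a hypothetical illustration, not the Conference Board’s actual code; the figures come from the scientific-articles example:

```python
def quartile_grade(score, top, bottom, higher_is_better=True):
    """Assign an A-B-C-D grade by splitting the top-bottom range
    into four equal quartiles, as described in the methodology."""
    step = (top - bottom) / 4
    # Position of the score within the range, counted from the bottom (0 to 4).
    position = (score - bottom) / step
    if not higher_is_better:
        position = 4 - position  # reverse the scale when low scores are best
    if position >= 3:
        return "A"
    elif position >= 2:
        return "B"
    elif position >= 1:
        return "C"
    return "D"

# Scientific articles per million population, 2005:
print(quartile_grade(1176, top=1176, bottom=424))  # Switzerland: "A"
print(quartile_grade(424,  top=1176, bottom=424))  # Japan: "D"
```

With top = 1,176 and bottom = 424, each quartile spans 188 articles, reproducing the A–B–C–D ranges listed above.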

One indicator is not graded using the standard methodology—inflation—and two indicators—GDP growth and labour productivity growth—used a modified methodology during the 2008–09 recession.

Inflation

We award an “A” grade to inflation that falls within the Bank of Canada’s inflation-control target range, which is between 1 and 3 per cent. Inflation outside this target range (either above or below) is awarded a lower grade. The further away from the target range, the lower the grade. Countries with inflation between 0 and 1 per cent or between 3 and 4 per cent earn a “B” grade. We consider inflation between 0 and 1 per cent a “danger zone” because it may signal that a country is slipping into deflation. The one exception is when inflation between 0 and 1 per cent is due to currency appreciation or strong productivity growth—these countries are awarded an “A” grade. Inflation between 0 and −2 per cent (deflation) or between 4 and 6 per cent is given a “C” grade. The lowest grade, “D,” is given if inflation is above 6 per cent or if prices are falling by more than 2 per cent, an indication of more severe deflation.
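The inflation rules above can be expressed as a simple lookup. This is a hypothetical sketch; the exception for benign low inflation (currency appreciation or strong productivity growth) requires judgment, so it is represented here by a boolean flag:

```python
def inflation_grade(rate, benign_low_inflation=False):
    """Grade annual inflation (in per cent) against the Bank of Canada's
    1-3 per cent inflation-control target range, as described above.
    `benign_low_inflation` marks the exception where 0-1 per cent inflation
    reflects currency appreciation or strong productivity growth."""
    if 1 <= rate <= 3:
        return "A"                     # within the target range
    if 0 <= rate < 1:
        # The "danger zone": B, unless the low inflation is benign.
        return "A" if benign_low_inflation else "B"
    if 3 < rate <= 4:
        return "B"
    if -2 <= rate < 0 or 4 < rate <= 6:
        return "C"                     # mild deflation or elevated inflation
    return "D"                         # above 6 per cent, or deflation worse than -2

print(inflation_grade(2.0))        # "A"
print(inflation_grade(0.5))        # "B"
print(inflation_grade(0.5, benign_low_inflation=True))  # "A"
```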

GDP and Labour Productivity Growth

During the 2008–09 recession, most countries recorded negative GDP and labour productivity growth, and so we modified the grading formula for that time period to ensure that no “A” or “B” grades were given to a country with negative growth.

Using the following formula, “A” or “B” grades are awarded to countries with positive growth and “C” or “D” grades to countries with negative growth:

  • We calculate the difference between the top performer and zero, and divide this number by 2. A country receives a report card rating of “A” on growth if its score is in the top half of this number and a “B” if its score is in the bottom half.
  • We calculate the difference between zero and the bottom performer, and divide this number by 2. A country receives a report card rating of “C” on growth if its score is in the top half of this number and a “D” if its score is in the bottom half.
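The modified recession-era formula can be sketched as follows; this is a hypothetical helper, not the Conference Board’s actual code:

```python
def growth_grade(score, top, bottom):
    """Modified A-B-C-D grading for GDP and labour productivity growth
    during the 2008-09 recession: positive growth earns an A or B,
    negative growth a C or D. `top` is the best performer's growth rate
    (positive), `bottom` the worst performer's (negative)."""
    if score >= 0:
        # Split the 0-to-top range in half: A for the top half, B below.
        return "A" if score >= top / 2 else "B"
    # Split the bottom-to-0 range in half: C for the half closer to zero.
    return "C" if score >= bottom / 2 else "D"

# Illustrative rates: best performer grew 4 per cent, worst shrank 6 per cent.
print(growth_grade(3, top=4, bottom=-6))   # "A"
print(growth_grade(-5, top=4, bottom=-6))  # "D"
```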

How does the Conference Board assign a grade to each overall performance category?

We first convert the individual indicator data to a common unit by normalizing each data point using the following formula:

Normalized value = (indicator value – minimum value) / (maximum value – minimum value) × 100

Using this formula results in a data series where the best-performing country has a score of 100 and the worst-performing country has a score of zero.

A composite index for each country is then calculated by averaging all the normalized indicator values. No attempt is made to give explicit differential weights to indicators according to importance; we are implicitly giving equal weight to each indicator. This is the standard approach used by most organizations in the absence of any compelling reason to apply different weightings.
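The normalization and equal-weight averaging can be sketched as below. The two indicator series are hypothetical values for three countries, chosen only to show the mechanics; the sketch also assumes a higher-is-better indicator (a lower-is-better series would be inverted first):

```python
def normalize(values):
    """Rescale a series so the best score is 100 and the worst is 0,
    using the min-max formula in the methodology."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * 100 for v in values]

# Hypothetical two-indicator example for three countries.
indicator_1 = [10.0, 20.0, 30.0]
indicator_2 = [5.0, 9.0, 7.0]

norm_1 = normalize(indicator_1)  # [0.0, 50.0, 100.0]
norm_2 = normalize(indicator_2)  # [0.0, 100.0, 50.0]

# Equal-weight composite index: the plain average of the normalized values.
composite = [(a + b) / 2 for a, b in zip(norm_1, norm_2)]
print(composite)  # [0.0, 75.0, 75.0]
```

Because every indicator is rescaled to the same 0–100 range before averaging, no indicator dominates the composite simply because it is measured in larger units.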

We assign a grade level to the category performance using the following method:

  • We calculate the difference between the category composite indexes of the top and bottom performers and divide this figure by 4 to create four quartiles.
  • A country receives a report card rating of “A” for the category if its score is in the top quartile, “B” if its score is in the second quartile, “C” if its score is in the third quartile, and “D” if its score is in the bottom quartile.

How does the Conference Board benchmark the provinces’ performance?

For most indicators, the provinces are benchmarked to the 16 peer countries using the same A–B–C–D grading range as used in the international benchmarking. This allows us to gauge provincial performance in an international context and ensures that the grades awarded to the provinces fairly reflect performance.

For example, for the indicator assessing high-school attainment, the worst-performing province—Newfoundland and Labrador—gets a “B” grade because its high-school attainment rate of 82.4 per cent is higher than that of eight international peer countries. In the worst-performing country, Belgium, only 71.3 per cent of the adult population graduated from high school. If we were to award A–B–C–D grades to the provinces solely based on provincial scores, Newfoundland would get a “D” grade even though it performs relatively well in an international context. Similarly, on the indicator measuring the number of PhD graduates, all provinces perform poorly. Yet if we were to assign grades based solely on provincial scores, the top-performing province, Quebec, would automatically get an “A” grade, even though its performance is weak compared with international peers.

To benchmark the provinces in an international context, we calculate the A–B–C–D grades using the data for the 16 peer countries.

  • For each indicator, we calculate the difference between the top and bottom international performers and divide this figure by 4 to create four quartiles.
  • A province receives a report card rating of “A” on a given indicator if its score is in the top quartile, a “B” if its score is in the second quartile, a “C” if its score is in the third quartile, and a “D” if its score is in the bottom quartile.
  • If a province performs better than the top-performing peer country, it receives an “A+” grade.
  • If a province performs worse than the worst-performing peer country, it receives a “D–” grade.
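The provincial grading, with its A+ and D– extensions beyond the international range, can be sketched as follows. This is a hypothetical helper that assumes a higher-is-better indicator:

```python
def provincial_grade(score, peer_top, peer_bottom):
    """Grade a province against the international peer range, awarding
    "A+" above the best peer country and "D-" below the worst
    (higher-is-better case), as described in the methodology."""
    if score > peer_top:
        return "A+"
    if score < peer_bottom:
        return "D-"
    # Otherwise, use the same four quartiles as the international grading.
    step = (peer_top - peer_bottom) / 4
    position = (score - peer_bottom) / step
    if position >= 3:
        return "A"
    elif position >= 2:
        return "B"
    elif position >= 1:
        return "C"
    return "D"

# Illustrative peer range of 0 to 100:
print(provincial_grade(110, peer_top=100, peer_bottom=0))  # "A+"
print(provincial_grade(40,  peer_top=100, peer_bottom=0))  # "C"
```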

If comparable international data are not available for a given indicator, then the provincial data are used to compute the A–B–C–D quartiles. The inflation indicator is graded using a different method as described above.

The category grades are assigned in the same manner as is done for the international benchmarking. Peer country data are included in the calculations. For a given category, each data point for each indicator is normalized and a composite index for each province and country is then calculated by taking the average of all the normalized indicator values. A–B–C–D grade levels are then assigned to each of the provinces’ and countries’ composite index scores.

In addition to ranking the provinces against Canada’s international peers, the provinces have been compared with each other and placed into three categories: “above average,” “average,” and “below average.”

  • For each indicator, we first determine the average score and standard deviation of the provincial values. The standard deviation is a measure of how much variability there is in a set of numbers. If the numbers are normally distributed (i.e., the distribution is not heavily weighted to one side or another and/or does not have significant outliers), about 68 per cent will fall within one standard deviation above or below the average.
  • Any province scoring more than one standard deviation above the average is “above average.” Provinces scoring more than one standard deviation below the average are “below average.” The remaining provinces are “average” performers.
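The provincial comparison above can be sketched with the standard library’s statistics functions. The province scores below are made-up values used only to show the mechanics:

```python
import statistics

def classify_provinces(scores):
    """Place each province into "above average", "average", or
    "below average" using one standard deviation around the
    provincial mean, as described in the methodology."""
    values = list(scores.values())
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample standard deviation
    labels = {}
    for province, score in scores.items():
        if score > mean + sd:
            labels[province] = "above average"
        elif score < mean - sd:
            labels[province] = "below average"
        else:
            labels[province] = "average"
    return labels

# Hypothetical scores: mean 12.5, standard deviation 5.
example = classify_provinces({"Alta.": 20.0, "Ont.": 10.0,
                              "Que.": 10.0, "N.S.": 10.0})
print(example)  # Alta. is "above average"; the rest are "average"
```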

How does the Conference Board benchmark the territories’ performance?

To date, the territories are included only in the overall Health report card because data are not available for many of the indicators included in the other report card categories. The Conference Board is, however, committed to including the territories in our analysis, and so we produce separate territorial indicator report cards when data are available, as was done for the Economy category. Indicator and category grades are computed for the territories the same way as for the provinces.

Footnote

1    Gro Harlem Brundtland, Our Common Future: World Commission on Environment and Development (Oxford: Oxford University Press, 1987), 43.