Data processing

Goal: Process the collected data to:

  • bring the data into the right format (weight per year, in tonnes) and to the right scale (city level)
  • determine which data is still missing and needs to be collected
  • create individual dataset visualisations (on the Data Hub), to better understand the data
  • fill in Columns I and J in tab “1. Domestic extraction” and Columns J and K in tab “2. Imports&Exports” of sheet “A. UCA - Data collection”. This builds the basis for the analysis through indicators and the Sankey diagram.

Approach

Data processing covers steps 4b, 5 and 6 and continues the work of the preceding UCA MFA steps, which focus on data collection. The documents collected (and uploaded) in their various formats, such as Excel, PDF, shapefile or CSV, are converted into a standardised format for two reference years.
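
If you prefer to script part of this conversion, the Python sketch below illustrates the idea: read the collected files in their original formats, harmonise columns and units, and save one standardised table in tonnes per year for the two reference years. All file names, sheet names, column labels and years in the sketch are assumptions, not part of the template.

    # Minimal sketch: harmonise collected files into one standardised table.
    # File names, sheet names, column labels and reference years are assumptions.
    import pandas as pd

    REFERENCE_YEARS = [2016, 2018]  # hypothetical reference years

    # Read raw inputs in their original formats
    extraction = pd.read_csv("raw/domestic_extraction.csv")
    trade = pd.read_excel("raw/imports_exports.xlsx", sheet_name="data")

    # Harmonise column names and record the original unit of each source
    extraction = extraction.rename(columns={"amount_kg": "value"}).assign(unit="kg")
    trade = trade.rename(columns={"weight_t": "value"}).assign(unit="t")

    combined = pd.concat([extraction, trade], ignore_index=True)

    # Convert everything to tonnes per year
    to_tonnes = {"kg": 0.001, "t": 1.0}
    combined["value_t"] = combined["value"] * combined["unit"].map(to_tonnes)

    # Keep only the two reference years and save one standardised file
    combined = combined[combined["year"].isin(REFERENCE_YEARS)]
    combined.to_csv("processed/standardised_flows.csv", index=False)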

Three general substeps have to be carried out in the data processing step. Since they are the same for the SCA (Sector-wide Circularity Assessment) and the UCA, they are explained in the overarching section “How to process data”.

Where to work:

  • As you continue, follow the data processing steps as shown in the scheme.
  • Work in the Excel spreadsheet template “B. Data processing” if you need to downscale data, and process the collected data on the CityLoops Data Hub if you wish to visualise your data and possibly write your UCA report there.
    • We recommend working in two processing sheets, one for each reference year, so make two copies of the template. (These will then feed more easily into the two analysis sheets.)
  • Also work in your additional sheets, ideally in a copy or an extra Excel tab of the raw files, so that the original is preserved while you work with the data directly and avoid mistakes from copy-pasting into other files. (In general, we recommend linking data between different tabs, or between sheets, through the IMPORTRANGE function of Google Sheets; see the example below.)
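
For illustration, IMPORTRANGE takes the URL of the source spreadsheet and a range string; the URL and range below are placeholders to adapt to your own files:

    =IMPORTRANGE("https://docs.google.com/spreadsheets/d/<spreadsheet-id>", "Raw data!A1:F200")

Google Sheets will ask you to allow access the first time two files are linked this way.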

Step 4b: Downscale the data from Step 4a with the correct proxies
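
The template handles this step in the spreadsheet, but the underlying logic of proxy-based downscaling is simply to multiply the national (or regional) value by the city's share of a suitable proxy, such as population or employment. The Python sketch below uses hypothetical figures to show the calculation.

    # Sketch of proxy-based downscaling; all figures are hypothetical.
    def downscale(national_value_t, proxy_city, proxy_national):
        """Scale a national value to the city using the city's share of a proxy
        (e.g. population, employment or built-up area)."""
        return national_value_t * proxy_city / proxy_national

    # Example: downscale a national extraction figure by population share
    national_extraction_t = 1_200_000   # t/year, hypothetical
    city_population = 210_000           # hypothetical
    national_population = 5_800_000     # hypothetical

    city_extraction_t = downscale(national_extraction_t, city_population, national_population)
    print(f"Estimated city-level extraction: {city_extraction_t:,.0f} t/year")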

Step 5: Fill in your final values in tonnes for two years
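
If a source reports in another unit, convert the value to tonnes before entering it. The sketch below shows typical conversions; the factors for kilograms and kilotonnes are exact, while the cubic-metre example depends on an assumed bulk density that has to be chosen per material.

    # Convert reported values to tonnes before filling in the final columns.
    # The m3 example uses an assumed bulk density; adapt it per material.
    CONVERSIONS_TO_TONNES = {
        "kg": 0.001,    # 1 kg = 0.001 t
        "kt": 1000.0,   # 1 kt = 1,000 t
    }

    def to_tonnes(value, unit, density_t_per_m3=None):
        if unit == "m3":
            if density_t_per_m3 is None:
                raise ValueError("A bulk density (t/m3) is needed to convert volumes")
            return value * density_t_per_m3
        return value * CONVERSIONS_TO_TONNES[unit]

    print(to_tonnes(450_000, "kg"))                        # 450.0 t
    print(to_tonnes(12_500, "m3", density_t_per_m3=1.6))   # hypothetical density for aggregates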

Step 6: Note meta information of the data
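
One way to keep this meta information consistent is to record the same fields for every data point, for example the source, reference year, original unit, scaling method and assumptions. The structure below is a hypothetical illustration, not part of the template.

    # Hypothetical structure for noting meta information per data point.
    from dataclasses import dataclass

    @dataclass
    class DataMeta:
        source: str          # publisher or document name
        url: str             # where the file was retrieved
        reference_year: int  # year the value refers to
        original_unit: str   # unit as reported in the source
        scaling: str         # e.g. "none" or "downscaled by population share"
        assumptions: str     # conversion factors or estimates applied

    example = DataMeta(
        source="National statistics office, extraction statistics",
        url="https://example.org/dataset",  # placeholder
        reference_year=2018,
        original_unit="kt",
        scaling="downscaled by population share",
        assumptions="bulk density of 1.6 t/m3 assumed for aggregates",
    )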