
@phargogh
Created January 27, 2022 21:02
# Dyna-CLUE Pseudocode
This gist reflects what I understand to be the workflow for Dyna-CLUE,
in the interest of getting the core algorithm down on paper.
This is based on:
* Verburg & Overmars (2009): https://link.springer.com/article/10.1007%2Fs10980-009-9355-7
* The CLUE Manual: http://environmentalgeography.nl/files/data/public/cluemanual
* The provided source code for CLUE: http://www.environmentalgeography.nl/files/data/public/dyna_clue
* The IEEM Dyna-CLUE manual: https://publications.iadb.org/publications/english/document/The-Integrated-Economic-Environmental-Modeling-IEEM-Platform-IEEM-Platform-Technical-Guides-User-Guide-for-the-IEEM-enhanced-Land-Use-Land-Cover-Change-Model-Dyna-CLUE.pdf
## Algorithm
(See the IEEM Dyna-CLUE manual linked above, section 2.6, Allocation Procedure.)
* User-provided inputs are denoted by square brackets.
Set the [initial landcover configuration], which represents the simulation start year.
Set the [land use history] raster, which is either user-provided, random (seeded), or random (not seeded).
For each year between the [simulation start year] and [simulation end year]:
Determine the set of pixels S that may transition this year:
* Exclude all pixels that are within a protected area. These pixels cannot change land use classification.
* Exclude all pixels that cannot change this year due to a timed setting in the transition table.
This comparison is based on the transition table and the year of last LU change for the pixel.
For pixel i in S:
For each land use code u in the simulation:
Determine the total probability TPROPi,u for this pixel to transition to land use code u, where:
* TPROPi,u = Pi,u + ELASu + ITERu
* Pi,u = the suitability of location i for land use u (based on logit model)
Suitability is calculated by a regression function that is
parameterized by the user using a variety of spatial
[static and dynamic factor] rasters, along with their
coefficients.
* ELASu = EITHER:
* The user-defined [elasticity] for land use u, if pixel i is already under land use u
(the elasticity represents resistance to converting away from the current use)
* 0 otherwise
* ITERu = the iteration parameter for land use u. Starts at 0.
For each land use class that is driven by regional demand (e.g. ag, urban, etc.):
For each pixel i, sorted in descending order by TPROPi,u:
Assign pixel i the land use u for which TPROPi,u is highest.
Pixel transitions that are not allowed are not converted.
Track the area converted for each land use type.
For each land use classification:
If the land use class was underallocated relative to demand:
Increase ITERu by a [constant iteration value]
If the land use class was overallocated relative to demand:
Decrease ITERu by a [constant iteration value]
If the allocation equals demand:
No further changes of this LU class need be made.
Repeat until allocation for all land use classes equals demand (or until 20,000 iterations):
Recalculate TPROPi,u for each land use code u in the simulation.
Allocate pixels, sorted in descending order by TPROPi,u.
Recalculate under- or over-allocation and adjust ITERu.
The resulting landscape is now used for the next year's land use
configuration.
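The loop above can be sketched in numpy. This is a hypothetical, simplified single-year allocation: it ignores protected areas, the land use history, and the transition table; it assigns each pixel via argmax rather than an explicit descending-TPROP sort; and the function name, array layout, and iteration step size are all illustrative. Following the CLUE manual's formulation, the elasticity is added only where a pixel is already under the land use being considered.

```python
import numpy as np

def allocate_year(suitability, current_lu, elasticity, demand_pixels,
                  iter_step=0.01, max_iters=20000):
    """One year's demand-driven allocation (simplified sketch).

    suitability:   (n_pixels, n_classes) array of P_{i,u} values
    current_lu:    (n_pixels,) array of current land use codes
    elasticity:    (n_classes,) array of per-class elasticities
    demand_pixels: (n_classes,) array of demand, in pixel counts
    """
    n_pixels, n_classes = suitability.shape
    iter_param = np.zeros(n_classes)  # ITER_u, starts at 0 for every class

    new_lu = current_lu.copy()
    for _ in range(max_iters):
        # TPROP_{i,u} = P_{i,u} + ELAS_u + ITER_u; the elasticity term
        # is only added where pixel i is already under land use u.
        tprop = suitability + iter_param[np.newaxis, :]
        tprop[np.arange(n_pixels), current_lu] += elasticity[current_lu]

        # Simplified allocation: each pixel takes its highest-TPROP class.
        new_lu = np.argmax(tprop, axis=1)

        # Compare allocated pixel counts against demand and nudge ITER_u:
        # underallocated classes go up, overallocated classes go down.
        allocated = np.bincount(new_lu, minlength=n_classes)
        diff = demand_pixels - allocated
        if np.all(diff == 0):
            break  # converged: allocation matches demand for every class
        iter_param += iter_step * np.sign(diff)
    return new_lu
```

In practice the ITER step size matters a lot for runtime, and a real implementation would also need to veto disallowed transitions from the transition table before comparing the allocation against demand.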
## EXPECTED CHALLENGES
1. Iteration.
The core of this algorithm is iteration-until-convergence, which on large
landscapes will involve many full passes over the data, each of which takes time.
After an initial implementation with the algorithm as described, I wonder
if there might be an optimization in the pre-allocation sorting that would
allow us to eliminate some of the iteration.
2. Expansive memory usage (M pixels x N LU classes)
With a large landscape (e.g. Colombia) and a lot of land use classes, the
TPROPi,u calculations will end up with a very large data structure that
might exceed physical memory. Managing this may be tricky.
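To put a rough number on this, here is a back-of-the-envelope calculation; the resolution and class count are illustrative assumptions, not taken from any actual Dyna-CLUE dataset:

```python
# Colombia is roughly 1.14 million km^2; at a hypothetical 100 m
# resolution that is about 114 million pixels.
n_pixels = int(1.14e12 / (100 * 100))  # country area (m^2) / pixel area (m^2)
n_classes = 12                          # hypothetical number of LU classes
bytes_per_value = 4                     # float32

tprop_bytes = n_pixels * n_classes * bytes_per_value
print(f"TPROP array: {tprop_bytes / 1e9:.1f} GB")  # ~5.5 GB for one array
```

And that is one array, before any intermediate copies, so the per-class probabilities may need to be computed blockwise (or memory-mapped) rather than held in memory at once.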
3. Lots of disk accesses
Aside from the starting landcover, there's the year of most recent
modification, an arbitrary number of user-defined static and dynamic factor
rasters, and then the potential for a whole lot of disk reads and writes
when allocating and potentially deallocating and reallocating pixels as we
seek to converge on the demand.
* Some of this deallocation and reallocation may be avoided by
preprocessing the demand tables so that the demand is always
represented in an area that can, in fact, be achieved. So if we have
100 5ha pixels but a demand of 22ha for an LU class, then we should
reduce the demand to 20ha in order to avoid unneeded iteration for
something that cannot be achieved.
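That demand-snapping idea can be sketched as a tiny preprocessing helper; the function name and the dict-based table representation are hypothetical, not the real Dyna-CLUE file format:

```python
def snap_demand_to_pixels(demand_ha, pixel_area_ha):
    """Round each class's demand down to a whole number of pixels.

    demand_ha: dict mapping LU class -> demanded area in hectares.
    Returns a new dict where every area is a multiple of the pixel area,
    so the allocation loop never chases an area it cannot represent.
    """
    return {lu: (area // pixel_area_ha) * pixel_area_ha
            for lu, area in demand_ha.items()}
```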
4. Reference material is lacking.
There are two code implementations: Verburg's C++ implementation and BC3's
Java implementation. Verburg's is missing the main() function and some of
the core logic that actually runs the simulation. BC3's uses their own
abstractions for representing data and has to be run (as usual) within
their K.Lab environment; it also contains a number of TODOs, and it looks
like the implementation was never finished. Thus, these two can answer
some technical questions, but not all.
## Plan:
Because of the iteration required, I think there are four discrete parts to
this work:
1. Pure-python validation of user inputs. This is stuff like making sure that
all of the LU classifications have entries in the various tables, and that
the appropriate columns are all present. Onil said that the current
version will happily run if your inputs are slightly malformed, and so you
won't actually know that something is wrong for hours and hours.
2. A Cython function for the core allocation. I suspect this will be for a
single year's allocation, but it could end up being for multiple years. A
single year would allow for easier testing.
3. User Interface. They need to be able to use this fairly easily, and Onil
would be very happy with an InVEST-style user interface.
4. Distribution. A zipfile with the binaries may be sufficient, but I think
it would also be wise to ship a docker container.
1 and 2 are critical.
3 would be nice to have.
4 is not so important.
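For step 1, the cross-table check could look something like the following; the table names and the set-based representation are placeholders, since the real Dyna-CLUE file formats define their own layout:

```python
def validate_lu_tables(lu_classes, tables):
    """Check that every LU class appears in every table that needs it.

    lu_classes: iterable of the LU codes used in the simulation.
    tables: dict mapping a table name to the set of LU codes it defines.
    Returns a list of human-readable error strings (empty if valid), so
    malformed inputs fail fast instead of hours into a run.
    """
    errors = []
    for table_name, codes in tables.items():
        missing = set(lu_classes) - set(codes)
        if missing:
            errors.append(
                f"{table_name} is missing LU classes: {sorted(missing)}")
    return errors
```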