Spatially distributed environmental models generate large amounts of spatio-temporal data. To work with these data effectively, it is usually necessary to aggregate them and to visualize them in maps.

This article shows how JAMS can be used to aggregate spatio-temporal data and to save the result in a shapefile. To demonstrate the workflow, a map of yearly mean precipitation is generated for the Wilde Gera catchment with the model J2000. This model is shipped with every JAMS installation.

Preparing and running the model

Start by opening the model in the JAMS User Interface Configuration Editor (JUICE) and take a short look at the model structure. The model has a J2k model context, which contains the temporal context “TimeLoop”. A spatial context named “HRULoop” is nested in the TimeLoop. The precipitation values are stored in the attribute “precip” within the HRULoop context (see figure 1).


Figure 1: Structure of the J2000 model applied to the Wilde Gera catchment

Now ensure that the spatially distributed precipitation data calculated by J2000 is actually saved to a file. For that purpose, use the “Configure Model Output” button from the toolbar. The datastore editor dialog will show up, which allows you to configure the model outputs (see figure 2). On the left side of the dialog you see a list of all contexts within the model. By selecting a context in the list, its attributes are shown in the list on the right side of the dialog. To enable outputs for the spatial context, tick “HRULoop” on the left side. With the plus and minus buttons you can add the precip and area attributes to the list and delete all other attributes, as shown in figure 2.


Figure 2: Datastore editor

Now start the model. The execution will take a moment (23 seconds on my machine), because saving the large amount of data requires some time. The results are saved in a file named HRULoop.dat, which is located in the output directory.

Using JAMS Data Explorer (JADE)

After the model has finished its simulation, open the JAMS Data Explorer (JADE) from the toolbar. When JADE shows up, you should already see HRULoop.dat on the left side. Double-click on HRULoop.dat to open it. You should now see a data view as shown in figure 3. Region (1) shows all attributes within the selected file; in our case it contains only two attributes, namely area and precip. Region (2) shows for which time steps data is available. J2000 is a daily model and in my case the simulation was carried out from 1996-11-01 to 2000-10-31, so the list contains 1461 entries, one for each day. Region (3) lists the IDs of all spatial modelling entities; for the Wilde Gera catchment, 614 entities are used. In Region (4) several functions are available to process the data. Select all entities and a single time step, then press the “Time Step” button. This generates a table in Region (5). The first column of the table shows the IDs of all entities, the second column the precipitation value for each of these entities.

Temporal Aggregation

Usually, it is not sufficient to inspect a single time step. Instead you may want to aggregate the data. This can be done with the Time Filter feature: you can type SQL-style expressions with wildcards to select dates automatically. SQL knows different wildcards, listed in the table below:

Wildcard   Description
%          A substitute for zero or more characters
_          A substitute for a single character

These wildcards can be used to select the time steps to aggregate. The next table shows some examples:

Expression   Description
____-01%     Selects all time steps in January
1997%        Selects all time steps of 1997
199%         Selects all time steps from 1990 to 1999
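
These patterns behave like SQL LIKE expressions. The sketch below is not JADE's actual implementation, just a minimal Java illustration of how such a wildcard pattern can be translated into a regular expression and matched against date strings (the class and method names are made up for this example):

import java.util.List;
import java.util.regex.Pattern;

public class TimeFilterDemo {

    // Translate an SQL-style wildcard pattern into a regular expression:
    // '%' becomes ".*" (zero or more characters), '_' becomes "." (one character).
    static Pattern toRegex(String wildcard) {
        StringBuilder sb = new StringBuilder();
        for (char c : wildcard.toCharArray()) {
            if (c == '%')      sb.append(".*");
            else if (c == '_') sb.append('.');
            else               sb.append(Pattern.quote(String.valueOf(c)));
        }
        return Pattern.compile(sb.toString());
    }

    public static void main(String[] args) {
        Pattern january = toRegex("____-01%");
        for (String t : List.of("1996-11-01", "1997-01-15", "1998-01-01")) {
            System.out.println(t + " -> " + january.matcher(t).matches());
        }
        // prints: 1996-11-01 -> false, 1997-01-15 -> true, 1998-01-01 -> true
    }
}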

To get the yearly mean of precipitation for the year 1997, we type

1997%

into the Time Filter field and press the Temp. Aggr. button.
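
Conceptually, this temporal aggregation is nothing more than a mean over all time steps that pass the filter, computed separately for each spatial entity. The following Java sketch illustrates the idea under the assumption that the data is held as a map from date strings to one value per entity; it is an illustration, not JADE's internal code, and the filter 1997% corresponds to the regular expression 1997.*:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class TemporalAggregationDemo {

    // Mean per entity over all time steps whose date matches the filter pattern.
    // 'data' maps a date string to one value per spatial entity.
    static double[] temporalMean(Map<String, double[]> data, Pattern filter, int nEntities) {
        double[] sum = new double[nEntities];
        int matches = 0;
        for (Map.Entry<String, double[]> step : data.entrySet()) {
            if (filter.matcher(step.getKey()).matches()) {
                for (int i = 0; i < nEntities; i++) sum[i] += step.getValue()[i];
                matches++;
            }
        }
        for (int i = 0; i < nEntities; i++) sum[i] /= matches; // assumes at least one match
        return sum;
    }

    public static void main(String[] args) {
        Map<String, double[]> data = new LinkedHashMap<>();
        data.put("1997-01-15", new double[]{2.0, 4.0});
        data.put("1997-06-30", new double[]{6.0, 0.0});
        data.put("1998-01-01", new double[]{9.0, 9.0});

        double[] mean1997 = temporalMean(data, Pattern.compile("1997.*"), 2);
        System.out.println(mean1997[0] + " " + mean1997[1]); // prints: 4.0 2.0
    }
}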

Exporting data to a shapefile

Finally, the aggregated data can be exported to other software. You may simply drag and drop the data into e.g. Microsoft Excel. Alternatively, you can export the data to a text file or to a shapefile.

To save the data in a text file, just use the “Save Data” button. A dialog will show up, allowing you to select the location and the name of the output file.

To save the data in a shapefile you may use the button “Write to Shapefile”. To use this function you need a suitable shapefile datastore in your input directory. This shapefile datastore should contain the spatial modelling entities. If you have several shapefile datastores in your input directory you can select the right datastore in the combobox below the buttons.

When saving the data to a shapefile, a message will show up telling you that JAMS is trying to create that shapefile (see figure 4). If this was successful, you will see a message about the fields attached to the shapefile and finally a success message (see figure 5).


Figure 4: JAMS is trying to create a new shapefile


Figure 5: Shapefile export was successful

After that you can work with the newly created shapefile in a GIS application of your choice (see figure 6).


Figure 6: Opening aggregated data in ArcGIS

Cheers

Christian

In JAMS, parameter files are usually read with the StandardEntityReader component, which is part of the J2000 repository. This component interprets all data as Double or String values; it was not possible to read other data types with it. Since parameter files consist almost entirely of numeric data, this is usually no problem. A problem arises, however, if the data should be passed to a component which expects a different data type, such as an integer.

This happens, for instance, when sub-catchments should be linked to specific runoff measurements. Assume a parameter file of gauged sub-catchments:

#gauged subcatchments.par
ID gauging station    subBasinIDs             obsQcolumn
...
0  Arnstadt           [91,92,93,94,98,99,100] 1
1  Gehlberg           [98,99,100]             2
2  Graefinau Angstedt [64,65,66]              3
...

Each sub-catchment is characterized by an ID, the name of its gauging station, a list of sub-sub-catchment IDs and a column index. The column index links the sub-catchment to an observed runoff time series in an additional file. This parameter file can be read with a StandardEntityReader into an EntityCollection, which we will name “gauges”. A simple SpatialContext can be used to iterate over all gauges and to calculate, for each gauging station, the simulated streamflow and the corresponding efficiency measures. This can be done with the following sub-model:


Figure 1 – GaugeContext


The GaugeContext iterates over all sub-catchments. For each sub-catchment, the simulated runoff is computed with a SubBasinFlowAggregator. This component sums up the runoff from several sub-sub-catchments into a total runoff volume, which is stored in the attribute totQ.


Figure 2 – SubBasinFlowAggregator
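
Conceptually, this aggregation step is a simple sum over the runoff values of the listed sub-basins. The following Java sketch illustrates the idea with hypothetical names (aggregateFlow, runoffByBasin are made up for this example, not the component's actual code); note that the IDs arrive as doubles, since the EntityReader delivers everything as Double:

import java.util.Map;

public class FlowAggregationDemo {

    // Sum the runoff of all listed sub-sub-catchments into one value (totQ).
    // The IDs arrive as doubles because the EntityReader reads everything as Double.
    static double aggregateFlow(double[] subBasinIDs, Map<Integer, Double> runoffByBasin) {
        double totQ = 0;
        for (double id : subBasinIDs) {
            totQ += runoffByBasin.get((int) id);
        }
        return totQ;
    }

    public static void main(String[] args) {
        Map<Integer, Double> runoff = Map.of(98, 1.5, 99, 2.0, 100, 0.5);
        double[] gehlberg = {98, 99, 100};                   // subBasinIDs of gauge "Gehlberg"
        System.out.println(aggregateFlow(gehlberg, runoff)); // prints: 4.0
    }
}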

The obsRunoff component is an instance of StandardTSDataProvider. This component selects a specific index within a DoubleArray. We pass the column index from the parameter file to get the desired runoff data for the specific sub-catchment and store it in the obsQ attribute.


Figure 3 – StandardTSDataProvider
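
The following minimal Java sketch (with hypothetical names, not the component's actual code) shows what this index selection amounts to, and makes the type problem discussed below visible:

public class DataProviderDemo {

    // Pick one column out of an array of observed values; the real
    // StandardTSDataProvider expects an integer index here.
    static double selectColumn(double[] observedValues, int columnIndex) {
        return observedValues[columnIndex];
    }

    public static void main(String[] args) {
        double[] obsQ = {0.0, 3.2, 1.7, 0.9};  // observed runoff, one column per gauge
        double obsQcolumn = 1.0;               // read from the parameter file as a double
        // selectColumn(obsQ, obsQcolumn);     // would not compile: double is not int
        System.out.println(selectColumn(obsQ, (int) obsQcolumn)); // prints: 3.2
    }
}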

With totQ and obsQ, the efficiency of the model can then be calculated for each sub-catchment. This approach works fine except for one thing: StandardTSDataProvider expects an integer, while the parameter file provides a double value. There are several possible solutions to this problem:

  • The column index can be converted from double to integer. This requires a new component, which does not yet exist (a minimal sketch follows after this list).
    pro: This component may be useful for several other purposes.
    con: The model is enlarged by components that merely convert attributes from one type to another.
  • Writing an additional StandardTSDataProviderDoubleVersion component, which accepts double values for the column index.
    pro: It works.
    con: This creates a new component which is only slightly different from the existing one. It bloats the component repository and makes things harder to maintain.
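
The conversion component from the first option could look roughly like the sketch below. The plain fields stand in for JAMS attribute objects and all names are assumptions; a real component would extend the framework's component base class and declare its variables accordingly:

// Hypothetical DoubleToInteger component; names and plain fields are
// illustrative stand-ins for the framework's attribute objects.
public class DoubleToInteger {

    public double input;  // e.g. the obsQcolumn value delivered as a Double
    public int output;    // the integer index expected by StandardTSDataProvider

    public void run() {
        output = (int) Math.round(input); // round rather than truncate
    }
}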

The real problem is the EntityReader, which cannot read types other than doubles and strings. For this reason I added type support to the EntityReader. Since version 3.0_b12 you can add a

#TYPED

comment to your parameter file. This will tell the EntityReader to expect an additional line in the header of the parameter file, which defines the type of the data. Allowed types are
Double, DoubleArray, Float, FloatArray, Integer, IntegerArray, Long, LongArray, String, Calendar, TimeInterval, Boolean and BooleanArray.
The new parameter file will look like this. Now you can build up the context as shown in the first figure without any problems.

#gauged subcatchments.par
#TYPED
ID     gauging station    subBasinIDs             obsQcolumn
Double String             DoubleArray             Integer
...
0      Arnstadt           [91,92,93,94,98,99,100] 1
1      Gehlberg           [98,99,100]             2
2      Graefinau Angstedt [64,65,66]              3
...
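
To make the mechanism concrete, here is a sketch of how such a typed header could be interpreted when reading a value: each token is parsed according to the type named in the additional header line. This is an illustration with hypothetical names, not the actual StandardEntityReader code, and it covers only a few of the supported types:

import java.util.Arrays;

public class TypedValueParser {

    // Parse one raw token according to the type named in the header line.
    static Object parse(String type, String raw) {
        switch (type) {
            case "Integer":
                return Integer.parseInt(raw);
            case "Double":
                return Double.parseDouble(raw);
            case "DoubleArray": // e.g. "[98,99,100]"
                return Arrays.stream(raw.replaceAll("[\\[\\]]", "").split(","))
                             .mapToDouble(Double::parseDouble)
                             .toArray();
            default:
                return raw; // fall back to String
        }
    }

    public static void main(String[] args) {
        System.out.println(parse("Integer", "2"));               // 2 (as Integer)
        double[] ids = (double[]) parse("DoubleArray", "[98,99,100]");
        System.out.println(ids.length);                          // prints: 3
    }
}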


In recent weeks, the JAMS framework has been extended with some exciting new functions. The JAMS-Cloud feature was introduced to the JAMS Component Editor (JUICE) and the JAMS Data Explorer (JADE). These functions offer remote model simulation within the JAMS-Cloud. Additionally, very useful components are constantly being added to the JAMS component repository (e.g. a component for arbitrary aggregation of temporal data).

The JAMSBlog will inform developers, modelers, model users and anyone interested in the JAMS framework about new functions and components. Small tutorials will show how to use these functions and components. The blog will present best practices for solving frequent problems as well as interesting applications and discussions.

If you would like to see a specific topic covered in this blog, please let me know. If you want to contribute to this blog with your own JAMS experiences, that is highly appreciated.

Christian