We aim to answer all the following questions over 2019-2020. In some cases we know there are answers but haven’t yet documented them, or we need to do more research. In some cases we will simply give direction to easy solutions that already exist, while in others we will develop solutions. Questions we have not yet documented solutions for are greyed out.


The best place to begin with TLCMap is the self paced Gazetteer of Historical Australian Places Guide

It should take only an hour or a few hours for the average non-technical computer user, depending on how much you want to do, and how much experience you have. It really won’t take long to do some simple searches, and to then create an account and put a few dots on a map.

Take a look at some examples of maps people have created with TLCMap.

There are now countless digital humanities mapping projects to be found on the web. Some of the early maps that influenced the creation of TLCMap are:

Anterotesis provides a long list of geohumanities projects generally: http://anterotesis.com/wordpress/mapping-resources/dh-gis-projects/

You can use the search function to find places in many varied ways, such as by fuzzy matching, or limiting to a certain locality. See the GHAP Guide for how to do nuanced and faceted searches within and across both official gazetteers and humanities layers. Some examples can indicate how useful this can be:

  • The gazetteer and user contributions include many obscure Australian places you are unlikely to find anywhere else, such as the names of old homesteads or missions, which can be very useful for Australian history.
  • Fuzzy search can be used to catch variant spellings, especially useful for historical and Indigenous names.
  • Searching for anything in a particular area can lead to serendipitous discoveries of related information you didn’t previously know existed.

Layers and multilayers can be used in different ways for research. It’s a good idea to start with an understanding of why you want a map. This can determine whether:

i) You have already answered a research question and want to create a map to illustrate the results. Eg: You already know there were bushfires in Gippsland in a certain historical period, but to present a history of bushfires in Gippsland, you want a map of them, perhaps linked to information about each one. You don’t need to map the bushfires in order to find out they were in Gippsland, but a map may be an important way to illustrate your findings: “As you can see on the map and timeline, bushfires at this time impacted many homesteads in the region leading to…”. You just need to select the relevant bushfires and map them.

ii) You have a research question which can only be answered by mapping the information. Eg: You don’t know where bushfires most impacted people in history. To find out you need to map them and observe the intensity or frequency of their occurrence in different places, perhaps by mapping newspaper reporting of them. This answers your question, perhaps by showing you they were especially intense in Gippsland in a certain period, and in other places at other times. To answer a question you need a comprehensive set of information, not just a curated selection of it.

iii) Speculative play. You may just want to manipulate, combine and explore information to see what you find.

In practice research may be an ongoing iteration of all three. Or there may be subtle overlaps (Eg: you may need a selected map of the bushfires in Gippsland to answer the question of whether they would have affected certain homesteads at certain times).

Some of the ways creating maps can be used for research are:

  • To create an interactive user interface to information, for public engagement. It can be used as an illustration of your information, or subject area. Such as a map of where someone traveled, or where certain events happened.  It offers a way to present the wider evidence base for claims that are made in a publication.
  • To answer a research question. Maps can be created to see with your own eyes, or by mathematics, or a combination of both, spatial and temporal patterns in the information that you were unaware of before. They can be used to show the material real-world manifestations of social or mental phenomena (such as where prejudice leads to people living on one side of the tracks). This may demonstrate or counter a finding or assumption, or reveal unexpected findings. This applies not only to creating maps of one kind of information, but to combining maps of different information to see how they relate.
  • To create a research tool. The map can be created as a way of making data discoverable and navigable. It’s a way for other researchers to find and access information related to a place.

Digital productions, in the form of websites, software applications and datasets (TLCMap is all of these) are now recognised as publications or ‘non-traditional outputs’ and so in an academic context should be cited as such. If you create a map layer your work also becomes citable. If you have carefully created a reliable dataset or layer, you should take the extra step of depositing it in a research data repository and getting a DOI to ensure it is accessible, reliable and citable in the long term.

See the following sections on how to cite TLCMap and how to deposit TLCMap research data in a repository.

TLCMap does research and development for digital humanities mapping. TLCMap is a software infrastructure project, including a website with multiple applications and data for different purposes.

In a publication you may wish to reference:

  • the TLCMap infrastructure project and its website
  • the application used to work with information, such as the Gazetteer of Historical Australian Places
  • a specific layer or visualisation or other kind of data or content that you or another person has created.

Citing TLCMap example:

Time Layered Cultural Map, University of Newcastle et al., http://ec2-13-210-15-31.ap-southeast-2.compute.amazonaws.com/tlcmap (accessed 31/12/2023)

Citing a TLCMap application example:
How to cite TLCMap applications, such as the Gazetteer of Historical Australian Places.

Gazetteer of Historical Australian Places Time Layered Cultural Map, https://ghap.tlcmap.org (accessed 31/12/2023)

Citing TLCMap Content, such as map layers, data or visualisations:

Morgan, Fiannuala, ’19th Century Australian Bushfire Reporting’, in Gazetteer of Historical Australian Places, Time Layered Cultural Map, 31/01/2023, https://ghap.tlcmap.org/layers/170 (accessed 31/12/2023)

TLCMap does not aim to compete with existing mapping or GIS systems, and we don’t wish to replicate existing functionality too much. As most GIS systems are focused on science, commerce and government, we aim to provide functionality that either makes it easier to work with humanities and cultural information or addresses our specific needs with new functionality. This means that there are other systems you might find equally useful to get started with, or that already do advanced things that you can’t do in TLCMap. Our aim is to be interoperable, using open standards (KML, CSV, GeoJSON), so that it is easy to move information between other systems and TLCMap systems, depending on your project’s specific needs. Some commonly used systems are:

  • Google Earth (online or desktop) for easily adding points, lines and polygons on a map. While a great way to get started, and good for quick and simple maps, you are likely to run into limitations.
  • QGIS is a free desktop application for more advanced mapping, but it will require working through some online tutorials to learn how to use it. It also doesn’t create web maps.
  • Story Maps are a way to carefully curate text, images and maps with each other for a sophisticated and interactive web reading experience. They may be created by various means, but can be expensive and difficult to archive and reproduce, and may depend on subscriptions.
  • Your institution may provide access to advanced GIS and mapping systems through subscriptions.

There are many ways to answer this, so here are just a few points. Most GIS and mapping systems are designed for science, government or commerce. For example, it’s easy to find businesses using Google Maps, which is very useful, but you cannot as easily find all the culture that exists in those same streets, hills and creeks. Science produces highly structured, well organised and precisely measured information because without it, the science is invalid; but science is limited to phenomena that are repeatable and measurable in this way. Humanities may use scientific methods but are not limited to the phenomena that can be studied with them. We need ways to deal with vagueness, ways to study outliers, and ways to see relationships not only as correlations of repeated and repeatable observations in a ceteris paribus experiment, but as interactions between unique and highly complicated circumstances. Time is usually an important factor, which is why we always factor the temporal dimension into our development – but time is not only dates. We also need ways to map journeys with and without dates, with and without quantities, and cyclical time (still in development). We need to deal with maps that aren’t a proportionate picture of the world with latitude and longitude, but are stored in any media – pictures, stories, songs, dance, and so on.

The proposed ‘Time Layered Cultural Map (TLC Map)’ is intended to be infrastructure. An apt definition of infrastructure is “Infrastructure can be described as that which creates the conditions of possibility for certain kinds of activities.”1 TLC Map will provide and improve the conditions of possibility for digital mapping in the humanities.

The TLCMap approach is ‘No project without a platform and no platform without a project‘. This means we only work on software for a project if it makes re-usable functionality for other projects, and we do not produce software without a project to prove its real-world usefulness (ie: no solutions looking for problems). In this way development is always driven by researchers’ needs, and makes digital mapping possible for us as a community.

  1. Brown, Susan; Clement, Tanya; Mandell, Laura; Verhoeven, Deb; Wernimont, Jacque, ‘Creating Feminist Infrastructure in the Digital Humanities’, DH2016, 2016, http://dh2016.adho.org/static/data-copy/531.html

In short, yes, one way or another, but as with any system, make sure you export and back up your information, and deposit research data in an official repository.

An adequate longevity plan is difficult in the absence of institutional support or commitment to post-project maintenance for eResearch generally, including digital humanities, even though this would be a small cost to protect thousands or millions of dollars in investment. This absence appears to be the case across the tertiary research sector. There are legitimate concerns that new systems won’t last long or that effort put in and data might be lost. We address these concerns in a variety of ways, and recommend actions you can take to be confident your research and work survives any eventuality. Ideally more funding would be available through further grants, partnerships and institutional support but we know this might not happen, so we have the following strategies to ensure work remains whether ‘TLCMap’ continues to exist or not:

  • Independent Systems and Software ‘Ecosystem’: Wherever possible we are developing on platforms that have an existence independent of our project, including their own finances and user communities. These systems will carry on without us.
  • Data export in standard formats: It is a requirement of participating systems that data be exportable in standard, interoperable formats. Individual researchers who have exported and saved their work will be able to utilise it, perhaps in degraded forms, in other systems, or continue development with it in different directions, perhaps including enhancements.
  • Open Source: Using open source systems means they won’t suddenly disappear because someone forgot to renew a subscription, licence agreements changed, or the provider went out of business or discontinued service. At the very least, in the event of complete failure, an open source system can be run up elsewhere and backed-up data (you did export and back up your data, didn’t you?) can be restored.
  • RO-Crate: RO-Crate is adopted as a standard for archiving research data. This especially addresses a need where information may be useful in the long term but project funding and institutional interest cannot continue to fund maintenance and upgrades. All TLCMap systems should enable export in RO-Crate format.
  • Spread Risk: TLCMap is not about developing a new system that attempts to be all things to all people, reproduces functionality, and so competes against established systems. Rather than a single system that risks complete failure if it fails to be adopted, risk is spread across a diversity of development streams and established software platforms.
  • Research Data Deposit: Research data should be deposited in an official repository and registered with relevant bodies. This also helps others find it. If you want to archive your data, refer to How to digitally archive TLCMap layers/multilayers in this guide.


If you can create a table and fill it in, you can create well structured data.

One of the most important things when starting many Digital Humanities projects is maintaining consistent, well structured data.

One common difference between humanities and STEM is that humanities isn’t limited to repeatable phenomena. Scientific method depends on repeated observations and repeatable experiments. While humanities’ unlimited scope includes unique, or highly complex and historically contingent situations, it can still be usefully informed by ‘data’. This doesn’t necessarily mean reducing humanities, or trying to justify humanities by making it more scientific.

Where to Begin?

Think about what types of information about your ‘objects’ of study need to be recorded and presented, ideally before you begin. Don’t let worries about data structure stop you from starting though. Often the structure becomes clear as soon as you start gathering the data, so it’s good to make a spreadsheet, try it out on a few examples and adjust. While it’s best to avoid late changes to structure so you don’t have to go back to the library or the field, you can always add a column if you missed something important. If it is important, or if on the other hand you are trying to gather so much data it’s not practical, you’ll probably realise early on, so just get started.

How do you make information well structured? Often it’s not as complicated as it seems. The simple answer is, “Just put it in a table under column headings.”

This is not well structured data:

The Mona Lisa by Leonardo Da Vinci, between 1503 and 1506, maybe 1517. Most famous painting.
Last Supper, 1495 – unknown, Da Vinci. Often referenced in popular culture, this work was…
Michelangelo, c. 1511–1512, Sistine Chapel. Commissioned by…

The artists, painting titles and dates are in different orders, the dates are stored in different ways, and sometimes the name of a single individual is stored differently. The descriptions are just notes and you’ll want to edit them later (that’s ok, but save yourself some trouble by making it as finished as possible).

This is well structured data:

Painting        | Artist            | Start Date | End Date | Date Exactness | Description
The Mona Lisa   | Leonardo Da Vinci | 1503       | 1506     | c.             | The most famous painting in the world, etc.
The Last Supper | Leonardo Da Vinci | 1495       |          | c.             | Often referenced in popular culture, this work was…
Sistine Chapel  | Michelangelo      | 1511       | 1512     | c.             | This ceiling decoration was commissioned by, etc.

That’s not hard to understand. That’s the main point, but there are a few more things worth bearing in mind:

Numbers, Dates and Text… and Notes

Software usually handles different kinds of data differently. The main distinctions are numbers, text and dates.

Store numbers as numbers without adding any text to them. Eg: if there is a column for ‘Quantity Of Grindstones’, don’t put ‘About 32’. Put ‘32’. This means we can use those numbers to arrive at (estimated) totals and averages. In humanities we are often dealing with ‘data’ that isn’t measured strictly or consistently as in science. Text can’t be added and subtracted, so keeping the value as a number allows calculations to be made, and you can add any caveats and explanations later (eg: a column called ‘notes’ that says, ‘Values are conservative estimates only, based on Emerson’s diaries…’).
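As a sketch of why this matters, here is how a hypothetical ‘Quantity Of Grindstones’ column can be totalled and averaged when the values are stored as bare numbers (the sample data below is invented for illustration):

```python
import csv
import io

# Hypothetical sample data: quantities stored as plain numbers,
# with caveats kept in a separate 'Notes' column.
sample = """Site,Quantity Of Grindstones,Notes
Site A,32,Conservative estimate from diaries
Site B,17,
Site C,41,Count from 1891 survey
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Because the column holds bare numbers, totals and averages are trivial.
quantities = [int(r["Quantity Of Grindstones"]) for r in rows]
total = sum(quantities)
average = total / len(quantities)

print(total, average)  # 90 30.0
```

Had a cell contained ‘About 32’, the conversion to a number would fail, and no calculation would be possible without cleaning the data first.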

Text allows for anything at all, but sometimes you want to use it for consistent named categories.

Dates and times are tricky to handle, so keep to a consistent format and don’t add extra text to them. Eg: stick to dd/mm/yyyy HH:MM:SS or some other common format.

Be consistent

Always write the same thing or sort of thing in the same way. Eg: decide if you want to just write ‘da Vinci’ or ‘Leonardo da Vinci’ and always write it that way. If you record a date in the format 29/04/2020 don’t change to 29 April 2020.

What To Gather?

You may want to break this up differently, specifying whether the first or second date is uncertain, or using only the finishing date if that is all that is relevant, and adding whatever other columns are pertinent. What information you put in depends on:

  • the needs of your project
  • the time, money and effort you can put into collecting it
  • the input requirements of the system you want to add it to

More Is Better

If you can gather more details, do. It’s easier to take out subsets of information later than to revisit every data item.

Don’t use MS Word

Avoid MS Word for recording data. Use it for writing letters and essays. Although you can make tables in MS Word, and they are better than just notes, they will ultimately need to be copied to some other format that a computer program can more easily handle. The most commonly used tool, and one much easier for a computer to handle, is Excel. If you make columns in Excel you are off to a good start and will save everyone, including yourself, a lot of time and headaches later. This is because Excel files can be saved as .CSV files, which are easy for computers and programmers to work with. (Note you can still make a mess of an Excel or .CSV file; just keep all the data broken up in columns with only one type of information in each column.)
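To show how little is involved on the programming side, this sketch writes a few rows to CSV with Python’s standard csv module, which takes care of quoting automatically (the rows reuse the painting example above):

```python
import csv
import io

# csv.writer handles commas and quoting inside values for us.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Painting", "Artist", "Start Date"])
writer.writerow(["The Mona Lisa", "Leonardo Da Vinci", 1503])
writer.writerow(["The Last Supper", "Leonardo Da Vinci", 1495])

csv_text = buf.getvalue()
print(csv_text.splitlines()[0])  # Painting,Artist,Start Date
```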

Structure As You Go

It’s easiest to gather your information in the right structure as you go, rather than transcribe it later.

Just Ask

If possible, ask someone what fields (or column headings) are required, or whether your data structure is good. If you intend your data to go into a particular system, check what requirements it has. Eg: If you want to put your data into Google Maps, even if you’re not sure about the technical standards of KML and other acronyms, you can see that you should at least have a ‘longitude’, ‘latitude’, ‘name’ and ‘description’ for every point you want to plot. If you at least have that in a spreadsheet, it can be converted to the right format.
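A check like this can even be automated. As a sketch (the column names here are illustrative, not a specification of any particular system), a few lines of Python can confirm that a spreadsheet saved as CSV has the minimum columns for plotting points:

```python
import csv
import io

# Illustrative minimum set of columns for plotting points; check the
# target system's own documentation for its actual required fields.
REQUIRED = {"latitude", "longitude", "name", "description"}

sample = """name,latitude,longitude,description
Ballarat,-37.5601,143.8549,Site of the first major gold rush
"""

reader = csv.DictReader(io.StringIO(sample))
missing = REQUIRED - set(reader.fieldnames)

print(sorted(missing))  # [] means nothing is missing
```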

One Type Of Information, One Column

If types of information can be distinguished, split them into more columns. Eg:

Grace Cossington Smith | The Bridge in Curve (1930)
Katsushika Hokusai     | The Great Wave off Kanagawa (1833)

becomes:

Artist                 | Painting Title              | Painting Date
Grace Cossington Smith | The Bridge in Curve         | 1930
Katsushika Hokusai     | The Great Wave off Kanagawa | 1833

This structure or that?

There can be a bit of an art to designing structure. Depending on the nature of your research and the data you can find, you might organise it one way or another. Eg: Let’s say it’s about artists and places they are associated with. You could do it like this:

Artist       | Birth Place        | Place of Death
Sidney Nolan | Carlton, Melbourne | London

or like this

Artist       | Place              | Place Relation
Sidney Nolan | Carlton, Melbourne | birthplace
Sidney Nolan | London             | place of death
Sidney Nolan | Heide              | lived
Sidney Nolan | Birdsville         | photographed

The first is more suitable if you are only specifically interested in places of birth and death, but would result in too many columns, many with empty data, if you wanted a column for every type of possible place. The second allows for any kind of place associated with the artist, but if possible, the ‘Place Relation’ should still use consistent categories.

Complex Structures

Information structure can sometimes get a bit complex. Let’s say you want to have some extra information about the artists, such as when each was born and died, whether they were sculptors and/or painters, what cities they worked in, who their patrons were, etc. You don’t want to add all that information to every row in your table of paintings. You need a separate table that just stores the artist information once for each artist. You can then relate this back to the paintings by the artist’s name. This is the structure of a ‘relational database’. You can still gather this data in Excel for convenience, but make sure you are consistent in using the artists’ names, so that it will match up across tables. Keeping these tables makes it possible to convert the information into a proper database, which can then be used to mix and match, filter and display the data in all manner of ways, including for the web.
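The relational idea can be sketched in a few lines: two small tables, related by the artist’s name, joined on demand (the structures are illustrative and the dates approximate):

```python
# One table of artists, stored once each, keyed by name.
artists = {
    "Leonardo Da Vinci": {"born": 1452, "died": 1519},
    "Michelangelo": {"born": 1475, "died": 1564},
}

# One table of paintings, each carrying only the artist's name.
paintings = [
    {"painting": "The Mona Lisa", "artist": "Leonardo Da Vinci"},
    {"painting": "Sistine Chapel", "artist": "Michelangelo"},
]

# 'Join' the tables by looking up each painting's artist details by name.
joined = [{**p, **artists[p["artist"]]} for p in paintings]

print(joined[0]["born"])  # 1452
```

This only works if the names match exactly across the tables, which is why consistency matters so much.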

A Paradox of Structure and Flexibility

Why structure information this way? It seems rigid and inflexible, but well structured data is what enables computers to be flexible. A computer doesn’t care if there are a few or a million records; if they are structured in the same way it will process them quickly. It can filter and mix and match the information, change formats, run calculations, and pass the data to visualisations. Without a consistent format the computer can only display it the way you put it in – it can’t do anything with the data. You lose the ability to manipulate it, and so badly structured data, while flexible in your terms, is inflexible for a computer.

In the badly formatted art information above, the computer has no way of knowing which text it should treat as an artist, which as a painting name and so on. If it is in columns, the computer can treat everything in the first column as an artwork, everything in the second as an artist, and so on. ‘Structured data’ is part of working with computers as a medium – you don’t normally work clay with a paint brush, and you don’t normally spin paint on a potter’s wheel. To work with computers, use structured data.

So while lots of different systems require different formats, the most important thing is to be consistent and structured. Even if you don’t know what specialised formats it might have to be in later, if it is well structured it’s much easier to write a small program to convert it all into the right format for any system.
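As a sketch of such a small conversion program, here is well structured CSV turned into GeoJSON, one of the open standards mentioned earlier (the column names are assumptions for the example):

```python
import csv
import io
import json

sample = """name,latitude,longitude,description
Ballarat,-37.5601,143.8549,First major gold rush
Bendigo,-36.7568,144.278,Major early gold mining centre
"""

# Each CSV row becomes one GeoJSON point feature.
features = []
for row in csv.DictReader(io.StringIO(sample)):
    features.append({
        "type": "Feature",
        "geometry": {
            "type": "Point",
            # GeoJSON coordinates are [longitude, latitude], in that order.
            "coordinates": [float(row["longitude"]), float(row["latitude"])],
        },
        "properties": {"name": row["name"], "description": row["description"]},
    })

geojson = {"type": "FeatureCollection", "features": features}
print(json.dumps(geojson, indent=2)[:40])
```

Converting the same well structured data to KML or another format is a similarly small job; badly structured notes would have to be untangled by hand first.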

If it’s too late and you only have badly structured information, it is still worth converting. Even if it takes hours or days to turn your notes into well structured data, it’s a small effort for the benefits of being able to query it to identify relationships, extract subsets for other purposes, generate lists for publications, run it through statistics applications to generate graphs, plot it on a map, make an online gallery, turn it into social network diagrams, display it on the web, and whatever other relevant thing a computer can do.

Link a single place to an external site with a customised web page about that place.

We can consider two ways of structuring information that is to be mapped. Each relates to how the information appears and is interacted with on the map. These two different approaches touch on some conceptual issues in Information Technology, but can be explained in plain language, and those who enjoy critical theory in humanities will note how it relates to themes such as decentering, non-linear mixed media reader constructed narrative, and ‘rhizomes’. The following includes both the simple explanation and example, and a slightly more detailed account for those interested.

Single Point

One way attempts to make all the information (text, images, links, videos, etc) accessible through a single point that is clicked, which then shows this information set out like a prepared document or curated collection. This is often how people first imagine it working – you click the point, and in the pop-up window might be a description and some related images, and perhaps a video and a list of relevant links, etc. So for example, let’s say you have a series of places relevant to a person’s life; you want to map all those places, and attach all the photos, letters and other materials to them.

You would want to set this out in a coherent narrative related to that place. This makes sense in some cases, but quickly runs into problems and has some drawbacks. Perhaps there are a hundred or more images, or a 3000 word account is required for one site, not just a paragraph? These are too large to fit into the small pop-up or side bar that appears when you click the dot. It would not be user friendly.

At present TLCMap, focused on mapping, doesn’t allow uploading of images or other media into a collection, which would be required for embedding in a little pop-up window. Collections such as these belong in collections systems. Nonetheless, such a collection system, if it allowed geocoding of images, could provide a map which, when clicked, provides a link to any of those geocoded images – which is precisely how we recommend TLCMap be used: to provide the map, linking to the page (as in this approach) or to the specific item (as in the approach below).

There are many scenarios where you do wish to set information out in a neat narrative, perhaps with embedded images, or where you wish to provide access to hundreds of photos of a place through a single dot on the map. There are some ways to achieve a curated, presentable multimedia story with maps, such as Story Maps, that you may use instead of TLCMap. The way to achieve this with TLCMap, though, is to set up your project website separately, and integrate with TLCMap for the mapping component. Eg: if you have a collection of images, set up a web collection management system. If you want a nicely formatted web page with text and images about a place, set that up on your website. Then create a TLCMap layer that simply links the dot for that place to the collection or to the web page. The TLCMap layer can also be embedded in your website, so it is seamless within your site.

Eg: You have a website on Goldfields. You have webpages for major sites such as Bendigo and Kalgoorlie. Each of these gives a history of the town with pictures and a brief documentary video. This would be too much to cram into a pop-up window on the map. Simply create a record for each town and use ‘linkback’ to link to the relevant webpage. For example, an Excel file, to be saved as a CSV for import, might look like this:

Date | Latitude | Longitude | Title | Placename | Description | Linkback
1851 | -37.5601 | 143.8549 | Gold Found in Ballarat | Ballarat | Ballarat wasn’t the first place in Australia where gold was found, but it began the first major gold rush that created a population boom and changed Australian society. | https://en.wikipedia.org/wiki/Ballarat
1851 | -36.7568 | 144.278 | Gold Found in Bendigo | Bendigo | Four people claimed to have first found gold in Bendigo. Bendigo became one of the major centres for early gold mining. | https://en.wikipedia.org/wiki/Bendigo
1857 | -23.0533 | 150.1853 | Gold Found in Canoona | Canoona | The first payable gold found in Queensland was at Canoona, however after the gold rush little gold was found, leaving many destitute. | https://en.wikipedia.org/wiki/Canoona
1893 | -30.7487 | 121.4654 | Gold Found in Kalgoorlie | Kalgoorlie | The gold find at Kalgoorlie led to another major gold rush. | https://en.wikipedia.org/wiki/Kalgoorlie

This method works best if you want to carefully control the presentation and order of information related to a place – simply link to a web page on your site that does that.

Directory or ‘Inverted Hierarchy’

Associate each and every resource (photos, text, events, etc) with a place.

If we have a large collection, such as hundreds of photographs that we want to geolocate and then map, or a collection of information of many different media or types that we want to map (scanned letters, photographs, transcripts, videos, etc), then rather than putting a dot on the map for each place and providing a single page that puts all that information together, we can turn this upside down and attach a place to every one of these resources. You would list every photo, every letter, every video, along with its coordinates.

This treats the information more like ‘data’ and, although some control is lost over the order of the presentation, it is much better for making information discoverable, queryable and presentable in different ways.

In information design this might be thought of as an ‘inverted hierarchy’ because instead of starting at the top, and thinking about all the things that come under that heading and its subheadings, we start at the bottom, with all the things, and attach information attributes to them. This can also be thought of as a ‘directory’ as opposed to a taxonomy.

A good way to explain how this structure can solve problems is the platypus. People in Europe thought the platypus must be a hoax because it didn’t fit neatly into any biological category – it has fur and suckles its young like a mammal, it has a bill like a bird, it lays eggs like a bird or reptile, it has a pouch like a marsupial, it’s venomous like a reptile or insect, and it is amphibious. So what phylum or class should it be put in? All of the above? A special category had to be created for it, along with the echidna – ‘monotreme’. Platypuses and echidnas are an exceptional case in biology, but in the world there are often scenarios with many things like platypuses, and if a category needs to be created for every exception, the whole point of categories, where things must be in mutually exclusive branches, breaks down. To deal with this we look at each thing and ask what ‘attributes’ we use to describe it. These are analogous to the columns in a spreadsheet. Eg: for the platypus the attribute ‘lays eggs’ is ‘yes’. The attribute ‘habitat’, with a choice of land/water/amphibious, will be ‘amphibious’, like the frog. The attribute ‘outer layer’ might be ‘fur’, etc. In this way, rather than looking in the category ‘mammal’ containing all things that suckle their young, and failing to find the platypus because it’s in another box called ‘monotreme’, we can ask queries like ‘Show me all things that suckle their young’ and get dogs, cows, whales, people and platypuses, while at another time we can ask ‘Show me all the things that lay eggs’ and get a list including parrot, albatross, emu, platypus, etc.
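The platypus example can be sketched as attribute-based records queried directly, rather than a fixed taxonomy (the animal records below are simplified for illustration):

```python
# Attribute-based records instead of mutually exclusive category boxes.
animals = [
    {"name": "dog", "suckles_young": True, "lays_eggs": False},
    {"name": "platypus", "suckles_young": True, "lays_eggs": True},
    {"name": "emu", "suckles_young": False, "lays_eggs": True},
]

# 'Show me all things that suckle their young.'
sucklers = [a["name"] for a in animals if a["suckles_young"]]
# 'Show me all the things that lay eggs.'
egg_layers = [a["name"] for a in animals if a["lays_eggs"]]

print(sucklers)    # ['dog', 'platypus']
print(egg_layers)  # ['platypus', 'emu']
```

The platypus appears in both answers, which no tree of mutually exclusive categories could manage.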

So what does that have to do with mapping in the humanities? When designing TLCMap we had to find a way to handle a common requirement for mapping across many disciplines and diverse, idiosyncratic projects. One thing these all have in common, if they are to be digitally mapped, is coordinates (and ideally dates, if they are to be mapped in time). The things to be mapped are of different kinds, with some similarities and some differences – the attributes are inconsistent, but the attributes of having coordinates, and possibly dates, are something in common we can design a system around (though yes, we are also working on systems for ‘maps’ that don’t use geographic coordinates). TLCMap has been designed to focus on the ‘directory’ or ‘inverted hierarchy’ way of doing things.

What it means for your project is that you can look at all the things that you want mapped and attach the attributes latitude, longitude and, optionally, dates and placenames to them. This also means you can design subsets of things to be mapped much more flexibly. Instead of having one map of the places, with everything associated with each place trying to fit under it, you can get subsets from the data to, for example, show a map of all the goldfields which failed, all those in a certain state, or a map of just the photographs, or just the goldminers’ letters, or all the materials related to goldfields where there were organised uprisings and activism (assuming you have recorded all that information in your data). It also enables much more nuance in viewing a location over time. For example, when using the TLCMap timeline, you could see the different paintings, photos and letters in a single place over time, rather than have a single place, with a single start and end date, containing all that information.

As a practical example, your data might look like this instead of the above:

datestart | dateend | latitude | longitude | title | placename | description | type | linkback
1851 | 1851 | -37.5601 | 143.8549 | Gold Found in Ballarat | Ballarat | Ballarat wasn’t the first place in Australia where gold was found, but it began the first major gold rush that created a population boom and changed Australian society. | Event | https://en.wikipedia.org/wiki/Ballarat
1853 | 1854 | -37.5601 | 143.8549 | Painting of the diggings | Ballarat | Painting by Eugene von Guerard of Ballarat’s tent city in the summer of 1853–54. | Media | https://en.wikipedia.org/wiki/Ballarat#/media/File:Ballarat_1853-54_von_guerard.jpg
1854 | 1854 | -37.5601 | 143.8549 | Eureka Stockade Painting | Ballarat | Painting of the Eureka Stockade by John Black Henderson (1827–1918). | Media | https://en.wikipedia.org/wiki/Ballarat#/media/File:Eureka_stockade_battle.jpg
1851 | 1851 | -36.7568 | 144.278 | Gold Found in Bendigo | Bendigo | A brief historical account of the town of Bendigo – four people claimed to have first found gold in Bendigo. Bendigo became one of the major centres for early gold mining. | Text | https://en.wikipedia.org/wiki/Bendigo
1858 | 1858 | -36.7568 | 144.278 | Gold Found in Bendigo | Bendigo | Unknown artist – McPherson’s Store, Bendigo (c.1858). Watercolour. | Media | https://en.wikipedia.org/wiki/Bendigo#/media/File:Charing_Cross_Bendigo_1853.jpg
1853-06-06 | 1853-06-06 | -36.7568 | 144.278 | Bendigo Petition | Bendigo | Signed by over 23,000 miners, the Bendigo petition was an attempt to get representation and reasonable taxation from the British Government. | Event | https://ballaratheritage.com.au/article/the-1853-bendigo-goldfields-petition/

This is suitable for mapping many things of a single kind, or a variety of different types of resources, possibly in different ways. If you wanted, you could break the spreadsheet down in different ways, for example to create a layer of just paintings, a layer of just events, and a layer of letters, and then combine them into a multilayer, all of which would enable these to be viewed over time, presuming they have a date. This can create a lot of dots all in the same place – such as a great many all in Bendigo – but this can be handled with the ‘cluster’ style viewing option, which shows a single large dot indicating there are, perhaps, 43 photos located in Bendigo, and can then be expanded to show each individual thing located there.
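
Breaking a spreadsheet into per-type layers can be done by hand in a spreadsheet program, or scripted. A minimal sketch, using made-up rows and assuming hypothetical column names like those in the example table (adjust to match your own headings):

```python
import csv
import io
from collections import defaultdict

# A hypothetical mini-layer in CSV form.
data = """title,latitude,longitude,type
Gold Found in Ballarat,-37.5601,143.8549,Event
Painting of the diggings,-37.5601,143.8549,Media
Gold Found in Bendigo,-36.7568,144.278,Text
Bendigo Petition,-36.7568,144.278,Event
"""

# Group rows by their 'type' column -- each group could be saved as its
# own CSV and uploaded as a separate layer in a multilayer.
layers = defaultdict(list)
for row in csv.DictReader(io.StringIO(data)):
    layers[row["type"]].append(row)
```

Each list in `layers` could then be written out with `csv.DictWriter` as its own file.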

Which of these approaches you choose is up to you and the needs of the project. The former gives you more control over the narrative and the order things are presented on your site, but the latter gives more data flexibility and more discoverability by others doing a TLCMap search, and each thing can still be linked to your site. Depending on the context it could be an advantage or a disadvantage to dismantle the hierarchy and let people explore the items in their own non-linear way, making their own connections. TLCMap is designed more for the ‘directory’ structure, though, which tends to be the most useful and recommended approach.

TLCMap is not a digital repository or permanent archive for data, but we do make it easy to back up and archive your data, and we work to ensure that all the work you put in, you can get out again. We expect we’ll be around for a while, but we cannot guarantee that your data will be retained for 10 or 50 years. As with any IT system, you should keep your own copies and backups. To ensure your valuable research remains for posterity, you should deposit it in a research data repository.

Most research data archives are science focused. At time of writing (2023) the best options for humanities digital mapping are:

When considering a research data archive, some important questions are: 

  • Can you archive something like this?  
  • Will you provide DOIs? (TLCMap doesn’t mint DOIs). 
  • How long will my data be archived? 
  • What format do you require my data in? What metadata do you need? (You can download your data from TLCMap in open standards, KML, GeoJSON and CSV, and metadata in the ROCrate format.)  
  • Who is able to access it?  
  • What access permissions can I put on it?
  • Are there restrictions/permissions in place? 
  • How can I access my data in the future? 

The process for digitally archiving TLCMap layers is: 

  1. Package up the data. 
  2. Archive the data. 
  3. Attend to any licensing agreements.
  4. Add DOI to TLCMap / GHAP metadata.

Packaging your data

Wherever you deposit your data you will want to have it in a format that:

  • is an open standard
  • is widely adopted
  • abides by FAIR principles
  • is human and computer readable
  • is error free
  • has metadata describing it
  • has proved itself by being around for a long time
  • is likely to remain accessible and re-usable for a long time to come (for all the above reasons)

By creating map layers in TLCMap GHAP you have already ensured all of these; you only need to download your data.

The short answer to preparing your data for archiving is to download the RO-Crate of a layer or multilayer, as it contains everything, including metadata.

To package up your map data for archiving: 

  1. Log-in to GHAP.
  2. Click on My layers or My Multilayers on the horizontal navigation bar at the top of the screen. 
  3. Click on the layer/multilayer you would like to archive. 
  4. Below the ‘View Layer’ heading, click on the Download button and choose RO-Crate.
  5. The file will be downloaded as a zipped file that you can unzip to view all the files described above. 
  6. Your data is now in a convenient single file that can now be deposited in an archive/repository. 

You don’t necessarily need to know the details of the files in the RO-Crate, but if you are curious: RO-Crate is a community effort to establish a lightweight approach to packaging research data with its metadata. The RO-Crate is a zip file containing the map layer information in the open standard formats CSV, KML and GeoJSON (see the FAQ on mapping file formats), plus the metadata you added to describe the layer.

Indigenous Data

For Indigenous research data, get in touch with AIATSIS about depositing it.

Archiving with the Australian Data Archive (ADA) 

After checking the options in 2023, the best option for archiving is probably the Australian Data Archive (ADA), so we have provided these instructions.

Depositing with ADA involves working through a procedure with compliance checks, but if your data has taken a lot of effort and is of lasting value to others, it’s worth a bit of effort to safeguard it for posterity.

ADA is based in the ANU Centre for Social Research and Methods. Like TLCMap, ADA prefers an open data CC licence, but more restrictive licences can be arranged if needed. See the FAQ on licencing and copyright.

The material to be deposited needs to meet the ADA Collection Policy requirements as detailed in the ADA public wiki. The metadata supporting the material being deposited will be harvested and will appear on Research Data Australia (RDA) and DataCite.

ADA uses Dataverse open-source research data repository software. It ensures that you receive credit for your data through formal scholarly data citations and helps satisfy data sharing requirements from funders and publishers. It also generates a Digital Object Identifier (DOI) once your data is published on the site. 

  1. Go to ADA’s sign-up page.
  2. Email ADA <ada@ada.edu.au> introducing yourself as a TLCMap user who’d like to archive data. Provide them with your ADA login details and the name of your map layer/multilayer. Also ask for a unique ID number (this is a collection number for easier matching if files go astray). They will generate a data shell for you to upload your data into and send you the link.  
  3. Download your RO-Crate file from TLCMap (as described above). It will be zipped. Don’t unzip it. 
  4. Rename the .zip file to give it a meaningful name. Include the name of your project and your unique ID number. For example, ghap-ro-crate-layer-154-20231108094835_2.zip would become JapanesePoW_CampsWW2_ghap-ro-crate-layer-154-20231108094835_100147_2.zip (where ‘JapanesePoW_CampsWW2’ is the name of the layer and ‘100147’ is the unique ID). Retain the original TLCMap layer number. Use underscores ( _ ) instead of spaces. 
  5. ADA will automatically unzip any uploads, but we want the archive to remain a zipped RO-Crate file, so we need to ‘double zip’ it. Right click the RO-Crate zip and zip it, so that it’s a zip file within a zip file. (If you don’t know how to zip a file or folder, the steps differ between Mac, Windows, etc, but it’s a common task – just Google ‘How to zip a file’.)
  6. Upload the double zipped folder to the Dataverse shell provided. In the Description box add, “This is an RO-Crate folder. Please unzip the RO-Crate and open HTML document first for a description of folder contents.” 
  7. In the metadata tab, add as much info in the metadata field as you can. Make it as rich as possible so that it is easy for others to understand and use.  
  8. Submit via the Submit for Review button in the top right of the screen. 
  9. Fill out ADA’s standard license document, sign, date and email it to ADA. The following selections are recommended by ADA:
    • A. Data Category – Non-Sensitive Data  
    • B. Access Category – Open Access  
    • Terms and Conditions of Use – CC license (CC BY-SA, credit must be given to the creator and adaptations must be shared under the same terms) 
    • Access Guestbook – No License Access Guestbook questions to be applied 
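
If you prefer to script the ‘double zip’ in step 5 rather than right-clicking, here is a minimal sketch in Python (the output file name is a made-up convention, not an ADA requirement):

```python
import os
import zipfile

def double_zip(rocrate_zip_path):
    """Wrap an already-zipped RO-Crate in a second zip, so that the
    archive's automatic unzipping leaves the inner RO-Crate zip intact."""
    outer_path = rocrate_zip_path + "_outer.zip"
    with zipfile.ZipFile(outer_path, "w", zipfile.ZIP_DEFLATED) as zf:
        # Store the inner zip under its own file name, with no directory prefix.
        zf.write(rocrate_zip_path, arcname=os.path.basename(rocrate_zip_path))
    return outer_path
```

Unzipping the outer file once leaves you with the original RO-Crate zip, which is exactly what the archive should store.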

 When filling out the metadata section remember:

  • Notes: Write “This is an RO-Crate folder. When you first open it, open the html file for a listing of the files, descriptions of them and links to them.”
  • Contact: Write “Use email button above to contact. Australian Data Archive (ANU).” Make ADA the contact, because whoever wants to access this data or has problems accessing it should approach ADA first. 
  • Author: Format as Surname, First name (Institution) – ORCID number. For example, Ariotti, Kate (University of Queensland) – ORCID: https://orcid.org/0000-0003-4941-7141. The author is the maker of the map. 
  • Topic classification: Use vocabulary from APAIS https://www.vocabularyserver.com/apais/index.php for consistency. 
  • Series: Write “TLCMap” and provide the link to your map on TLCMap. 
  • Generally: 
    • Where possible, format links so that they work as hyperlinks.   

Here’s a comprehensive guide on signing up, depositing and managing data via ADA/Dataverse.

Add DOI to TLCMap / GHAP

When you deposit the archive with an official repository, such as ADA, it should have been assigned a Digital Object Identifier (DOI). A DOI looks something like this: http://dx.doi.org/10.26193/5P4JFY. It is a ‘persistent’ URL – i.e. a unique and formally registered link to the archive that lasts a long time. This should be used when citing the archive in an academic context. The layer in TLCMap GHAP has a metadata field for the DOI.

  • Log in to https://ghap.tlcmap.org
  • Go to ‘My Layers’
  • Find your layer and click Edit.
  • Enter the DOI in the DOI field and click the Save button.

Each of these formats can be used to recreate interactive maps in some other platform, or used in a GIS system. Each of them is ‘plain text’, in that if you open it in a text editor you can read the content and the markup – you won’t see a lot of 1s and 0s or other computer gobbledegook (though you might want some IT skills to understand the markup).

  • CSV is a common tabular format that can be opened in Excel or other spreadsheet software, and you can save in this format too. It stands for ‘Comma Separated Values’ meaning simply that the information in the table is separated by commas.
  • KML is a kind of XML. XML is a widely used open standard format for ‘marking up’ information, with many applications for different purposes. HTML, used for every web page, is closely related to XML. KML is an XML format for maps, created by Google.
  • GeoJSON is a kind of JSON. JSON is another widely used open standard format often used in web development.
  • RO-Crate is a more recent standard, but it is built on these older standards. It is designed specifically for long term storage of research data, on principles similar to those described above. Importantly, this format includes the metadata you have attached to your layer, stored as JSON and as HTML. When you package up your data in TLCMap layers/multilayers using RO-Crate, it will generate a zip file containing your layer or multilayers in all the formats above (CSV, KML and GeoJSON), plus a ro-crate-metadata.json and a ro-crate-preview.html file. The ro-crate-preview.html provides a more human readable way to read the metadata and navigate the data in the RO-Crate. Here’s a detailed explanation of how RO-Crate works.

CSV files aren’t specifically for spatiotemporal data, but because they are a widespread spreadsheet format that is easy to read across many different systems, they are often also used for spatiotemporal data. A CSV file may be created by saving an Excel file as filetype ‘csv’. ‘CSV’ stands for ‘comma separated values’.

KML is a standard geocoding XML format. This means it can be processed by a computer easily, and can also, to some extent, be read and modified by a human. Because KML is a standard format for geodata, it can usually be imported into other systems. One of our main aims is not to try to build one system that does all things, but to allow for and further the parallel development of different systems independently. Interoperability, then, is key. How this works in practice is often by making data produced in one system available to another in a standard format. Sometimes this is as simple as exporting a KML file from one system and importing it into another. Another common format for geodata is GeoJSON. A good tip is to make sure you can get the effort you put into a system out of it again in some standard format.

GeoJSON is another standard for spatial data, but written in JSON, which is a popular way of structuring data for the web.

All of these data formats are stored as plain text, so they can be read and edited by a computer or human.
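
To make the comparison concrete, here is one (made-up) place expressed in both formats. Note the coordinate order: both GeoJSON and KML write longitude before latitude.

```python
import json

# One hypothetical place as a minimal GeoJSON Feature.
# GeoJSON specifies coordinates as [longitude, latitude].
feature = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [143.8549, -37.5601]},
    "properties": {"name": "Ballarat", "datestart": "1851", "dateend": "1851"},
}
geojson = json.dumps({"type": "FeatureCollection", "features": [feature]})

# The same place as a minimal KML Placemark.
# KML also writes coordinates as longitude,latitude.
kml = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Ballarat</name>
    <Point><coordinates>143.8549,-37.5601</coordinates></Point>
  </Placemark>
</kml>"""
```

Either file, opened in a text editor, is readable by a person as well as a machine – which is exactly the point made above.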

How to create KML files:

How to create GeoJSON files:

  • Use a tool like geojson.io
  • Many map systems allow export of data in KML format
  • Find data in repositories

Convert a CSV file to KML or GeoJSON

Often we have data in a spreadsheet, or that may be exported from a database in tabular form as a CSV file, that has columns for latitude and longitude.

An Excel spreadsheet can be ‘saved as’ in .csv format.

You can convert CSV to KML by importing it into TLCMap Quick Coordinates, Google MyMaps or Google Earth.

Alternatively you can find converters on the web by Googling ‘convert CSV to KML’ or similar, such as CSV to KML or MyGeoData Converter
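
If you’re comfortable with a little scripting, the conversion itself is simple. A sketch, assuming your CSV has ‘title’, ‘latitude’ and ‘longitude’ columns (these names are assumptions – adjust them to match your spreadsheet’s headings):

```python
import csv
import io
from xml.sax.saxutils import escape

def csv_to_kml(csv_text, lat_col="latitude", lng_col="longitude", name_col="title"):
    """Sketch of a CSV-to-KML converter. Note KML writes coordinates
    as longitude,latitude -- the reverse of the usual lat,lng pairing."""
    placemarks = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        placemarks.append(
            "  <Placemark><name>{}</name>"
            "<Point><coordinates>{},{}</coordinates></Point></Placemark>".format(
                escape(row[name_col]), row[lng_col], row[lat_col]
            )
        )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
        + "\n".join(placemarks)
        + "\n</Document></kml>"
    )
```

A dedicated converter or GIS tool will handle more edge cases; this just shows there is no magic involved.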

Humanities researchers often need to deal with information that is in some way vague or uncertain. For example, we may want to map a diary entry which says ‘3 days north of the bend in the river’ or ‘late Spring’, but these need to be translated into specific coordinates and times in order to be placed on a map. Simply placing information on the map may give viewers the false impression that it is accurate or certain. This can have major implications if people use the information in an appeal to authority (eg: the University says it was here at this time) to prove some case, potentially with legal implications. In other situations, users may misinterpret mapped information as complete, such that gaps on the map seem to indicate there was nothing there, rather than no research done or data gathered there yet. In any case, a common requirement requested by humanities researchers is the ability to represent vagueness in some way.

This involves many questions that could be handled with different data structures:

  • What kind of vagueness: margin of error in measurement / informed estimate based on multiple sources / infilled data / it occurred within a range of time and place, etc?
  • Do we want to represent that it is certain/uncertain or indicate a degree of uncertainty?
  • Can the system represent this with icons, fading, colour, shape, dotted lines, blurring?
  • Will this degree of accuracy cause users not to use it? Do we need data entry to include a figure for how accurate it is, or range of time and space within which it might be? Will this additional data be so onerous that people won’t bother or it takes so much time the project won’t finish?

All of these need to be considered and balanced against each other and the needs depend on the circumstances. Often simple answers are the best, and practicality dictates we don’t want to overcomplicate data entry, we don’t have time and money for extra detail, we need to work within/around and adapt established formats rather than create new systems, and we want users not to have to read manuals to interpret visualisations. At a base level, we could:

  • add a question mark to the end of a place title or specific attribute value to indicate at least that there is some uncertainty about a place eg: Brisvegas(?).
  • use a ring instead of a pin or a circle to indicate that the exact location is vague rather than having pin-point accuracy.
  • use the datestart and dateend attributes common in geodata standards to indicate a range of time within which an event occurred.
  • ensure the surrounding and contextualising information highlights and explains the issues around uncertainty.
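
As an illustration, a single record applying these simple conventions might look like this (the field names and values are made up for the example, not a required schema):

```python
# A hypothetical record hedging its own uncertainty: a '(?)' in the
# title, a date range instead of a single date, and a note explaining
# how the location was estimated.
uncertain_place = {
    "title": "Brisvegas(?)",
    "latitude": -27.47,
    "longitude": 153.02,
    "datestart": "1850",   # earliest plausible date...
    "dateend": "1860",     # ...latest plausible date
    "description": "Location estimated from a diary entry; "
                   "accurate only to the nearest town.",
}
```

None of this requires a new data format – it reuses ordinary fields to carry the hedging.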

Nonetheless, there is some research investigating the nuances of representing vagueness, eg:

  • Potter, S., Doran, B. and Mathews, D. (2016) https://doi.org/10.1016/j.apgeog.2016.09.016
  • Kinkeldey, C. (2017) https://doi.org/10.1080/15230406.2015.1089792

There are several metadata standards for spatial information, sometimes overlapping and sometimes with more or less than seems needed. We will aim to ensure that any metadata standard used can at least be transformed into another common standard.

AURIN has already done work to establish this guide: https://aurin.org.au/legal/metadata-record-guide/ including a metadata tool based on an extended version of ISO 19115 (which was used in the creation of the AS/NZS version). AURIN’s original metadata standard work was funded under demonstrator projects with ANDS https://projects.ands.org.au/id/AP31

Dublin Core is also a good, well established standard to follow for a set of basic metadata: https://dublincore.org/

There may be any number of reasons. Here’s a few common problems:

Coordinates are back to front. Coordinates often appear as a pair, like this: -32.914154, 151.800702. Some systems assume latitude first and longitude second, while others expect the coordinates the other way around. Even within Google’s mapping systems, coordinates are expected one way in one system and another way in another.

Coordinates are in an unexpected format. Coordinates can be expressed in different ways: as decimal numbers, as degrees, minutes, seconds and so on. Check your data is in the correct format. If not, convert it using a conversion tool.
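
For example, converting degrees/minutes/seconds to the decimal degrees most web mapping systems expect is simple arithmetic (a sketch – online conversion tools do the same thing):

```python
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert degrees/minutes/seconds to decimal degrees.
    Southern and western hemispheres become negative."""
    decimal = abs(degrees) + minutes / 60 + seconds / 3600
    return -decimal if hemisphere in ("S", "W") else decimal
```

So 32°54′51″S becomes approximately -32.9142, and 151°48′2.5″E becomes approximately 151.8007.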

An invalid character or other glitch may be the problem. Computers are temperamental and very literal. Sometimes a whole system might not work because of a full stop in the wrong place. A single letter in a coordinate field that is assumed to be a number might make some systems fail. The only way to deal with this is to hunt down the problem and correct it.
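
One way to hunt down a stray letter or out-of-range value is to scan the coordinate columns programmatically. A sketch, assuming hypothetical ‘latitude’ and ‘longitude’ column names:

```python
import csv
import io

def find_bad_coordinates(csv_text, lat_col="latitude", lng_col="longitude"):
    """Report (line number, problem) for rows whose coordinates won't
    parse as numbers or fall outside the valid range."""
    problems = []
    # start=2 because line 1 of the file is the header row.
    for line_no, row in enumerate(csv.DictReader(io.StringIO(csv_text)), start=2):
        try:
            lat, lng = float(row[lat_col]), float(row[lng_col])
            if not (-90 <= lat <= 90 and -180 <= lng <= 180):
                problems.append((line_no, "coordinates out of range"))
        except ValueError:
            problems.append((line_no, "coordinate is not a number"))
    return problems
```

A report of line numbers is far quicker than eyeballing a large spreadsheet for the one letter O masquerading as a zero.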

To find and fix problems, try working with just a very small example of your data. If it doesn’t work, it will be easy to find issues and try different approaches. If the problem doesn’t occur, you can keep adding chunks of your data till you narrow down where the problem might be occurring.

To be researched and documented. This may be an area we can improve on.

To be researched and documented. This may be an area we can improve on.

To be researched and documented. This may be an area we can improve on.


There are many ways we think and talk about time. We aim to make available ways to structure temporal information and visualise it for different circumstances such as:

  • journey (eg: a ship, or ships on a journey is at certain points at certain times. The data structure is a series of points with times or durations.)
  • serial (eg: a sequence of events where they happen in a certain order, but there is no specific date. We just want to see that one happens after another. Eg: “first go to Dudley, then to Merewether, Dixon and Bar, and then on to Newcastle and finally Nobby’s”)
  • migration (quantities of movement between places. Eg: not just this or those ships going from here to there, but for example 200 people go from London to Sydney, and 150 to Melbourne in 1960, 300 to Sydney and 350 to Melbourne in 1961, 256 to Sydney and 132 to Melbourne in 1962, etc)
  • frontier (movement of lines and polygons, rather than movement of a point along them. Eg: the Western Front in WWI.)
  • stationary change (places which stay in one place but with multiple properties changing over time. Eg: cinemas don’t move but change between being cinemas and not being cinemas, change managers, changed from showing Hollywood to Greek and Italian films etc)
  • cyclical (things which repeat in a pattern with amounts of time between each. These may be concentric cycles at different scales. Eg: Indigenous seasons.)
  • calendar (things which happen repeatedly at certain times, eg: train timetables)
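
As an example of the first case, a ‘journey’ can be structured as a simple series of timestamped points, and sorting by time recovers the route. (The places and dates here are made up for illustration.)

```python
from datetime import date

# A journey as a series of (time, latitude, longitude) points.
voyage = [
    (date(1851, 9, 1), -37.5601, 143.8549),   # Ballarat
    (date(1851, 6, 1), -33.8688, 151.2093),   # Sydney
    (date(1851, 12, 1), -36.7568, 144.278),   # Bendigo
]

# Sorting by date puts the points in travel order.
route = sorted(voyage)
```

The ‘serial’ case is the same structure with an ordering number in place of the date.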

To be researched and documented. This may be an area we can improve on.

To be researched and documented. This may be an area we can improve on.

To be researched and documented. This may be an area we can improve on

To be researched and documented. This may be an area we can improve on.

To be researched and documented. This may be an area we can improve on. (canoe time)

To be researched and documented. This may be an area we can improve on.

Processing and Metrics

We are improving on features in Recogito, which uses Named Entity Recognition (NER) to automatically identify places and people in texts and produce maps of the places.

See ‘How can I get statistics and metrics on spatiotemporal data?’

To be researched and documented. This may be an area we can improve on.

‘Close’ compared to what? Handling statistics on ellipsoid surfaces, and with time too.

To be researched and documented. This may be an area we can improve on. (least cost techniques etc)

Images, Virtuality and Visualisation

Yes. If you add a place to a layer, or edit a place you have already added, you can upload an image to it. When someone clicks the dot on the map, they will see the image.

Note that you can only attach one image to each place.

You could add an image to a place as an illustration, for example a picture of a building that was here, or the person who was involved in the event that happened here.

You could also map an image collection: each image might relate to a place, so if you put them on a map you will have many points, each with an image, perhaps some in the same place. The user can discover and navigate these at will. Other links attached to each record may link back to your website for the full set of information – a page in which the image appears, its source, a story about it, or the project or institution that curates the collection.

At present images can only be uploaded to a place one by one through the user interface. If you would like to load images in bulk, such as an institutional collection, please contact TLCMap and we’ll see what we can do.

The following provide georeferencing tools that are free to some extent:

To be researched and documented. This may be an area we can improve on.

Google Earth can be used to draw polygons, set elevation and extrude them to create simple 3D shapes. For presentation you could simply use screen capture to make a video.

More information is required for doing this on the web in an interactive 3D environment, and for doing more detailed architectural reconstructions. Also, an account is needed of how to handle not just coordinates but which floor things are on in city environments, etc.

To be researched and documented. This may be an area we can improve on. HuNI, M2M.

To be researched and documented. Omeka, WordPress, ArcGIS Storymaps, Google MyMaps, etc etc.

To be researched and documented. This may be an area we can improve on.

To be researched and documented. This may be an area we can improve on.

To be researched and documented. This may be an area we can improve on.