Star Trek is the future of archaeology
Last month I spent a day at the Daresbury Laboratory for a RICHeS data day, to think about how we will manage data for our facilities and make it accessible. This is a daunting task, but a challenge I am excited to tackle, working together with the Heritage Science Data Service team. One of the highlights of the day was touring the Visual Computing Labs. Seeing full laser scans and digital models of entire cities (in this case Liverpool) was genuinely awe-inspiring. These aren’t just impressive visualisations, but complex data‑rich representations that can be interrogated. In archaeology, where we constantly move between scales, from microscopic residues to landscapes and infrastructures, the potential is endless. What might we learn and better understand if we can apply these technologies to ancient cities?
Another highlight was seeing virtual museums integrated with a treadmill system. The user sees a virtual environment and feels as if they are moving through it. It felt like an early prototype of the Star Trek holodeck, perhaps experimental for now, but pointing towards something much bigger in the future. You can easily imagine how these environments could become future research spaces, teaching labs, or even training grounds for fieldwork. This is also where my thoughts kept looping back to the work we’re doing through HSDS on virtual work environments and advanced visualisation. Increasingly, we’re working with complex three‑dimensional datasets (such as CT scans and volumetric models) that are difficult to fully grasp on a flat screen. VR technologies that allow researchers to virtually manipulate this data, to move around it, through it, and inside it, are going to make a real difference to understanding.
This connects directly to a long‑standing challenge in geoarchaeology and micromorphology. I’ve always been convinced that one reason these approaches are still not fully integrated into mainstream archaeological practice (despite being clearly fundamental and crucial) is that they are hard to visualise and communicate. Microscale observations require years of training to ‘see’, and even then they can be difficult to relate back to what archaeologists encounter in the field. The problem has always been the micro–macro link. We ask people to trust that what is visible in a thin section under the microscope meaningfully explains what they excavated weeks or months earlier. But imagine being able to move seamlessly between those scales. To start with a soil section in situ, then zoom closer, and closer again, until you are effectively standing inside the fabric of the deposit at the microscopic level. To move back and forth between trench, profile, block sample, thin section and mineral grain within a single visual environment.
This kind of immersive visualisation has the potential to change everything about how we teach, interpret and integrate geoarchaeological data. It offers a way of showing, rather than telling, how microscopic processes relate to human activity, formation processes and long‑term environmental change. It also forces us to confront something challenging: how much of the archaeological record is fundamentally invisible to the naked eye. So much of what matters, chemically, biologically, structurally, sits beyond human perception without technological mediation.
This is where I think RICHeS could have its real legacy. On the surface, RICHeS is about funding access to high‑end analytical equipment, world‑class instruments that would otherwise be out of reach for many researchers, especially those based in museums and the humanities. But I increasingly suspect that the real, long‑term legacy of RICHeS will be the sheer volume and diversity of data that will be produced and made available to the world. Those datasets – CT scans, hyperspectral images, compositional maps, 3D models – are exactly the kinds of material that drive advances in computer vision, machine learning and AI-assisted interpretation. As they accumulate, they will enable entirely new ways of asking questions. They can train algorithms, reshape workflows, and blur the boundaries between analysis, interpretation and visualisation.
I feel as though we’re standing at the edge of a genuine paradigm shift. Laser‑scanned cities, immersive virtual environments, real‑time AI translation tools (the Star Trek universal translator!), handheld scanners (the Star Trek medical tricorder!). Technologies that expand what we can see and understand: once science fiction, now increasingly becoming reality...