Wednesday, 17 June 2015

G. B. Morgagni's face is revealed in Italy

Morgagni's Digital Facial Reconstruction

The face of Giovanni Battista Morgagni has been on display at the FACCE exhibit since February 2015; however, its official presentation took place on May 27th at the Museum of Anthropology of the University of Padua, as part of a series of lectures about the exhibit, held until June 14.


The lecture L’identificazione molecolare dei resti di Morgagni e la ricostruzione forense del suo volto (The molecular identification of Morgagni's remains and the forensic reconstruction of his face) was presented by Dr. Luca Bezzi, archaeologist and researcher at Arc-Team, and Alberto Zanatta, one of the scientists responsible for the identification of Morgagni's remains. Read the research paper here.
 
 
Meanwhile in Brazil, artist Mari Bueno put the finishing touches on the 3D-printed face of the great Italian scientist.
 
 
 
The artist worked from a print sent by CTI Renato Archer. There were some schedule changes agreed with the Italian staff during the work: initially the bust was to be presented in April this year. CTI printed a test piece within the deadline we had for painting and shipping it to Italy; however, the 3D print had small irregularities on its surface. CTI's head of sector suggested a new print, but as time was tight I asked them to send me the piece anyway, since I could correct the irregularities manually. It was difficult to convince the DT3D staff to ship a print with irregularities (this agency really cares about quality standards), but at that point we had no alternative. I take this opportunity to thank CTI for their support and partnership; without it, this job would have been nearly impossible.

Initially I thought it would be enough to sand the bust and the irregularities would be resolved, but you cannot imagine, dear reader, how hard this material actually is. It simply destroyed three coarse-grit sanding sheets while barely changing the surface! I ended up applying three layers of spackling paste, waiting a day for each layer to dry completely.


I was finally able to sand the surface and it got smoother. The bust was ready to be painted.


Initially, the renowned artist filled in the region of the crystalline lens, so that it would be apparent in a side view. The face then received the first layer of paint.
 
 
Day after day, layers of paint were built up to approach the characteristics of the digital facial reconstruction, while at the same time respecting the artist's own style.
 
 The first layers of the eyes began to emerge.
 
 
Expression marks were enhanced by adding new layers.

Finishing touches, and...
 
 
Ecco il volto di Morgagni in 3D! Here's Morgagni's face in 3D!

I am deeply grateful to Mari Bueno, who agreed to support us in this project. Just as with the St. Anthony 3D print, this painting was excellent and gave the scientist's eyes a beautiful glow. But the story does not end here...

Mari Bueno, as shown in the video above, went to Padua, where she met Dr. Alberto Zanatta to hand him the painted bust.
 
The two took the occasion to visit the exhibit FACCE. I molti volti della storia umana (FACCE. The many faces of human history).
 
 
 
She could finally see the Saint Anthony bust she had retouched: the very same bust that was originally to be presented in Sinop (Brazil), but that ended up with the people of Padua and stayed there.

I'm really glad to see the results of all this work. A year ago I was in Padua, on the most important journey of my life; at that moment I had no idea of everything that would happen in the meantime: so many joys and honors, and so many partnerships in the name of science that ended up introducing me to great comrades.

I am very grateful to everyone who took part in this story, and they are already included in advance in every other one that may follow from now on. Greetings to my friends and "capos" at Arc-Team, Antrocom Onlus, the Antonian Studies Museum, CTI Renato Archer, Ebrafol - Brazilian Team of Forensic Dentistry and Anthropology, and the Museum of Anthropology at the University of Padua.

And many thanks to Dr. Paulo Miamoto from Brazil for translating this text into English.

Abbraccio per tutti! (A hug for everyone!)
 


Tuesday, 16 June 2015

OpenMVG VS PPT

Hi all,
as I promised, we are back with new posts in ATOR. We start today with an experiment we have wanted to run for a long time: a comparison between two Structure from Motion - Multi View Stereo reconstruction (SfM - MVS) suites. The first is the Python Photogrammetry Toolbox (PPT), developed by +Pierre Moulon some years ago and integrated into ArcheOS 4 (Caesar) with a new GUI (PPT-GUI) written in Python by +Alessandro Bezzi and me. The second is the evolution of PPT: openMVG, which Pierre has been developing for some years and which will be integrated into the next releases of ArcheOS.
Our small test involved just four pictures taken with a Nikon D5000 on an old excavation. What we want to point out is the speed of the overall process in openMVG, which gave a result comparable to that of PPT.
In the image below you can see an overview (in +MeshLab) of the two point clouds generated by the two packages: openMVG produced a ply file with 391197 points, while PPT gave us a result with 425296 points.
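If you want to repeat this kind of check on your own results, the point count of each cloud can be read directly from the header of the ply file with a few lines of Python. This is just a minimal sketch; the file names are placeholders for the dense clouds exported by the two programs:

# Minimal sketch: read the vertex count declared in the header of a PLY file.
# The file names are placeholders for the clouds exported by openMVG and PPT.

def ply_vertex_count(path):
    with open(path, "rb") as ply:
        for raw in ply:
            line = raw.decode("ascii", errors="ignore").strip()
            if line.startswith("element vertex"):
                return int(line.split()[-1])
            if line == "end_header":
                break
    raise ValueError("no 'element vertex' line found in " + path)

for name in ("openmvg_dense.ply", "ppt_dense.ply"):
    print(name, ply_vertex_count(name))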


Comparison of the two models generated by openMVG and PPT

The main difference lies in the processing time: while PPT needed 16 minutes and 11.25 seconds, openMVG completed the model in just 3 minutes and 28.2 seconds, roughly 4.7 times faster.
Below I report the openMVG log file, where you can see each step of the process:

STEP 1: Process Intrisic analysis - openMVG_main_CreateList took: 00:00:00:00.464
STEP 2: Process compute matches - openMVG_main_computeMatches took: 00:00:01:13.73764
STEP 3: Process Incremental Reconstruction -  openMVG_main_IncrementalSfM took: 00:00:00:47.47717
STEP 4: Process Export to CMVS-PMVS - openMVG_main_openMVG2PMVS took: 00:00:00:00.352
STEP 5: Process CMVS-PMVS took: 00:00:01:25.85958
--------------------
The whole detection and 3D reconsruction process took: 00:00:03:28.208258
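As a side note, the per-step timings above come straight from our log; if you want to collect the same kind of measurements in your own tests, a tiny Python wrapper like the one below is enough. The command lists are left as placeholders, since the exact openMVG flags depend on the release you use:

# Rough sketch of how per-step timings like the ones above can be collected.
# The command lists are placeholders: the actual openMVG flags depend
# on the release installed on your system.
import subprocess
import time

def timed_step(label, command):
    start = time.time()
    subprocess.call(command)
    print("%s took %.2f s" % (label, time.time() - start))

# Example usage (placeholder arguments):
# timed_step("Compute matches", ["openMVG_main_computeMatches", "..."])
# timed_step("Incremental SfM", ["openMVG_main_IncrementalSfM", "..."])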

We will keep working with and testing openMVG, and hopefully we will soon post more news about this nice software.

Have a nice day!

Acknowledgment

Many thanks to +Pierre Moulon and +Cícero Moraes for the help!

Saturday, 13 June 2015

Intervallo n° 2

As you have probably noticed, it has been a long time since we wrote something new in ATOR, and the reason is simple: summer is the most productive season for archaeologists, so most of us are working in the field and have no time for new posts.
Luckily we are engaged in interesting new projects, which will give us the opportunity to experiment with new solutions and test new techniques, so we will soon have new material to share through ATOR.
In the meantime, as I did in 2013 with this post, I leave you with a short "Intervallo", just to say that we are still active and will come back soon with new posts and articles.
Have a nice day!


Monday, 18 May 2015

Turning GeoTIFF into TIFF + worldfile (QGIS)

Hi all,
after some weeks I am continuing with the video tutorials of Project Tovel. So far we have seen how to download some Open Data for our GIS, how to load georeferenced raster layers in QGIS, and how to georeference historical maps.
Today I will show something more specific, which many of you will probably not need very often when working on landscape archaeology projects, but which becomes more important when managing an excavation GIS: how to turn a GeoTIFF into a TIFF + worldfile image.
As some of you will know, a GeoTIFF is a particular kind of raster in which the georeferencing values are embedded within the TIFF itself. This can be a nice solution for a topographer, but it is extremely annoying for archaeologists. The reason is simple: topographers often work on pictures or maps that are ready to be used, without any need for photo-editing, which (on the contrary) is an important phase of the archaeological photo-mapping process (e.g. in the "Aramus method"). The practical difference between a GeoTIFF and a TIFF + worldfile is that the former cannot be modified without losing the georeferencing values (which are embedded in the picture), while the latter can undergo photo-editing operations (changing the colors, balancing brightness and contrast, etc.) without problems, since the georeferencing data are stored in a separate file (the worldfile).
For this reason, working with raster images plus worldfiles is often the best choice for an archaeological GIS (especially for excavations), where it can be useful to "erase" all the parts of a photo that lie outside the area of interest (e.g. outside the rectification region) and to take advantage of transparency when overlapping different raster layers (which can correspond to different stratigraphic levels).
As I wrote above, the video tutorial I prepared using the data of Project Tovel simply shows how to turn a GeoTIFF (currently the only output option of the QGIS georeferencer module) into a TIFF plus worldfile, a more useful format, without leaving the software.
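By the way, if you ever need to script the same conversion outside QGIS, GDAL can do it too. Here is a minimal sketch using the GDAL Python bindings (assuming the osgeo package is installed; the file names are just placeholders):

# Minimal sketch: convert a GeoTIFF into a plain TIFF + worldfile with
# the GDAL Python bindings (osgeo). The file names are placeholders.
from osgeo import gdal

src = "orthophoto_geotiff.tif"   # hypothetical input GeoTIFF
dst = "orthophoto_plain.tif"     # output TIFF (a .tfw worldfile is written next to it)

gdal.Translate(
    dst,
    src,
    format="GTiff",
    creationOptions=[
        "TFW=YES",           # write the georeferencing to a separate .tfw worldfile
        "PROFILE=BASELINE",  # keep the GeoTIFF tags out of the TIFF itself
    ],
)

The same result should be achievable from the command line with gdal_translate and the -co TFW=YES -co PROFILE=BASELINE options. The resulting .tfw is a plain text file with six lines (pixel size in x, two rotation terms, negative pixel size in y, and the coordinates of the centre of the upper-left pixel), so the image itself can then be edited in GIMP or any other photo editor without touching the georeferencing.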


Have a nice day!

Sunday, 3 May 2015

Fusion + Quantum GIS: an Open Source approach for managing LIDAR data

Some students here at Lund University recently asked me to provide them with an "alternative" solution (compared to the ArcGIS-based workflow) for importing LIDAR data into a GIS, and so I ran into a software package named Fusion (http://forsys.cfr.washington.edu/fusion/fusionlatest.html). It has been developed by a branch of the US Forest Service to manage and analyze LIDAR data. Basically, Fusion lets users convert the binary .las data format into a .txt file. The data can then be imported into QGIS and filtered based on the classification codes associated with the points. Here you can find a short video tutorial that can help you understand the whole data-processing workflow (P.S. please ignore the MS Windows background! :-) ).
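As a small example of the filtering step, here is a Python sketch that keeps only the ground returns from the text file exported by Fusion. The column layout is an assumption on my side (X, Y, Z in the first three columns and the classification code in the fourth), since it depends on the export options you choose, so adjust the indices to your own file; class 2 is the standard ASPRS code for ground points.

# Hedged sketch: filter a Fusion text export down to ground points only.
# The column layout (X Y Z classification ...) is an assumption: check
# your own export and adjust the indices if needed.
GROUND = 2  # standard ASPRS classification code for ground returns

with open("lidar_export.txt") as src, open("ground_only.txt", "w") as dst:
    for line in src:
        fields = line.split()
        if len(fields) < 4:
            continue  # skip empty or malformed lines
        try:
            classification = int(float(fields[3]))
        except ValueError:
            continue  # skip header rows with non-numeric values
        if classification == GROUND:
            dst.write(line)

The filtered file can then be loaded into QGIS as a delimited text layer and styled or analysed like any other point layer.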


Friday, 24 April 2015

Doing quantitative archaeology with open source software

This short post is written for archaeologists who frequently perform common data analysis and visualisation tasks in Excel, SPSS or similar commercial packages. It was motivated by my recent observations at the Society of American Archaeology meeting in San Francisco - the largest annual meeting of archaeologists in the world - where I noticed that the great majority of archaeologists use Excel and SPSS. I wrote this post to describe why those packages might not be the best choices, and to explain what one good alternative might be. There’s nothing specifically about archaeology in here, so this post will likely be relevant to researchers in the social sciences in general. It’s also cross-posted on the Software Sustainability Institute blog.

Prevailing tools for data analysis and visualization in archaeology have severe limitations

For many archaeologists, the standard tools for any kind of quantitative analysis include Microsoft Excel, SPSS and, for more exotic methods, PAST. While these packages are widely used, they have a few limitations that are obvious to anyone who has worked with them for a long time, and that raise the question of what alternatives are available. Here are three key limitations:
  • File formats: each program has its own proprietary format, and while there is some interoperability between them, we cannot open their files in any program that we wish. And because these formats are controlled by companies rather than a community of researchers, we have no guarantee that the Excel or SPSS file format of today will be readable by any software 10 or 20 years from now. 
  • Click-trails: the main interaction with these programs is by using the mouse to point and click on menus, windows, buttons and so on. These mouse actions are ephemeral and unrecorded, so that many of the choices made during a quantitative analysis in Excel are undocumented. When a researcher wants to retrace the steps of their workflow days, months or years after the original effort, they are dependent on their memory or some external record of many of the choices made in the analysis. This can make it very difficult for another person to understand how an analysis was conducted, because many of the details are simply not recorded.
  • Black boxes: the algorithms that these programs use for generating results are not available for convenient inspection by the researcher. The programs are a classic black box, where data and settings go in, and a result comes out, as if by magic. For moderately complicated computations, this can make it difficult for the researcher to interpret their results, since they do not have access to all of the details of the computation. This black box design also limits the extent to which the researcher can customise or extend built-in methods to new applications.
How to overcome these limitations?

For a long time archaeologists had few options to deal with these problems because there were few alternative programs. The general alternative to using a point-and-click program is writing scripts to program algorithms for statistical analysis and visualisations. Writing scripts means that the data analysis workflow is documented and preserved, so it can be revisited in the future and distributed to others for them to inspect, reuse or extend. For many years this was only possible using ubiquitous but low-level computer languages such as C or Fortran (or exotic higher level languages such as S), which required a substantial investment of time and effort, and a robust knowledge of computer science. In recent years, however, there has been a convergence of developments that have dramatically increased the ease of using a high level programming language, specifically R, to write scripts to do statistical analysis and visualisations. As an open source programming language with special strengths in statistical analysis and visualisations, R has the potential to be a solution to the three problems of using software such as Excel and SPSS. Open source means that all of the code and algorithms that make the program operate are available for inspection and reuse, so that there is nothing hidden from the user about how the program operates (and the user is free to alter their copy of the program in any way they like, for example, to increase computation speed).

Three reasons why R has become easier to use

Although R was first released in 1993, it has only been in the last five years or so that it has really become accessible and a viable option for archaeologists. Until recently, only researchers steeped in computer science and fluent in other programming languages could make effective use of R. Now the barriers to getting started with R are very low, and archaeologists without any background with computers and programming can quickly get to a point where they can do useful work with R. There are three factors that are relevant to the recent increase in the usability of R, and that any new user should take advantage of:
  • the release of an Integrated Development Environment, RStudio, especially for R
  • the shift toward more user-friendly idioms of the language resulting from the prolific contributions of Hadley Wickham, and 
  • the massive growth of an active online community of users and developers from all disciplines.
1. RStudio

For the beginner user of R, the free and open source program RStudio is by far the easiest way to quickly get to the point of doing useful work. First released in 2011, it has numerous conveniences that simplify writing and running code, and handling the output. Before RStudio, an R user had little more than a blinking command line prompt to work with, and might struggle for some time to identify efficient methods for getting data in, running code (especially anything more than a few lines) and then getting data and plots out for use in reports, etc. With RStudio, the barriers to doing these things are lowered substantially. The biggest help is having a text editor right next to the R console. The text editor is like a plain text editor (such as Notepad on Windows), but has many features to help with writing code. For example, it is code-aware and automatically colours the text to make it a lot easier to read (functions are one colour, objects another, etc.). The code editor has a comprehensive auto-complete feature that shows suggested options while you type, and gives in-context access to the help documentation. This makes spelling mistakes rare when writing code, which is very helpful. There is a plot pane for viewing visualisations and buttons for saving them in various formats, and a workspace pane for inspecting data objects that you've created. These kinds of features lower the cognitive burden of working with a programming language, and make it easier to be productive with a limited knowledge of the language.

2. The Hadleyverse

A second recent development that makes it easier for a new user to be productive using R is a set of contributed packages affectionately known in the R user community as the Hadleyverse. User-contributed packages are add-on modules that extend the functionality of base R. Base R is what you get when you download R from r-project.org, and while it is a complete programming language, the 6000-odd user-contributed packages provide ready-made functions for a vast range of data analysis and visualization tasks. Because the large number of packages can make discovering relevant ones challenging, they have been organised into 'task views' that list packages relevant to specific areas of analysis. There is a task view for archaeology, providing an annotated list of R packages useful for archaeological research. Among these user-contributed packages is a set by Hadley Wickham (Chief Scientist at RStudio and adjunct Professor at Rice University) and his collaborators that make plotting better, simplify common data analysis activities, speed up importing data into R (including from Excel and SPSS files), and improve many other common tasks. The overall result is that, for many people, programming in R is shifting from the base R idioms to a new set of idioms enabled by Wickham's packages. This is an advantage for the new user of R because writing code with Wickham's packages results in code that is easier for people to read, as well as being highly efficient to compute. This is because it simplifies many common tasks (so the user doesn't have to specify exotic options if they don't want to), uses common English verbs ('filter', 'arrange', etc.), and uses pipes. Pipes mean that functions are written one after the other, following the order in which they would appear when you explain the code to another person in conversation. This is different from the base R idiom, which doesn't have pipes and instead nests functions inside each other, requiring them to be read from the centre (the inside of the nest) outward, and to use temporary objects, which is a counter-intuitive flow for most people new to programming.

3. Big open online communities of users

A third major factor in the improved accessibility of R to new users is the growth of active online communities of R users. There has long been an email list for R users, but more recently, user communities have formed around websites such as Stackoverflow. Stackoverflow is a free question-and-answer website for programmers using any language. The unique concept is that it gamifies the process of asking and answering questions, so that if you ask a good question (i.e. well described, including a small self-contained example of the code that is causing the problem), other users can reward your effort by upvoting your question. High quality questions can attract very quick answers, because of the size of the community active on the site. Similarly, if you post a high-quality answer to someone else's question, other users can recognise this by upvoting your answer. These voting processes make the site very useful even for the casual R user searching for answers (who may not care about voting), because they can identify the high-quality answers by the number of votes they've received. It's often the case that if you copy and paste an error message from the R console into the Google search box, the first few results will be Q&A pages on Stackoverflow. This is a very different experience from using the r-help email list, where help can come slowly, if at all, and from searching the list archives, where it's not always clear which is the best solution. Another useful output from the online community of R users is the set of blogs that document how to conduct various analyses or produce visualizations (some 500 blogs are aggregated at http://www.r-bloggers.com/). The key advantage of Stackoverflow and blogs, aside from their free availability, is that they very frequently include enough code for the casual user to reproduce the described results. They are like a method exchange, where you can collect a method in the form of someone else's code and adapt it to suit your own research workflow.

There's no obvious single explanation for the growth of this online community of R users. Contributing factors might include a shift from SAS (a commercial product with licensing fees) to R as the software used to teach students in many academic departments, following the Global Financial Crisis of 2008, which forced budget reductions at many universities. This led to a greater proportion of recent generations of graduates being R users. The flexibility of R as a data analysis tool, combined with the rise of data science as an attractive career path and the demand for data mining skills in the private sector, may also have contributed to the convergence of people who are active online and are also R users, since so many of the user-contributed packages are focused on statistical analyses.

So What?

The prevailing programs used for statistical analyses in archaeology have severe limitations resulting from their corporate origins (proprietary file formats, uninspectable algorithms) and mouse-driven interfaces (impeding reproducibility). The generic solution is an open source programming language with tools for handling diverse file types and a wide range of statistical and visualization functions. In recent years R has become a very prominent and widely used language that fulfills these criteria. Here I have briefly described three recent developments that have made R highly accessible to the new user, in the hope that archaeologists who are not yet using it might adopt it as a more flexible and useful program for data analysis and visualization than their current tools. Of course it is quite likely that the popularity of R will rise and fall like that of many other programming languages, and ten years from now the fashionable choice may be Julia or something that hasn't even been invented yet. However, the general principle that scripted analysis using an open source language is better for archaeologists, and for science generally, will remain true regardless of the details of the specific language.

Wednesday, 22 April 2015

Rediscovering Ancient Identities in Kotayk (Armenia)

For some years now, crowdfunding has been a new resource in archaeology, providing support to projects that have difficulty financing the many research activities connected with historical investigations in general.
Although our team has not yet tested the true potential of this system, today I would like to help some colleagues and friends who decided to experiment with this way of funding for their expedition in the Kotayk region (Armenia).
Their mission started in the summer of 2013 and aims "to register and study all the archaeological sites along the upper Hrazdan river basin, in the Armenian province of Kotayk. The project is organized by the Institute of Archaeology and Ethnography of the Academy of Sciences of the Republic of Armenia, the International Association of Mediterranean and Oriental Studies (ISMEO) and the Italian Foreign Affairs Minister." Up to now the team has achieved some remarkable results, locating 56 historical/archaeological sites and starting an excavation at the well-preserved Iron Age fortress of Solak.
If you want to support their effort in recording and analyzing archaeological evidence in the Kotayk region, you can find more details on their official Indiegogo page.
I personally met most of the team members (Dr. +Manuel Castelluccia, Dr. Roberto Dan and Dr. +Riccardo La Farina) between 2010 and 2011, when they joined the missions of Aramus (Armenia) and Khovle Gora (Georgia), in which I was working with Arc-Team for the Institut für Alte Geschichte und Altorientalistik.

The visit to the city of Vardzia, during the mission in Khovle Gora (2011)


There I could appreciate their commitment and professionalism. For this reason I wish the Kotayk Survey Project a very successful 2015 mission, and I hope we will soon get some feedback from this interesting project here on ATOR as well!

A moment of relaxation during the mission in Khovle Gora (2011)




This work is licensed under a Creative Commons Attribution 4.0 International License.