Monday 25 February 2013

Cloud distance tool.

I was working on different SfM/IBM models of a grave we excavated in 2010. We have the documentation of four different levels (see picture below). It was a complex archaeological context, with two skeletons buried at different times (a double burial), both partially destroyed by the construction of the Renaissance apse. Moreover, the tomb was built on the side of a prehistoric house.



I tried to rectify the point clouds inside CloudCompare v. 2.4 (normally I use GRASS with the PLY importer addon, or MeshLab) and I discovered this fantastic tool: compute cloud/cloud distance. It can calculate the distance between two different overlapping clouds, similarly to the GRASS command "r.mapcalc". As you can see in the pictures below, the distance analysis between the first and the last documentation levels can represent the quantity of removed ground. It could also be really useful for analysing damage in buildings.
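In its simplest form, a cloud/cloud distance of this kind assigns to each point of the compared cloud the distance to its nearest neighbour in the reference cloud. Here is a minimal sketch of that idea in Python (NumPy only, brute force; CloudCompare itself accelerates the search with an octree, and the function name and toy data below are my own):

```python
import numpy as np

def cloud_to_cloud_distance(reference, compared):
    """For each point of `compared`, return the distance to its
    nearest neighbour in `reference` (brute-force nearest neighbour;
    real tools accelerate this with octrees or kd-trees)."""
    diff = compared[:, None, :] - reference[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)

# Toy example: a flat 10x10 "level" and the same level dug 0.5 m deeper
first = np.array([[x, y, 0.0] for x in range(10) for y in range(10)])
fourth = first - np.array([0.0, 0.0, 0.5])

d = cloud_to_cloud_distance(first, fourth)
print(d.mean())  # 0.5: the thickness of the removed ground
```

On real excavation levels the distances would of course vary point by point, and mapping them with a colour scale is exactly what gives the "removed ground" visualization shown in the pictures.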

first point cloud

fourth point cloud

cloud/cloud distance

cloud/cloud distance over the fourth point cloud


Saturday 23 February 2013

The first photomosaic for architectural purposes?

These days I am teaching various techniques for documenting cultural heritage in 2D with FLOSS (Free/Libre and Open Source Software) at the UNESCO master Open Techne. In particular, I speak about photomapping and 2D photogrammetry techniques (to record horizontal and vertical surfaces).
Teaching students is always an interesting and instructive experience and, in most cases, it is a mutual exchange of information, so it is more similar to a dialogue than to a monologue. I have often learned a lot on these occasions, and sometimes I have the opportunity to investigate particular topics further, or to change my point of view on them, thanks to the discussion with other people.
Today something like this happened: we were investigating the right way to take pictures in order to use them for an architectural photomosaic (fortunately, among the students there are not only archaeologists, but also architects, engineers, computer technicians, etc.), so I thought that a good example was the "photographic painting" made in 1873 by Giacomo Rossetti, which I happened to see some years ago while visiting the Musei Civici di Arte e Storia di Brescia (IT). You can see this "masterpiece" in the image below:

Photographic painting of S. Maria dei Miracoli in Brescia (G. Rossetti)

If I remember correctly what I read about this photographic painting, G. Rossetti built a wooden stage in order to take the different photos that compose the photomosaic without excessive distortion. Of course, nowadays there are simpler ways to take good pictures (just read the last post Alessandro wrote about the UAV drones we built), but the students' question was:

is this the first example of a photomosaic for architectural purposes?

To be honest, I was not able to answer the question. I just know that G. Rossetti presented his work at the exposition in Vienna in 1873 (where he won the medal of merit), but he had started similar projects earlier (around 1862). It seems that Rossetti's experiments were more appreciated abroad than at home (there is not even an Italian page about him on Wikipedia), so I think that better information can be found in foreign countries.

If any readers know of similar works by other photographers/artists (or by G. Rossetti), please report them on this blog, so that next time maybe I will be able to give a better answer to the students' question on this topic :).

Saturday 16 February 2013

"Henry IV", forensic facial reconstruction by manual tracking

Some days ago I read an article on a website about the embalmed head of Henry IV (originally Henri IV).

When I finished reading the article, I quickly started to search for material with which to make a forensic approximation. However, as happens most of the time, I found few things to work with.

Most of the images I found were of the embalmed head. On these it is difficult to place the tissue depth markers, because you can make mistakes and compromise the result of the approximation.

Resignedly, I searched more on the internet until I found a link where a user called Patrick (thank you!) wrote about an article with a lot of images and two videos, one of an endoscopy and the other of a CT scan.

When I saw the rotating CT scan video, I imagined that I would be able to reconstruct it using one of the automated technologies I had:

1) By SfM (open source)

2) With StereoScan (freeware)

3) With 123D Catch (freeware, running on Linux via Wine)

4) With Blender tracking (open source)

5) With Voodoo Tracker (freeware)

But first, I needed to download the video. I thought that the Video DownloadHelper extension would be able to do this, but I was wrong.

When I tried to open the video in Google Chrome, I got the stream link, and the VLC/Totem players were able to open the animation.

So, as an experiment and without pretensions, I wrote a command line where FFmpeg converts the online streaming video into an image sequence on my PC:

$ ffmpeg -i rtmp://highwirepr.fcod.llnwd.net/a1969/o21/flv:h4-ctscan -sameq %04d.jpg

And... it worked!

FFmpeg surprised me, but none of the automatic reconstruction technologies cited above worked. Some because of the characteristics of the footage; others, maybe, because of my incompetence.

In fact, I had to import the sequence into Blender and track the 3D object manually, using the sequence as a reference.


If you look at the frame where the skin starts to appear, you'll see that the skull was not modeled; instead, you'll see some reference lines. The tissue depth markers were placed using these lines and the image as a reference.

The skulls that appear are only background images.

The next step is to refine the technique, always thinking about the evolution of knowledge.

But more important than this is to share what I learned with you, dear reader.

I hope you enjoyed it. A big hug, and see you in the next one!

Converting a painting into a 3D scene

The image above is a 3D scene modeled using a painting by Piero della Francesca as a base. The source can be found here.


Why reconstruct a scene from a painting?

I don't know if everyone thinks this way, but I have always imagined what it would be like to be inside a painting I was looking at.

Now I had the opportunity to do it, using Blender and a few hours of work.

Beyond this, a "commercial" application of this technique is to convert any painting into a 3D scene for visualization in the new media that support it.

Or in other situations that may appear in the future.

An interesting aspect of this case was the distribution of the buildings: when you look at a 2D painting, you don't get a good idea of the space.

The biggest difficulty was editing the texture near the observer, because when you create a 3D stereo pair scene, you need the two views of the eyes (left and right). So, when you move the camera a little, you will see some parts that the painting didn't cover.

Even though it is manual work, finding the viewpoint was relatively easy, thanks to Blender's "Lock Camera to View" option. With it, you can manipulate the position of the camera directly in the 3D viewport.

It is possible to create anaglyphic images inside Blender, but I like to use ImageMagick to do it:

$ composite -stereo +0+0 Right.png Left.png Anaglyph.png
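A red-cyan anaglyph of this kind is built by mixing the channels of the two views: in one common convention, the red channel comes from the left image and the green/blue channels from the right one. A quick sketch of that principle with NumPy arrays (the function name and toy images are my own, and the exact channel convention may differ from ImageMagick's):

```python
import numpy as np

def anaglyph(left, right):
    """Red-cyan anaglyph: red channel from the left image,
    green and blue channels from the right image."""
    out = right.copy()
    out[..., 0] = left[..., 0]  # overwrite red with the left view's red
    return out

# Toy 2x2 RGB images: a reddish left view and a greenish right view
left = np.zeros((2, 2, 3), dtype=np.uint8); left[..., 0] = 200
right = np.zeros((2, 2, 3), dtype=np.uint8); right[..., 1] = 100

a = anaglyph(left, right)
print(a[0, 0])  # [200 100   0]
```

Viewed through red-cyan glasses, each eye then sees (mostly) its own image, which is what produces the depth effect.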

Well, that's the story. I hope you enjoyed it.

See you in the next one. A big hug!


Wednesday 6 February 2013

Financial Candlestick chart for archaeological purposes: preliminary tests

A candlestick chart is a plot used primarily in finance to describe the price movements of a quoted stock, derivative or currency over time. It is a combination of a line chart and a bar chart:

Chart made with the R package "quantmod"

The wick (i.e. the one or two lines coming out of the polygon) shows the highest and lowest traded prices during the time interval represented; the body (i.e. the polygon) shows the opening and closing trades. Candlestick charts may look similar to box plots, but they are totally different (source: Wikipedia, 05/02/2013).
I have often seen this kind of chart in newspapers or on TV, but only now have I undertaken to understand how it works; and so I had the "crazy" idea of applying candlestick charts to archaeological data.
More specifically, I thought about archaeological finds. For many of them - in particular for ceramic types - we know a starting date, i.e. the period in which a specific production begins; a time range of maximum diffusion, defined by an initial moment and a final one; and an end date, after which there are no more traces of our object.
If we replace the 4 financial values (highest, lowest, opening and closing price) with these 4 chronological values (starting date, initial and final moments of the maximum diffusion range, end date), we could profitably use candle plots to describe the life path of each archaeological material found in a stratigraphic unit (US); the goal is to date the US itself by comparing the candlesticks of all the materials contained therein.

In R, candlestick charts are provided by the quantmod package, the Quantitative Financial Modelling & Trading Framework for R (http://cran.r-project.org/web/packages/quantmod/index.html). But this package is very specific to financial purposes and requires specific data types such as time series (xts), so I put aside the idea of using quantmod and tried to build a new R function for plotting candlesticks with non-financial data.
This is a first (simplified) example of my preliminary tests (for which I have to thank the R-help mailing list: https://stat.ethz.ch/mailman/listinfo/r-help).

I built a table like this:

find, min, int_1, int_2, max
find_a, 250, 300, 400, 550
find_b, 200, 350, 400, 450
find_c, 350, 400, 450, 500
find_d, 250, 350, 450, 500
find_e, 200, 400, 500, 600

For each archaeological object (find_a, find_b, find_c, …), the table gives the starting date (min), the initial and final moments of the maximum presence range (int_1, int_2) and the end date (max), all in approximate years.
I plotted this data frame in R using the "with()" function, which evaluates an expression in an environment built from the data. Here is the source code:

> US1 <- read.table("../example.txt", header=TRUE, sep=",")
> with(US1, symbols(find, (int_1 + int_2)/2,
+     boxplots=cbind(.4, int_2 - int_1, int_1 - min, max - int_2, 0),
+     inches=FALSE, ylim=range(US1[,-1]), xaxt="n",
+     ylab="Years (AD)", xlab="Findings",
+     main="Findings chronological distribution of US 1",
+     fg="brown", bg="orange", col="brown"))
> axis(1, seq_along(US1$find), labels=US1$find)

and here is the result:


Analyzing this plot, it is possible to deduce that layer US1 probably dates back to the first half of the 5th century AD; the materials find_a and find_b could be residuals from earlier ages.
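For readers outside R, the same "archaeological candlestick" can be sketched with Python and matplotlib, using the toy table above (the wick spans the full lifespan min-max, the body spans the maximum diffusion range int_1-int_2; the file name and colours are my own choices):

```python
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt

# Same toy data as the table above: (min, int_1, int_2, max) per find
finds = {
    "find_a": (250, 300, 400, 550),
    "find_b": (200, 350, 400, 450),
    "find_c": (350, 400, 450, 500),
    "find_d": (250, 350, 450, 500),
    "find_e": (200, 400, 500, 600),
}

fig, ax = plt.subplots()
for x, (name, (lo, int_1, int_2, hi)) in enumerate(finds.items()):
    ax.plot([x, x], [lo, hi], color="brown")           # wick: full lifespan
    ax.bar(x, int_2 - int_1, bottom=int_1, width=0.4,  # body: max diffusion
           color="orange", edgecolor="brown")
ax.set_xticks(range(len(finds)))
ax.set_xticklabels(list(finds))
ax.set_xlabel("Findings")
ax.set_ylabel("Years (AD)")
ax.set_title("Findings chronological distribution of US 1")
fig.savefig("us1_candles.png")
```

Reading where the bodies overlap (here around 400-450 AD) gives the same visual dating argument as the R plot.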


As I said, this is just a simple example, but the potential is clear. This method makes it possible to plot the duration of archaeological materials and to compare the datings of objects found in stratigraphic units, in order to assign them chronological information. The statistical environment could also provide other advantages, such as probabilistic analyses, confidence intervals, etc., giving mathematical-statistical support to the usual (and often subjective) dating of archaeological layers.
The next steps will be to build a specific R function for "archaeological" candle plots, starting from the simple code written above, and to test other statistical techniques for plotting the duration of archaeological finds, such as seriation, box plots, etc.

Any suggestions, websites, literature and bibliographic references on this topic, as well as advice on R packages other than quantmod that provide candlestick charts without financial data, are welcome.

by Denis Francisci

Friday 1 February 2013

It is Carnival!

Once we were young and stupid, now we are no more young
(quote attributed to Mick Jagger)

OK, I am stupid, but the Taung Child face was the only 3D data I had on my computer at the moment, so I gave a try to a piece of software we would like to add to ArcheOS.
We are working on the implementation of some new functionalities for the next release (Theodoric), especially regarding a good 3D engine and some augmented reality applications. I think Alessandro, surfing the net, found the right open source software (OpenSpace3D) and, with the help of ORNis (aka Romain Janvier), we hope to port it to GNU/Linux as soon as possible.
So here is the result of the first test:


Do not worry about the slow reaction of the software: it is mainly caused by the online screen recorder I was using to record the video (it was based on Java and it slowed down the applications running on my computer a little...). As usual, if you want to help us (also with software evaluation), just join the ArcheOS channel on IRC.
Stay tuned :).

This work is licensed under a Creative Commons Attribution 4.0 International License.