Monday, 30 January 2017

Digitizing the excavation

The 21st Conference on Cultural Heritage and New Technologies (CHNT 21, 2016) took place in Vienna in the first week of November 2016. On that occasion we gave a presentation entitled "Digitizing the excavation. Toward a real-time documentation and analysis of the archaeological record". Today I found the time to publish it on our blog, to share our research on this topic and in particular some interesting "archeorobotics" projects we are working on.
Below you can see the video of the presentation, created as always with the open source software impress.js and Strut...

... and here is a short description of each slide:


The title (strictly related to Digital Archaeology in general)


A short presentation of Arc-Team


All the work has been done thanks to Free/Libre and Open Source Software. In order to continue our research on archaeological methodology we need the source code!


The fundamental schema of the archaeological cognitive process, elaborated by G. Leonardi in 1982. The schema shows the progressive reduction of the information regarding human actions before and during the archaeological excavation (human activities --> traces on the soil --> natural and anthropological degradation of the record --> archaeological excavation --> archaeological documentation), until interpretative knowledge starts to recover information during the post-excavation stage (with analytical data interpretation and reconstructive hypotheses)


A practical example of the schema from the site of Torre dei Sicconi in Italy (a medieval castle):
1. Human activities (summarized in the building of the castle, the medieval battle and the destruction of the main structure and the controlled explosion during the Great War)

2. Traces on the soil (summarized in the evidence of the battle, of the controlled explosion and of recent agrarian activities, while only negative layers were found regarding the construction of the structure)

3. Natural and anthropological degradation (summarized in the battle, the explosion, the agrarian activities and the normal natural dynamics)

4. Archaeological excavation (the most destructive investigation: in Torre dei Sicconi all the layers concerning the tower and the main central building have been removed by this activity)

5. The importance of archaeological documentation comes from the destructive nature of the analysis (excavation). Being a long-term project, Torre dei Sicconi was documented with both traditional and digital methodologies

6. Data analysis. During this stage our knowledge of the site started to grow again. In this case both archaeological and historical techniques have been used

7. Reconstructive hypotheses represent the maximum increase of our (interpretative) knowledge of the site. For Torre dei Sicconi this stage has been achieved only for the central part of the castle (tower and main building)


The archaeological excavation is the most critical (destructive) stage of our knowledge regarding a site.


Arc-Team's excavation strategies:
1. increasing the amount of information registered while decreasing the time-consuming operation of archaeological documentation
2. on-site direct observation for a better interpretation, avoiding at the same time any kind of data selection
3. moving the lab into the field (chemical and physical analyses)


A milestone of our research: in 2006 the development of the "Metodo Aramus" gave us better (more precise and accurate), faster and correct (equalized) 2D digital documentation with FLOSS.


Another milestone: between 2008 and 2009 the migration from pure photogrammetric software to SfM and MVSR methods (through the development of a GUI for Pierre Moulon's application, the Python Photogrammetry Suite) gave us better and faster 3D digital documentation


Even today we still use a combination of 2D and 3D techniques to meet different requirements of various archaeological projects


2D digital documentation through GIS is fast enough for on-site interpretation during emergency excavations


A software like QGIS allows direct interpretation in the field without the need for long post-processing


3D documentation gives better results, but needs longer processing time (even though data acquisition in the field, which is always performed, does not take long)


We achieved (lower quality) 3D data acquisition with the fundamental characteristic of being real-time, thanks to open hardware (archeorobotics).
Our experience in archeorobotics dates back to 2006, with our first prototype of a UAV, which could be used professionally only in 2008.


Currently our archeorobotics research concerns our latest prototype of the Archeodrone (a UAV specifically designed for aerial archaeology)...


... some CNC machines and, above all, the Fa)(a 3D, an open hardware 3D printer which, without any kind of modification, was able to satisfy our archaeological needs (like 3D printing casts of unique finds or extracting and printing DICOM data from x-ray CT scans)...


... and the ArcheoROV, the open hardware underwater Remotely Operated Vehicle which we developed with the Witlab FabLab


Some pictures of the first test of the ArcheoROV


A first step towards 3D real-time documentation through SLAM (Simultaneous Localization and Mapping) techniques has been taken with the open source ROS (Robot Operating System) and RTAB-Map via Kinect...


... and tested for 3D real-time documentation in wooded areas (where SfM and MVSR or laser scanning would have been too slow), producing in about one hour of work a model (with real dimensions) of 75,000 points.


A benefit of ROS-capable archeorobotic systems like these is the possibility to change the sensor in order to adapt the hardware to different situations, using monocular or stereo cameras (for odometry) as well as LIDAR or SONAR devices.


Another benefit is the wide range of possibilities offered by the different open source software packages (e.g. RTAB-Map, LSD-SLAM, REMODE, Cartographer, etc.)


Currently the precision/accuracy level of real-time 3D archaeological documentation cannot be compared with the results achieved through post-processing with traditional SfM-MVSR systems, but there are good prospects for improvement.


Nowadays, based on our professional experience, the best use of such devices seems to be during extreme operations, such as high mountain archaeology, glacial archaeology, underwater archaeology or speleoarchaeology


Another important step to improve the reaction time of professional archaeology, in order to avoid errors during the critical stage of the excavation, is the possibility to perform some basic archaeometric analyses (chemical and physical) directly in the field.


Considering that any archaeological layer is composed of two different elements, the skeleton (macroscopic) and the fine earth (microscopic), it is obvious that different analyses can be performed in different work environments.


For instance, in the case of the skeleton, a fast petrographic (ontoscopic) analysis can easily be performed directly in the field (defining allogenic elements), while further (more specific) investigations need an equipped laboratory.


Also in the case of the fine earth, some raw descriptive analyses can be performed in the field, while laboratory investigations can reach very detailed results (e.g. with the Scanning Electron Microscope).


The field analysis of the fine earth is more problematic (compared with the skeleton): the most common tests (e.g. soil texture by feel) are anametric and subjective.
For this reason, archaeometric tests are the better choice (e.g. the sedimentation test)


The sedimentation test in the field can be improved with basic physical analysis (e.g. applying Stokes' Law in order to define sand, silt and clay by the time they need to settle)
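As a sketch of the kind of calculation involved: Stokes' Law gives the terminal settling velocity of a small sphere in a fluid, so the settling time over a known water column separates the fractions. The densities, viscosity and particle-size limits below are common textbook values (quartz grains in water at roughly 20 °C; USDA-style 50 µm sand/silt and 2 µm silt/clay boundaries), not necessarily the exact parameters used in the field test described above.

```python
def stokes_velocity(d_m, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a sphere of diameter d_m (m):
    v = 2/9 * (rho_p - rho_f) * g * r^2 / mu  (Stokes' Law)."""
    r = d_m / 2.0
    return 2.0 / 9.0 * (rho_p - rho_f) * g * r * r / mu

def settling_time(d_m, depth_m=0.1):
    """Seconds needed to fall depth_m through still water."""
    return depth_m / stokes_velocity(d_m)

# Finest sand settles through 10 cm in under a minute; the finest silt
# takes hours -- which is why the jar test reads sand first, clay last.
for name, d in [("sand limit (50 um)", 50e-6), ("silt limit (2 um)", 2e-6)]:
    print(f"{name}: {settling_time(d):.0f} s")
```

The large spread of settling times (seconds for sand, hours for clay) is exactly what makes the timed sedimentation test metric rather than subjective.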


Another field improvement of the sedimentation test is the possibility to store the data directly in a PostgreSQL/PostGIS database (through some specific fields of the archaeological recording sheet), using the open source application geTTexture.
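To make the idea concrete, here is a minimal sketch of storing per-layer texture fractions in a recording-sheet table. The schema (table and column names) is hypothetical and the example uses Python's built-in sqlite3 purely for self-containment; geTTexture itself targets PostgreSQL/PostGIS, whose SQL would look essentially the same.

```python
import sqlite3

# Hypothetical recording-sheet table: one row of sedimentation results
# per stratigraphic unit (US). geTTexture's real schema may differ.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE texture_record (
    us_number INTEGER,          -- stratigraphic unit identifier
    sand_pct  REAL,             -- fraction settled first
    silt_pct  REAL,
    clay_pct  REAL)""")
db.execute("INSERT INTO texture_record VALUES (?, ?, ?, ?)",
           (102, 55.0, 30.0, 15.0))

# Sanity check directly in SQL: the three fractions must sum to 100.
row = db.execute("SELECT sand_pct + silt_pct + clay_pct "
                 "FROM texture_record WHERE us_number = 102").fetchone()
print(row[0])
```

Keeping the fractions in the database (instead of on paper) means the texture data can later be joined to the GIS layers of the excavation for spatial analysis.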


An example of the use of geTTexture


Other archaeometric tests which are simple to perform directly during the excavation are based on basic chemical analyses, specifically on the quantification of compounds like phosphates or nitrates.


Moreover, with some simple workarounds, it is possible to turn anametric (boolean) analyses of carbonates or organic substances into metric (quantitative) observations.
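One common workaround of this kind (the slides do not specify which one Arc-Team uses, so this is only an illustration) is a dilution series: a yes/no spot test repeated on progressively diluted samples becomes semi-quantitative, because the highest dilution that still reacts scores the concentration.

```python
def semi_quantify(reacts, dilutions=(1, 2, 4, 8, 16, 32)):
    """Turn a boolean spot test into a semi-quantitative score.
    `reacts(d)` reports whether a sample diluted 1:d still gives a
    positive reaction; the highest reacting dilution is the score."""
    score = 0
    for d in dilutions:
        if reacts(d):
            score = d
        else:
            break
    return score

# Toy sample whose reaction disappears beyond a 1:8 dilution:
print(semi_quantify(lambda d: d <= 8))
```

Two layers that would both read simply "positive" in a boolean test can now be ranked (e.g. scores 8 vs 2), which is what makes the observation metric.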


The archaeological excavation is a destructive process, subject to fatal (irreversible) errors. Moreover, the reduced time and budget of professional and emergency archaeology increase stress during the decision-making stages.
Real-time 3D mapping can speed up data interpretation, avoiding data selection in the field, while on-site chemical and physical analyses (geoarchaeology and archaeometry) can define a better (data-driven) digging strategy.

I hope this presentation can be useful. Have a nice day!

Sunday, 29 January 2017

Gufan, the 2000 year old Brazilian


In 2013 I visited the Paranaense Museum with Dr. Moacir Elias Santos. At that time I was in Curitiba to present the face of an Andean mummy, on the occasion of the II Happy Mummy's Day.
Panel printed with Gufan's facial reconstruction process - Photo: Karen Becker
Dr. Moacir had told me that I would be surprised by the museum's rich collection. I was indeed surprised: in every room I saw pieces and more pieces, which together made up a historical panorama, not only of the state of Paraná, but of Brazil and even of other countries.

TV story about the facial reconstruction of Gufan and the use of virtual reality

After dazzling myself with old vestments, pictures, coins and infographics, we arrived at a room where the bones of an aboriginal child, a few hundred years old, were on display.

I wasted no time and took a series of photographs of the skull, already with the intention of digitizing it in 3D and later reconstructing it.

As soon as I returned to Mato Grosso, that is exactly what I did. I showed the work to Dr. Moacir and he appreciated it, but he asked me to contact those responsible for the museum so that they would know about the work I was doing; after all, I had not agreed with them on the use of the pictures.

I called the museum, explained the situation and the clerk transferred me to Dr. Claudia Parellada. Undoing my initial fears, which foresaw a future dominated by coercion, she was interested in the idea of reconstruction and not only allowed me to post the work on my site, but also raised the possibility of a partnership, since the museum held other skulls, some of them over a thousand years old.

The facial reconstruction project

The story does not stop there. In 2008 I traveled to Curitiba for the first time at the invitation of my friend Alessandro Binhara, to lecture on Blender and computer graphics at the educational institution he was working at. The talk was given and we agreed that one day we would work on a project together.

Steps of facial reconstruction

Nine years passed and the opportunity appeared. I arranged an in-house workshop with Mr. Binhara, the Beenoculus staff, and another buddy of mine, the developer Sandro Bihaiko. The plan was to bring together a number of experts and study some applications using virtual and augmented reality.

In the meantime I realized that it was a good opportunity to resume the discussions with the staff of the Paranaense Museum and I went back to talking with Dr. Claudia Parellada and Dr. Renato Carneiro, director of the institution.

I learned then that they had a rich collection of skulls, and among them was Gufan, a 2000-year-old proto-Jê autochthon. The name Gufan comes from the Kaingang language and means "ancestor". Given the integrity of the anatomical piece, it proved to be the most apt for reconstruction.

Dr. Parellada and Dr. Carneiro collected all the data about Gufan and sent me a series of photos that served as the basis for 3D scanning by the photogrammetry technique. Shortly afterwards I had the skull digitized and the reconstruction work started.

Facial reconstruction

The process of facial reconstruction went smoothly with nothing new in relation to the other works. Starting with the positioning of soft tissue thickness markers, I then went through digital sculpture, retopo (simplification of the mesh), mapping and pigmentation, and finally the placement of hair and generation of images.

The base of facial texture
It must be documented that the mapping references I received had an international flavor. My friend Santiago González photographed one of his students in Lima, Peru, and sent me a series of images to be used in the work. I take this opportunity to thank him and the student!

I had to resort to this solution because here in my city I could not find any individuals with indigenous traits to photograph. I thought about it a little and turned to my Peruvian friends, since in that beautiful country a considerable part of the population carries the appearance of its historic and warrior people.

The Virtual Reality

With the face of Gufan reconstructed, I traveled to Curitiba to meet the team and carry out our project. The work took place at the premises of Beenoculus, a company that assembles virtual reality glasses and produces interactive content.

The excitement was so great that our workshop ended up being entirely about creating a presentation for Gufan. Beenoculus donated a state-of-the-art headset, my friend Binhara came in with cutting-edge machinery, including a generous video card so the application would run without choking, and Sandro Bihaiko wrote the application with the help of local staff.

While the presentation was being developed, we went to the Paranaense Museum to check that everything was right with the space where the unveiling would be held. A panel illustrating the stages of facial reconstruction was assembled, we discussed the distribution of the elements and seats, and everything was set; we just had to wait for the big day.

The face presentation

The presentation of the face of Gufan was held on January 24, 2017. Initially we expected 20 to 30 people, but I reached out to the press hoping to exceed that number, without much pretension, of course.

Before traveling to Curitiba I composed a press release with the digital technology staff and the management of the Paranaense Museum. I also telephoned several TV stations and newspapers in the city, and soon both the biggest newspaper (Gazeta do Povo) and the biggest TV station (RPC, Globo) showed interest in the story. The result of all this was two newspaper covers and a 7-minute story with two live insertions in the midday news of January 24.

And during the presentation, instead of 20 or 30 people, 170 came, according to the organizers! Many people had to attend the two lectures standing. A total success!


I just have to thank everyone who made this possible: Claudia Parellada, Renato Carneiro, Alessandro Binhara, Sandro Bihaiko, Anelise Daux, Junior Evangelista Terrabuio, Rawlinson Terrabuio, Matheus Dalla, Victor Ullmann, Amilton Binhara, Adelina Binhara, Lucas Gabriel Marins, Durval Ramos, Angieli Maros, Fernanda Fraga, Keyse Caldeira, Caroline Olinda, Everton da Rosa and Karen Lisse Fukushima.

Not forgetting to mention the companies and institutions involved: Paranaense Museum, Azuris, Beenoculus, State Secretary of Culture of Paraná, Government of Paraná, Arc-Team Italy and all the press.

I hope from the bottom of my heart that this partnership continues and that the future holds good news. A big hug and thank you for reading!

Wednesday, 28 December 2016

The devil's boat

This year, thanks to Prof. Tiziano Camagna, we had the opportunity to test our methodologies during a particular archaeological expedition, focused on the localization and documentation of the "devil's boat".
This strange wreck consists of a small boat built by Italian soldiers, the "Alpini" of the battalion "Edolo" (nicknamed the "Adamello devils"), during World War I, near the mountain hut J. Payer (as reported in Luciano Viazzi's book "I diavoli dell'Adamello").
The mission was a derivation of the project "La foresta sommersa del lago di Tovel: alla scoperta di nuove figure professionali e nuove tecnologie al servizio della ricerca” ("The submerged forest of lake Tovel: discovering new professions and new technologies at the service of scientific research"), a didactic program conceived by Prof. Camagna for the high school Liceo Scientifico B. Russell of Cles (Trentino - Italy).
As already mentioned, the target of the expedition was the small boat currently lying on the bottom of lake Mandrone (Trentino - Italy), previously localized by Prof. Camagna and later photographed during an exploration in 2007. The lake is located at 2450 meters above sea level. For this reason, before involving the students in such a difficult underwater project, a preliminary mission was carried out, in order to check the general conditions and perform some basic operations. This first mission was directed by Prof. Camagna and supported by the archaeologists of Arc-Team (Alessandro Bezzi and Luca Bezzi for underwater documentation, and Rupert Gietl for GNSS/GPS localization and boat support), by the explorers of the Nautica Mare team (Massimiliano Canossa and Nicola Boninsegna) and by the experts of Witlab (Emanuele Rocco, Andrea Saiani, Simone Nascivera and Daniel Perghem).
The primary target of the first mission (26 and 27 August 2016) was the localization of the boat, since the exact place where the wreck was lying was not known. Once the boat was re-discovered, all the operations necessary to georeference the site were performed, so that the team of divers could concentrate on the correct archaeological documentation of the boat. In addition to the objectives mentioned above, the mission was an occasion to test for the first time, in a real operating scenario, the ArcheoROV, the open hardware ROV developed by Arc-Team and WitLab.
Target 1 was achieved quickly and easily during the second day of the mission (the first day was dedicated to the divers' acclimatization at 2450 meters a.s.l.), since the weather and environmental conditions were particularly good, so that the boat was visible from the lake shore. Target 2 was reached by positioning the GPS base station on a referenced point of the "Comitato Glaciologico Trentino" ("Glaciological Committee of Trentino") and using the rover from an inflatable kayak to register some Control Points on the surface of the lake, connected through a reel to strategic points on the wreck. Target 3 was completed by collecting pictures for a post-mission 3D reconstruction through simple SfM techniques (already applied in underwater archaeology). The open source software used in post-processing were PPT and openMVG (for 3D reconstruction), MeshLab and CloudCompare (for mesh editing), MicMac (for the orthophoto) and QGIS (for archaeological drawing), all of them running on the (still) experimental new version of ArcheOS (Hypatia). Unlike what has been done in other projects, this time we preferred to recover the original colours of the underwater photos (to help the SfM software in the 3D reconstruction), using a series of commands of the open source software suite ImageMagick (soon I'll write a post about this operation). Once the primary targets were completed, the spare time of the first expedition was dedicated to secondary objectives: testing the ArcheoROV (as mentioned before), with positive feedback, and the 3D documentation of the landscape surrounding the lake (to improve the free LIDAR model of the area).
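The exact ImageMagick command sequence for the colour recovery is promised in a future post, so the following is only a generic illustration of the principle, not Arc-Team's actual pipeline: a gray-world white balance scales each channel so that its mean matches the overall mean, pulling the suppressed red channel of a blue-green underwater shot back up (pixels are plain RGB tuples for self-containment).

```python
def gray_world(pixels):
    """Gray-world white balance on a list of (R, G, B) tuples:
    scale each channel so its mean equals the global mean."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    target = sum(means) / 3.0
    gains = [target / m if m else 1.0 for m in means]
    return [tuple(min(255, round(p[c] * gains[c])) for c in range(3))
            for p in pixels]

# A typical underwater cast: red strongly suppressed, blue-green dominant.
balanced = gray_world([(40, 120, 140), (60, 140, 160)])
print(balanced)
```

Restoring a plausible colour balance gives the SfM software more usable texture contrast for feature matching, which is the point of performing this step before 3D reconstruction.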
What could not be foreseen for the first mission was serendipity: before emerging from the lake, the divers of the Nautica Mare team (Nicola Boninsegna and Massimiliano Canossa) found a tree on the bottom of the lake. From an archaeological point of view it was soon clear that this could be an important discovery, as the surrounding landscape (periglacial grassland) is without wood (the treeline lies almost 200 meters below). The technicians of Arc-Team geolocated the trunk with the GPS, in order to perform a sampling during the second mission.
For this reason, the second mission changed its priority and focused on recovering core samples by drilling the submerged tree. Further analysis (performed by Mauro Bernabei, CNR-ivalsa) demonstrated that the tree was a Pinus cembra L., with the last ring dated to 2931 B.C. (4947 years old). Nevertheless, the expedition maintained its educational purpose, teaching the students of the Liceo Russell the basics of underwater archaeology and performing with them some tests on a low-cost sonar, in order to map part of the lake bottom.
All the operations performed during the two underwater missions are summarized in the slides below, which come from the lesson I gave to the students in order to complete our didactic task at the Liceo B. Russell.


Prof. Tiziano Camagna (Liceo Scientifico B. Russell), for organizing the missions

Massimiliano Canossa and Nicola Boninsegna (Nautica Mare Team), for the professional support and for discovering the tree

Mauro Bernabei and the CNR-ivalsa, for analyzing and dating the wood samples

The Galazzini family (tenants of the refuge “Città di Trento”), for the logistic support

The wildlife park “Adamello-Brenta” and the Department for Cultural Heritage of Trento (Office of Archaeological Heritage) for close cooperation

Last but not least, Dott. Stefano Agosti, Prof. Giovanni Widmann and the students of Liceo B. Russell: Borghesi Daniele, Torresani Isabel, Corazzolla Gianluca, Marinolli Davide, Gervasi Federico, Panizza Anna, Calliari Matteo, Gasperi Massimo, Slanzi Marco, Crotti Leonardo, Pontara Nicola, Stanchina Riccardo

Tuesday, 27 December 2016

Basic Principles of 3D Computer Graphics Applied to Health Sciences

Dear friends,

This post is an introductory material, created for our online and classroom course of "Basic Principles of 3D Computer Graphics Applied to Health Sciences". The training is the result of a partnership that began in 2014, together with the renowned Brazilian orthognathic surgeon, Dr. Everton da Rosa.

Initially the objective was to develop a surgical planning methodology using only free and freeware software. The work was successful and we decided to share the results with the orthognathic surgery community. As soon as we put the first contents related to this research on our social media, the demand was great, and it was not limited to Dentistry professionals but extended to all fields of human health as well as veterinary medicine.

In view of this demand, we decided to open up the initial, theoretical contents of the topics covered by our course (which is mostly practical). In this way, those interested will be able to learn a little about the concepts involved in the training, while those in the area of computer graphics will have at hand a material that introduces them to the field of modeling and digitization in the health sciences.

In this first post we will cover the concepts related to the visualization of 3D objects and scenes.

We hope you enjoy it, good reading!

Chapter 1 - Scene Visualization

You already know much of what you need

Cicero Moraes
Arc-Team Brazil

Everton da Rosa
Hospital de Base, Brasília, Brazil

What does it take to learn how to work with 3D?

If you are a person who knows how to operate a computer and has at least edited a text, the answer is: very little.

When editing a text we use the keyboard to enter information, that is, the words. The keyboard helps us with shortcuts, for example the popular CTRL + C and CTRL + V for copy and paste. Note that we do not use the system menu to trigger these commands, for a very simple reason: it is much faster and more convenient to use the shortcut keys.

When writing a text we do not limit ourselves to writing a sentence or a page. We almost always format the letters, making them bold, setting them as a title or italicizing them, and we import images or graphics. These latter actions can also be called interoperability.

The name is complex, but the concept is simple. Interoperability is, roughly speaking, the ability of programs to exchange information with one another. That is, you take the photo from a camera, save it on the PC, maybe use an image editor to increase the contrast, then import that image into your document. Well, the image was created and edited elsewhere! This is interoperability! The same is true of a table, which can be made in a spreadsheet editor and later imported into the text editor.

This amount of knowledge is not trivial. We could say that you already have 75% of all the computational skills needed to work with 3D modeling.

Now, if you are one of those who play or have already played a first-person shooter game, you can be sure that you have 95% of everything you need to model in 3D.

How is this possible?

Very simple. In addition to all the knowledge surrounding most computer programs, as already mentioned, the player still develops other capabilities inherent in the field of 3D computer graphics.

When playing on these platforms it is necessary, first of all, to analyze the scene with which one is going to interact. After studying the field of action, the player moves around the scene, and if someone appears in the line of sight, the chance of that individual taking a shot is quite large. This ability to move and interact in a 3D environment is the starting point for working with a modeling and animation program.


Observation of the scene

When we get to an unknown location, the first thing we do is observe. Imagine that you are going to take a course in a certain space. Hardly anyone "rushes into" an environment. First of all we observe the scene, we make a general survey of the number of people and even study the escape routes in case of a very serious unforeseen event. Then we move through the studied scene, going to the place where we will wait for the activities to begin. In a third moment, we interact with the scenario, both using the course equipment, such as notebook and pen, and talking to other students and/or teachers.

Notice that this event was marked by three phases:
1) Observation
2) Displacement
3) Interaction

In the virtual world of computer graphics the sequence is almost the same. The first part of the process consists in observing the scene, in getting an idea of what it is like. This command is known as orbit. That is, an observer orbits the scene while watching it, as if it were an artificial satellite around the Earth. It maintains a fixed distance and can see the scene from every possible angle.
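The satellite metaphor can be written down in a few lines: the observer is a point on a sphere around the target, parametrized by two angles (the function and variable names below are illustrative, not taken from any particular 3D package).

```python
import math

def orbit_position(target, distance, azimuth_deg, elevation_deg):
    """Camera position on a sphere around `target`: the scene never
    moves, only the observer does -- which is exactly what Orbit means."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (target[0] + distance * math.cos(el) * math.cos(az),
            target[1] + distance * math.cos(el) * math.sin(az),
            target[2] + distance * math.sin(el))

# A quarter-turn around the origin at a fixed distance of 10 units:
print(orbit_position((0, 0, 0), 10, 0, 0))   # on the +X side
print(orbit_position((0, 0, 0), 10, 90, 0))  # on the +Y side, same distance
```

Note that whatever angles you pick, the distance from the observer to the target stays constant, just like a satellite on a circular orbit.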

But man does not live by orbiting alone: one must get closer to see the details of some specific point. For this we use the zoom commands, already well known to most computer operators. Besides zooming in and out (+ and - zoom), you also need to walk through the scene or move horizontally (a movement known as Pan).

A curious fact about these scene-observation commands is that they are almost always mapped to the mouse buttons. See the table below:

Above we have a comparison of three programs that will be discussed later. The important thing now is to know that in the three basic commands we see the direct involvement of the mouse. This makes it very clear that if you come across an open 3D scene and use these combinations of commands, at the very least you will move the observer.

The phrase "move the observer" has been spelled out so that you are aware of a situation. So far we are dealing only with observation commands. By the way it operates, orbiting can very easily be confused with the command for rotating an object. As some would say: "Slow down. It's not quite that way. This is this, and that is that." It is very common for beginners in this area to confuse one with the other.

To illustrate the difference between them, observe in the figure above the scene at the center (Original), which is the initial reference. On the left we see the orbit command in action (Orbit). Note that the grid element (in light gray), the reference for what would be the floor of the scene, accompanies the cube. This is because what actually moves in the scene is the observer, not the elements. On the right (Rotate) we see the grid in the same position as in the scene at the center; that is, the observer remained at the same point, but the cube underwent a rotation.
Why does this seem confusing?

In the real world, the one we live in, the observer is ... you. You use your eyes to see space with all the three-dimensional depth that this natural binocular system offers. When we work with 3D modeling and animation software, your eyes become the 3D View, that is, the working window where the scene is presented.
In the real world, when we walk through a space, we have the ground to move on. It is our reference. In a 3D scene this initial ground is usually represented by the grid we saw in the example figure. It is always important to have a reference to work with; otherwise it is almost impossible, especially for beginners, to do anything on the computer.

Display Type

"Television makes you fat."

Surely you have already heard this phrase in some interview, or even from an acquaintance who has been filmed and seen the result on the screen. It can indeed happen that a person seems more robust than "normal", but the truth is that we are all more full-bodied than the figure our eyes present to us when we look at ourselves in the mirror.

In order for you to have a clear idea of what this means, you need to understand some simple concepts that involve viewing from an observer in a 3D modeling and animation program.

The observer in this case is represented by a camera.

Interestingly, one of the most used representations for the camera within a 3D scene is the icon of a pyramid. See the figure above, where three examples are presented. Both the Blender 3D software and MeshLab use a pyramid icon to represent the camera in space. The simplest way to represent this structure is a triangle, like the one on the right side (Icon).

All this is not for nothing. This representation holds in itself the basic principles of photography.

You may have heard of the pinhole camera (camera obscura, or "dark chamber"). Its operation is very simple: it is an archaic camera made with a small box or can. On one side it has a very thin hole, and on the other side a photographic paper is placed. The hole is covered with dark adhesive tape until the photographer positions the camera at a point. Once the camera is positioned and still, the tape is removed and the film receives the external light for a while. Then the hole is capped again, the camera is taken to a studio and the film is developed, presenting the scene in negative. All simple and functional.

For us, what matters are a few small details. Imagine that we have an object to be photographed (A); the light coming from outside enters the camera through a hole made in the front (B) and projects the inverted image inside the box (C). Anything outside this capture area will be invisible (illustration on the right).
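The geometry behind (A), (B) and (C) is just similar triangles: the size of the projected image scales with the ratio between the hole-to-film distance and the hole-to-object distance. The numbers below are made-up examples, not measurements from any real camera.

```python
def image_height(object_height, object_dist, hole_to_film):
    """Pinhole projection by similar triangles:
    h_image = h_object * (hole-to-film) / (hole-to-object).
    The negative sign records that the image is upside down."""
    return -object_height * hole_to_film / object_dist

# A 1.8 m tall subject 3 m from the hole, with the film 0.15 m behind it,
# projects as a roughly 9 cm image, inverted:
print(image_height(1.8, 3.0, 0.15))
```

Moving the subject closer (smaller `object_dist`) makes the projected image larger, which is the effect exploited in the focal-length comparison further below.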

At this point we already have the answer to why the camera icons are similar in different programs. The pyramid represents the projection of the visible area of the camera. Notice that the projection of the visible area is not the same as the WHOLE visible area; that is, we have a small representation of how the camera receives the external scene.

Anything outside this projection simply will not appear in the scene, as with the sphere above, which is partially hidden.

But there is still one piece left in this puzzle: why we seem heavier on TV cameras.

Note the two figures above. Comparing them, we can identify some characteristics that set them apart. The image on the left looks like a structure being squeezed, especially the eyes, which seem to bulge sideways. On the right, we have a structure whose eyes look more centered, the nose smaller, the mouth more open and slightly higher; the ears are visible and the top of the head is noticeably larger.

The two images show many visual differences... yet both depict the same 3D object!

The difference lies in the way the photographs were taken. In this case, two different focal lengths were used.

Above we see the two pinhole cameras. The image on the left was captured with a focal length of 15, the one on the right with 50. On one side we see a more compact structure (15), where the background appears very close to the front; on the other, a more stretched structure with a narrower capture angle (50).

But why, in the case of the 15 focal length, do the ears not appear in the scene?

The explanation is simple and can be approached geometrically. Note that, in order to frame the structure in the photo, it was necessary to bring it very close to the light inlet. In doing so, the captured volume (BB) only picks up the front of the face (Visible), hiding the ears (Invisible). In the end, we have a limited projection (CC) that suffers a certain deformation, giving the impression that the eyes are slightly farther apart.

With a focal length of 50 the visible area of the face is wider. We can verify this with the projection of the visible region, as we did previously.

In this example we chose to frame the structure very close to the camera's capture limits, precisely to highlight the differences. We clearly see how a larger focal length implies a wider capture of the photographed structure. A good example: with a value of 15, the lower tips of the ears are barely visible; at 35 these structures are already showing; at 50 the visible area has almost doubled; and at 100 we have an almost complete view of the ears. Note also that at 100 the marginal region of the eyes crosses the structure of the head, while in the orthogonal view (Ortho) the marginal region of the eyes is aligned with that same structure.
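The "squeezing" effect of short focal lengths can be put into numbers. The sketch below uses entirely made-up dimensions (a 160 mm face framed on a 36 mm image, a nose 50 mm in front of the eye plane, ears 80 mm behind it) just to show the principle: to keep the same framing, a short lens must sit much closer, so near features project disproportionately larger than far ones.

```python
def projected_size(width, distance, f):
    """Apparent image size of a feature of given width at the given distance."""
    return f * width / distance

def camera_distance_for_framing(face_width, image_width, f):
    """Distance needed so the whole face fills the same image width."""
    return f * face_width / image_width

for f in (15, 50, 100):
    d = camera_distance_for_framing(160, 36, f)   # closer for short lenses
    nose = projected_size(30, d - 50, f)          # nose tip, 50 mm nearer
    ear = projected_size(30, d + 80, f)           # ears, 80 mm farther back
    print(f"f={f:3d}: nose/ear apparent size ratio = {nose / ear:.2f}")
```

The ratio shrinks toward 1 as the focal length grows: the long lens flattens the face, and the orthogonal view is the limit of that process.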

But what is an orthogonal view?

To make the picture complete, let us go step by step.

If we isolate the edges of all the views, align the eyebrows and the base of the chin, and superimpose the shapes, we see in the end that the smaller the focal length, the smaller the visualized structural area. Among all the shapes, the one that stands out most is the orthogonal view: it simply covers more area than all the others. We can see this at the far right, where the blue color appears in the marginal regions of the overlap.

But how does orthogonal projection work?

The best example is the facade of a house. Above, on the left we have a view with focal length 15 (Perspective) and on the right an orthogonal one.

Analyzing the capture with focal length 15, the blue lines, as usual, represent the boundary of the visible area (the limit of the generated image), and the other lines the projection of some key parts of the structure.

The orthogonal view, in turn, does not suffer the deformation of a focal length. It simply receives the structural information directly, generating a drawing consistent with the measurements of the original; that is, it shows the house "as it is." The process is very reminiscent of an X-ray, which represents the structure without (or almost without) perspective deformation.

Looking at the images side by side, from another point of view, a marked difference between them becomes clear. The bottom and top edges of the side walls are parallel, but if you extend a line along each of these edges in the perspective view, the paths meet at an intersection known as the vanishing point (A and B). In the orthogonal view the lines never meet, because... they are parallel! Once again, the orthogonal projection respects the actual structure of the object.

So does that mean the orthogonal view is always the best option?

No, it is not always the best option; it all depends on what you are doing. Take the front views discussed earlier as an example. Even though the orthogonal view offers a larger capture area (D), if we compare the regions exclusive to the orthogonal view (E) with the regions exclusive to the focal-length-15 perspective (F), we see that, even covering a smaller area of pixels, the view with perspective deformation captured regions that were hidden in the orthogonal view.

Moraes & Salazar-Gamarra (2016)
That answers the question of whether or not people gain weight on camera. The longer the focal length, the more robust the face looks. But this is not a matter of fattening or not; it is a matter of actually showing the structure. That is, the orthogonal image presents the individual with the measurements most consistent with the real volumetry.

The interesting thing about this is that it shows our eyes deceive us: the image we see of people does not correspond to what they actually are, structurally speaking. Neither does what we see in the mirror.

Professional photographers, for example, are experts at exploiting this reality to extract the maximum quality from their work.

3D Vision

Have you ever wondered why you have two eyes and not just one? Most of the time we forget we have two, because we see only one image when we observe the things around us.

Take this quick test.

Find a small object to look at (A), about a meter away. Position your index finger (B) pointing up, 15 cm in front of your eyes (C), aligned with your nose.

When looking at the object, you will see one object and two fingers.

When looking at the finger, you will see one finger and two objects.

If you observe with just one eye at a time, you will see that each eye has a distinct view of the scene.

This is a very simple way to test the limits of the binocular visualization system characteristic of humans. It also makes very clear why classical painters close one eye when measuring the proportions of an object with the paintbrush in order to replicate it on the canvas (see the bibliography link for more details). With both eyes open, it simply would not work!

You must be wondering how we can see only one image with both eyes. To understand this mechanism a little better, let's take 3D cinema as an example.

What happens if you look at a 3D movie screen without the polarized glasses?

Something like the figure above: a distortion well known to those who have overdone alcoholic beverages. However, even though it seems otherwise, there is nothing wrong with this image.

When you put on the glasses, each lens receives the information intended for the corresponding eye. We then have two distinct images, as when we close one eye to see with only one side.

Let's reflect a little. If the blurred image enters through the glasses and becomes part of the scenery, transporting us into the movie to the point of being frightened by explosion debris that seems to fly toward us... it may be that the information we receive from the world is equally "blurred." Except that in the brain something "magical" happens: instead of showing this blur, the two images come together and form only one.

But why two pictures; why two eyes?

The answer lies precisely in the explosion debris coming toward us. If you watch the same scene with just one eye, the objects do not "jump" at you. This is because stereoscopic vision (with both eyes) gives you the power to perceive the depth of the environment. That is, the notion of space we have is due to our binocular vision; without it, although we still grasp the environment thanks to perspective, we largely lose the ability to gauge its volume.
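The depth cue the two eyes provide can be sketched with a simple formula. The numbers below are rough assumptions (about 65 mm between human eyes, an effective focal length around 17 mm), used only to illustrate the principle: the same point lands at slightly different positions in each eye, and that shift (the disparity) shrinks with distance, so inverting the relation recovers depth.

```python
def disparity(depth, baseline=0.065, f=0.017):
    """Horizontal image shift between the two eyes for a point at the
    given depth (metres). Assumed: baseline ~65 mm, f ~17 mm."""
    return f * baseline / depth

def depth_from_disparity(d, baseline=0.065, f=0.017):
    """Invert the relation: estimate depth from a measured disparity."""
    return f * baseline / d

near = disparity(0.5)   # object half a metre away
far = disparity(5.0)    # object five metres away
print(near > far)       # closer objects shift more between the two views
print(depth_from_disparity(near))  # inverting recovers roughly 0.5 m
```

This is the same principle stereo cameras and photogrammetry software rely on, only done continuously and effortlessly by the brain.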

To better understand the depth of the scene, see the following image.

If a group of people were asked which of the two objects is in front, it is almost certain that most would say the object on the left.

However, not everything is what it seems: the object on the left is actually farther away. This example illustrates how we can be deceived by monocular vision, even with perspective.

Would it not be easier for modeling and animation programs to support stereoscopic visualization?

In fact it could be, but the most popular programs still do not offer this possibility. With the popularization of virtual reality glasses and the convergence of graphical interfaces, this niche may well gain full support for stereoscopic visualization in the production phase. For now, however, this possibility is more a future projection than a present reality, and today's interfaces still rely on many elements that go back decades.

It is for these and other reasons that we need the help of an orthogonal view when working on 3D software.

If on one hand we do not yet have affordable 3D visualization solutions with real depth, on the other we have robust tools tested and approved over years and years of development. In 1963, for example, the Sketchpad graphic editor was developed at MIT. Since then, the way of approaching 3D objects on a digital screen has not changed all that much.

Most important of all, the technique works very well, and with a little training you calmly adapt to the methodology, to the point of forgetting you ever had difficulty with it.

Almost all modeling programs, much like Sketchpad, offer the possibility of dividing the workspace into four views: Perspective, Front, Right, and Top.

Even though three of these views give us no sense of depth, each being a sort of "facade" of the scene, what we have in the end is a very clear idea of the structure of the scene and the positioning of its objects.

If, on the one hand, dividing the screen into four parts reduces the visual area of each view, on the other hand the specialist can choose to expand any of those views to the full area of the monitor.

Over time, the user becomes skilled at changing the point of view with keyboard shortcuts, in order to gather the necessary information and avoid mistakes in the composition of the scene.

A sample of the versatility of 3D orientation with orthogonal views is the "hat on the little monkey" exercise given to beginning students of three-dimensional modeling. The exercise asks the students to put a hat (a cone) on the Monkey primitive. Using only the perspective view, the difficulties are many, because it is very hard for beginners to locate themselves in a 3D scene. They are then taught to use the orthogonal views (front, right, top, etc.). The tendency is for students to position the "hat" using only one view as a reference, in this case the front (Front). But when they switch to the perspective view, the hat appears displaced. Seen from another point of view, such as the right (Right), they realize the object is far from where it should be. Over time the students "get the hang of it" and change the point of view while positioning objects.

If we look at the axis indicator to the left of the figures, we see that in the Front case we have the X and Z information, but Y is missing (precisely the depth along which the hat got lost), and in the Right case we have Y and Z, but X is missing. The secret is always to orbit the scene or alternate viewpoints, so as to have a clear notion of the structure of the scene, thus grounding future interventions.


For now that's it; we will soon return with more content addressing the basic principles of 3D graphics applied to the health sciences. If you want to receive more news, point out a correction or suggestion, or get to know the work of the professionals involved in composing this material, please send us a message or like the authors' pages on Facebook:

We thank you for your attention and leave you a big hug.

See you next time!

Wednesday, 21 December 2016

Low cost human face prosthesis with the aid of 3D printing

Dear friends,

It is with great honor and joy that I announce my participation, for the first time, in the preparation of a human facial prosthesis. I began my studies in early 2016 with Dr. Rodrigo Salazar, who materialized the prosthesis and was kind enough to invite me, as a 3D designer, to join the group led by Dr. Luciano Dib. The team is made up of specialists from Paulista University (UNIP) in São Paulo, the University of Illinois at Chicago and the Renato Archer Information Technology Center (CTI). The facial scan serves as the basis for a digital preparation of the prosthesis, made in the Blender 3D software with the help of the 3DCS addon developed by our team (myself, Dr. Everton da Rosa and Dalai Felinto). Beyond what we did with innovative 3D modeling techniques to optimize the quality of the prosthesis prototypes, the merit belongs entirely to doctors Salazar, Dib and their team.

Authors of rehabilitation:
Rodrigo Salazar
Cicero Moraes
Rose Mary Seelaus
Jorge Vicente Lopes da Silva
Crystianne Seignemartin
Joaquim Piras de Oliveira
Luciano Dib

Publisher and infographics:
Cicero Moraes

Rodrigo Salazar

How the technique works

Based on: Salazar-Gamarra et al. Monoscopic photogrammetry to obtain 3D models by a mobile device: a method for making facial prostheses. Journal of Otolaryngology Head & Neck Surgery 2016;45:33
Infographics: Cicero Moraes

The first part of the process consists of photographing the patient from 5 different angles, at 3 heights per angle, totaling 15 photos.

These photos can be taken with a mobile phone; they are then sent to an online photogrammetry service (3D scanning from photos) called Autodesk® Recap360.

In about 20 minutes the server completes the calculations and returns a three-dimensional mesh of the patient's scanned face (first column on the left).

This face is mirrored in order to supply the missing structure, using the complete side of the face as a parameter. Through a Boolean operation the excess of the mirrored part is removed, and the process results in a digital prosthesis that fits the missing region (second column on the left).
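The mirroring step, at its core, is a simple reflection of the mesh vertices. The sketch below is a simplified illustration with invented vertex data, not the team's actual Blender workflow: the intact half of the face is reflected across the sagittal (X = 0) plane to supply geometry for the missing side, before the Boolean trim removes the overlap.

```python
def mirror_sagittal(vertices):
    """Reflect a list of (x, y, z) vertices across the X = 0 plane,
    producing the mirror image of one side of the face."""
    return [(-x, y, z) for (x, y, z) in vertices]

# Three hypothetical vertices from the patient's intact right side...
intact = [(1.2, 0.5, 3.0), (0.8, -0.2, 2.5), (1.5, 0.0, 2.8)]
mirrored = mirror_sagittal(intact)
print(mirrored[0])  # the same point, now on the missing left side
```

In Blender this reflection is done with a mirror operation on the whole mesh; the Boolean difference against the original scan then keeps only the part that fills the defect.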

The digital prosthesis is sent to a 3D printer that materializes the piece. The structure is then positioned on the patient to check the fit (third column on the left).

Once the structure fits, a wax replica of the 3D print is made. The purpose of this replica is to refine the marginal docking regions and prepare the region that will receive the glass eye (fourth column on the left).

Finally, a mold is made from the wax replica. This mold receives several layers of silicone, each pigmented to match the patient's skin color. At the end of the process the prosthesis is obtained and can be fitted directly on the patient's face (first column on the right).

A little history

Prof. Dr. Luciano Dib, MSc. Rodrigo Salazar and MAMS. Rose Mary Seelaus are members of the Latin American Society of Buccomaxillofacial Rehabilitation. Among the society's activities is a biannual event for which the members select the invited speakers. At the April 2014 event, one of these speakers was MAMS. Rose Mary Seelaus (anaplastologist), a specialist in facial prostheses for humans for almost 20 years.

At that event, Dr. Dib urged Dr. Salazar to pursue a master's degree, with Dib as his advisor. Both were interested in Rose Mary Seelaus's sophisticated techniques and intended to involve her in the studies, but they ran into a barrier: at that time the prostheses were produced at high operating cost, making them difficult to apply in Latin American public hospitals, such as Brazil's.

The specialists then approached Dr. Seelaus, asking whether she could help them adapt the technique to the Brazilian reality, reducing costs and thereby popularizing it so it could be used by the greatest possible number of health professionals, thus benefiting people who would not have access through the classical methodology because of its high cost.

Rodrigo Salazar, Rose Mary Seelaus, Jorge Vicente Lopes da Silva and Luciano Dib at the DT3D of CTI Renato Archer, Campinas-SP

Dr. Salazar began his master's studies in 2015. In March of that year, after preliminary studies on photogrammetry (3D digitization from photos) via the online 123D Catch solution, the researchers approached CTI Renato Archer (ProMED) for help in carrying out the project to create a low-cost facial prosthesis.

During that time, CTI/ProMED not only supported the project with the necessary 3D printing (tests and final versions), but also helped train the team members, through specific guidance needed for the evolution of the technology, always with the support of the head of the DT3D sector, Dr. Jorge Vicente Lopes da Silva.

In December 2015 the article on the initial methodology was written and submitted for publication (which occurred in May 2016).

The researchers were successful: the technique they developed matched the classical technology in results, while the cost dropped considerably.

Cicero Moraes and Rodrigo Salazar, Lima, Peru

Also at the end of that year, Dr. Salazar started talks with me about the project and the possibilities of taking the technique to a higher level using my know-how in computer graphics applied to the health sciences.

Because of our two full agendas, it took some time to communicate, but we resumed the dialogue in early 2016 and in February I began my studies in this field.

Within a few months, thanks to the versatility of free software and the support of Dr. Salazar, Dr. Dib and CTI/ProMED, we were able to further develop the technique of facial scanning and prosthesis making.

Tests of human facial scanning in high resolution from photogrammetry. Moraes and Salazar-Gamarra (2016)

We did a series of tests, comparisons and discussions before proceeding with the production of a real prosthesis. In the first half of December a patient received the piece, and the procedure was successful, with an impressive result.

Now, after the help I humbly offered and the expertise of the specialists at each phase, the quality of the prosthesis has, according to the team itself, surpassed the high-cost methodology!

I am extremely honored to be part of this project and to be able to help people with an accessible and robust technology, born of teamwork and much, much study, which obviously still has a lot of room to develop.

Happy is the society that will receive the results of these successes, whether through procedures that raise self-esteem and contribute to a full life for those who have been victims of cancer, or through those who want to access and help improve the technology with us.
This work is licensed under a Creative Commons Attribution 4.0 International License.