Last blogpost!

Long time no post! But in the meantime, a lot has been done. We’ll give you guys a quick summary of what we did in the last few months, what went wrong, how we solved it, and what eventually became the outcome.

In the beginning, our plan was to make more prototypes of the small tablet to figure out which material was best to use. We would also adapt the videomapping so it would look better on both the small and the big tablet. Meryam made a new start on the videomapping – two separate videos instead of one combining the highlighting and the translation – Omar and Samir were struggling to make a mould for the prototypes, and Jana went to Amsterdam to buy all the materials for the prototyping. But then a big event happened: the museum wanted something else, and our concept changed completely. We had to start over again.

The new concept was three small (1:1) 3D-printed tablets in front of the audience to touch, and three big ones (1:6.25) on the wall, onto which projections would be displayed. There should also be some kind of headphones playing the audio in Akkadian.

We received two new scans from Yale. From there, we needed to test whether CNC milling was going to be accurate enough, and what we could do if it was not.

A sample of one of the big tablets was needed to test with, so we started off by cutting a 100x100mm piece out of the STL file of one tablet. This was done using Meshmixer’s slicing tool. The sample was carefully picked to represent the entire model. The difficulty here was the high number of constraints. The files we received from Yale were enormous and put significant strain on our machines. We wanted to leave a uniform outline of the tablet facade with no sharp edges or overhanging cliffs, but the tablets were not all in great condition, with fragmentation marks cutting deep into the tablet and odd shapes in places. All this made for a painstaking process in which every aspect of the tablet was closely inspected, manipulating orientation, depth and cutting plane for the best result.

The samples were to be test-milled from the regular green foam for sale at the PMB, but we already knew this would not give a result fit for a museum, so higher-quality modelling foams were used. The samples were made in two types of Renshape and an Obomodulan foam. A control sample was printed on an Ultimaker 2+, using very fine settings.










The modelling foam samples looked great. The pieces were intended to be milled with an extra-long 6mm ballnose mill. This was not available, however, so a flat-tip 6mm mill was used, which quickly showed that the 6mm mill alone resulted in too rough a milling job. Instead, a rough milling operation was done with the 6mm mill, followed by a finishing pass with a 2mm ballnose mill. Highly time-consuming, but this yielded very high-quality details and an amazing finish.

We did find that the dense sample has a bit more character to it than the lighter one. Therefore, the dense Obomodulan material became the material of choice. Two large plates were ordered in gray, as that would require little or no post-processing to be suitable for the videomapping.

Next, we had to prepare the STL files of the scans for milling – they had to be one-faced, as they were going to be attached to the wall. The struggle with preparing the files was that the software used for CNC milling, DeskProto, seemed to have a 2GB limit on the file size it can open. This was a problem for one of the tablets, which was 2.5GB(!).






Because the milling had to be very precise, downsampling the file was not an option. So instead, we considered increasing the tolerance level to match the CNC milling machine’s capability. We also looked into making the model hollow, which could significantly reduce the file size. For that last attempt, we used Meshlab to create an inner shell for a hollow model using voxel approximations. The process took many hours but, to our confusion, failed to yield a result. After all this we found out we had accidentally saved the file as an ASCII STL rather than a binary STL, which is a lot smaller. The problem luckily solved itself! After the conversion, the files were up to 1.7GB in size. That worked for our project, but might be a problem when these kinds of resolutions are used for larger objects.
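For reference, the size difference between the two STL flavours is easy to estimate: a binary STL stores a fixed 50 bytes per triangle after an 84-byte header, while an ASCII STL spends a couple hundred bytes of text per facet. A quick sketch – the ASCII bytes-per-facet figure is a rough assumption, as it varies per exporter:

```python
# Rough size comparison of binary vs ASCII STL for a given triangle count.
# Binary layout: 80-byte header + 4-byte triangle count + 50 bytes per facet.
# ASCII: ~7 lines of text per facet; 260 bytes/facet is an illustrative guess.

def binary_stl_size(triangles: int) -> int:
    """Exact size in bytes of a binary STL file."""
    return 80 + 4 + 50 * triangles

def ascii_stl_size_estimate(triangles: int, bytes_per_facet: int = 260) -> int:
    """Rough size in bytes of an ASCII STL file (exporter-dependent)."""
    return len("solid model\nendsolid model\n") + bytes_per_facet * triangles

if __name__ == "__main__":
    n = 10_000_000  # a scan-sized mesh
    print(f"binary: {binary_stl_size(n) / 1e9:.2f} GB")        # 0.50 GB
    print(f"ascii (est.): {ascii_stl_size_estimate(n) / 1e9:.2f} GB")  # 2.60 GB
```

With the estimate above, a mesh that is ~2.5GB as ASCII drops to well under 1GB as binary, which matches the order of magnitude we saw.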

Preparing the material was a tough job, but quite doable with two people working carefully with the bandsaw. It is recommended to have quite a bit of experience with the bandsaw before attempting this cut, as it is very difficult to steer the very large and heavy plate through the saw. Also, the material is VERY expensive, so you don’t want to mess it up.
The milling process was the same as with the samples: rough milling with the 6mm mill, with the fine work done by the 2mm ballnose. The milling took about a week, but the results were stunning. In the close-up images below you can see the difference between the 6mm mill and the 2mm ballnose quite well.





























Of course, we also had to think of a way to attach the big, heavy tablets to the wall. Maaike came up with an option (shown as ‘option 1’ in the illustration below), but it seemed as if the center of gravity would make the tablet tilt this way. Incidentally, Meryam visited the Boijmans museum, where she saw photographs attached to the wall with a metal sliding system. With that in mind we came up with another option (‘option 2’), where the chance of tilting is minimized.



The science fair

On the first of November, the science fair was finally there. Quite a bit too soon, if you’d ask one of us. The model shown at the science fair was still a work in progress: the mapping of the highlights was not quite finished yet, and the quality of the display was still not quite up to spec. But fortunately, we did manage to get there with a functional model. Some on-the-fly tweaking was required after an audio malfunction, but it worked in the end.

Here are some pictures to give an impression of the event and our stand.


The exhibit in action


The AR concept being shown


The team

The build

With time short and the deadline that is the science fair breathing down our necks, we set out to tackle the build. Small note: Jana will write a separate post on the creation of the replica, as that is too interesting not to give it its own post.

We used the different 3D prints and the replica, as well as a reference image on the computer, to get a good feel for the size our rig would need to have. The speakers can be seen as well, confirming that our audio track works. The rig itself will be a simple box with room for the projector, with the tablets on top to ensure a nice “stage” for the core of our concept. The box will be large enough to hold the speakers, all the cabling, the Raspberry Pi, and even a keyboard and mouse should something need to be tweaked tomorrow.


The button was a fairly standard large arcade button. Not highly original, but very easy to mount and use. This is one of the ways our work is distinctly different from what a museum might show: an experienced exhibit designer or curator may have a good idea of what a switch in a museum needs to look or feel like, but might not have immediate insight into how to build a replica from a 3D scan and what to do with it from a technical perspective. The same philosophy goes into the rather simple box, making the design portable (it just needs a single power socket if we add a small extension cord!), and the stands for the tablets, showing both an opaque and a translucent material and their effects on the projection.


Comment – Electronics, software and video all come together in this test rig.

This sums up all the interesting things we can cover in this blogpost, as most of the remaining work was software and video editing – and unless you are a video-editing or “Python programming for beginners” enthusiast, these blogs might get very boring if we cover everything in detail. Come out to IO tomorrow (Nov 1st) and check out our work!

– Samir den Haan

Finalization of the concept.

After the midterm presentation, it was clear what the core of our final concept would be. The “Translation projection” concept was the one we deemed most achievable in the very short amount of time. We decided that, to get the most out of this project, it would be nice to take elements from the final two concepts we did not use and see which we could combine with the whole. Narration with audio, AR smartphone functionality and a secondary projector were all options we considered.

With time pressing, this seemed like a good time to get our experts in for a discussion. Designer Maaike Roozenburg was finally back in the Netherlands. In some good conversations with her, Jouke and Alexandra (whom you may have read about in our earlier blogs) we decided to focus on making the set-up suited for the science fair.

We received the most positive response about the narrations, which seemed to be quickly accepted as a feasible core feature.
Jouke, Maaike and Omar especially were quite excited about the possible use of Augmented Reality in museum exhibits. However, it has its drawbacks. Besides the obvious lack of time – just a single week to build the setup for the science fair – it would create a barrier between the user and the exhibit: the physical phone or tablet blocks the view of the exhibit, and if not done absolutely right it reduces the experience to something cheap, digital and fleeting. Finally, shutting out the group of people who don’t have a smartphone or cannot use one well (with an average visitor age estimated between 40 and 60, that group may be larger than four young engineering students might imagine) did not seem appealing either. Still, the enthusiasm we encountered made us not eliminate it completely, but have it return as a secondary objective. More on that later on.
The secondary projector was an interesting part for me personally, as I thought it was something that needed a decision. It was pointed out to me that the prototype shown at the science fair does not need to be identical to a possible version shown at the museum, and it is not unlikely that a version shown at the science fair will be adapted to become the real thing in the end. This led to more of a focus on building something for the science fair, and on distinguishing the things we need to make from the things we want to make.

Rounding out this blogpost: we made a plan to build everything in a week. I will not bother you with the details, but broadly, Jana would focus on making the plain tablet and the replica tablet, for which she would visit Maaike’s studio to get some help; Meryam would be tasked with the video to be projected; Omar would look into the peripherals of the presentation, such as the button and the alignment of the objects. My job would be to get the technical back-end working consistently.

Besides this, we all had a “secondary objective” to work on should there be any time left to go beyond the basics. Everyone took one of the following objectives: an auxiliary presentation at the science fair, facilitating seamless transitions in the software, including more recipes to be shown and translated, and implementing an augmented reality demo. Can our regular readers guess who worked on what?


– Samir den Haan

Setting up the back-end system for the projector.

A little project with a Raspberry Pi or Arduino was something I had wanted to do for a while now, especially after seeing some teammates at Project MARCH easily integrate a Raspberry Pi into the rest of their system to perform a utility role. I have been quite amazed by the power and versatility of these small devices and their use in creative projects. That I would use one this soon was not something I anticipated, though!

When we came up with the “Decode challenge” and the “Translation projection” concepts, it was clear a projector had a decent chance of being used in the final product, and we needed some way to run it. Rather than choosing a large and potentially expensive PC system, I decided it would be interesting to run things off a microcontroller or Raspberry-like device.

My eye fell on the Raspberry Pi due to its array of GPIO (General Purpose Input/Output) pins, its low cost, its HDMI port and its reputation for being very flexible and able to run almost anything. Some quick research showed that the Raspberry, being a fully functional Linux computer, is highly configurable and has good graphics and video support. It natively supports a range of programming languages, including Python, and ships with some basic development environments. A friend of mine in computer science helped me get my bearings and told me how Python has importable libraries for video playback and manipulation.

I borrowed the things I needed – a mini-projector, the Raspberry and all necessary accessories (keyboard etc.) – to build a set-up at home. Luckily, my neighbour had some jumper wires as a finishing touch, to really get into the matter with physical objects being controlled from the Pi.


Comment – My battle station for the weekend.

This being my first project with a Raspberry, it took a while to get set up. Luckily, there are plenty of “getting started” guides out there. Those, combined with an online course on the basics of programming in Python and some internet research, got me a decent development environment.

While browsing the web for information, I got really lucky: I found someone who had built a project that, in terms of software and Pi use, is very similar to what we are trying to do. Reverse engineering his code helped speed up the process a lot. Once I started to understand how to receive signals and act on them, the rest of the prototype was programmed.

In the end, we can run the script, plug a wire into the Raspberry Pi and see a video play on the projector. Plug in a different wire? A different video plays. It is currently quite buggy, but it should give a decent enough image of how such a thing works.
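The wire-to-video logic boils down to a lookup table. A minimal sketch, with made-up pin numbers and file names – on the Pi itself the active pin would be read via the RPi.GPIO library and the video launched with a player such as omxplayer, which we stub out here:

```python
# Sketch of the prototype's core logic: each GPIO pin maps to one video.
# Pin numbers and file names below are placeholders, not our real setup.
import subprocess

PIN_TO_VIDEO = {
    17: "translation_highlight.mp4",  # hypothetical video files
    27: "context_animation.mp4",
}

def select_video(active_pin):
    """Return the video belonging to the plugged-in wire, or None."""
    return PIN_TO_VIDEO.get(active_pin)

def play_video(path, dry_run=True):
    """Build the player command; dry_run avoids spawning a player off the Pi."""
    cmd = ["omxplayer", "--no-osd", path]
    if dry_run:
        return " ".join(cmd)  # just show what would be executed
    return subprocess.Popen(cmd)
```

On the Pi, a loop polling the pins (or GPIO event callbacks) would call `select_video` and hand the result to `play_video`.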

This functional prototype also gave some good insights into the things that still need work before it can be an exhibit. For instance, the script should run when booting the Pi, the transition between videos should be more or less (visually) seamless, and the program should of course be bug-free. But all of that comes in the end stage. Next up is deciding whether or not to build this concept, and deciding on its functionality – such as selecting a random recipe to tell about, or really anything we can come up with!

Words by Samir den Haan

Personal Translation

This concept was inspired by Google’s live translate feature. Its ambition is to provide you a window onto the world from your own perspective.

Interested in applying this to our concept, I investigated whether we could provide such an experience to museum visitors through existing technology. The challenge for us was to implement this idea on a physical 3D tablet replica. Some solutions I came across were:

– Project Tango (Google): a phablet with special sensors able to map the environment as a 3D model. The problem is we don’t own one, and it is only meant for spatial mapping.

– Accelerometer sync with a 3D model: this would be the ideal solution, as it would enable an ‘exact’ augmented replica. The problem is it isn’t easy, and it may be a risky path to try.

– 2D image mapping: this could work great, as there turn out to be a lot of pre-existing apps and SDKs that implement such a method. The problem is it is quite difficult to use on a 3D surface, as shadows and light greatly influence performance.

Decode Challenge

As said in one of the previous posts, the hardest part of this concept is inventing a challenge that is challenging enough for our audience.

The idea of this concept is that we want the audience to actively interact with the tablet and its content, so they will learn something about the language as well as the recipe. Once they have solved the task, the translation of the recipe will be revealed. We made a quick render to give you a small impression of how it is going to look in the exhibit.


One way to do this is by letting the audience figure out what the transliteration of the different words is; once they have figured it out, they can enter it and the translation will appear.

transliteration    >  translation shown

One of the first ideas we had was to add audio to all of this. We can do this by letting the audience read part of the recipe aloud; once they do this correctly, they can hear the whole recipe with spatial sounds, so they get the real Mesopotamian experience.
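At its core, the transliteration check is a simple lookup. A toy sketch – the words and translations below are placeholders for illustration, not real Akkadian:

```python
# Toy sketch of the decode check: the visitor types a transliteration and,
# if it matches a word on the tablet, its translation is revealed.
# The dictionary entries are made up, not actual Akkadian vocabulary.
RECIPE_WORDS = {
    "me-e": "broth",   # hypothetical transliteration -> translation pairs
    "uzu": "meat",
}

def decode(attempt: str):
    """Return the translation if the transliteration is correct, else None."""
    return RECIPE_WORDS.get(attempt.strip().lower())
```

In the exhibit, a correct `decode` result would trigger the projection of the translated word; a wrong guess would simply leave the tablet unchanged.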



Following up on concepts

For those following our weblog, you must have been curious to find out what our three most promising concepts are. For those recently joining, now is a great time to do so!

Last week we showed the process of concept exploration using the morphological chart, and how the “Illusion + context projection”, “Accelerometer” and “Touch light-up tablet” concepts did not quite make the cut. This post is dedicated to the three concepts we deemed worthy of further investigation.

The first concept is called “Personal translation”. It provides the user with an alternative window for looking at the tablet, and through it, the translation. This could be done with a smartphone. Once the user has finished reading the tablet, they would be able to take the recipes and some background information home with them via their phones. Google Translate has an app feature that uses letters as a sort of AR marker to provide a real-time overlay of the translation on your phone’s screen. It does (unsurprisingly) not support Akkadian, which reveals the weakness of this concept.
Should we use this concept, a lot of software development would be involved. This may be a problem, as even the best programmer in our group is just a beginner, and some of us cannot code at all. This means the viability of this concept will stand or fall with the availability of off-the-shelf software. Being able to tie existing pieces together and only create the high-level interface is a hard requirement for this concept to be viable.

The second concept is named the “Decode challenge”. This concept seeks active interaction with a small group of users, who perform an action that reveals parts of the translation via an overhead projector. The main challenge here is the challenge part of this concept. We do want the audience to actively participate in the exhibit, but there seems to be a thin line between making the audience skip the exhibit, having them participate in a cheesy game with a high “Eftelinggehalte” (fairy-tale-ness), and giving them a cool experience with interesting new insights that they retain because of their active involvement.

The third concept is the “Translation projection”, which emerged as a combination of the “Illusion + context projection” and “Touch light-up tablet” concepts. The idea is to have the user engage with the exhibit (holding down a switch or button, for instance) and have a projector light up (part of) the text in some sort of scanning visual fashion, with the translation revealed in real time, projected on a blank clay tablet sitting next to the replica. This concept has the challenge of finding a good mix of hardware and software. Luckily, Jouke already gave us a great head start by offering to lend us a mini-projector. From there, we still need to find out how a hardware switch can trigger an animation on the projector, and how we can integrate all the systems into one. The viability of this concept will be made or broken by the centrepiece that ties everything together, as this is the concept using the most subsystems.

As you may have noticed, all of these concepts have their pros and cons. Those have been our focus these past few days. Remember the to-do list from our previous post? Those are the things we have been researching and will present tomorrow! Expect some more small blogposts elaborating on the small projects and prototypes that emerged from it!

Words by Samir den Haan

Exploring the concepts

image 1

One of the phenomena of designing is that a very open assignment gives you a lot of freedom to work with. However, this freedom comes at a price, because the more design freedom you have, the harder it gets to find a firm grasp on your project. After a somewhat slow start and one of our team members being in Switzerland for a week, we finally got the ball rolling.

On our teacher Jouke Verlinden’s suggestion, we used a design tool called a “morphological chart”. For those unfamiliar with it, this is a method of splitting a big design task up into its functions. Once we listed all the things the product needs to be able to do, we could find multiple ways to realise each of these individual functions, without looking at the whole product. When all these functions and possible solutions are listed in a table, we can combine the solutions, and out pops an idea for further investigation.
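Mechanically, combining one solution per function is just a Cartesian product over the chart’s columns. A small sketch with simplified example entries (not our actual chart):

```python
# A morphological chart as data: each function with its candidate solutions.
# Taking one solution per function enumerates all candidate concepts.
# The entries below are simplified examples, not our full table.
from itertools import product

chart = {
    "show translation": ["projection", "lit replica", "phone overlay"],
    "trigger": ["button", "touch", "gesture"],
    "audio": ["narration", "none"],
}

# Every combination of one solution per function is a candidate concept.
concepts = [dict(zip(chart, combo)) for combo in product(*chart.values())]
print(len(concepts))  # 3 * 3 * 2 = 18 candidate combinations
```

Of course, most of these 18 combinations get discarded immediately; the chart’s value is in surfacing the few combinations you would not have thought of directly.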

We felt that for this assignment, the system I learned at Mechanical Engineering – finding criteria and numerically rating each solution – would be insufficient. Seeing how both user and designer have a very subjective experience of the exhibit, we instead looked at how certain solutions would synergise with one another to devise ideas, and then checked those for viability regarding the technology and time available to us.

image 2

Comment – Some ideas did not make the chart. For instance: the use of accelerometers, smell and the tools used in the Mesopotamian kitchen weren’t deemed good enough solutions.

From this, five preliminary concepts emerged, of which three made the viability check. Before we reveal our ideas in a next blogpost, it might be interesting to discuss the ideas that did not make the check.

One of the ideas was to have the user unknowingly perform an action that makes certain parts of the exhibit spring to life. This would be combined with a projector giving context to the tablet, so several large images would seem to emerge from the small tablet, sending the message that such a small tablet can contain a lot more information than meets the eye. We dismissed this concept due to its distracting nature: the tablet would just be a peripheral to the images, rather than the other way around. Plus, triggering unknown actions with a larger group of people might cause somewhat of a chaos.

Another idea was the use of an accelerometer, so the audience could pick the tablet up and have it spring to life when they make certain gestures. The core idea is that an audience that can pick an exhibit up will try to move it around, to inspect it up close and from many angles. We felt that this method would put the experience on rails, taking away its power by making the user focus on the gesture rather than the tablet and its text. Added to this was the fact that we could not come up with a meaningful way to add something extra to the user’s experience.

One that I personally was very excited about was creating a replica that would light up and narrate the translation as you move your finger over it. Accurately tracking touch is quite tough, however. Even with an optical/software approach such as Kinect or Leap Motion, it would be very difficult to track touch with the required resolution, even at 2:1 scale. On top of this comes the lighting rig, which would require a lot of work on the details and mounting. Should we use a lot of LEDs, a simple microcontroller won’t have enough processing power, so a more advanced device like an FPGA would be needed. Too few LEDs would reduce the experience while requiring the same resolution in touch tracking. All of this makes other obstacles, like the system’s power supply, look like a trivial problem. Sadly, there is no way for us to increase our programming skills so drastically and to find the hundreds of euros required for such a rig, even though we all agreed it would be really cool if at all possible.

From the morphological chart and our three concept ideas that did seem viable, we will now set out to test several things, creating prototypes for the midterm next Tuesday. We might write about this process over the weekend and reveal the ideas early next week. Stay tuned!

image 3

Words by Samir den Haan

Museum Visit

Yesterday we visited the Wereldmuseum in Rotterdam, together with Jouke. We got a tour through the current exhibition from Alexandra. We were also accompanied by the designers of the “Keukengeheimen” (Kitchen Secrets) exhibition, and writer Abdelkader Benali.

Abdel told us that while organising the exhibit he wondered what the oldest recipe would be. They soon figured out that these recipes date back to Mesopotamian times, and that Yale University owns the tablets. They then tried to bring the tablets to the Netherlands, but since they are so fragile, it was impossible. Ever since, they have been in contact with Yale about bringing the tablets to the museum in another way, which resulted in our project.


We also talked to the designers; they explained to us the variety of things that will be shown in the exhibit, where in the museum they will be, and how the different things are connected to each other.

We can’t tell much about this now, since the ideas are still concepts and we weren’t allowed to share them 😉

They also listened to our first ideas, and to how we visualised this tablet being part of the exhibit. Our first idea was to make a 1:1 replica with light and sound effects, but we weren’t sure it would fit within a museum, since only one person at a time can experience it.

After the designers left, we finished our tour with Alexandra and Abdel, and went home with a much clearer image than when we entered the museum. Now it’s time to develop a first concept.

See you soon!