The build

With time short and the science-fair deadline breathing down our necks, we set out to tackle the build. A small note: Jana will publish a separate post on the creation of the replica, as that is too interesting not to give it its own post.

We used the different 3D prints and the replica, as well as a reference image on the computer, to get a good feel for the size our rig would need to have. The speakers can be seen as well, confirming that our audio track works. The rig itself will be a simple box with room for the projector and the tablets on top, to create a nice “stage” for the core of our concept. The box will be large enough to hold the speakers, as well as all the cabling, the Raspberry Pi and even a keyboard and mouse, should something need to be tweaked tomorrow.

[Image: build-up of the rig]

The button was a fairly standard large arcade button. Not highly original, but very easy to mount and use. This is one example of how our work is distinctly different from what a museum might show: an experienced exhibit designer or curator may have a good idea of what a switch in a museum needs to look or feel like, but might not have immediate insight into how to build a replica from a 3D scan and what to do with it from a technical perspective. The same philosophy goes for the rather simple box, which makes the design portable (it just needs a single power socket if we add a small extension cord!), and for the stands for the tablets, which show both an opaque and a translucent material and their effects on the projection.

[Image: the test rig]

Comment – Electronics, software and video all come together in this test rig.

This sums up all the interesting things we can cover in this blog post, as most of the remaining work was software and video editing; unless you are a video-editing or “Python programming for beginners” enthusiast, these blogs might get very boring if we covered everything in detail. Come out to IO tomorrow (Nov 1st) and check out our work!

– Samir den Haan

Finalization of the concept

After the midterm presentation, it was clear what the core of our final concept would be. The “Translation projection” concept was the one we deemed most achievable in the very short amount of time. We decided that, to get the most out of this project, it would be nice to take elements from the two final concepts we did not choose and see which we could combine with the whole. Narration with audio, AR smartphone functionality and a secondary projector were all options that were considered.

With time pressing, this seemed like a good moment to get our experts in for a discussion. Designer Maaike Roozenburg was finally back in the Netherlands. In some good conversations with her, Jouke and Alexandra (whom you may have read about in our earlier blogs), we decided to focus on making the set-up suited for the science fair.

We received the most positive response about the narrations, which seemed to be quickly accepted as a feasible core feature.
Especially Jouke, Maaike and Omar were quite excited about the possible use of Augmented Reality in museum exhibits. However, this has its drawbacks. Besides the obvious lack of time (just a single week to build the setup for the science fair), it would create a barrier between the user and the exhibit: the physical phone or tablet blocks the view of the exhibit, and if not done absolutely right, it reduces the experience to something cheap, digital and fleeting. Finally, excluding the group of people who don’t have a smartphone or cannot use one well (with an average visitor age estimated between 40 and 60, that group may be larger than four young engineering students might imagine) did not seem appealing either. Still, the enthusiasm we encountered made us keep AR as a secondary objective rather than eliminate it completely. More on that later on.
The secondary projector was an interesting point for me personally, as I thought it was something that needed a decision. It was pointed out to me that the prototype shown at the science fair does not need to be identical to a possible version shown at the museum, and it is not unlikely that a version shown at the science fair will be adapted to become the real thing in the end. This led to more of a focus on building something for the science fair, and on distinguishing between the things we need to make and the things we want to make.

To round out this blog post: we made a plan to build everything in a week. I will not bother you with the details, but roughly, Jana would focus on making the plain tablet and the replica tablet, for which she would visit Maaike’s studio to get some help; Meryam would be tasked with the video to be projected; Omar would look into the peripherals of the presentation, such as the button and the alignment of the objects; and my job would be to keep the technical back-end consistently working.

Besides this, we all had a “secondary objective” to work on should there be any time left to go beyond the basics we needed. Each of us took one of the following objectives: an auxiliary presentation at the science fair, seamless transitions in the software, including more recipes to be shown and translated, and implementing an augmented reality demo. Can our regular readers guess who worked on what?


– Samir den Haan

Setting up the back-end system for the projector

A little project with a Raspberry Pi or Arduino was something I had wanted to do for a while now, especially after seeing some teammates at Project MARCH easily integrate a Raspberry into the rest of their system to perform a utility role. I have been quite amazed by the power and versatility of these small devices and their use in creative projects. That I would use one this soon was not something I anticipated, though!

When we came up with the “Decode challenge” and “Translation projection” concepts, it was clear that a projector had a decent chance of being used in the final product, and that we needed some way to run it. Rather than choosing a large and potentially expensive PC system, I decided it would be interesting to run things off a microcontroller or a Raspberry-like device.

My eye fell on the Raspberry Pi due to its array of GPIO (General Purpose Input/Output) pins, its low cost, its HDMI port and its reputation for being very flexible and able to run almost anything. Some quick research showed that the Raspberry, being a fully functional Linux computer, is highly configurable and has good graphics and video support. It natively supports a range of programming languages and comes with some basic development environments, including ones for Python. A friend of mine in computer science helped me get my bearings and told me that Python has importable libraries for video playback and manipulation.

I borrowed the things I needed: a mini-projector, the Raspberry and all necessary accessories (keyboard etc.) to build a set-up at home. Luckily, my neighbour had some jumper wires, the finishing touch to really get hands-on with physical objects controlled from the Pi.

[Image: the home development set-up]

Comment – My battle station for the weekend.

This being my first project with a Raspberry, it took a while to get set up. Luckily, there are plenty of “getting started” guides out there. This, combined with an online course on the basics of programming in Python and some internet research, got me a decent development environment.

While browsing the web for information, I got really lucky: I found someone who had built a project that, in terms of software and Pi use, is very similar to what we are trying to do. Reverse engineering his code helped speed up the process a lot. Once I finally understood how to read incoming signals and act on them in a program, the rest of the prototype followed.

In the end, we can run the script, plug a wire into the Raspberry Pi and see a video play on the projector. Plug in a different wire? A different video plays. It is currently quite buggy, but it should give a decent enough impression of how such a thing works.
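For those curious what such a script could look like, here is a minimal sketch of the idea, assuming the RPi.GPIO library and the omxplayer video player that come with Raspbian; the pin numbers and file paths are made up and not our exact set-up.

```python
# Minimal sketch: watch two GPIO pins and play a video on the projector when
# a jumper wire pulls one of them to ground. Pin numbers, file paths and the
# choice of omxplayer are assumptions, not the exact prototype code.
import subprocess
import time

import RPi.GPIO as GPIO

VIDEOS = {
    17: "/home/pi/videos/recipe_a.mp4",
    27: "/home/pi/videos/recipe_b.mp4",
}

GPIO.setmode(GPIO.BCM)
for pin in VIDEOS:
    # Pull-up: the pin reads HIGH until a wire connects it to ground.
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)

try:
    while True:
        for pin, video in VIDEOS.items():
            if GPIO.input(pin) == GPIO.LOW:            # wire plugged in
                subprocess.call(["omxplayer", video])  # blocks until the clip ends
        time.sleep(0.05)                               # don't hog the CPU
finally:
    GPIO.cleanup()
```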

This functional prototype also gave some good insights into the things that still need work before it can be an exhibit. For instance, the script should run when the Pi boots, the transition between videos should be more or less (visually) seamless, and the program should of course be bug-free. But all of this is for the final stage. Next up is deciding whether or not to build this concept and settling on its functionality, such as selecting a random recipe to narrate, or really anything else we can come up with!

Words by Samir den Haan

Personal Translation

This concept was inspired by Google’s live translate feature. Its ambition is to provide you with a window onto the world, translated from your own perspective.

Interested in applying this to our concept, I investigated whether we could provide such an experience to museum visitors with existing technology. The challenge for us is to implement this idea on a physical 3D tablet replica. Some solutions I came across were:

– Project Tango (Google): a phablet with special sensors able to map the environment as a 3D model. Problem is we don’t own one, and it is mainly meant for spatial mapping.

– Accelerometer sync with a 3D model: this would be the ideal solution, as it would enable an ‘exact’ augmented replica. Problem is it isn’t easy, and it may be a risky path to try.

– 2D image mapping: this could work great, as there turn out to be a lot of pre-existing apps and SDKs that implement such a method. Problem is it is quite difficult to use on a 3D surface, as shadows and light greatly influence performance. (A rough sketch of this route is shown below.)
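To give an idea of what the 2D image mapping route involves, here is a rough sketch using OpenCV feature matching: it looks for a reference photo of the tablet in the camera image and warps a translation overlay on top of it. The file names, thresholds and the choice of ORB features are illustrative assumptions, not a tested implementation.

```python
# Sketch: detect a reference photo of the tablet in the webcam feed and warp a
# translation overlay onto it. File names and parameters are placeholders.
import cv2
import numpy as np

reference = cv2.imread("tablet_reference.jpg", cv2.IMREAD_GRAYSCALE)
overlay = cv2.imread("translation_overlay.png")  # same size as the reference photo

orb = cv2.ORB_create(1000)
kp_ref, des_ref = orb.detectAndCompute(reference, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp_frame, des_frame = orb.detectAndCompute(gray, None)
    if des_frame is not None:
        matches = sorted(matcher.match(des_ref, des_frame), key=lambda m: m.distance)[:50]
        if len(matches) >= 10:
            src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp_frame[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            if H is not None:
                warped = cv2.warpPerspective(overlay, H, (frame.shape[1], frame.shape[0]))
                frame = cv2.addWeighted(frame, 1.0, warped, 0.7, 0)
    cv2.imshow("personal translation", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

As the list above notes, this works best on flat, well-lit images; on the relief of a real clay surface the lighting and shadows make the matching far less reliable.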

Decode Challenge

As said in one of the previous posts, the hardest part about this concept is inventing a challenge that is challenging enough for our audience.

The idea of this concept is that we want the audience to actively interact with the tablet and its content, so they learn something about the language as well as the recipe. Once they have completed the decoding task, the translation of the recipe will be revealed. We made a quick render to give you a small impression of how it is going to look in the exhibit.

[Image: a quick render of the concept]

One way to do this is by letting the audience figure out what the transliteration of the different words is; once they have figured it out, they can enter it and the translation will appear (a toy sketch of this check follows below).

transliteration entered → translation shown
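As a toy illustration of that check, a few lines of Python suffice. The pairs below are taken from the Slotsky transliteration and translation quoted in our “Digging into the past” post, but the word-by-word pairing is ours and purely illustrative.

```python
# Toy sketch of the decode check: the visitor types the transliteration of a
# highlighted word and the translation is revealed. The word-by-word mapping
# below is only illustrative, not a scholarly one.
ANSWERS = {
    "me-e shirim": "Meat (cooked in) water",
    "shi-rum iz-za-az": "Meat is used",
}

def decode(attempt):
    """Return the translation if the transliteration is correct, else None."""
    return ANSWERS.get(attempt.strip().lower())

if __name__ == "__main__":
    guess = input("Type the transliteration of the highlighted word: ")
    translation = decode(guess)
    if translation:
        print("Correct! It reads:", translation)
    else:
        print("Not quite, have another look at the sign list.")
```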

One of our first ideas was to add audio to all of this. We could do this by letting the audience read part of the recipe aloud; once they do this correctly, they get to hear the whole recipe with spatial sound, so they get the real Mesopotamian experience.


Following up on concepts

For those who have been following our weblog: you must be curious to find out what our three most promising concepts are. For those just joining, now is a great time to catch up!

Last week we showed the process of concept exploration using the morphological chart, and how the “Illusion + context projection”, “Accelerometer” and “Touch light-up tablet” concepts did not quite make the cut. This post is dedicated to the three concepts we deemed worthy of further investigation.

The first concept is called “Personal translation”. It provides the user with an alternative window for looking at the tablet, and through that window, a translation. This could be done with a smartphone. Once the user has finished reading the tablet, they would be able to take the recipes and some background information home with them via their phones. Google Translate has an app that uses letters as a sort of AR marker to provide a real-time overlay of the translation on your phone’s screen. It does (unsurprisingly) not support Akkadian, which reveals the weakness of this concept.
Should we pursue this concept, a lot of software development is involved. This may be a problem, as even the best programmer in our group is just a beginner, and some of us cannot code at all. This means the viability of this concept will stand or fall with the availability of off-the-shelf software. Being able to tie existing pieces together and only create the high-level interface ourselves is a hard requirement for this concept to be viable.

The second concept is named the “Decode challenge”. This concept seeks active interaction with a small group of users, who perform an action that reveals parts of the translation via an overhead projector. The main challenge here is the challenge part of this concept. We do want the audience to actively participate in the exhibit, but there seems to be a thin line between causing the audience to skip the exhibit, having them participate in a cheesy game with a high “Eftelinggehalte” (fairy-tale factor), and giving them a cool experience with interesting new insights that they retain because they were actively involved in the exhibit.

The third concept is the “Translation projection”, and it emerged as a combination of the “Illusion + context projection” and “Touch light-up tablet” concepts. The idea is to have the user engage with the exhibit (by holding down a switch or button, for instance) and have a projector light up (part of) the text in a scanning visual fashion, with the translation revealed in real time, projected on a blank clay tablet sitting next to the replica. The challenge of this concept is finding a good mix of hardware and software. Luckily, Jouke already gave us a great head start by offering to lend us a mini-projector. From there, we still need to find out how we can have a hardware switch trigger an animation on the projector, and how we can integrate all the systems into one. The viability of this concept will be made or broken by the centrepiece of the system that ties everything together, as this is the concept using the most subsystems.

As you may have noticed, all of these concepts have their pros and cons. That has been our focus these past few days. Remember the to-do list from our previous post? Those are the things we have been researching and will present tomorrow! Expect some more small blog posts elaborating on the small projects and prototypes that emerged from it!

Words by Samir den Haan

Exploring the concepts


One of the phenomena of designing is that a very open assignment gives you a lot of freedom to work with. However, this freedom comes at a price, because the more design freedom you have, the harder it gets to get a firm grasp on your project. After a somewhat slow start, and with one of our team members in Switzerland for a week, we finally got the ball rolling.

On our teacher Jouke Verlinden’s suggestion, we used a design tool called a “morphological chart”. For those unfamiliar with it, this is a method of splitting a big design task up into its functions. Once we have listed all the things the product needs to be able to do, we can find multiple ways to realise each of these individual functions without looking at the whole product. When all these functions and possible solutions are listed in a table, we can combine the solutions, and out pops an idea for further investigation.
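Purely as an illustration of how the chart “pops out” combinations, here is a toy version in Python; the functions and solutions are a simplified subset of our own chart, and the real selection was of course done by hand.

```python
# Toy morphological chart: each exhibit function gets a list of possible
# solutions, and itertools.product enumerates every combination. The entries
# are a simplified subset of our chart, for illustration only.
from itertools import product

chart = {
    "reveal the translation": ["projector overlay", "smartphone AR", "light-up replica"],
    "trigger the exhibit":    ["arcade button", "accelerometer gesture", "touch"],
    "add narration":          ["audio track", "on-screen text"],
}

for combination in product(*chart.values()):
    print(" + ".join(combination))   # 3 x 3 x 2 = 18 candidate concepts
```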

We felt that for this assignment, the system I learned at Mechanical Engineering, of finding criteria and numerically rating each solution, would be insufficient. Seeing how both user and designer have a very subjective experience of the exhibit, we instead looked at how certain solutions would synergise with one another to devise ideas, and then checked those for viability against the technology and time available to us.

[Image: the morphological chart]

Comment – Some ideas did not make the chart. For instance: the use of accelerometers, smell and the tools used in the Mesopotamian kitchen weren’t deemed good enough solutions.

From this, five preliminary concepts emerged, of which three made the viability check. Before we reveal our ideas in a next blogpost, it might be interesting to discuss the ideas that did not make the check.

One of the ideas was to have the user unknowingly perform the action that makes certain parts of the exhibit spring to life. This would be combined with a projector to give context to the tablet, so that several large images would seem to emerge from the small tablet, sending the message that such a small tablet can contain a lot more information than meets the eye. We dismissed this concept due to its distracting nature: the tablet would just be a peripheral to the images, rather than the other way around, plus the fact that triggering unknown actions with a larger group of people might cause some chaos.

Another idea was the use of an accelerometer, so the audience can pick the tablet up and have it spring to life when they make certain gestures. The core idea is that an audience that can pick an exhibit up will try to move it around, to inspect it up close and from many angles. We felt that this method would put the experience on rails, taking away its power by making the user focus on the gesture rather than on the tablet and its text. Added to this was the fact that we could not come up with a meaningful way to add something extra to the user’s experience.

One that I personally was very excited about was creating a replica that would light up and narrate the translation as you move your finger over it. Accurately tracking touch is quite tough, however. Even with an optical/software approach such as Kinect or Leap Motion, it would be very difficult to track touch at the required resolution, even at a 2:1 scale. On top of this would come the lighting rig, which would require a lot of work on the details and the mounting. Should we use a lot of LEDs, a simple microcontroller won’t have enough processing power, so a more advanced device like an FPGA would be needed. Too few LEDs would reduce the experience while still requiring the same resolution in touch tracking. All of this makes other obstacles, like the power supply of the system, look like trivial problems. Sadly, there is no way for us to improve our programming skills so drastically, or to find the hundreds of euros required for such a rig, even though we all agreed that it would be really cool if it were at all possible.

From the morphological chart and our three concept ideas that did seem viable, we will now set out to test several things, creating prototypes for the midterm next Tuesday. We might write about this process over the weekend and reveal the ideas early next week. Stay tuned!


Words by Samir den Haan

Museum Visit

Yesterday we visited the Wereldmuseum in Rotterdam, together with Jouke. We got a tour through the current exhibition from Alexandra. We were also accompanied by the designers of the “Keukengeheimen” (Kitchen Secrets) exhibition and writer Abdelkader Benali.

Abdel told us that while organising the exhibit he wondered what the oldest recipe would be. They soon found out that these recipes date back to Mesopotamian times and that Yale University owns the tablets. They then tried to bring the tablets to the Netherlands, but since they are so fragile, it was impossible. Ever since, they have been in contact with Yale about bringing the tablet to the museum in another way, which resulted in our project.

[Image: photo from the museum visit]

We also talked to the designers; they explained to us the variety of things that will be shown in the exhibit, where in the museum they will be, and how the different pieces are connected to each other.

We can’t tell much about this now, since the ideas are still concepts and we weren’t allowed to share them 😉

They also listened to our first ideas and how we envisioned this tablet being part of the exhibit. Our first idea was to make a 1:1 replica with light and sound effects, but we weren’t sure whether it would fit within a museum, since only one person at a time can experience it.

After the designers left, we finished our tour with Alexandra and Abdel, and went home with a much clearer picture than when we entered the museum. Now it’s time to develop a first concept.

See you soon!

Digging into the past

Hi there again! In this post we will tell you more about Mesopotamian times, the Akkadian language and our first findings about the clay tablets.

Akkadian

Akkadian was a Semitic language spoken in Mesopotamia (modern Iraq and Syria) between about 2,800 BC and 500 AD. It was named after the city of Akkad and first appeared in Sumerian texts dating from 2,800 BC in the form of Akkadian names.

The Akkadian cuneiform script was adapted from Sumerian cuneiform in about 2,350 BC. At the same time, many Sumerian words were borrowed into Akkadian, and Sumerian logograms were given both Sumerian and Akkadian readings. In many ways the process of adapting the Sumerian script to the Akkadian language resembles the way the Chinese script was adapted to write Japanese. Akkadian, like Japanese, was polysyllabic and used a range of inflections while Sumerian, like Chinese, had few inflections.

[Image: cuneiform script]

A large corpus of Akkadian texts and text fragments, numbering in the hundreds of thousands, has been excavated. They include mythology, legal and scientific texts, correspondence and... the oldest recipes.

The recipes

In Bottéro’s book The Oldest Cuisine in the World: Cooking in Mesopotamia we found some more information about the tablets and their content. There are three tablets with recipes. Tablet C (YBC4648) is the smallest of the three (89 x 137 x 37 mm). It is also the most damaged one. It contains only three recipes, separated by two horizontal lines after the first and a single line after the second. Its contents connect closely to those of the other two tablets.

Here you can find two of the three recipes translated by Bottéro:

[Images: the two recipes as translated by Bottéro]

Here you can find a study by Alice L. Slotsky from Yale University; she made a transliteration, a translation and a working recipe:

Akkadian:
me-e shirim shi-rum iz-za-az me-e tu-ka-an li-pi-a-am ta-na-ad-di [break in tablet] karsum ha-za-nu-um te-te-er-ri me-eh-rum shuhut innu i-sha-ru-tum ash-shu-ri-a-tum shi-rum iz-za-az me-e tu-ka-an li-pi-a-am ta-na-di [break in tablet] ha-za-nu-um zu-ru-mu da-ma sha du-qa-tim tu-ma-la kar-shum ha-za-nu-um te-te-er-ri me-he-er na-ag-la-bi

English Translation:
Meat (cooked in) Water. Meat is used. Prepare water; add fat, [break in tablet], mashed leek and garlic, and a corresponding amount of raw shuhutinnû. Assyrian style. Meat is used. Prepare water; add fat [break in tablet], garlic and zurumu with [break in tablet], blood, and mashed leek and garlic. Carve and serve.

Working Recipe:
Chop/slice/dice: (many) onions, shallots, garlic, chives, leeks, scallions. Fry in oil until soft. Brown all sides of an eye round pot roast in this mixture, and add salt to the meat and onion mixture. Turn down the heat and simmer until done in a small amount of water to which a quarter to a half bottle of Guinness stout has been added, turning once or twice during cooking. Remove the meat. Boil down the onion-beer mixture until it is reduced to a thick