If you have been following our weblog, you may have been curious to find out what our three most promising concepts are. If you are just joining us, now is a great time to catch up!
Last week we showed the process of concept exploration using the morphological chart, and how the “Illusion + context projection”, “Accelerometer” and “Touch light-up tablet” concepts did not quite make the cut. This post is dedicated to the three concepts we deemed worthy of further investigation.
The first concept is called “Personal translation”. It gives the user an alternative window onto the tablet, one that overlays a translation. This could be done with a smartphone. Once the user has finished reading the tablet, they could take the recipes and some background information home with them on their phones. Google Translate has an app that uses letters as a kind of AR marker to provide a real-time overlay of the translation on your phone’s screen. Unsurprisingly, it does not support Akkadian, which reveals the weakness of this concept.
Should we choose this concept, a lot of software development would be involved. This may be a problem, as even the best programmer in our group is just a beginner, and some of us cannot code at all. The viability of this concept therefore stands or falls with the availability of off-the-shelf software. Being able to tie existing components together and only develop the high-level interface ourselves is a hard requirement for this concept to be viable.
The second concept is the “Decode challenge”. It seeks active interaction with a small group of users, who perform an action that reveals parts of the translation via an overhead projector. The main difficulty is the challenge part itself. We do want the audience to actively participate in the exhibit, but there seems to be a thin line between driving the audience to skip the exhibit, making them play a cheesy game with a high “Eftelinggehalte” (fairy-tale factor), and giving them a cool experience with interesting new insights that they retain precisely because they were actively involved.
The third concept is the “Translation projection”, which emerged as a combination of the “Illusion + context projection” and “Touch light-up tablet” concepts. The idea is to have the user engage with the exhibit (by holding down a switch or button, for instance) and have a projector light up (part of) the text in a scanning visual fashion, so that the translation is revealed in real time, projected onto a blank clay tablet sitting next to the replica. The challenge here is finding a good mix of hardware and software. Luckily, Jouke already gave us a great head start by offering to lend us a mini-projector. From there, we still need to find out how a hardware switch can trigger an animation on the projector, and how we can integrate all the systems into one. The viability of this concept will be made or broken by the centrepiece that ties everything together, as this concept uses the most subsystems.
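To get a feel for the switch-to-animation link, the core logic can be sketched in a few lines of Python. This is only a rough sketch, not our actual implementation: the hardware side (GPIO button input, projector output) is mocked with plain function arguments and `print`, and all names and the sample text are hypothetical placeholders.

```python
# Sketch of the "Translation projection" reveal loop:
# while the visitor holds the button, a scan position advances and
# progressively reveals the translated text; releasing pauses it.
# Real hardware I/O (button polling, projector rendering) is mocked.

def reveal_step(text, position, button_held):
    """Advance the scan by one character while the button is held;
    return the new position and the portion of text to project."""
    if button_held and position < len(text):
        position += 1
    return position, text[:position]

def run_exhibit(text, button_samples):
    """Feed a sequence of sampled button states through the reveal loop
    and collect the frame shown at each sample."""
    position = 0
    frames = []
    for held in button_samples:
        position, visible = reveal_step(text, position, held)
        frames.append(visible)
    return frames

if __name__ == "__main__":
    translation = "SAMPLE TRANSLATION TEXT"  # placeholder, not a real translation
    # Visitor holds the button, lets go briefly, then holds it again.
    samples = [True] * 5 + [False] * 2 + [True] * 3
    for frame in run_exhibit(translation, samples):
        print(frame)
```

On real hardware the `button_samples` loop would be replaced by polling a GPIO pin (or a keyboard event from a microcontroller posing as a USB device), and the returned frame would drive whatever renders the projected animation.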
As you may have noticed, all of these concepts have their pros and cons. Weighing those has been our focus these past few days. Remember the to-do list from our previous post? Those are the things we have been researching and will present tomorrow! Expect some more short blog posts elaborating on the small projects and prototypes that emerged from them!
Words by Samir den Haan