Week Zero: The Future, Part Three

My degree is complete. In a fortnight, I get to graduate with first class honours, then fly to Copenhagen to participate in a couple of workshops at the Copenhagen Institute of Interaction Design thanks to being granted the Bradshaw Award. I don’t know who you were, Mr, Mrs, Ms, or Dr Bradshaw, but thanks for continuing to support DJCAD students in building their futures.

I have decided to accept the offer from Dundee to study Augmentative and Alternative Communication, so for now, my home remains here. As for this project, it may be over as far as grading is concerned, but I intend to continue working on it, so this blog will continue to serve as a development diary, although it may undergo a redesign in the weeks to come. The visually impaired guys who assisted in my research are happy for me to return with further prototypes for them to test, and hopefully in time it will help lots of visually impaired players enjoy a wider range of games.

For now though, I’m taking a few days off to celebrate.

 

[Image: Degree show space – time to turn off the lights!]

 

One Hundred Words

This decade has seen unprecedented innovation within tabletop gaming, yet while boardgamegeek.com currently lists over ninety thousand games, those accessible to people with sight impairments largely remain restricted to expensive Braille versions of traditional games such as chess. Isolation is frequently reported among this group, so it is important to facilitate their access to social activities.


Metagame enables visually impaired people to play complex modern games through phone-based AI image recognition, and creates wider awareness of inclusive gaming by crowdsourcing the creation of audio tags for game components to sighted players and designers, bringing the entire gaming community together.

Critical Reflection


Well, it has certainly been a pretty intense few months. I chose my project in the very first week and, while I had moments (or more accurately, a month) of doubt this semester as to whether or not my choice had been the correct one, I am glad I decided to undertake something that I knew would be challenging rather than pursue the second strongest contender for my final year project, which was something I was sure I could do. I can honestly say that while I knew my concept was technically possible, I had only a vague idea of how to implement it, and that was both terrifying, because I essentially bet my future on it, and exhilarating, because I enjoy exploring the possibilities of what could be achieved.

Looking back, I might have spent a little too long in my research phase. While I always find speaking to potential users intriguing, I probably didn’t need to understand all of the different causes of visual impairment, which I spent precious time reading about. What I learned directly from the visually impaired gaming group was much more valuable, and the considerable amount of time I spent in their company – not just questioning them, but participating in their games – provided both direction for the project and insights that I could never have gained any other way. It also provided something that I could not have anticipated: encouragement from their incredible determination to play these games, which are so poorly designed from an accessibility point of view. Their improvised adaptations not only inspired me as a designer, but also made me proud to be part of a community of gamers – sighted and otherwise – from whom I have seen remarkable ingenuity, all in the name of gameplay.

In terms of coding, the first two prototypes were very straightforward because I was using simple tools. However, once I switched to Android Studio it became very challenging, which is why there are fewer posts on this blog from the last month. From a programming perspective it isn’t too bad, but the Android Studio IDE has quite the learning curve. Everything seemed counter-intuitive at first; now that I am used to it, it’s just frustrating rather than problematic.

Throughout prototyping in both App Inventor and Android Studio, it has been surprisingly difficult to implement swiping a specific area of the screen down to activate an option and away to close it (in fact, App Inventor has no onSwipe event handler, so the behaviour must be faked using a canvas fling). It has been a worthwhile endeavour, though: mapping the action taken to read a card to something approaching the physical act of drawing a card, and the action taken to back out of a screen to something like discarding one, definitely enhances the user experience and, as I’d hoped, supports and complements the physical experience of gameplay rather than supplanting it. On a less positive note, I’ve had a persistent issue with a camera framework bug, which is unfortunately due to Android’s own security measures rather than bad code. To ensure that the prototype functions reliably on the day of the Viva, I may resort to faking the effect with a concealed NFC tag.
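Going back to the swipes for a moment: in Android Studio, the fling detection boils down to something like the sketch below. This is a simplified illustration rather than my actual project code – the class name, thresholds, and helper methods are all placeholders.

```java
import android.view.GestureDetector;
import android.view.MotionEvent;

// A minimal sketch: map a downward fling to "draw" (read the card aloud)
// and an upward/away fling to "discard" (back out of the screen).
// Thresholds are placeholders to be tuned per device.
public class CardSwipeListener extends GestureDetector.SimpleOnGestureListener {

    private static final int SWIPE_MIN_DISTANCE = 120;        // pixels
    private static final int SWIPE_THRESHOLD_VELOCITY = 200;  // pixels/second

    @Override
    public boolean onFling(MotionEvent e1, MotionEvent e2,
                           float velocityX, float velocityY) {
        float dy = e2.getY() - e1.getY();
        if (Math.abs(dy) < SWIPE_MIN_DISTANCE
                || Math.abs(velocityY) < SWIPE_THRESHOLD_VELOCITY) {
            return false; // too small or too slow to count as a swipe
        }
        if (dy > 0) {
            onCardDrawn();      // swipe down: like drawing a card
        } else {
            onCardDiscarded();  // swipe away: like discarding a card
        }
        return true;
    }

    private void onCardDrawn()     { /* trigger text-to-speech for the card */ }
    private void onCardDiscarded() { /* close the current card screen */ }
}
```

The listener is then wired up by constructing a GestureDetector with it and forwarding the view’s touch events to the detector’s onTouchEvent method.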

On the whole, fourth year has been an interesting experience which has offered the incredible opportunity to dedicate a significant amount of time to a project which I have come to believe in strongly enough to pursue beyond the Viva, until it is a stable, viable app which makes a difference to other people’s lives.


Week 28: Community Site Update

While my main focus has been on developing the app, I have taken forward my wireframes for the community site – the site to and from which the audio files the app requires are sent – and created a digital prototype of the homepage. This has been built using Adobe’s new Experience Design, a beta addition to the Creative Cloud. Since it is a beta, some of the bugs can be forgiven (although I am not impressed that it managed to lose the ‘below the fold’ section of my site* the night before my Mark II presentation), but it has a lot of issues which need to be resolved. Why does the zoom work in such a weird way? Why, when it allows transitions, does it only output to PNG? Why do I have to create a separate board to delineate the ‘below the fold’ half of the screen if I want to be able to jump to it from an icon in the ‘above the fold’ section? Why can’t I just anchor link to something within the same board but in a different ‘viewport’?

So, it has been interesting to play with and to test the limits of. Sadly, I hit those limits within an hour, so until it is more fully developed I’ll be using Proto.io instead, which is a shame because I’d rather use an installed app on my laptop than work online at the mercy of temperamental home broadband. Experience Design does, however, look quite promising as a future rapid prototyping tool.

[Image: Android tablet homepage prototype – above the fold]

[Image: Android tablet homepage prototype – below the fold. This used to have text before Experience Design inexplicably lost it.]

 

*Actually maybe I am a little impressed it managed to lose it. It lost saved work without even crashing.

Week 27: One Final Experiment, or How To Train Your AI

What if this could be done without QR codes? That would be preferable, as a code takes up considerable space on a card, which is fine on the back but somewhat harder to accommodate on the front without obscuring text or illustrations for sighted users. What if instead image recognition was used – not to recognise the entire card, but to recognise a small strip which would be folded lengthwise down one edge so that it appears on both sides of the card, and then either slid with the card into a card sleeve or simply stuck down if the player doesn’t mind permanently attaching something to their card?

This would provide two advantages: first, just as with the NFC tag, the card could now be identified from either the front or the back without ruining its aesthetics or obscuring its game text; and second, unlike both the QR and NFC solutions, an entire hand of cards could be identified simultaneously as long as the cards were held the way most players hold them: fanned out. I still believe it is important to be able to identify a card from the back, as holding a single card out far enough to scan with a phone is likely to reveal that card to sighted players, but there is a definite advantage in being able to scan an entire hand, where aside from the top card only the edge of each card is visible. This minimises the chances of revealing too much information to sighted opposition, while gaining the ability to quickly check an entire hand without rescanning.

 

But how would this be achieved? Well, I’ve had some promising results (and fun!) with Clarifai, which allows developers to take an image recognition algorithm and train it to recognise whatever image-related concepts they wish to define.

 

[Image: Princess Annette from Love Letter]

[Image: The strip alone represents and identifies Princess Annette from Love Letter to the algorithm.]

The algorithm is taught to disassociate the full images of both the front and the back of the card from the concept of Princess Annette, and to positively identify only the patterned strip. With twenty-one positive examples and just five negative examples, it achieved a 100% success rate in identifying new images as the Princess or not the Princess.
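For context, once a custom model is trained, querying it looks roughly like the sketch below. This is based on my recollection of Clarifai’s v2 REST API rather than verified project code – the model ID, API key, and error handling are all placeholders.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

// Rough sketch of a predict call against a custom-trained Clarifai model.
// The response JSON lists each trained concept with a confidence score,
// which the app would parse to decide whether the strip was recognised.
public class StripRecogniser {

    private static final String API_KEY = "YOUR_API_KEY";        // placeholder
    private static final String MODEL_ID = "metagame-strips";    // hypothetical

    public static String predict(String imageUrl) throws Exception {
        URL url = new URL("https://api.clarifai.com/v2/models/" + MODEL_ID + "/outputs");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Key " + API_KEY);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        // One input image, referenced by URL
        String body = "{\"inputs\":[{\"data\":{\"image\":{\"url\":\""
                + imageUrl + "\"}}}]}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        try (Scanner s = new Scanner(conn.getInputStream(), "UTF-8")) {
            return s.useDelimiter("\\A").next(); // raw JSON; parse concepts from here
        }
    }
}
```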

The process by which the identifying strip was derived involved first creating a five-character code from the name of the game the card comes from and the first three characters of the card name. This produced the code LLPRI to be used as a unique identifier, but these letters could not be used without first being encoded in a form which would be easy for a computer to distinguish but more difficult for a human to remember. Inspired by the squares I had been using in my branding, I decided to assign a blue square to a dot and a brown square to a dash, facilitating the translation of the five-character code into a system derived from International Morse Code.

As an extra precaution against cheating by sighted players, these five characters were then encrypted using the Alberti Cipher. This method was chosen partly because I am already familiar with it and can’t justify spending time learning more about cryptography at this stage of the project, but also because it is difficult to break by means of frequency analysis without the aid of a computer – meaning sighted opponents would not be able to deduce the codes by working out common letters first, by virtue of them appearing more often than uncommon letters (although using a code rather than a full word should also help thwart such a technique). Potentially a user name (in this case my own user name from a social networking site) could be used as an encryption key, ensuring that one user’s set of codes printed for a specific game is different from another’s, once again reducing the risk of sighted players learning to cheat by recognising the patterns. However, the patterns are potentially recognisable at this point because they are quite large; they should be made as small as the recognition algorithm and camera permit, so that they are difficult to distinguish at a distance.
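As a rough illustration of this pipeline – a sketch under assumptions, not my production code, and with a simple Vigenère-style polyalphabetic shift standing in for the Alberti Cipher to keep the example short – here is how a card code might be encrypted and rendered as a sequence of coloured squares:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the strip-encoding pipeline: a five-character code is shifted
// with a polyalphabetic cipher keyed by a user name, then rendered as
// Morse, where a blue square stands for a dot and a brown square for a dash.
public class StripEncoder {

    private static final Map<Character, String> MORSE = new HashMap<>();
    static {
        String[] codes = {
            ".-", "-...", "-.-.", "-..", ".", "..-.", "--.", "....", "..",
            ".---", "-.-", ".-..", "--", "-.", "---", ".--.", "--.-", ".-.",
            "...", "-", "..-", "...-", ".--", "-..-", "-.--", "--.."
        };
        for (int i = 0; i < 26; i++) MORSE.put((char) ('A' + i), codes[i]);
    }

    // Polyalphabetic shift keyed by the user name (a Vigenère-style
    // stand-in for the Alberti Cipher; uppercase A-Z only).
    static String encrypt(String code, String key) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < code.length(); i++) {
            int shift = key.charAt(i % key.length()) - 'A';
            out.append((char) ('A' + (code.charAt(i) - 'A' + shift) % 26));
        }
        return out.toString();
    }

    // Render each Morse symbol as a coloured square: B = blue (dot),
    // N = brown (dash); '/' separates letters.
    static String toSquares(String encrypted) {
        StringBuilder out = new StringBuilder();
        for (char c : encrypted.toCharArray()) {
            for (char symbol : MORSE.get(c).toCharArray()) {
                out.append(symbol == '.' ? 'B' : 'N');
            }
            out.append('/');
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String code = "LLPRI"; // Love Letter, PRIncess
        String squares = toSquares(encrypt(code, "USERNAME"));
        System.out.println(squares); // sequence of blue/brown squares to print
    }
}
```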

Week 25: Learning Android Studio

The posts on here will undoubtedly become shorter now as I teach myself Android Studio. I have reached the limit of what App Inventor can do: the camera framework bug is not actually the fault of App Inventor and I will still need to find a workaround regardless of which development platform I use, but App Inventor unfortunately can’t at this time handle the swipes that are required to map user interactions to physical actions in tabletop play. So, I’ll just have to learn Android Studio.

 

android studio.png
Android Studio: lots of functionality, lots of chaos!

 

Week 24: Mark II

My app now recognises cards by QR code rather than by NFC tag, as this allows users to print the identification device at home rather than requiring them to buy any additional items in order to play the games they have bought. I’m having issues with a recurring camera framework bug, however, so I’m sceptical of my current prototype’s ability to demonstrate how the app should actually function.
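For anyone curious about the mechanics, the decoding step can be done with the open-source ZXing library. The sketch below is one plausible way to do it rather than necessarily what my prototype does, and the class and method names are my own:

```java
import android.graphics.Bitmap;

import com.google.zxing.BinaryBitmap;
import com.google.zxing.RGBLuminanceSource;
import com.google.zxing.Result;
import com.google.zxing.common.HybridBinarizer;
import com.google.zxing.qrcode.QRCodeReader;

// Minimal sketch of decoding a QR code from a camera frame with ZXing
// (com.google.zxing:core).
public class QrCardScanner {

    /** Returns the card identifier encoded in the QR code, or null. */
    public static String decode(Bitmap frame) {
        int width = frame.getWidth();
        int height = frame.getHeight();
        int[] pixels = new int[width * height];
        frame.getPixels(pixels, 0, width, 0, 0, width, height);

        BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(
                new RGBLuminanceSource(width, height, pixels)));
        try {
            Result result = new QRCodeReader().decode(bitmap);
            return result.getText();
        } catch (Exception e) {
            return null; // no QR code found in this frame
        }
    }
}
```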

On a more positive note, I have started to look at branding. Previously I simply used primary, high-contrast colours, but they don’t make for a very appealing or modern interface. It is also the case that, just as a person losing their hearing loses certain frequencies first, so too does someone losing their sight lose the ability to distinguish certain colours. I have therefore chosen two colours which can easily be distinguished from each other and whose appearance remains more or less consistent across all types of colour blindness. I intend to use these colours for the QR codes themselves as well, and have started to work a square motif, inspired by the use of those codes, into my logo and application icon.
