Jekyll2023-02-19T01:20:35+00:00https://deckardkane.github.io/feed.xmlPeter TurnbullPersonal website and portfolio of Peter Turnbull.Peter TurnbullThe Absent Minded Research Institute2022-05-13T00:00:00+00:002022-05-13T00:00:00+00:00https://deckardkane.github.io/projects/absent-minded-research-institute<p><strong>Notice: this page is a work in progress. The research subject plans on adding more documentation demonstrating the capabilities of the facility in the future.</strong></p>
<p class="notice--danger"><strong>PARAFICTION WARNING:</strong> Visitor, please be aware that the following documents contain parafictional elements. If you would prefer <del>to be BORING</del> to read the purely nonfiction documentation, please visit this <a href="/projects/amri">link</a>.</p>
<h2 id="overview">Overview</h2>
<p>Hello there. My name is Dr. Ada, and I am the head researcher at the Absent Minded Research Institute. You may meet my colleagues, Dr. Mitchell and Dr. Fantastic, later. I have been given the responsibility of giving you a tour of our facility. So let’s begin; I have work to get back to.</p>
<p>This is the workstation of one of our researchers at the institute. As you’ll soon see, they are all hard at work, meticulously tracking and monitoring our subject, PT.</p>
<p>You are permitted to peruse the contents of the desk at your leisure. You may find some evidence of the institute’s activities, including testing logs, a live feed of PT’s current status and location, and a fully functional internal phone directory accessible through the rotary phone on the desk.</p>
<h2 id="on-display">On Display</h2>
<figure class="half ">
<a href="/assets/images/amri/onDisplay/amri_onDisplay_1.jpg" title="The researcher's desk.">
<img src="/assets/images/amri/onDisplay/amri_onDisplay_1.jpg" alt="Photograph of thesis project on display. No people are present, just the desk is visible." />
</a>
<a href="/assets/images/amri/onDisplay/amri_onDisplay_3.jpg" title="Two guests take a listen to the rotary phone.">
<img src="/assets/images/amri/onDisplay/amri_onDisplay_3.jpg" alt="Photograph of thesis project on display. Two people are listening to the rotary phone on the desk." />
</a>
<a href="/assets/images/amri/onDisplay/amri_onDisplay_2.jpg" title="Guests (and dog) enjoying their time at the institute.">
<img src="/assets/images/amri/onDisplay/amri_onDisplay_2.jpg" alt="Photograph of thesis project on display. Several people and a dog are standing in front of it." />
</a>
<a href="/assets/images/amri/onDisplay/amri_onDisplay_4.jpg" title="One guest dials the rotary phone.">
<img src="/assets/images/amri/onDisplay/amri_onDisplay_4.jpg" alt="Photograph of thesis project on display. One person is dialing the rotary phone on the desk." />
</a>
<figcaption>On display in the Decker/Meyerhoff Gallery at MICA.
</figcaption>
</figure>
<h2 id="display-elements">Display Elements</h2>
<h3 id="rotary-phone">Rotary Phone</h3>
<figure style="width: 300px" class="align-left">
<img src="https://deckardkane.github.io/assets/images/amri/rotaryPhone/amri_phone_1.jpg" alt="A photo of a green rotary phone on a desk, with a phone menu next to it." title="A researcher's phone." />
<figcaption>Please do call around the facility. The scientists get lonely.</figcaption>
</figure>
<p>Each researcher at the institute is provided with a top-of-the-line Western Electric 500 rotary telephone, to better promote interdepartmental communication. While you’re visiting, please feel free to give the other scientists a ring.</p>
<p>Various departments and offices are accessible by dialing 1 through 9. You can reach our operator by dialing 0. Please be aware that our system does not allow you to dial outside our facility, for security reasons.</p>
<!--
<div class="postit pink right medium">
<b>FANTASTIC HERE:</b> yeag, but the phone tree is old as hell, ther are a few numbers they forgot to block...
dial 911 kids!
</div> -->
<h4 id="dial">Dial!</h4>
<p>Visitor, please note, the virtual dial is currently out of service.</p>
<div class="phone">
<!--
<div id="rotary-dial">
<script src="../../assets/js/RotaryDial.js"></script>
<script src="../../assets/js/main.js"></script>
</div> -->
</div>
<h3 id="status-logger">Status Logger</h3>
<div id="loggerText" style="height:240px; width: 100%;"></div>
<p><em>Please note: the log will attempt to fetch data every 30 seconds, and timestamps are in UTC.</em></p>
<script src="../../assets/js/vendor/jquery/jquery-3.3.1.min.js"></script>
<script src="../../assets/js/StatusLogger.js"></script>
<div class="postit blue right medium">
Hey, if the printer runs out of paper, don't look at me! I do my best under the circumstances...
<br />
- Mitchell
</div>
<p>Here at the institute, we have a state-of-the-art thermal printer tasked with maintaining a constant log of PT’s activities. The data is streamed live, with a delay of a few seconds at most. This allows our researchers to stay apprised of PT’s whereabouts and activities at all times.</p>
<p>Of course, we do need to ensure the printer is well-stocked with thermal paper…</p>
<figure class="half ">
<a href="/assets/images/amri/statusLogger/amri_statusLogger_1.jpg" title="The status logger is a crucial part of our experiments and research on PT.">
<img src="/assets/images/amri/statusLogger/amri_statusLogger_1.jpg" alt="Photo of what appears to be an accounting calculator sitting on a desk, with a heap of paper spooling out the back." />
</a>
<a href="/assets/images/amri/statusLogger/amri_statusLogger_2.jpg" title="As you can see, PT's every movement and activity is diligently logged.">
<img src="/assets/images/amri/statusLogger/amri_statusLogger_2.jpg" alt="Photo of the accounting calculator close up. The logged activities of PT are legible on the spooled out paper. A blue post-it note asking for more paper is stuck to the front of the calculator." />
</a>
<figcaption>The status logger runs continuously to provide the institute scientists with a perpetually-flowing feed of data.
</figcaption>
</figure>
<h3 id="notification-orb">Notification Orb</h3>
<!--
<figure style="width: 200px" class="align-right">
<img src="https://deckardkane.github.io/assets/images/amri/notificationOrb/amri_orb_1.jpg" alt="A photo of a glowing green orb resting on top of a filing cabinet.">
<figcaption>The notification orb atop the filing cabinet stack.</figcaption>
</figure> -->
<div id="notifications" style="text-align: center;">
<canvas id="orb">
<script src="../../assets/js/orb.js"></script>
</canvas>
</div>
<p><em>This is live! Please note: the orb will attempt to fetch data every 15 seconds.</em></p>
<p>The notification orb reflects PT’s most recently received notification, and is often used as a status indicator by our scientists. If it’s green, PT has been texting. If it’s red, they’ve been ordering food. If it’s yellow, PT is probably lonely…</p>
<figure class="half ">
<a href="/assets/images/amri/notificationOrb/amri_orb_1.jpg" title="The notification orb atop the filing cabinet stack.">
<img src="/assets/images/amri/notificationOrb/amri_orb_1.jpg" alt="Photo of the notification orb sitting on top of a filing cabinet." />
</a>
<a href="/assets/images/amri/notificationOrb/amri_orb_2.png" title="This is the key our researchers use to understand the colors of the orb.">
<img src="/assets/images/amri/notificationOrb/amri_orb_2.png" alt="Scanned document describing which orb color corresponds to each app the test subject is using." />
</a>
<figcaption>The notification orb, and a printed key to understand it.
</figcaption>
</figure>
<h3 id="filing-cabinet">Filing Cabinet</h3>
<p>Inside the filing cabinet, among other things, you will find detailed documentation of our observations of the test subject.</p>
<!-- Courtesy of embedresponsively.com //-->
<div class="responsive-video-container">
<iframe src="https://player.vimeo.com/video/799985717?dnt=true" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
</div>
<p>The filing cabinet also contains what appears to be a…a maid dress? It’s hard to say what this is for, but it has pretty lights on it!</p>
<!--
<figure class="third ">
<a href="/assets/images/amri/notificationOrb/amri_orb_1.jpg"
title="This should be one of the test subject documents.">
<img src="/assets/images/amri/notificationOrb/amri_orb_1.jpg"
alt="PiGrrl powered off">
</a>
<a href="/assets/images/movingDay/printing_caseTop.jpg"
title="This should be one of the test subject documents.">
<img src="/assets/images/movingDay/printing_caseTop.jpg"
alt="Deckard Kane Studios splash screen">
</a>
<a href="/assets/images/movingDay/printing_buttons.jpg"
title="This should be one of the test subject documents.">
<img src="/assets/images/movingDay/printing_buttons.jpg"
alt="MOVING DAY title screen">
</a>
<figcaption>Inside the filing cabinet, among other things, you will find detailed documentation of our observations of the test subject.
</figcaption>
</figure>
<div class="stack">
<ul class="photostack js-photostack" style="width: 200px; height: 300px;">
<li><img src="https://deckardkane.github.io/assets/images/amri/documents/amri_documents_0007_medium.png"></li>
<li><img src="https://deckardkane.github.io/assets/images/amri/documents/amri_documents_0005_medium.png"></li>
<li><img src="https://deckardkane.github.io/assets/images/amri/documents/amri_documents_0004_medium.png"></li>
</ul>
<script src="../../assets/js/vendor/jquery/jquery-3.3.1.min.js"></script>
<script src="../../assets/js/photostack.js"></script>
<script>
$(".photostack").Photostack();
</script>
</div>
-->Dr. AdaThe Absent Minded Research Institute is meticulously tracking and monitoring their subject, PT, for purposes unknown...Thesis - The Absent Minded Research Institute2022-05-13T00:00:00+00:002022-05-13T00:00:00+00:00https://deckardkane.github.io/projects/amri<p><strong>Notice: this page is a work in progress. I plan on adding more documentation demonstrating the interactive parts of this piece in the future, stay tuned!</strong></p>
<p class="notice--warning"><strong>NONFICTION WARNING:</strong> This is an overview of the project I made for my senior thesis, which contains a parafictional narrative. There is a version of this page written from the perspective of that narrative, which you can read <a href="/projects/absent-minded-research-institute">here</a>.</p>
<h2 id="on-display">On Display</h2>
<figure class="half ">
<a href="/assets/images/amri/onDisplay/amri_onDisplay_1.jpg" title="The researcher's desk.">
<img src="/assets/images/amri/onDisplay/amri_onDisplay_1.jpg" alt="Photograph of thesis project on display. No people are present, just the desk is visible." />
</a>
<a href="/assets/images/amri/onDisplay/amri_onDisplay_3.jpg" title="Two guests take a listen to the rotary phone.">
<img src="/assets/images/amri/onDisplay/amri_onDisplay_3.jpg" alt="Photograph of thesis project on display. Two people are listening to the rotary phone on the desk." />
</a>
<a href="/assets/images/amri/onDisplay/amri_onDisplay_2.jpg" title="Guests (and dog) enjoying their time at the institute.">
<img src="/assets/images/amri/onDisplay/amri_onDisplay_2.jpg" alt="Photograph of thesis project on display. Several people and a dog are standing in front of it." />
</a>
<a href="/assets/images/amri/onDisplay/amri_onDisplay_4.jpg" title="One guest dials the rotary phone.">
<img src="/assets/images/amri/onDisplay/amri_onDisplay_4.jpg" alt="Photograph of thesis project on display. One person is dialing the rotary phone on the desk." />
</a>
<figcaption>On display in the Decker/Meyerhoff Gallery at MICA.
</figcaption>
</figure>
<h2 id="overview">Overview</h2>
<p>This is the thesis piece I made to complete my degree in Interactive Arts at the Maryland Institute College of Art (MICA). It is an interactive installation piece with a parafictional narrative component, about a mysterious institute that is tracking my every move. In the context of the narrative, I am referred to as the “test subject”, or simply PT.</p>
<p>Viewers are encouraged to touch and play with the various elements of the piece. It incorporates many skills I picked up during my time at MICA, including digital design and fabrication, electronics, and coding. Most notable is the rotary phone, which has been modified with a Raspberry Pi running Python code that interprets the numbers being dialed and plays back different audio (mostly voice-acted dialogue of the researchers at the institute) to the listener as needed.</p>
<h3 id="wall-text">Wall Text</h3>
<p>This is the workstation of a researcher at the Absent Minded Research Institute. The Institute monitors the successes and failures of their research subject, known as P.T., as he attempts to navigate living with his attention deficit hyperactivity disorder (ADHD), and the executive dysfunction that accompanies it. To that end, the Institute has created a facility and assembled a team of scientists dedicated to collecting data on P.T. These scientists are highly-trained experts in their respective fields…mostly.</p>
<p>The Institute spared no expense in ensuring the extraction and analysis of any possible source of data, and there is no part of P.T.’s life that goes unobserved. His every movement and behavior is tracked, including sleep, medicinal input, waste discharge, and food and drink, all exactingly logged. What the goal of this laborious research and testing is, the Institute isn’t saying, but they surely have only the best of intentions.</p>
<p>Please feel free to peruse the desk and filing cabinet, where you will find these detailed logs of P.T.’s activities.</p>
<p>While the Institute may be fictional, the tracking and logging of these patterns is entirely real.</p>
<h2 id="display-elements">Display Elements</h2>
<h3 id="rotary-phone">Rotary Phone</h3>
<figure style="width: 250px" class="align-left">
<img src="https://deckardkane.github.io/assets/images/amri/rotaryPhone/amri_phone_1.jpg" alt="A photo of a green rotary phone on a desk, with a phone menu next to it." title="The rotary phone on display." />
<figcaption>The rotary phone awaits a caller...</figcaption>
</figure>
<p>This is the centerpiece of the installation. I used a 1973 Western Electric Model 500 rotary telephone, but it’s not just a static prop! Visitors can use the phone to dial other departments in the facility (by referencing the handy phone tree menu on the desk), and hear various snippets of dialogue from the researchers at the institute, as they continue their studies on their subject.</p>
<p>The phone will also occasionally ring when someone walks by, to entice them to pick it up!</p>
<p>The phone is probably the most complex piece of the project. The Raspberry Pi it uses as a brain has several tasks:</p>
<ul>
<li>Accepting physical input from the phone itself, monitoring if the receiver is picked up, what number is dialed, etc.</li>
<li>Communicating back and forth with the backend, so the phone’s behavior can be monitored and controlled remotely</li>
<li>Playing back the appropriate audio as needed, and returning to an idle state when playback ends</li>
<li>Actually making the phone ring!</li>
</ul>
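<p>As a rough illustration of the first task, here is a minimal Python sketch of rotary pulse decoding. This is not the actual installation code (the function names and timing values are illustrative); it just shows the idea: the dial breaks the line once per pulse, and a longer pause marks the end of a digit.</p>

```python
def pulses_to_digit(pulse_count: int) -> int:
    """Map a rotary pulse count to the dialed digit (10 pulses means 0)."""
    if not 1 <= pulse_count <= 10:
        raise ValueError(f"invalid pulse count: {pulse_count}")
    return pulse_count % 10


def group_pulses(timestamps, gap=0.2):
    """Split a stream of loop-break timestamps into digits.

    Pulses within one digit arrive roughly 100 ms apart; any pause
    longer than `gap` seconds is treated as a digit boundary.
    """
    digits, count, last = [], 0, None
    for t in timestamps:
        if last is not None and t - last > gap:
            digits.append(pulses_to_digit(count))
            count = 0
        count += 1
        last = t
    if count:
        digits.append(pulses_to_digit(count))
    return digits
```

<p>On the real phone, the timestamps would come from a GPIO interrupt firing each time the dial contacts open.</p>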
<p>As you might imagine, this was not easy. In particular, getting the phone to ring using a Raspberry Pi (whose pins can supply at most 5 volts, nowhere near the roughly 90 volts AC a mechanical ringer expects) was quite a challenge. <!-- If you're interested, I go into detail about the process of how I got this to work, [here](/blog/amri-bts). -->
<!-- #### Dial! --></p>
<div class="phone">
<!--
<div id="rotary-dial">
<script src="../../assets/js/RotaryDial.js"></script>
<script src="../../assets/js/main.js"></script>
</div> -->
</div>
<h3 id="thermal-printer-status-log">Thermal Printer Status Log</h3>
<div id="loggerText" style="height:240px; width: 100%;"></div>
<p><em>Please note: the log will attempt to fetch data every 30 seconds, and timestamps are in UTC.</em></p>
<script src="../../assets/js/vendor/jquery/jquery-3.3.1.min.js"></script>
<script src="../../assets/js/StatusLogger.js"></script>
<p>This thermal printer uses a Raspberry Pi running a Python script to retrieve my current activity status (walking, sleeping, eating, listening to music, etc…), as well as my current location, and print out a live log of my behavior.</p>
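<p>The formatting step of such a script might look like the following minimal Python sketch. The function name, field names, and the 32-column width are assumptions for illustration, not taken from the real code:</p>

```python
from datetime import datetime, timezone


def format_status_line(activity: str, location: str, when=None) -> str:
    """Render one log entry for a (hypothetical) 32-column thermal printer.

    Timestamps are kept in UTC, matching the note above.
    """
    when = when or datetime.now(timezone.utc)
    stamp = when.strftime("%H:%M:%S")
    # Truncate to the printer's line width so nothing wraps mid-entry.
    return f"[{stamp}] {activity.upper()} @ {location}"[:32]
```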
<figure class="half ">
<a href="/assets/images/amri/statusLogger/amri_statusLogger_1.jpg" title="The status logger sits on the desk, waiting for the next piece of incoming data...">
<img src="/assets/images/amri/statusLogger/amri_statusLogger_1.jpg" alt="Photo of what appears to be an accounting calculator sitting on a desk, with a heap of paper spooling out the back." />
</a>
<a href="/assets/images/amri/statusLogger/amri_statusLogger_2.jpg" title="The status logger hard at work, documenting my every move.">
<img src="/assets/images/amri/statusLogger/amri_statusLogger_2.jpg" alt="Photo of the accounting calculator close up. Peter's logged activities are legible on the spooled out paper. A blue post-it note asking for more paper is stuck to the front of the calculator." />
</a>
<figcaption>The thermal printer on the desk.
</figcaption>
</figure>
<h3 id="notification-orb">Notification Orb</h3>
<div id="notifications" style="text-align: center;">
<canvas id="orb">
<script src="../../assets/js/orb.js"></script>
</canvas>
</div>
<p><em>This is live! Please note: the “orb” will attempt to fetch data every 15 seconds.</em></p>
<p>The “notification orb” perches on top of the filing cabinet and serves two functions. First, it changes colors based on the most recent notification I received on my phone (you can see a key for the different colors in the images below). Second, a small motion sensor inside the housing monitors movement near the piece in the gallery, and was used to set off other parts of the installation; most notably, it caused the rotary phone to ring when people walked by.</p>
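<p>In code, the color lookup can be as simple as a dictionary keyed by notification category. The category names and RGB values below are hypothetical stand-ins for the printed key shown in the images:</p>

```python
# Hypothetical categories; the real mapping lives in the printed key.
NOTIFICATION_COLORS = {
    "messaging": (0, 255, 0),    # green: texting
    "food": (255, 0, 0),         # red: food delivery
    "social": (255, 255, 0),     # yellow
}


def orb_color(category: str) -> tuple:
    """Return the RGB the orb should fade to; unknown apps get a dim white."""
    return NOTIFICATION_COLORS.get(category, (32, 32, 32))
```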
<!--
<figure style="width: 200px" class="align-right">
<img src="https://deckardkane.github.io/assets/images/amri/notificationOrb/amri_orb_1.jpg" alt="A photo of a glowing green orb resting on top of a filing cabinet.">
<figcaption>The notification orb atop the filing cabinet stack.</figcaption>
</figure> -->
<figure class="half ">
<a href="/assets/images/amri/notificationOrb/amri_orb_1.jpg" title="The notification orb atop the filing cabinet stack.">
<img src="/assets/images/amri/notificationOrb/amri_orb_1.jpg" alt="Photo of the notification orb sitting on top of a filing cabinet." />
</a>
<a href="/assets/images/amri/notificationOrb/amri_orb_2.png" title="The color key for the notification orb.">
<img src="/assets/images/amri/notificationOrb/amri_orb_2.png" alt="Scanned document that shows which color the orb flashes for each app Peter is using." />
</a>
<figcaption>The notification orb, and a printed key to understand it.
</figcaption>
</figure>
<h3 id="filing-cabinet">Filing Cabinet</h3>
<p>The filing cabinet contains a variety of miscellaneous ephemera, including spools of the behavior logs, old medicine bottles, and a strange maid’s dress that has been modified with LED lights that react to sound.</p>
<!-- Courtesy of embedresponsively.com //-->
<div class="responsive-video-container">
<iframe src="https://player.vimeo.com/video/799985717?dnt=true" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
</div>Peter TurnbullAn overview and highlight reel of my undergraduate thesis project, the Absent Minded Research Institute.Robotic Arts: Creative Switch2021-02-11T17:00:30+00:002021-02-11T17:00:30+00:00https://deckardkane.github.io/blog/robotic-arts-creative-switch<h2 id="technically-this-counts-as-a-switch">Technically, this counts as a switch</h2>
<p>It’s not a very efficient one, but it is <em>technically</em> a switch.</p>
<figure class="half" type="center">
<img src="/assets/images/roboticArts/creativeSwitch/switch_off_square.jpg" />
<img src="/assets/images/roboticArts/creativeSwitch/switch_on_square.jpg" />
<figcaption>I guess they can add "switch" to their list of multi-tool functions now?</figcaption>
</figure>
<p>This time around, I took out the push button and used my Leatherman as a “switch”. Why? Truthfully, it makes me chuckle that I can run current through it. I assumed it wouldn’t work unless I hooked both leads up to the same tool. Nope: I’ve got my alligator clip hooked onto the bottle opener on one end, and the pliers clamped onto the bare wire all the way on the other end.</p>
<figure class="half" type="center">
<img src="/assets/images/roboticArts/creativeSwitch/switch_off_action.jpg" />
<img src="/assets/images/roboticArts/creativeSwitch/switch_on_action.jpg" />
<figcaption>Jaws open, LED off, jaws closed, LED on!</figcaption>
</figure>
<p>Going further, I wanted to make this into a silly ritual, and I decided I needed a soundtrack.</p>
<!-- Courtesy of embedresponsively.com //-->
<div class="responsive-video-container">
<iframe src="https://player.vimeo.com/video/511340135?dnt=true" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
</div>
<p>Hopefully Oingo Boingo doesn’t mind me performing electronics rituals with Leatherman tools to their music.</p>Peter TurnbullWe gotta turn stuff on and off somehow, but this time it needs to be sillyRobotic Arts: Technical Switch2021-02-11T16:00:30+00:002021-02-11T16:00:30+00:00https://deckardkane.github.io/blog/robotic-arts-technical-switch<h2 id="i-present-to-youa-switch">I present to you…a switch!</h2>
<figure class="half" type="center">
<img src="/assets/images/roboticArts/technicalSwitch/switch_step1_square.jpg" />
<img src="/assets/images/roboticArts/technicalSwitch/redSwitch_square.jpg" />
<figcaption>Initial circuit test without a button, and a really nice red push button</figcaption>
</figure>
<p>Unfortunately, the red push button I wanted to use did not really want to cooperate with my breadboard, so I replaced it with a smaller, simpler button.</p>
<figure class="half" type="center">
<img src="/assets/images/roboticArts/technicalSwitch/switch_off_square.jpg" />
<img src="/assets/images/roboticArts/technicalSwitch/switch_on_square.jpg" />
<figcaption>Tada!</figcaption>
</figure>
<figure style="width: 400px" class="align-center">
<img src="https://deckardkane.github.io/assets/images/roboticArts/technicalSwitch/technicalSwitchOptimized.gif" alt="" />
<figcaption>On...off!</figcaption>
</figure>
<p>And finally, a GIF for your viewing pleasure!</p>Peter TurnbullWe gotta turn stuff on and off somehow!Robotic Arts: My Workspace2021-02-05T19:34:30+00:002021-02-05T19:34:30+00:00https://deckardkane.github.io/blog/robotic-arts-my-workspace<h2 id="hello-again">Hello again!</h2>
<p>Things certainly change when you don’t post to your website for over a year, huh.</p>
<p>But anyway. I’m taking a class on robotic arts! First order of business was to clean up and organize my workspace.</p>
<p>Easier said than done.</p>
<figure class="half" type="center">
<img src="/assets/images/roboticArts/myWorkspace/before_1.jpg" />
<img src="/assets/images/roboticArts/myWorkspace/before_2.jpg" />
<figcaption>Before (I may have cleared off a good chunk of debris before I remembered to take a picture, whoops)</figcaption>
</figure>
<p>It took some work, but…</p>
<figure class="half" type="center">
<img src="/assets/images/roboticArts/myWorkspace/after_1.jpg" />
<img src="/assets/images/roboticArts/myWorkspace/after_3.jpg" />
<figcaption>After!</figcaption>
</figure>
<p>We got there. Now I just need to find the power supply for my soldering iron…</p>Peter TurnbullIt's been a while, hasn't it? I was busy organizing my workbench.Hexxing Hunt2019-10-08T00:00:00+00:002019-10-08T00:00:00+00:00https://deckardkane.github.io/projects/hexxing-hunt<h2 id="overview">Overview</h2>
<p>Hexxing Hunt is a game I worked on with 3 of my classmates in our 2D Game Design class.</p>
<p>The game is free and open-source, and it has its own website and trailer! Check it out here!</p>
<p><a href="https://deckardkane.github.io/Hexxing-Hunt/" class="btn btn--primary btn--x-large align-center">Hexxing Hunt website</a></p>
<h3 id="concept">Concept</h3>
<p>The prompt for this project was to create a splitscreen game with a “hide-and-seek” vibe. The game is designed for two players, and the screen is split accordingly. Each player is either the witch or the witch hunter. The goals are simple: the witch needs to survive until the timer runs out, and avoid being burned alive and all that. The witch hunter needs to hunt down that witch!</p>
<h3 id="inspiration">Inspiration</h3>
<p>Our group was heavily inspired by games like <a href="https://samuraipunk.com/screencheat">Screencheat</a> and in particular, the popular Garry’s Mod gamemode <a href="https://steamcommunity.com/sharedfiles/filedetails/?id=135509255">Prop Hunt</a>, for reasons that will probably be immediately obvious.</p>
<h2 id="development-process">Development Process</h2>
<p>I was the Lead Developer for this project. I also contributed to the art, and in particular did some quick work on some of the animations.</p>
<h3 id="gameplay">Gameplay</h3>
<p>The witch hunter simply needs to touch the witch and press a button to end the hunt. The witch has one primary defense mechanism: they can shapeshift! They can transfigure themselves into different items to blend into the area.</p>
<h3 id="art">Art</h3>
<figure class="half" type="center">
<img src="/assets/images/hexxingHunt/Witch_idle.gif" />
<img src="/assets/images/hexxingHunt/WitchHunter_idle.gif" />
<img src="/assets/images/hexxingHunt/Witch_walkRight.gif" />
<img src="/assets/images/hexxingHunt/WitchHunter_walk.gif" />
<figcaption>Basic idle/move animations for Witch and Hunter.</figcaption>
</figure>
<!--
![Witch Idle GIF](https://deckardkane.github.io/assets/images/hexxingHunt/Witch_idle.gif)
![Witch Hunter Idle GIF](https://deckardkane.github.io/assets/images/hexxingHunt/WitchHunter_idle.gif)
![Witch Walk GIF](https://deckardkane.github.io/assets/images/hexxingHunt/Witch_walkRight.gif)
-->
<h2 id="notable-features">Notable Features</h2>
<h3 id="camera-system">Camera System</h3>
<p>The witch hunter has a pretty standard tracking camera, which leaves the character in the middle of their screen at all times.</p>
<p>The hunter can always look over at the witch’s screen, which was something we were expected to specifically design for. We realized early on that if we used the same tracking camera for the witch, it would be immediately obvious which object the witch was disguising themselves as.</p>
<p>So, we designed our camera system to have a second mode, where the camera only moves when the witch moves from one area to another. This makes it impossible to discern exactly where the witch is at any given time, or what object they are currently disguised as. Obviously, it is still possible to discern what room the witch may be in, but our map is designed with duplicate room designs to help mitigate this.</p>
<p>We’ve noticed during playtests that the fast pace of the game also means that screencheating is not as helpful as one might expect. You have to focus on your own screen to move and hunt effectively!</p>
<p>Here’s how we implemented the room lock camera system:</p>
<p>Initially, we were planning on manually finding the coordinate boundaries of each room, and inputting them into our code to make a <em>very</em> long chain of else if statements that would move the camera according to what coordinate bounds the player was currently in. This would require us to find the X/Y coordinates of 2 corners of every room, <em>and</em> the coordinates for the center of each room. Not only was this tedious, it was imprecise.</p>
<p>So instead, we implemented a system using 2D Box Colliders. Each room has a box collider to approximately represent its area. I say approximately, but this method was much more precise and easier to adjust than our previous attempt with manual X/Y coordinate plotting.</p>
<p>We put our map together as one image, but we made a Prefab object for each room that only contained the BoxCollider2D component and our script, and then took 9 of those prefabs and nested them under our main map object. Note that the box colliders have “Is Trigger” checked, as they are going to act as triggers for a function in our camera script, and <em>not</em> affect or impede movement in any way.</p>
<figure>
<img src="/assets/images/hexxingHunt/hexxingHunt_roomCameraBoxCollider.png" alt="Hexxing Hunt, room camera box colliders" />
<figcaption>If you can see that very thin green line, that's the box collider used to establish the bounds of the room and adjust the camera.</figcaption>
</figure>
<p>Now for the actual script. Two nice things about having a box collider represent each room: for one, instead of writing a bunch of if statements, we can instead use the Unity physics engine’s built in <code class="language-plaintext highlighter-rouge">OnTriggerStay2D</code> function. And two, we can use <code class="language-plaintext highlighter-rouge">boxCenter</code> to quickly calculate the exact center of the room’s box collider on the fly.</p>
<div class="language-c# highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// this function is called every frame where the gameobject has another gameobject inside its bounds (the gameobjects are set to behave as triggers in this case)</span>
<span class="k">private</span> <span class="k">void</span> <span class="nf">OnTriggerStay2D</span><span class="p">(</span><span class="n">Collider2D</span> <span class="n">other</span><span class="p">)</span>
<span class="p">{</span>
<span class="c1">// and when it's called, we check to see if the colliding object is the witch...</span>
<span class="k">if</span> <span class="p">(</span><span class="n">other</span><span class="p">.</span><span class="n">gameObject</span> <span class="p">==</span> <span class="n">witch</span><span class="p">)</span>
<span class="p">{</span>
<span class="c1">// if it is, we move the camera to position (in this case, the center of the Collider volume)</span>
<span class="n">activeCam</span><span class="p">.</span><span class="n">transform</span><span class="p">.</span><span class="n">position</span> <span class="p">=</span> <span class="k">new</span> <span class="nf">Vector3</span><span class="p">(</span><span class="n">boxCenter</span><span class="p">.</span><span class="n">x</span><span class="p">,</span> <span class="n">boxCenter</span><span class="p">.</span><span class="n">y</span><span class="p">,</span> <span class="p">-</span><span class="m">10f</span><span class="p">);</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
<p>Each room prefab object has this script attached to it. The witch and activeCam variables are public attributes that can be accessed through the Unity editor. We could configure which player and which camera the script modifies, but there’s no need to here, since we are always checking to see if the witch player has entered the space, and we want to move the witch player’s camera.</p>
<p>In a nutshell: each room checks to see if the witch player is currently inside its bounds. If it is, move the camera to the center of those bounds. Otherwise, no movement.</p>
<h2 id="reflections">Reflections</h2>
<h3 id="challenges">Challenges</h3>
<p>Among other things, Unity Collab is…frustrating. There is a 3-person cap on the education version of Unity Collab, which was a problem because we had a team of 4! Definitely going with Git next time.</p>
<p>More specifically, engineering a solution for the room-based camera took a fair bit of time and a lot of googling. Some of the methods we found that initially looked promising turned out to have been deprecated, or were only available for 3D box colliders.</p>
<h3 id="next-steps">Next Steps</h3>
<p>This game won the class vote for best midterm game, and we will soon be installing it on the Interactive Arts arcade cabinet in the Decker Library lounge!</p>
<h2 id="source-code">Source Code</h2>
<p>Here’s the source code for this project!</p>
<p><a href="https://github.com/DeckardKane/Hexxing-Hunt/" class="btn btn--primary">Hexxing Hunt source code</a></p>Peter TurnbullA hide-and-seek style splitscreen game with a witch hunt theme!ritualbot2019-09-15T00:00:00+00:002021-05-24T00:00:00+00:00https://deckardkane.github.io/projects/ritualbot<h2 id="overview">Overview</h2>
<p>This project is a collaboration with <a href="https://tanvi.network/">Tanvi Sharma</a> and <a href="https://sophie-shiff.com/">Sophie Shiff</a>, two colleagues and friends of mine here at MICA! This piece makes use of a Raspberry Pi, a thermal printer, some Python code, and Adafruit IO (yep, it returns!) to generate rituals for visitors to take with them. The random nature of the generation can result in rituals that are strange and bizarre, or oddly poetic.</p>
<p>Tanvi and Sophie created a twitter bot that generates “rituals” for the user to follow, or maybe not. This initial bot was created using <a href="http://www.tracery.io/">Tracery</a>, a language generation tool by <a href="https://twitter.com/galaxykate">Kate Compton</a>.</p>
<p>They then approached me to suggest we work together to create a physical version. I happily accepted! Work on the project took about 3 weeks, to prepare for submission to several shows.</p>
<p>This project was selected for the Ritual show here at MICA, and was on display in the Gateway building from September 15th to October 15th, 2019.</p>
<p>We also submitted this piece to the Fuse Factory’s annual exhibition, <a href="http://fuse2019.thefusefactory.org/">TechnoMEME 2</a>, and it was accepted! We sent ritualbot to the Cultural Arts Center in Columbus, Ohio, where it was on display from November 1st to December 7th, 2019.</p>
<h2 id="software">Software</h2>
<p>The physical version of ritualbot is meant to emulate the Tracery-based twitter version as much as possible, but is instead built in Python 3, for ease of interfacing with the thermal printer over the Raspberry Pi’s serial ports, and to talk with Adafruit IO.</p>
<h3 id="adafruit-io-code-part-1">Adafruit IO Code Part 1</h3>
<p>When the Pi boots and this code starts running, it needs to establish a connection to Adafruit IO and collect some data. We make use of 10 different feeds: 8 store large lists of words by type (adjectives, adverbs, movementVerbs, nouns, occurrences, pastVerbs, placeWords, verbs), and the other 2 are for diagnostic/statistical purposes (monitor, counter). Many of these feeds could very likely be combined, but for ease of use and for the sake of time, I kept them separate.</p>
<p>The Pi requests all the data from each word-type feed (each of which is represented as an array), then builds a clean array for each word type using a <code class="language-plaintext highlighter-rouge">for d in array</code>-style loop that keeps only the value of each “post” to the feed (otherwise we’d carry a lot of metadata along with each element).</p>
<h3 id="setup-code">Setup Code</h3>
<p>After collecting our Adafruit IO data, we set up our ritual generation object. The class for ritual generation contains several functions for actually making the strings that will be printed out, which we will discuss in a minute. But first, we establish our connection to the button over the Pi’s serial ports, and set up a button callback function. When the button is pressed, we check to see when the button was last pressed to prevent abuse, and if it’s been long enough, we run our ritual generation code.</p>
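<p>The press-throttling check can be sketched like this (a simplified stand-in for the real callback; the cooldown length is an assumption):</p>

```python
import time

DEBOUNCE_SECONDS = 5  # assumed cooldown between accepted presses

class ButtonGuard:
    """Ignore button presses that arrive too soon after the last accepted one."""
    def __init__(self, cooldown=DEBOUNCE_SECONDS):
        self.cooldown = cooldown
        self.last_press = 0.0

    def should_fire(self, now=None):
        """Return True (and record the press) only if the cooldown has elapsed."""
        now = time.monotonic() if now is None else now
        if now - self.last_press >= self.cooldown:
            self.last_press = now
            return True
        return False
```

<p>The button callback only runs the ritual generation code when <code class="language-plaintext highlighter-rouge">should_fire()</code> returns True, so mashing the button has no effect.</p>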
<h3 id="ritual-generation-code">Ritual Generation Code</h3>
<p>There are two main steps to creating a ritual in the code. First, we select an “origin”. This refers to how the ritual begins and to its overall structure. For example, we have a ritual structure that begins with “wake up in your [random adjective] [random place]”, one that begins with “you are [random adjective] again”, and one that begins with “do you remember the first time you [random past-tense verb] your [random noun]?”. The full structures are as follows:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>wake up in your (adjective) (place). (verb) your (noun) (adverb). (moveVerb) to your (place).
you are (adjective) again. (adverb), (verb) your (adjective) self and (verb) from the (noun). bearing (noun) of (noun) on which a (noun) and a (noun) (verb) (adverb).
do you remember the first time you (pastverb) your (noun)? if only you could (verb) (occurrence), you might (verb) something for the (noun).
</code></pre></div></div>
<p>So first, we pick one of those structures. In the code this is accomplished by picking a random integer, then using a specific structure based on the integer selected. Then the structure generates each sentence by selecting random elements from arrays that store our different word types, like adjectives or movement verbs or nouns, to name a few. So now we have a few strings to represent our ritual. These are then passed on to the printer code…</p>
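<p>The select-and-fill step can be sketched as follows (templates abridged from the structures above; the real code picks a random integer and builds each sentence inline, but the effect is the same):</p>

```python
import random

# Abridged versions of the three origin structures.
STRUCTURES = [
    "wake up in your {adjective} {place}. {verb} your {noun} {adverb}.",
    "you are {adjective} again. {adverb}, {verb} your {adjective2} self.",
    "do you remember the first time you {pastverb} your {noun}?",
]

def generate_ritual(words, rng=random):
    """Pick one origin structure at random and fill its slots with random words."""
    template = rng.choice(STRUCTURES)
    return template.format(
        adjective=rng.choice(words["adjectives"]),
        adjective2=rng.choice(words["adjectives"]),
        place=rng.choice(words["placeWords"]),
        verb=rng.choice(words["verbs"]),
        noun=rng.choice(words["nouns"]),
        adverb=rng.choice(words["adverbs"]),
        pastverb=rng.choice(words["pastVerbs"]),
    )
```

<p>The word arrays passed in are the ones pulled down from Adafruit IO at boot, which is what lets us add new vocabulary without touching the code.</p>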
<h3 id="printer-code">Printer Code</h3>
<p>The printer code has to perform a handful of operations on the line strings passed to it. Each string goes through a function that checks its length. This function exists because there’s no built-in text wrap on the printer; it simply splits words wherever the line’s character limit is exceeded. So our function checks the string against our maximum line length variable, and if it is too long, it finds the space between two words closest to that maximum and splits the string in two there. The function then recursively calls itself with the second half as its argument, repeating the process as many times as necessary.</p>
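<p>In essence (a condensed sketch of the real function; the line width used here is an assumption, since it depends on the printer’s font settings):</p>

```python
MAX_LINE = 32  # assumed printer line width in characters

def wrap_line(text, max_len=MAX_LINE):
    """Split text at the space closest to max_len, recursing on the remainder."""
    if len(text) <= max_len:
        return [text]
    # find the last space at or before the limit
    split_at = text.rfind(" ", 0, max_len + 1)
    if split_at == -1:
        # a single word longer than a whole line; hard-split it
        head, tail = text[:max_len], text[max_len:]
    else:
        head, tail = text[:split_at], text[split_at + 1:]
    return [head] + wrap_line(tail, max_len)
```

<p>The recursion bottoms out as soon as the remaining text fits on one line, so no words are ever lost.</p>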
<p>Once a line string has been made to fit on the thermal paper, it is printed out, and the printer feed is advanced before printing the next line, to create space between each line. Now that we’ve printed out the ritual for the user, we need to send stats to Adafruit IO!</p>
<h3 id="adafruit-io-code-part-2">Adafruit IO Code Part 2</h3>
<p>Now that we’ve printed the ritual, our code sends stats in three parts: it increments the printCount variable by one and sends the updated value to the counter feed; it sends “Someone printed a ritual!” to the monitor feed; and, since we want to record the entire new ritual, it concatenates the line strings and sends the compound string to the monitor feed as well.</p>
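<p>A hedged sketch of that reporting step (the client object and call shape follow the Adafruit IO Python library’s <code class="language-plaintext highlighter-rouge">send_data</code>, but the function itself is illustrative):</p>

```python
def report_stats(aio, lines, print_count):
    """Push the updated print count and the full ritual text to Adafruit IO."""
    print_count += 1
    aio.send_data("counter", print_count)                  # running total
    aio.send_data("monitor", "Someone printed a ritual!")  # event ping
    aio.send_data("monitor", " ".join(lines))              # the whole ritual
    return print_count
```
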
<h2 id="adafruit-io">Adafruit IO</h2>
<p>Here’s a quick look at our Adafruit IO dashboard for ritualbot.</p>
<figure>
<img src="/assets/images/ritualbot/adafruitIO.png" alt="ritualbot dashboard" />
<figcaption>I always love making dashboards!</figcaption>
</figure>
<p>Our dashboard allows us to add new words to the word feeds on the fly, and also monitor ritualbot’s status and see how many rituals have been printed and what they said!</p>
<h2 id="hardware">Hardware</h2>
<p>In terms of hardware, we used the Mini model thermal printer from Adafruit! Working with it took some doing, as the libraries for it are not entirely complete, and have some bugs depending on which version of the printer you receive. Ours had a few issues, primarily printing some seemingly-nonsense characters each time we ran the printer. I soon realized that the characters, x(J, were not nonsense at all, but were actually ASCII instructions that the library was trying, and for some reason failing, to send to the printer. I had to hunt down the instructions that matched those ASCII values, and comment out those lines.</p>
<p>We also made use of a breadboard and a simple push button; we decided to go for the barebones, bare-wires “DIY” look. Connecting the button was fairly simple, but writing button callback code that worked within the framework I had set up also took some time.</p>
<h3 id="the-xj-fix">The x(J Fix</h3>
<p>Alright, I’m adding this section as an update. To my absolute delight, someone found this page while looking for a fix for this issue with the Adafruit thermal printer library. It would seem that the issue persists, and since it’s such an unusual one (that may or may not be limited to specific models of the printer), there has been no patch and very little discussion of a fix. If you have this issue and wound up here, I hope these fixes help you!</p>
<p>I actually misspoke in the previous version of this post, calling the x(J characters “hex code” values, which is not what they are; they are ASCII characters. On a whim, I converted them to decimal values, and got the numbers 120, 40, and 74. I realize how bizarre this sounds, but I was pulling my hair out trying to troubleshoot this, so I went hunting in the Python library file to see if those numbers showed up anywhere. As it turned out, the number 40 appears a handful of times when setting certain values for the printer.</p>
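<p>That conversion is a one-liner in Python:</p>

```python
# Each stray character maps to its decimal ASCII code point.
codes = [ord(c) for c in "x(J"]
print(codes)  # [120, 40, 74]
```
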
<p>I started commenting out lines one at a time, and testing to see if they had any effect on the issue, or if it would just break the printer. After a little trial and error, I ended up commenting out the lines 114 through 119, and lines 133 through 136, and that made the offending characters disappear. <a href="https://github.com/ny3t0/ritual_bot/blob/dev/Adafruit_Thermal.py">Here’s a link to the modified Adafruit_Thermal.py file</a>.</p>
<p>I was a little concerned that this would impair the functionality of the printer, but we sent this piece to two gallery shows and had zero issues. I also figure that if those lines of code were being misinterpreted as ASCII somehow, they weren’t working correctly in the first place, so removing them shouldn’t cost us any functionality.</p>
<p>To those of you using Arduino and the original Adafruit thermal printer library written in C++, I do not have such a direct fix. My advice would be to take a close look at any lines where writeBytes is called. This version appears to be sending ASCII characters directly, and the letters x and J show up a number of times as characters sent using the writeBytes function. Try commenting out those lines and see if anything changes, but your mileage may vary. I hope this helps!</p>
<h2 id="reflections">Reflections</h2>
<h3 id="challenges">Challenges</h3>
<figure style="width: 200px" class="align-right">
<img src="/assets/images/ritualbot/trialAndError.jpg" alt="Trial and Error" />
<figcaption>A <em>lot</em> of trial and error.</figcaption>
</figure>
<p>On the software end, creating the custom text-wrapping function took up a lot of time. Mostly it was me searching for the Python functions that would do what I needed, and then tweaking to make sure we didn’t lose any text and everything looked neat. As far as hardware is concerned, the printer gave us a lot of trouble, though primarily for software reasons, as previously mentioned. Digging through and modifying the thermal printer libraries was a big time sink that involved some trial and error.</p>
<h3 id="next-steps">Next Steps</h3>
<p>There are still a lot of quality-of-life improvements that can be made to the code, primarily being able to batch-add multiple words at once as opposed to just manual input. We’re also hoping to better link the physical version with the twitter bot, so they share a words base without manually bringing words from one to the other.</p>
<h2 id="on-display">On Display</h2>
<p>We had the opportunity to display ritualbot in the Ritual show in the Gateway gallery here at MICA, and as part of TechnoMEME 2, a juried exhibition run by Fuse Factory, an arts and technology initiative and gallery space in Columbus, Ohio. But we’re trying to find a new home for ritualbot, stay tuned! :)</p>
<figure class="third ">
<a href="/assets/images/ritualbot/gatewayShowShadows.jpg">
<img src="/assets/images/ritualbot/gatewayShowShadows.jpg" alt="ritualbot on a pedestal, draped in shadows" />
</a>
<a href="/assets/images/ritualbot/gatewayShowTopView.jpg">
<img src="/assets/images/ritualbot/gatewayShowTopView.jpg" alt="Top down view of ritualbot on its pedestal" />
</a>
<a href="/assets/images/ritualbot/withRitualbot.jpg">
<img src="/assets/images/ritualbot/withRitualbot.jpg" alt="Me and ritualbot!" />
</a>
<figcaption>ritualbot on display as part of the Ritual show
</figcaption>
</figure>
<figure class="half ">
<a href="/assets/images/ritualbot/ritualbotFuseFactory1.jpg">
<img src="/assets/images/ritualbot/ritualbotFuseFactory1.jpg" alt="Gallery view of the Fuse Factory show, with ritualbot in the background" />
</a>
<a href="/assets/images/ritualbot/ritualbotFuseFactory2.jpg">
<img src="/assets/images/ritualbot/ritualbotFuseFactory2.jpg" alt="ritualbot on display" />
</a>
<a href="/assets/images/ritualbot/ritualbotFuseFactory3.jpg">
<img src="/assets/images/ritualbot/ritualbotFuseFactory3.jpg" alt="ritualbot on display" />
</a>
<a href="/assets/images/ritualbot/ritualbotFuseFactory4.jpg">
<img src="/assets/images/ritualbot/ritualbotFuseFactory4.jpg" alt="ritualbot on display" />
</a>
<figcaption>ritualbot in Columbus Ohio for TechnoMEME 2!
</figcaption>
</figure>
<h2 id="source-code">Source Code</h2>
<p>Here’s the source code for this project!</p>
<p><a href="https://github.com/ny3t0/ritual_bot" class="btn btn--primary">ritualbot source code</a></p>Peter TurnbullA nonsensical yet oddly poetic bot that generates random rituals and prints them with a thermal printer.Hey there!2019-04-29T19:34:30+00:002019-04-29T19:34:30+00:00https://deckardkane.github.io/blog/hey-there<p>This is mostly just me testing that this works. Hello!</p>Peter TurnbullThis is mostly just me testing that this works. Hello!MICAVIBE2019-04-29T19:34:30+00:002019-04-29T19:34:30+00:00https://deckardkane.github.io/projects/micavibe<p>This is a large group project I worked on at MICA! Lots of different topics here, including C++, Python, digital fabrication, and Internet of Things, among others.</p>
<h2 id="overview">Overview</h2>
<p>MICAVIBE is a large group undertaking by the MICA Interactive Spaces class of 2019. The goal of the project is to design stations that collect data, and output it locally in a meaningful and interesting way. The stations are placed strategically around the MICA campus in order to get a feel for the “pulse” of the campus. Then we use the collected “pulse” data to generate 75,000 unique book covers for the MICA prospective student books!</p>
<p>The MICA Prospectus update was commissioned by the MICA Communications department, and the firm <a href="https://karlssonwilker.com/">karlssonwilker</a> was selected to take it on. Karlssonwilker’s proposal included the idea of unique book covers with generative artwork, and suggested that a class of MICA students take charge of collecting data around campus, since no one knows the campus better than the students. That’s where we came in!</p>
<p>During the course of this semester-long project, we were also filmed by a crew from Adobe for an upcoming highlight on young creatives. Exciting stuff! Hoping to get that footage back soon.</p>
<p>For collecting all of this data, we have a number of station designs, tracking sound, motion, and mood. Read on to learn more, but note that this is a lengthy document! Making use of the table of contents is highly recommended :)</p>
<h3 id="station-designs">Station Designs</h3>
<h4 id="sound">Sound</h4>
<p><img src="https://deckardkane.github.io/assets/images/micavibe/soundStation.gif" alt="Sound Station GIF" /></p>
<p>The sound stations are made from routed sheets of plexiglass that are sanded to get a more “frosty” appearance. Their brain is a Raspberry Pi, connected to two strands of DotStar LEDs. The lower strand displays sound in the immediate vicinity of the station, while the upper strand is a bit more complex…</p>
<p><img src="https://deckardkane.github.io/assets/images/micavibe/soundDiagram.png" alt="Sound Diagram" /></p>
<p>Each station receives data from the other via Adafruit IO and displays it on its upper strand of LEDs, enabling a form of “communication” between the two. Note that each station sends its data to an individual feed on Adafruit IO. This is for ease of identification, and so as not to exceed the data rate limit. Each station’s “waveform” is shown separately on the MICAVIBE website.</p>
<h4 id="motion">Motion</h4>
<p><img src="https://deckardkane.github.io/assets/images/micavibe/motionStation.gif" alt="Motion Station GIF" /></p>
<p>The motion station is a single sheet of plexiglass (sanded to the perfect frosted appearance) mounted on top of a matrix of DotStar LEDs. The camera at the top divides the current frame into segments, and assigns a “motion score” to each segment, depending on how much motion it has detected. It does this with a basic implementation of frame differencing.</p>
<p><img src="https://deckardkane.github.io/assets/images/micavibe/motionDiagram.png" alt="Motion Diagram" /></p>
<p>The motion station takes the motion values it assigns to segments of the frame it sees, and uses them to light up the LED matrix accordingly. It also sends the motion values to Adafruit IO for storage, and for use on the website, where it shows a web-based version of our LED matrix.</p>
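<p>The per-segment scoring idea boils down to comparing consecutive frames (a stripped-down sketch in plain Python; the real station streams frames from the Pi camera, and the grid size here is an assumption):</p>

```python
# Frame differencing sketch: frames are 2D lists of grayscale pixel values,
# and each grid cell's "motion score" is its mean absolute pixel change.
def motion_scores(prev, curr, grid=(2, 2)):
    """Score each grid cell by mean absolute pixel change between two frames."""
    rows, cols = grid
    h, w = len(prev) // rows, len(prev[0]) // cols
    scores = []
    for r in range(rows):
        for c in range(cols):
            total = sum(
                abs(curr[y][x] - prev[y][x])
                for y in range(r * h, (r + 1) * h)
                for x in range(c * w, (c + 1) * w)
            )
            scores.append(total / (h * w))
    return scores
```

<p>Those per-cell scores are exactly what drives the LED matrix brightness and what gets sent to Adafruit IO; no frame is ever stored.</p>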
<h4 id="mood">Mood</h4>
<p><img src="https://deckardkane.github.io/assets/images/micavibe/moodStation.gif" alt="Mood Station GIF" /></p>
<p>The mood stations are slightly different form factors (one is tall, the other is a petite little box), but both are made from the same routed and sanded plexiglass, and use the same set of buttons to collect inputs. They also include a DotStar strand to reflect the most recent input, and the tall one also includes a receipt printer to give the user a small memento!</p>
<p><img src="https://deckardkane.github.io/assets/images/micavibe/moodDiagram.png" alt="Mood Diagram" /></p>
<p>Unlike the sound stations, the mood stations both send their data to the same feed on Adafruit IO. We can still tell where each message came from, but we decided it was just easier to keep it on one feed, since the data rate limit wasn’t as much of a concern, given that data would only flow when humans pressed buttons on the stations (and we could limit how often messages were sent in case someone spammed a button). The website pulls mood input data from Adafruit IO and creates a bar graph of how often a certain color was sent, and when.</p>
<h3 id="data-flow">Data Flow</h3>
<p>The data collected by the stations is sent to and stored at Adafruit IO, an Internet of Things data storage system created by Adafruit, a popular electronic components source for hobbyists, hackers, and DIY’ers. The stations that run on Raspberry Pis (nearly all of them) also store their data locally in case of connection failure. For specifics on data I/O for each station, view the diagrams above.</p>
<h3 id="privacy-and-data-ethics">Privacy and Data Ethics</h3>
<p>Data ethics was something we discussed at length from the very beginning of this project. Our approach was to collect only data that was very nearly anonymous to begin with, and then abstract it and store it in a way that completely anonymizes any and all participants.</p>
<p>For instance, while our sound stations do have a microphone attached, they only collect local sound levels in the form of a numerical value between 0 and roughly 10,000, which represents how loud it is in the immediate vicinity of the station. So we can get a kind of waveform that represents activity in that area at a certain time, but it is impossible to recreate what may have been said, as no actual recording of any kind is performed.</p>
<p>Similarly, the motion station has a camera that analyzes its surroundings for motion, but does not store any still images or video footage. Our code streams the live footage in frame by frame, analyzes each frame, gives a value to certain portions of the frame depending on movement, and then outputs those values in a long string, and lights up the LED “mirror” to give an abstracted representation of what is happening in front of it.</p>
<h2 id="personal-contributions">Personal Contributions</h2>
<p>Our project also includes several heart-shaped indicator stations that track data input and output from the other stations. These are something that I personally put a lot of time into, so here’s a more in-depth breakdown of the design of them!</p>
<h3 id="heart-design">Heart Design</h3>
<div class="sketchfab-embed-wrapper"><iframe width="640" height="480" src="https://sketchfab.com/models/8bf5be58f430419ea8f2edbf582624f8/embed" frameborder="0" allow="autoplay; fullscreen; vr" mozallowfullscreen="true" webkitallowfullscreen="true"></iframe>
<p style="font-size: 13px; font-weight: normal; margin: 5px; color: #4A4A4A;">
<a href="https://sketchfab.com/3d-models/micavibe-heart-station-8bf5be58f430419ea8f2edbf582624f8?utm_medium=embed&utm_source=website&utm_campaign=share-popup" target="_blank" style="font-weight: bold; color: #1CAAD9;">MICAVIBE: Heart Station</a>
by <a href="https://sketchfab.com/DeckardKane?utm_medium=embed&utm_source=website&utm_campaign=share-popup" target="_blank" style="font-weight: bold; color: #1CAAD9;">DeckardKane</a>
on <a href="https://sketchfab.com?utm_medium=embed&utm_source=website&utm_campaign=share-popup" target="_blank" style="font-weight: bold; color: #1CAAD9;">Sketchfab</a>
</p>
</div>
<figure style="width: 150px" class="align-right">
<img src="https://deckardkane.github.io/assets/images/micavibe/heart.gif" alt="" />
<figcaption>A heart in action!</figcaption>
</figure>
<p>This is the design I made for the heart stations. While it was <em>initially</em> designed to be laser cut in slices, it was eventually 3D printed. The cavity in the middle is to hold the Feather Huzzah, and the slot around the perimeter is perfectly sized for a DotStar LED strip. We ended up printing 5 of these.</p>
<p>The DotStar strip has exactly 16 “pixels”, and the code splits it into 4 segments that represent different types of data.</p>
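<p>Splitting a 16-pixel strip into 4 equal segments is simple index arithmetic (a hedged sketch; the real firmware does this in Arduino C++):</p>

```python
NUM_PIXELS = 16
NUM_SEGMENTS = 4
SEG_LEN = NUM_PIXELS // NUM_SEGMENTS  # 4 pixels per data type

def segment_range(segment):
    """Return the pixel indices belonging to one of the 4 data segments."""
    start = segment * SEG_LEN
    return range(start, start + SEG_LEN)
```
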
<h3 id="code">Code</h3>
<p>I wrote code for several parts of the different stations, both in Python and Arduino C++. I was also one of the team members responsible for deploying and maintaining code on our different devices running the stations, in our case, Raspberry Pis and Adafruit Feather Huzzahs (ESP8266)! Here’s a breakdown of the code for the hearts.</p>
<p>The code for the hearts was written in Arduino C++, as the hearts used the Feather Huzzah (the ESP8266 board) for a brain. We made use of the <a href="https://arduino-esp8266.readthedocs.io/en/latest/esp8266wifi/readme.html">ESP8266Wifi library</a>, the <a href="https://github.com/adafruit/Adafruit_DotStar">Adafruit DotStar library</a>, and a slightly-modified version of <a href="https://github.com/knolleary/pubsubclient">Nick O’Leary’s PubSubClient</a>.</p>
<p>We ran into an issue early in development of the hearts code: many of the libraries for communicating with Adafruit IO have a message limit of about 100 characters, and they will silently ignore any message that exceeds it. That meant we were getting no response from the motion or sound stations (both of which regularly submit messages of about 140 characters). While we were able to modify the MQTT client library a little to let us respond to messages on the sound and motion feeds, the hearts currently just pulse a segment of the DotStar strip when one of those messages is received.</p>
<p>Messages from the mood feed, however, are parsed to pick out which color was selected. The buttons on the station each correspond to a number 0 through 7, and that number is what gets sent to the feed when a button is pressed. A simple series of checks determines which number was received (in effect, which button was pressed), and the color of the mood segment of the heart is set accordingly.</p>
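<p>In Python terms, the parsing looks roughly like this (the actual firmware does it in Arduino C++, and the palette below is purely illustrative, not the heart’s real colors):</p>

```python
# Illustrative palette only; the real colors live in the heart firmware.
MOOD_COLORS = {
    0: (255, 0, 0),     # red
    1: (255, 128, 0),   # orange
    2: (255, 255, 0),   # yellow
    3: (0, 255, 0),     # green
    4: (0, 255, 255),   # cyan
    5: (0, 0, 255),     # blue
    6: (128, 0, 255),   # purple
    7: (255, 0, 255),   # magenta
}

def mood_color(message):
    """Map a mood-feed message ("0" through "7") to an RGB color, or None."""
    try:
        button = int(message)
    except ValueError:
        return None  # not a button number; ignore the message
    return MOOD_COLORS.get(button)
```
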
<h2 id="reflections">Reflections</h2>
<h3 id="challenges">Challenges</h3>
<p>This project was definitely complicated. It had many moving parts, multiple distinct interested parties overseeing the work, and sometimes a lack of clarity about who was responsible for what. While this is to be somewhat expected with such a large project, laying out in writing who is supposed to deliver what would be a helpful step to take next time.</p>
<p>I think while it was a challenge to work as a team with other students who all come from different backgrounds and have very different skillsets, that’s very much real life, and it was a fantastic experience to coordinate working together to play to our strengths.</p>
<h3 id="next-steps">Next Steps</h3>
<p>What’s next? While the project is currently installed on the second floor of the Dolphin Center, we will very soon be installing it for the Vigil, an all-night experimental sound art festival at MICA.</p>
<p>The generative book cover code is already underway, and will likely be completed this summer.</p>
<p>For me personally, I’m hoping to take the experience I’ve gained in Arduino, Python, C++, web dev, and digital design/fabrication and make even more cool internet-connected gadgets and gizmos!</p>
<h2 id="source-code">Source Code</h2>
<p>Want to see our source code? All of our code is open-source and free to use. Check it all out below!</p>
<p><a href="https://github.com/micais2019/interactive-spaces" class="btn btn--primary">Station source code</a> <a href="https://github.com/micais2019/MICAVIBE" class="btn btn--primary">Website source code</a> <a href="http://www.micavibe.com/" class="btn btn--primary">Website</a></p>Peter TurnbullA large group project I worked on at MICA! Lots of different topics here, including C++, Python, digital fabrication, and Internet of Things, among others.Just Be Still With Me2018-12-06T00:00:00+00:002018-12-06T00:00:00+00:00https://deckardkane.github.io/projects/just-be-still-with-me<p>Just Be Still With Me is an experimental augmented reality interactive fiction game, made with Twine and a bit of JavaScript hackery!</p>
<p>This page goes into the creative and technical processes behind this game. If you’re in the Baltimore area, and haven’t already, check out the game first before proceeding!</p>
<p><a href="https://deckardkane.github.io/JustBeStillWithMe/" class="btn btn--primary btn--x-large align-center">Push to start</a></p>
<h2 id="overview">Overview</h2>
<p>Just Be Still With Me was a narrative I wanted to tell by having people move through spaces that recalled certain memories for me. I also wanted to experiment with <a href="https://twinery.org/">Twine</a>, the interactive fiction software (that also happens to be free, so I was very excited about it!).</p>
<p>I am very proud to say that this project was selected for the MICA Game Lab’s 2019 Fall Arcade exhibition, and will be on display from September 27th to October 7th, 2019, in the Dolphin Building on the MICA campus!</p>
<h2 id="technical">Technical</h2>
<h3 id="software">Software</h3>
<p>Initially, I wasn’t sure how I was going to be able to combine my goals of working with Twine and creating a location-dependent sort of scavenger hunt thing, as my JavaScript experience was…limited. I was fortunate enough to stumble across <a href="https://github.com/shawngraham/ar-archaeology/blob/master/workshop%20materials/Hacking%20Twine%20to%20make%20a%20location-based%20game.md">this repository</a> by Shawn Graham, which had a basic framework set up for using geolocation inside Twine with JavaScript! As long as I had a place to host it (hello, GitHub Pages!), I was set. Without Shawn’s work this project would not have been possible.</p>
<h3 id="hardware">Hardware</h3>
<p>I toyed with the idea of dedicated devices or controllers for this project (an initial draft had an orb that vibrated/glowed near clue locations!), but I decided I wanted it to be accessible to anyone with an internet-connected device. And maybe a glowing orb was a little too D&D for this particular project.</p>
<h2 id="narrative">Narrative</h2>
<p>The narrative of Just Be Still With Me is a personal one for me. I wanted to evoke the feeling of trying to piece together what had happened, and not necessarily getting a conclusive, satisfactory, or even emotionally healthy answer.</p>
<h3 id="locations">Locations</h3>
<p>I chose a few locations in MICA’s Bolton Hill neighborhood in Baltimore. Specifically, a bench in Cohen Plaza, an alleyway behind the Mount Royal Tavern, the roundabout near the statue in front of Penn Station, and a bench on Park Avenue.</p>
<h3 id="puzzles">Puzzles</h3>
<p>Initially, I wanted to go for a more complex approach, and use an augmented-reality app to make players search for clues. I realized that this was both out of scope and unnecessary for this particular project.</p>
<p>The “puzzles” ended up being hand-made stickers carefully hidden in the aforementioned locations. My initial concern was that they would get removed/obscured (which was another reason I thought about AR previously), but I’m happy to report that 6 months later, all of the stickers are still intact!</p>
<h2 id="reflections">Reflections</h2>
<p>While this is a project that I’m extremely proud of, there is definitely room to improve. I think making it a little more discoverable on its own is something I’d want to add. Something like stickers that link to the game online placed around town.</p>Peter TurnbullJust Be Still With Me is an experimental augmented reality interactive fiction game, made with Twine and a bit of JavaScript hackery!