Tangible Design Jam

Tangible and embodied technologies hold great promise for supporting exciting new forms of personal activity and social interaction. The computer is beginning to disappear into its physical and social surroundings. Examples of such technologies include physical computing (e.g., Arduino microcontrollers), geo-locational services (e.g., Foursquare or Google Glass), personal health technologies (e.g., Fitbit or Jawbone), and computer vision (e.g., Microsoft Kinect). People are connecting with each other and with their world in completely novel ways. The “maker movement” reveals a surge of interest in digital-physical interfaces that turn the physical world itself into a computer interface, instrumenting homes, clothing, and a seemingly infinite array of fabricated artifacts.

At Encore Lab, our research group has become increasingly aware of, and interested in, technologies that support tangible and embodied interactions. Although we share an intuition that such technologies will become a fixture within our learning designs, non-trivial applications are still hard to find. We want to develop a better vision for the potential application of these media – what forms of learning they enable, and what kinds of design and development will be required. What will we be using 3D printers for (besides printing random trinkets to impress our friends)? What will the context be for learning? What learning activities, and what media, will be involved? How might technology applications like computer vision, simulations, and physical interactions become part of our instructional designs?

In November 2013 we held a “design jam” where we invited colleagues interested in tangible and embodied technologies to participate in a discussion about applications that go beyond “hello world” interactions. After a general discussion, we broke into groups that worked on three projects: a tangible idea garden, an interaction web, and a light wall. Groups made progress on their ideas at our first jam and are continuing to work on designs as we gear up for our next “jam”. We are excited to see our ideas become a reality. We intend to use a rapid development approach, built on open source frameworks like Processing and D3, together with hardware such as Arduino, Raspberry Pi, and even Makey Makey.

WallCology

The Encore Lab is using the Embedded Phenomena (EP) and Knowledge Community and Inquiry (KCI) frameworks to investigate technologies designed to foster collaborative knowledge construction in elementary science classrooms. In EP environments, a media-rich representation of a scientific phenomenon is mapped onto the physical space of a classroom for an extended period of time.

Students participate in a whole-class investigation of the simulated phenomenon. Our most recent investigation, a WallCology unit, was nine weeks long! Students observed the organisms contained within four habitats, visible through a computer monitor or “portal” affixed to each wall of the classroom. Students entertained the notion that “bugs” exist behind the drywall of their classroom walls. The simulation depicts two vegetation species and several organisms in various stages of their life cycles (egg, larva, pupa, and adult). EP gives students an opportunity to observe an approximation of real scientific phenomena in their own classrooms (extended field work is impractical!). In the simulation, as in life, things happen when they happen (i.e., an invasive species might enter the ecosystem overnight, or even at recess!).

Students at a WallCology portal, examining the digital ecosystem of Habitat 3

The WallCology unit was co-designed with teachers from Toronto and researchers and technologists from U of T and UIC. In addition, Tom Moher (project co-PI) enlisted the help of a subject matter expert (evolutionary ecologist Joel Brown) to help us make sure that the digital ecosystem was consistent with real-life biological principles. We targeted specific curriculum goals in the life sciences, including population ecologies, food web relationships, and life cycles. Our team developed a suite of “apps” that students used to collect observations and to share their ideas and theories concerning the “bugs” they were investigating. Our software collected these data and re-presented them to students in aggregate form. One visualization presented “tallies” of pair-wise relationships (e.g., “who eats who”). Another visualization showed population trends over time, based on students’ count inputs by species and date. The knowledge community used these representations to construct understandings of the relationships among the organisms. The image below shows a sortable table of observations that students made at each of the four habitats. Students could view their observations and those of their peers in real time on their tablets. The same data were available on the classroom’s interactive whiteboard.
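
As a concrete illustration of what these aggregates involve, the sketch below rolls a handful of individual observation records up into pair-wise “who eats who” tallies and per-species population counts. The record shape and field names are assumptions made for this example only – they are not the actual WallCology data model – and the snippet is plain JavaScript, matching the node.js tooling mentioned later in this post.

    // Minimal sketch (not the actual WallCology code): rolling individual
    // observation records up into the two aggregates described above.
    // The record shape and field names are assumptions for illustration.
    const observations = [
      { habitat: 3, species: 'graybug', count: 12, eats: 'lichen',  date: '2013-04-02' },
      { habitat: 3, species: 'graybug', count: 9,  eats: 'lichen',  date: '2013-04-09' },
      { habitat: 1, species: 'redbug',  count: 4,  eats: 'graybug', date: '2013-04-02' },
    ];

    // Pair-wise "who eats who" tallies, e.g. { 'graybug->lichen': 2, ... }
    const foodWebTallies = {};
    for (const obs of observations) {
      const pair = `${obs.species}->${obs.eats}`;
      foodWebTallies[pair] = (foodWebTallies[pair] || 0) + 1;
    }

    // Population trend: total reported count per species per date
    const populationTrend = {};
    for (const obs of observations) {
      const key = `${obs.species}|${obs.date}`;
      populationTrend[key] = (populationTrend[key] || 0) + obs.count;
    }

    console.log(foodWebTallies, populationTrend);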

The teacher references one of the aggregate screens (descriptions of the morphology and behaviour of organisms) during a whole-class discussion.

This work is part of a collaboration between two programs of research, known collectively as EPIC (Embedded Phenomena for Knowledge Communities). The EP framework was developed by Tom Moher and the Learning Technologies Group at UIC. The Knowledge Community and Inquiry model is being investigated by Jim Slotta and his students at the Encore Lab at OISE/UT. In EPIC classrooms, students work together as active members of a knowledge community, sharing information, reasoning together, and solving problems. Their work is supported by networked technologies that have been carefully designed to scaffold investigations of curriculum topics. Over time, a substantial knowledge base is created from these multiple, structured observations, and the knowledge community is able to draw from it during their ongoing inquiry into the EP.

Rock, Paper, Awesome!

Rock, Paper, Awesome (RPA) was Encore Lab’s initial foray into developing the means for tangible and embodied interactions that would connect to our S3 technology framework. The goal for RPA was simple: individual labs could create their own unique tangible or embodied interactions through which they played rock, paper, scissors with other labs physically distributed around the world.

The Theory

We chose rock, paper, scissors as our test-bed because it provided us with not only a well-defined set of semantics (i.e., win, lose, draw, player ready, player gone), but also a very loose coupling in how we enacted those semantics. For instance, how a player chose “rock” in one lab could be entirely different from how a player chose it in another (e.g., standing in a particular spot in the room, versus pushing a button). This allowed us to think deeply about what it meant to convey the same message through various tangible and embodied interactions, and to begin building an understanding of how these different interactions affected the meaning making of the participants. In essence, we built a “reverse Tower of Babel” where multiple languages could all be interpreted through S3, allowing recipients at both ends to effectively communicate through their own designs.
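
The sketch below illustrates this decoupling under assumed names: a small, shared vocabulary of game events, with two hypothetical labs producing the same “choice” message from entirely different physical interactions. Neither the event names nor the handler functions are the actual S3/RPA message set; they are stand-ins for the idea.

    // Sketch only: a shared vocabulary of game events, decoupled from how any
    // particular lab physically produces them. The names below are illustrative,
    // not the actual S3/RPA message set.
    const GameEvent = {
      PLAYER_READY: 'player_ready',
      PLAYER_GONE: 'player_gone',
      CHOICE: 'choice',   // payload: 'rock' | 'paper' | 'scissors'
      RESULT: 'result',   // payload: 'win' | 'lose' | 'draw'
    };

    // Lab A: choosing "rock" by standing in a zone watched by a proximity sensor
    function onProximityZone(zone) {
      if (zone === 'rock-corner') return { type: GameEvent.CHOICE, payload: 'rock' };
    }

    // Lab B: choosing "rock" by pressing a physical button
    function onButtonPress(buttonId) {
      if (buttonId === 'rock-button') return { type: GameEvent.CHOICE, payload: 'rock' };
    }

    // The receiving lab sees the same message either way:
    console.log(onProximityZone('rock-corner'), onButtonPress('rock-button'));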

Rock Paper Awesome

In this way, RPA is more than just a game of rock, paper, scissors – it is an avenue for us to begin investigating novel ways for users to interact with the world, and for connecting these investigations within a broader knowledge community. We aim to not only connect these communities, but also to add a layer of user-contributed design to their interactions, where community members engage in creative fabrication and exchange of tangible, interactive media that reflect their ideas, workflow or presence, bridging the distances and connecting the community.

Three critical questions guided our development of RPA and this component of S3 in general:

  • How can we bring distributed communities together through tangible and embodied interactions?
  • What are the possible roles for tangible and physical computing, and ambient or interactive media that are deeply connected to the semantics, workflow, physical presence, ideas, activities, and interests of the distributed communities?
  • How does the temporality of the interactions (synchronous versus asynchronous) determine the selection of appropriate kinds of interactions and representations?

We are currently sending out kits, first versions of the code, and design documents to labs at the Learning Technologies Group at the University of Illinois at Chicago, and Intermedia at the University of Oslo. We are excited to see how they will develop and contribute new interactive designs that reflect their own representations of space and meaning within the game.

The Technology

The physical interactions and ambient feedback are handled by an Arduino microcontroller. The Arduino allows users to develop a wide array of inputs (e.g., proximity, light, and sound sensors, buttons, and levers) and outputs (e.g., sound, light, movement). Using the S3 framework, RPA facilitates different game “events” (e.g., joining the game, choosing Rock) by sending messages over an XMPP chatroom (conference). We originally attempted to implement these messages over the XMPP server using only the Arduino; however, given the relatively limited amount of RAM on the Arduino board (2KB), this turned out to be overly restrictive and we started looking at other solutions.

As a solution, we defined a simplified set of event messages (i.e., single text characters) that were sent over the Arduino’s serial port to a connected computer. For testing purposes we used a laptop. However, in permanent installations we envision RPA having a more compact and flexible setup. To achieve this, we connected the Arduino board to a Raspberry Pi. The benefit of the Raspberry Pi is that it is small and cheap, allowing us to dedicate a Pi to each game installation and to keep the “brains” of RPA as unobtrusive as possible.

To connect the various RPA installations we use node.js as an intermediary between the XMPP chatroom and the Raspberry Pi. Messages posted to the XMPP chatroom are picked up by the node.js server and sent over the serial port to the Arduino, which then executes the user-designed action, such as turning on a light or playing a chime. Conversely, any event triggered on the Arduino (e.g., a button press) is sent over the serial port to node.js and translated into an XMPP message.
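
A minimal sketch of such a bridge is shown below. It assumes the serialport npm package (v10+ constructor style), a hypothetical local module (./xmpp-room) standing in for whichever XMPP client library joins and posts to the conference, and an illustrative one-character code table – none of this is the lab’s actual code, which is on GitHub (see below).

    // Sketch of the node.js bridge described above, not the lab's actual code.
    // Assumptions: serialport npm package (v10+ constructor style), a
    // hypothetical local module (./xmpp-room) wrapping the XMPP client, and
    // illustrative one-character event codes.
    const { SerialPort } = require('serialport');
    const xmpp = require('./xmpp-room'); // assumed wrapper exposing post() and onMessage()

    const port = new SerialPort({ path: '/dev/ttyACM0', baudRate: 9600 });

    // Arduino -> XMPP: translate one-character event codes into chat messages
    const serialToEvent = { J: 'player_ready', R: 'choice:rock', P: 'choice:paper', S: 'choice:scissors' };
    port.on('data', (buf) => {
      const code = buf.toString().trim(); // real code would buffer partial reads
      if (serialToEvent[code]) xmpp.post(serialToEvent[code]);
    });

    // XMPP -> Arduino: translate chat messages back into one-character codes
    const eventToSerial = { 'result:win': 'W', 'result:lose': 'L', 'result:draw': 'D' };
    xmpp.onMessage((msg) => {
      if (eventToSerial[msg]) port.write(eventToSerial[msg]);
    });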

Sample Arduino code for RPA and the node.js setup code can all be freely downloaded, tinkered with, and customized from GitHub.

The Run

We set up two “stations” at OISE, one on the third floor and one on the eleventh floor. Players challenged each other to a game of rock, paper, scissors (see the video below).

Each location had different tangible, audible, and visual inputs and outputs, providing players with unique multi-modal experiences that conveyed the same message. At the third-floor location, a “servo motor” swung a dial to let the player know a challenger was waiting to play. At the eleventh-floor location, an LED flashed to convey the challenge. We have tested other designs (not shown here) that used proximity sensors to detect where players were within a room, using their location to trigger an event (such as choosing rock). In another instance, a light sensor conveyed one player’s availability to other players (in remote locations) when the lights in the original player’s room were on.

Going Live! RPA at TEI 2013

We submitted RPA to TEI 2013’s student design challenge. The conference was held in Barcelona, Spain, and provided an ideal opportunity for us to try out RPA (and S3) in a live setting with users who had no experience with it. We had stations running at the conference site and at our lab in Toronto, allowing us to observe a wide range of interactions and gain feedback from participants. We also added a new layer to RPA which connected a real-time visualization of win/lose/draw results to the game – although this visualization duplicated some of the functionality of the tangible RPA elements, it represented a significant step in merging the tangible elements of S3 with a key element of the existing architecture.

PLACE


PLACE (Physics Learning Across Contexts and Environments) is a 13-week high school physics curriculum in which students capture examples of physics in the world around them (through pictures, videos, or open narratives), which they then explain, tag, and upload to a shared social space. Within this knowledge community, peers are free to respond, debate, and vote on the ideas presented within the examples, working toward consensus about the phenomena being shown and empowering students to drive their own learning and sense making. We also developed a visualization of student work that represented student ideas as a complex interconnected web of social and semantic relations, allowing students to filter the information to match their own interests and learning needs, and a teacher portal for authoring tasks (such as multiple choice homework) and for reviewing and assessing individual student work. Driven by the KCI model, the goal of PLACE.web was to create an environment in which the class’s collective knowledge base was ubiquitously accessible – allowing students to engage with the ideas of their peers spontaneously and across multiple contexts (at home, on the street, in class, in a smart classroom).

To leverage this student-contributed content towards productive opportunities for learning, we developed several micro-scripts that focused student interactions and facilitated collaborative knowledge construction (a sketch of how one such interaction might be recorded follows the list):

  • Develop-Connect-Explain: A student captures an example of physics in the real world (Develop), tags the example with principles (Connect), and provides a rationale for why the tag applies to the example (Explain).
  • Read-Vote-Connect-Critique: A student reads a peer’s published artifact (Read), votes on the tags (Vote), adds any new tags they feel apply (Connect), and adds their own critique to the collective knowledge artifact (Critique).
  • Revisit-Revise-Vote: A student revisits one of their earlier contributions (Revisit), revises their own thinking and adds their new understanding to the knowledge base (Revise), and votes on ideas and principles that helped in generating their new understanding (Vote).
  • Group-Collective-Negotiate-Develop-Explain: Students are grouped based on their “principle expertise” during the year (Group), browse the visualization to find artifacts in the knowledge base that match their expertise (Collective), negotiate which examples will inform their design of a challenge problem (Negotiate), create the problem (Develop), and finally explain how their principles are reflected in the problem (Explain).
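
The sketch below shows how a single Read-Vote-Connect-Critique pass might be recorded against a shared artifact. The field names and structure are assumptions for illustration, not the actual PLACE.web schema.

    // Sketch only: how one Read-Vote-Connect-Critique pass might be recorded
    // against a shared artifact. Field names and structure are assumptions,
    // not the actual PLACE.web schema.
    const artifact = {
      id: 42,
      author: 'student-07',
      media: 'skateboard-jump.jpg',
      tags: [{ principle: 'friction', votes: 3 }],
      discussion: [],
    };

    function readVoteConnectCritique(art, reader, voteFor, newTag, critique) {
      const tag = art.tags.find((t) => t.principle === voteFor);
      if (tag) tag.votes += 1;                                     // Vote
      if (newTag) art.tags.push({ principle: newTag, votes: 1 });  // Connect
      art.discussion.push({ author: reader, text: critique });     // Critique
      return art;
    }

    readVoteConnectCritique(artifact, 'student-12', 'friction',
      'projectile motion', 'The launch angle matters here too.');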

Over the twelve weeks, 179 student examples were created, with 635 discussion notes contributed, 1066 tags attached, and 2641 votes cast.

Culminating Smart Classroom Activity

The curriculum culminated in a one-week activity where students solved ill-structured physics problems based on excerpts from Hollywood films. The script for this activity consisted of three phases: (1) at-home solving and tagging of physics problems; (2) in-class sorting and consensus; and (3) the smart classroom activity.

PLACE Culminating Script

In the smart classroom, students were heavily scripted and scaffolded as they solved a series of ill-structured physics problems using Hollywood movie clips as the domain for their investigations (e.g., could Iron Man survive the fall shown in a clip?). Four videos were presented to the students, with the room physically mapped into quadrants (one for each video). The activity was broken up into four steps: (1) Principle Tagging; (2) Principle Negotiation and Problem Assignment; (3) Equation Assignment, and Assumption and Variable Development; and (4) Solving and Recording.

At the beginning of Step 1, each student was given his or her own Android tablet, which displayed the same subset of principles assigned from the homework activity. Students freely chose a video location in the room and watched a Hollywood video clip, “flinging” (physically “swiping” from the tablet) onto the video wall any of their assigned principles that they felt were illustrated or embodied in that clip. They all did this four times, thus adding their tags to all four videos.

In Step 2, students were assigned to one video (a role for the S3 agents, using their tagging activity as a basis for sorting), and tasked with coming to a consensus (i.e., a “consensus script”) concerning all the tags that had been flung onto their video in Step 1 – using the large format displays. Each group was then given a set of problems, drawn from the pool of problems that were tagged during the in-class activity (selected by an S3 agent, according to the tags that group had settled on – i.e., this was only “knowable” to the agents in real-time). The group’s task was to select from that set of problems any that might “help in solving the video clip problem.”

In Step 3, students were again sorted and tasked with collaboratively selecting equations (connected to the problems chosen in Step 2) for approaching and solving the problem, and with developing a set of assumptions and variables to “fill in the gaps”. Finally, in Step 4, students actually “solved” the problem, using the scaffolds developed by the groups who had worked on their video in the preceding steps, and recorded their answer using a tablet’s video camera – the recording was then uploaded.

Orchestrating Real-Time Enactment With S3

Several key features (part of the larger S3 framework) were developed to support the orchestration of the live smart classroom activity – each is described below, including its specific implementation within the PLACE.web culminating activity:

Ambient Feedback: A large Smartboard screen at the front of the room (i.e., not one of the four Hollywood video stations) provided a persistent, passive representation of the state of individual, small group, and whole-class progression through each step of the smart classroom activity. This display showed and dynamically updated all student location assignments within the room, and tracked the timing of each activity using three color codes (a large color band around the whole board that reflected how much time was remaining): “green” (plenty of time remaining), “yellow” (try to finish up soon), and “red” (you should be finished now).
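
The actual timing thresholds are not specified above, but a minimal sketch of the band logic, with assumed cut-off fractions, might look like this:

    // Sketch of the timing band described above. The actual thresholds were not
    // specified, so the cut-off fractions below are assumptions for illustration.
    function timeBandColor(elapsedMs, plannedMs) {
      const remaining = 1 - elapsedMs / plannedMs;
      if (remaining > 0.33) return 'green'; // plenty of time remaining
      if (remaining > 0) return 'yellow';   // try to finish up soon
      return 'red';                         // you should be finished now
    }

    console.log(timeBandColor(4 * 60000, 10 * 60000));  // 'green'
    console.log(timeBandColor(11 * 60000, 10 * 60000)); // 'red'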

Scaffolded Inquiry Tools and Materials: For students to engage effectively in the activity and with peers, they needed specific scaffolding tools and interfaces through which to interact, build consensus, and generate ideas as a knowledge community (i.e., personal tablets, interactive whiteboards). Two main tools were provided to students, depending on their place in the script: individual tablets connected to their S3 user accounts, and four large format interactive displays that situated the context (i.e., the Hollywood video), provided location-specific aggregates of student work, and served as the primary interface for collaborative negotiation.

Real-Time Data Mining and Intelligent Agency: To orchestrate the complex flow of materials and students within the room, a set of intelligent agents was developed. The agents, programmed as active software routines, responded to emergent patterns in the data, making orchestration decisions “on-the-fly” and providing teachers and students with timely information. Three agents in particular were developed: (1) the Sorting Agent sorted students into groups and assigned room locations, based on emergent patterns during enactment; (2) the Consensus Agent monitored groups, requiring consensus to be achieved among members before progression to the next step; and (3) the Bucket Agent coordinated the distribution of materials to ensure all members of a group received an equal but unique set of materials (i.e., problems and equations in Steps 2 & 3).
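
As an illustration of the “bucket” idea only – not the actual S3 agent code – the sketch below deals a pool of materials out so that each group member receives an equal but unique share:

    // Sketch of the "bucket" idea only, not the actual S3 agent implementation:
    // deal a pool of materials out so that every member of a group receives an
    // equal but unique share.
    function dealBucket(materials, members) {
      const share = new Map(members.map((m) => [m, []]));
      materials.forEach((item, i) => {
        share.get(members[i % members.length]).push(item);
      });
      return share;
    }

    const problems = ['p1', 'p2', 'p3', 'p4', 'p5', 'p6'];
    console.log(dealBucket(problems, ['ana', 'ben', 'cam']));
    // => ana: [p1, p4], ben: [p2, p5], cam: [p3, p6]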

Locational and Physical Dependencies: Specific inquiry objects and materials could be mapped to the physical space itself (i.e., where different locations could have context specific materials, simulations, or interactions), allowing for unique but interconnected interactions within the smart classroom. Students “logged into” one of four spaces in our room (one for each video), and their actions, such as “flinging” a tag, appeared on that location’s collaborative display. Students’ location within the room also influenced the materials that were sent to their tablet. In Step 2, students were provided with physics problems based on the tags that had been assigned to their video wall, and in Step 3 they were provided with equations based on their consensus about problems in Step 2.

Teacher Orchestration: The teacher plays a vital role in the enactment of such a complex curriculum. Thus, it is critical to provide him or her with timely information and tools with which to understand the state of the class and properly control the progression of the script. We provided the teacher with an “orchestration tablet” that updated him in real time on individual groups’ progress within each activity. Using this tablet, the teacher also controlled when students were re-sorted – i.e., when the script moved on to the next step. During Step 3, the teacher was alerted on his tablet whenever the students in a group had submitted their work (variables and assumptions).

Hunger Games

Hunger Games is a learning environment designed to support upper elementary learners’ construction of understandings of animal foraging behaviors. It is an educational research project led by Tom Moher’s research group at the University of Illinois at Chicago (UIC). In Hunger Games, learners enact animal foraging within the context of a sequence of increasingly complex simulated scenarios involving varying conditions of competition, resource depletion, sociality, and predation. The instructional unit is designed to develop understandings of the factors that foraging animals use to guide their decisions in selecting food patches, as well as the ways in which populations of animals distribute themselves among available resources (e.g., resource matching and ideal free distributions). The record of students’ (individual and aggregate) behaviors during enactment of the foraging simulations serves as the object of inquiry for reflective activities.


In Hunger Games, the classroom is “transformed” into a natural habitat in which students embody the role of squirrels foraging for food. This was inspired by a longstanding practitioner tradition of using embodied activities with physical materials (e.g., chickpeas, M&Ms) to introduce foraging concepts, and the feeling that an embodied approach had several potential advantages over a distributed screen-based approach (Moher et al., submitted).

In Hunger Games, each student in the classroom is provided with a small stuffed animal (“squirrel”) that serves as his or her “avatar” during the activity. Students forage by physically moving their squirrels among a set of “food patches” of varying quality distributed around the classroom, gaining energy as a function of the elapsed time in the patch, patch quality, and competition within the patch (i.e., the presence of other squirrels). Avatars may also fall victim to predation (signaled on smaller displays adjacent to each food patch). Avatars who are “caught” are considered “injured,” and given a short “time out” period in which their squirrel cannot gain calories even if located in a patch; this allows for the introduction of concepts of predation without forcing children out of the game prematurely.
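
The exact caloric-gain function is not given here, but one plausible form consistent with the description above (patch quality shared among the squirrels currently in the patch, accumulated over time, and suspended while “injured”) might look like the following sketch. The function name and numbers are assumptions for illustration:

    // One plausible form of the caloric-gain rule described above: patch quality
    // is shared among the squirrels currently in the patch and accrues over the
    // time spent there, and no calories accrue while "injured". The actual
    // Hunger Games function and numbers are not published here; this is an
    // assumption for illustration.
    function caloriesGained(secondsInPatch, patchQuality, squirrelsInPatch, injured) {
      if (injured) return 0; // "time out" after being caught by a predator
      const ratePerSquirrel = patchQuality / squirrelsInPatch;
      return ratePerSquirrel * secondsInPatch;
    }

    console.log(caloriesGained(30, 4, 2, false)); // 60
    console.log(caloriesGained(30, 4, 2, true));  // 0 while injured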

The Move Tracker (left) and the Harvest Graph (right)

Working closely with the UIC research lab, we developed a community knowledge-building application to support individual student, small group, and whole-class reflection and discourse. Within the application, students are provided with representations of both individual and aggregate data reflecting their performance during foraging bouts. At the aggregate level, students have access to an interactive version of the Harvest Graph (see image on right) that allows them to sort the distribution of individual caloric gains according to various factors, including patch quality, competition strategies, and frequency of moves. At the individual level, students are provided with a “Move Tracker” (see image on left) that enables them to replay the step-by-step patch moves that they made during game play; this tool is used to support reflection on the effectiveness of their moves and to prepare students for subsequent foraging bouts. Finally, the application provides a threaded discussion tool to support the development of community knowledge, guided by a series of embedded inquiry prompts.
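
As a small illustration of the Harvest Graph’s sorting behaviour, the sketch below reorders a toy set of per-student foraging records by one of the factors named above. The record shape is an assumption, not the actual data model:

    // Sketch only: sorting a toy set of per-student foraging records by one of
    // the factors named above. The record shape is an assumption, not the actual
    // Harvest Graph data model.
    const bout = [
      { student: 's1', calories: 120, patchQuality: 3.5, moves: 2 },
      { student: 's2', calories: 95,  patchQuality: 2.0, moves: 6 },
      { student: 's3', calories: 140, patchQuality: 4.0, moves: 1 },
    ];

    function sortBy(records, factor) {
      return [...records].sort((a, b) => b[factor] - a[factor]);
    }

    console.log(sortBy(bout, 'moves').map((r) => r.student)); // ['s2', 's1', 's3']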

Hunger Games was successfully enacted in three grade 5 classrooms as part of a four-week curriculum. Analysis of the findings is currently being conducted and will be reported in future publications.