
PLACE (Physics Learning Across Contexts and Environments) is a 13-week high school physics curriculum in which students capture examples of physics in the world around them (through pictures, videos, or open narratives), which they then explain, tag, and upload to a shared social space. Within this knowledge community, peers are free to respond, debate, and vote on the ideas presented within the examples, working toward consensus about the phenomena being shown and empowering students to drive their own learning and sense-making. We also developed a visualization of student work that represented student ideas as a complex interconnected web of social and semantic relations, allowing students to filter the information to match their own interests and learning needs, and a teacher portal for authoring tasks (such as multiple-choice homework) and reviewing and assessing individual student work. Driven by the KCI Model, the goal of PLACE.Web was to create an environment in which the class's collective knowledge base was ubiquitously accessible – allowing students to engage with the ideas of their peers spontaneously and across multiple contexts (at home, on the street, in class, in a smart classroom).

To leverage this student-contributed content towards productive opportunities for learning, we developed several micro-scripts that focused student interactions and facilitated collaborative knowledge construction (a sketch of how such scripts might be encoded follows the list):

  • Develop-Connect-Explain: A student captures an example of physics in the real world (Develop), tags the example with principles (Connect), and provides a rationale for why each tag applies to the example (Explain).
  • Read-Vote-Connect-Critique: A student reads a peer's published artifact (Read), votes on the tags (Vote), adds any new tags they feel apply (Connect), and adds their own critique to the collective knowledge artifact (Critique).
  • Revisit-Revise-Vote: A student revisits one of their earlier contributions (Revisit), revises their own thinking and adds their new understanding to the knowledge base (Revise), and votes on ideas and principles that helped in generating their new understanding (Vote).
  • Group-Collective-Negotiate-Develop-Explain: Students are grouped based on their “principle expertise” during the year (Group), browse the visualization to find artifacts in the knowledge base that match their expertise (Collective), negotiate which examples will inform their design of a challenge problem (Negotiate), create the problem (Develop), and finally explain how their principles are reflected in the problem (Explain).
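
The micro-scripts above are essentially ordered sequences of named moves over shared knowledge artifacts. The minimal sketch below shows one way such a script might be encoded; the class and field names are illustrative assumptions, not part of the actual PLACE.Web implementation.

```python
from dataclasses import dataclass, field

@dataclass
class MicroScript:
    """An ordered sequence of named moves a student performs on shared artifacts."""
    name: str
    moves: list[str] = field(default_factory=list)

# The four PLACE micro-scripts, encoded as move sequences (names taken from the list above).
MICRO_SCRIPTS = [
    MicroScript("Develop-Connect-Explain", ["develop", "connect", "explain"]),
    MicroScript("Read-Vote-Connect-Critique", ["read", "vote", "connect", "critique"]),
    MicroScript("Revisit-Revise-Vote", ["revisit", "revise", "vote"]),
    MicroScript("Group-Collective-Negotiate-Develop-Explain",
                ["group", "collective", "negotiate", "develop", "explain"]),
]

def next_move(script: MicroScript, completed: list[str]) -> str | None:
    """Return the next move a student should perform, or None once the script is finished."""
    return script.moves[len(completed)] if len(completed) < len(script.moves) else None
```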

Over the twelve weeks, 179 student examples were created, with 635 discussion notes contributed, 1066 tags attached, and 2641 votes cast.

Culminating Smart Classroom Activity

The curriculum culminated in a one-week activity where students solved ill-structured physics problems based on excerpts from Hollywood films. The script for this activity consisted of three phases: (1) at-home solving and tagging of physics problems; (2) in-class sorting and consensus; and (3) the smart classroom activity.

PLACE Culminating Script

In the smart classroom, students were heavily scripted and scaffolded to solve a series of ill-structured physics problems using Hollywood movie clips as the domain for their investigations (e.g., could Iron Man survive a fall shown in the clip?). Four videos were presented to the students, with the room physically mapped into quadrants (one for each video). The activity was broken into four steps: (1) Principle Tagging; (2) Principle Negotiation and Problem Assignment; (3) Equation Assignment, and Assumption and Variable Development; and (4) Solving and Recording (Figure 3).
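
As a rough structural sketch (not the actual S3 configuration), the activity can be pictured as an ordered sequence of four steps enacted over a room mapped into four video quadrants; the quadrant labels and clip identifiers below are placeholders.

```python
from enum import Enum

class Step(Enum):
    PRINCIPLE_TAGGING = 1
    NEGOTIATION_AND_PROBLEM_ASSIGNMENT = 2
    EQUATIONS_ASSUMPTIONS_VARIABLES = 3
    SOLVING_AND_RECORDING = 4

# One physical quadrant per Hollywood clip (quadrant names and clip ids are placeholders).
QUADRANTS = {
    "north-east": "clip_1",
    "north-west": "clip_2",
    "south-east": "clip_3",
    "south-west": "clip_4",
}

def advance(step: Step) -> Step | None:
    """Move the whole-class script to the next step (teacher-triggered), or None after Step 4."""
    return Step(step.value + 1) if step.value < len(Step) else None
```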

At the beginning of Step 1, each student was given his or her own Android tablet, which displayed the same subset of principles assigned from the homework activity. Students freely chose a video location in the room and watched a Hollywood video clip, “flinging” (physically “swiping” from the tablet) any of their assigned principles “onto” the video wall that they felt were illustrated or embodied in that clip. They all did this four times, thus adding their tags to all four videos.
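
A “fling” gesture ultimately has to be communicated from the tablet to the wall display for the student's current location. A hedged sketch of what such an event might carry is below; the field names, identifiers, and serialization are assumptions, not the actual S3 message format.

```python
import json
import time

def make_fling_event(student_id: str, location: str, principle: str) -> str:
    """Serialize a 'fling' gesture as an event the wall display at that location can render."""
    event = {
        "type": "principle_fling",
        "student": student_id,
        "location": location,    # one of the four video quadrants
        "principle": principle,  # a tag from the student's assigned subset of principles
        "timestamp": time.time(),
    }
    return json.dumps(event)

# Example: a student standing at one video wall tags the clip with "projectile motion".
print(make_fling_event("s042", "north-east", "projectile motion"))
```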

In Step 2, students were assigned to one video (a role for the S3 agents, using their tagging activity as a basis for sorting), and tasked with coming to a consensus (i.e., a “consensus script”) concerning all the tags that had been flung onto their video in Step 1 – using the large-format displays. Each group was then given a set of problems, drawn from the pool of problems that were tagged during the in-class activity (selected by an S3 agent, according to the tags that group had settled on – i.e., this was only “knowable” to the agents in real-time). The group’s task was to select from that set of problems any that might “help in solving the video clip problem.”
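
The problem selection in Step 2 can be approximated as matching the tagged problem pool against the tags the group settled on. The sketch below illustrates one such matching rule under that assumption; the real agent logic and the example problem data are not taken from PLACE.Web.

```python
def problems_for_group(consensus_tags: set[str],
                       problem_pool: dict[str, set[str]]) -> list[str]:
    """Return the ids of pooled problems whose tags overlap the group's consensus tags,
    most-overlapping first."""
    scored = [(len(tags & consensus_tags), pid)
              for pid, tags in problem_pool.items()
              if tags & consensus_tags]
    return [pid for _, pid in sorted(scored, reverse=True)]

# Toy pool of tagged problems (ids and tags are illustrative).
pool = {
    "p1": {"kinematics", "free fall"},
    "p2": {"momentum", "collisions"},
    "p3": {"free fall", "air resistance"},
}
print(problems_for_group({"free fall", "kinematics"}, pool))  # ['p1', 'p3']
```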

In Step 3, students were again sorted and tasked with collaboratively selecting equations (connected to the problems chosen in Step 2) for approaching and solving the problem, and with developing a set of assumptions and variables to “fill in the gaps”. Finally, in Step 4, students actually “solved” the problem, using the scaffolds developed by the groups who had worked on their video in the preceding steps, and recorded their answer using one of the tablets’ video cameras; the recording was then uploaded.

Orchestrating Real-Time Enactment With S3

Several key features (as part of the larger S3 framework) were developed in order to support the orchestration of the live smart classroom activity – each is described below, including its specific implementation within the PLACE.Web culminating activity:

Ambient Feedback: A large Smartboard screen at the front of the room (i.e., not one of the four Hollywood video stations) provided a persistent, passive representation of the state of individual, small group, and whole-class progression through each step of the smart classroom activity. This display showed and dynamically updated all student location assignments within the room, and tracked the timing of each activity using three color codes (a large color band around the whole board reflected how much time was remaining): “green” (plenty of time remaining), “yellow” (try to finish up soon), and “red” (you should be finished now).
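
The timing band reduces to a simple mapping from remaining time to one of the three colors. A minimal sketch follows; the actual thresholds used in the classroom are not reported, so the cut-offs here are assumptions.

```python
def timing_band(elapsed_s: float, allotted_s: float) -> str:
    """Map activity timing to the ambient color band:
    green = plenty of time, yellow = try to finish up soon, red = you should be finished."""
    remaining = allotted_s - elapsed_s
    if remaining > 0.5 * allotted_s:  # more than half the time left (assumed threshold)
        return "green"
    if remaining > 0:                 # running out of time (assumed threshold)
        return "yellow"
    return "red"                      # past the allotted time

print(timing_band(elapsed_s=300, allotted_s=900))  # green
print(timing_band(elapsed_s=700, allotted_s=900))  # yellow
print(timing_band(elapsed_s=950, allotted_s=900))  # red
```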

Scaffolded Inquiry Tools and Materials: In order for students to effectively engage in the activity and with their peers, specific scaffolding tools and interfaces were needed through which students could interact, build consensus, and generate ideas as a knowledge community (i.e., personal tablets, interactive whiteboards). Two main tools were provided to students, depending on their place in the script: individual tablets connected to their S3 user accounts; and four large-format interactive displays that situated the context (i.e., the Hollywood video), provided location-specific aggregates of student work, and served as the primary interface for collaborative negotiation.

Real-Time Data Mining and Intelligent Agency: To orchestrate the complex flow of materials and students within the room, a set of intelligent agents was developed. The agents, programmed as active software routines, responded to emergent patterns in the data, making orchestration decisions “on-the-fly” and providing teachers and students with timely information. Three agents in particular were developed: (1) the Sorting Agent sorted students into groups and assigned room locations, with the sorting based on emergent patterns during enactment; (2) the Consensus Agent monitored groups, requiring consensus to be achieved among members before progression to the next step; (3) the Bucket Agent coordinated the distribution of materials to ensure all members of a group received an equal but unique set of materials (i.e., problems and equations in Steps 2 & 3).
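
As a rough illustration of the three agents' responsibilities, the sketch below encodes each as a small function; the grouping heuristic, the unanimity rule, and the round-robin distribution are simplified assumptions rather than the published S3 implementations.

```python
from collections import defaultdict
from itertools import cycle

def sorting_agent(tag_counts: dict[str, dict[str, int]]) -> dict[str, list[str]]:
    """Assign each student to the video location they tagged most actively (one possible heuristic)."""
    groups: dict[str, list[str]] = defaultdict(list)
    for student, counts in tag_counts.items():
        groups[max(counts, key=counts.get)].append(student)
    return dict(groups)

def consensus_agent(votes: dict[str, bool]) -> bool:
    """Allow progression only once every group member has accepted the shared tag set."""
    return bool(votes) and all(votes.values())

def bucket_agent(items: list[str], members: list[str]) -> dict[str, list[str]]:
    """Deal materials round-robin so every member receives an equal but unique share."""
    share: dict[str, list[str]] = {m: [] for m in members}
    for item, member in zip(items, cycle(members)):
        share[member].append(item)
    return share
```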

Locational and Physical Dependencies: Specific inquiry objects and materials could be mapped to the physical space itself (i.e., different locations could have context-specific materials, simulations, or interactions), allowing for unique but interconnected interactions within the smart classroom. Students “logged into” one of four spaces in our room (one for each video), and their actions, such as “flinging” a tag, appeared on that location’s collaborative display. Students’ location within the room also influenced the materials that were sent to their tablets. In Step 2, students were provided with physics problems based on the tags that had been assigned to their video wall, and in Step 3 they were provided with equations based on their consensus about problems in Step 2.
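
Locational dependencies amount to a lookup from the current step and the student's quadrant to the materials their tablet should receive. A minimal sketch with illustrative data follows (the keys and contents are placeholders, not the actual PLACE.Web materials):

```python
# Materials keyed by step and then by video quadrant (all contents are illustrative).
MATERIALS = {
    2: {"north-east": ["p1", "p3"], "south-west": ["p2"]},           # tagged problems
    3: {"north-east": ["v = v0 + a*t"], "south-west": ["p = m*v"]},  # candidate equations
}

def materials_for(step: int, location: str) -> list[str]:
    """Return the context-specific materials to push to a tablet at this location and step."""
    return MATERIALS.get(step, {}).get(location, [])

print(materials_for(2, "north-east"))  # ['p1', 'p3']
```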

Teacher Orchestration: The teacher plays a vital role in the enactment of such a complex curriculum. Thus, it is critical to provide him or her with timely information and tools with which to understand the state of the class and properly control the progression of the script. We provided the teacher with an “orchestration tablet” that updated him in real time on individual groups’ progress within each activity. Using this tablet, the teacher also controlled when students were re-sorted – i.e., when the script moved on to the next step. During Step 3, the teacher was alerted on his tablet whenever the students in a group had submitted their work (variables and assumptions).
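
The orchestration tablet can be thought of as a view over per-group progress plus a single control for advancing the script. The sketch below captures that idea; the class name, statuses, and alert text are assumptions, not the actual teacher-tablet interface.

```python
class OrchestrationView:
    """Tracks per-group progress and lets the teacher advance the whole-class script."""

    def __init__(self, groups: list[str]):
        self.progress = {g: "working" for g in groups}
        self.step = 1

    def mark_submitted(self, group: str) -> str:
        """Record a group's submission and return the alert shown on the teacher's tablet."""
        self.progress[group] = "submitted"
        return f"Group {group} has submitted their variables and assumptions."

    def all_submitted(self) -> bool:
        """True once every group has submitted, i.e., the class is ready to be re-sorted."""
        return all(status == "submitted" for status in self.progress.values())

    def advance_step(self) -> int:
        """Teacher-triggered: move the script to the next step and reset group progress."""
        self.step += 1
        self.progress = {g: "working" for g in self.progress}
        return self.step
```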