Cornell Language and Technology

exploring how technologies affect the way we talk, think and understand each other

Tuesday, April 25, 2006

Facebook Intro - Results

Usage of Facebook

The study found widespread usage of Facebook. Not only did all of the surveyed students possess an account on Facebook, but 92% also stated that they had accessed it at least once in the past month; 35% of the students admitted to using it daily. Only two of the students, however, had ever used Facebook to meet a stranger.


Knowledge Introduction Process

A statistical analysis revealed significant differences across conditions, indicating that the average number of Facebook-related, highlighted utterances varied based on whether or not a participant had prior access to their partner's Facebook profile. In other words, when both participants had access to each other's Facebook profiles, they tended to highlight significantly more utterances than either of the individuals in the one-way Facebook condition. This finding can be taken as evidence that Facebook-inspired common ground was introduced into the conversations. Somewhat surprisingly, the individuals in the one-way condition without Facebook access, even after they were told that their partners had looked at their Facebook profiles, tended not to believe that Facebook had contributed at all to their conversation. It is also notable that, across conditions, questions were often interpreted differently by the participants. For example, in the mutual-Facebook condition, the question "what is your major?" was highlighted as a Facebook-related utterance, while in the one-way Facebook condition, the same question was often not highlighted by either individual.
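As a rough illustration of the comparison described above, here is a minimal Python sketch using hypothetical per-participant highlight counts; the numbers, variable names, and the particular tests (a one-way ANOVA with a t-test follow-up) are assumptions for illustration, not the study's actual procedure:

# Minimal sketch (not the study's actual analysis): compare the mean number
# of Facebook-related highlighted utterances across conditions. The counts
# below are placeholders; real values would come from the coded transcripts.
from scipy import stats

mutual          = [9, 7, 11, 8, 10, 12]  # both partners saw each other's profile
one_way_with    = [5, 6, 4, 7, 5, 6]     # had access to partner's profile
one_way_without = [3, 2, 4, 3, 5, 2]     # partner had access, but they did not

# Omnibus test: do mean highlight counts differ across the three groups?
f_stat, p_value = stats.f_oneway(mutual, one_way_with, one_way_without)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

# Pairwise follow-up: mutual access vs. one-way (with access) participants.
t_stat, p_pair = stats.ttest_ind(mutual, one_way_with)
print(f"mutual vs. one-way (with access): t = {t_stat:.2f}, p = {p_pair:.3f}")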

Our coding revealed that the common ground-related information introduced in the conversations tended to be academic, personal, or about interests or friends. The participants tended to introduce this information by asking or answering probes, by making explicit references to Facebook, or by making casual references to Facebook-obtained information. While casual references were fairly common, what was particularly surprising was that 76% of dyads engaged in some form of probing behavior, while only 20% made explicit references to Facebook. One possible explanation for this finding rests on Clark's equity principle: in our experiments, equity appeared to be disturbed either by the participants possessing nonequivalent information about each other or by the participants not knowing how acceptable it would be to introduce Facebook-obtained information into the conversation. As such, the observed probing behavior may be the method that speakers presuppose for maintaining equity with their addressees (Clark 295).
If so, our research may indicate that communicators to some degree use Facebook-obtained information to inform and improve their social interactions.

Of the dyads that used probes to introduce some kind of information, X% ended up explicitly mentioning Facebook somewhere in their conversations. Y% of the dyads that used casual references ended up using explicit references as well. The overall number of highlighted references among those who explicitly mentioned Facebook was much higher than among those who exclusively used either a probing strategy or casual mentions, suggesting perhaps that once hesitations about the equity-disturbing effects of mentioning Facebook as a source of information had been allayed, the participants were better able to reflect on and attribute certain utterances to Facebook.
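To make the strategy percentages concrete, here is a small sketch of the kind of tally that produces figures like those above; the per-dyad flags are invented, and the real coding was of course done by hand on the transcripts:

# Hypothetical per-dyad coding: which introduction strategies each dyad used
# at least once. Percentages like the 76% / 20% figures above fall out of
# simple tallies over these flags.
dyads = [
    {"probe": True,  "explicit": False, "casual": True},
    {"probe": True,  "explicit": True,  "casual": True},
    {"probe": False, "explicit": False, "casual": True},
    {"probe": True,  "explicit": False, "casual": False},
    {"probe": True,  "explicit": True,  "casual": True},
]

n = len(dyads)
for strategy in ("probe", "explicit", "casual"):
    count = sum(1 for d in dyads if d[strategy])
    print(f"{strategy:>8}: {count}/{n} dyads ({100 * count / n:.0f}%)")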

Gender: Results

Questionnaires

Our questionnaire contains several questions, each one dealing with a specific aspect of the transcript. We will compute the average score for each question by the gender of the instruction givers and determine whether or not there is a significant difference. For example, it would be interesting if male-given instructions were rated an average of 2 on a 1-7 scale measuring clarity, while directions given by females were rated an average of 5 in that particular aspect.

We also want to see if the gender of the rater has any effect on the scores. Thus, we will compute the average scores for each question across the 4 possible pairings of instructor and rater (male-male, male-female, female-male, female-female). For example, do females think instructions given by other women are more understandable than those given by men?

One of the questions asks whether the rater thinks the giver is male or female. We would like to determine if there is any correlation between the perceived understandability of the directions and the apparent gender of the instructor. For instance, are clear-rated instructions assumed to be written by men? Furthermore, is this assumption correct?
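As a sketch of how these questionnaire analyses might look in practice, the following Python fragment computes mean clarity for the four giver/rater gender pairings and relates clarity to the rater's gender guess; the data frame, column names, and the choice of a point-biserial correlation are all illustrative assumptions:

# Illustrative only: each row is one rater's evaluation of one transcript.
import pandas as pd
from scipy import stats

ratings = pd.DataFrame({
    "giver_gender":   ["M", "M", "F", "F", "M", "F", "F", "M"],
    "rater_gender":   ["M", "F", "M", "F", "F", "M", "F", "M"],
    "clarity":        [3, 4, 5, 6, 2, 5, 6, 4],   # 1-7 scale
    "perceived_male": [1, 1, 0, 0, 1, 0, 1, 1],   # rater's guess: 1 = male
})

# Mean clarity for each of the four giver/rater gender pairings.
print(ratings.groupby(["giver_gender", "rater_gender"])["clarity"].mean())

# Does perceived clarity track the perceived gender of the instruction giver?
r, p = stats.pointbiserialr(ratings["perceived_male"], ratings["clarity"])
print(f"clarity vs. perceived gender: r = {r:.2f}, p = {p:.3f}")

# And is the gender guess actually correct more often than chance?
accuracy = (ratings["perceived_male"] == (ratings["giver_gender"] == "M")).mean()
print(f"gender-guess accuracy: {accuracy:.0%}")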

Coding

To start with, it will be necessary to come up with an overview of the
language of the transcripts. The basic measures mentioned below
(e.g., response length) will give a quantitative summary, and
particular aspects of IM speech can be described by reference to
papers which treat it in depth.

Analysis of the language of the transcripts is straightforward, but
discovering meaningful trends will require taking a number of
measurements, the majority of which will probably not be informative.
All quantitative measures will be tested for significance and for
correlation with actual gender, a clarity metric, and perceived
gender, the latter two being derived from questionnaire responses.

Basic measures are average response length (in characters, words, and
"sentences"), if only to establish a baseline.

Then we will calculate and tabulate the relative frequency of the use
of the following (a rough counting sketch appears after this list):

- words of a particular syntactic category (noun, verb/predicate,
descriptor)
- a particular mode of address
- personal pronouns
- particular constructions (passive voice [and other marked syntactic
constructions], indirect speech [which is more a semantic
distinction])
- particular semantic classes of verb (looking for "action" verbs or
verbs of movement)

If any other patterns in word choice or speech become apparent, we
will check them for significance, as well.
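Since the exact coding details are still open, here is a rough Python sketch of how the baseline length measures and the relative-frequency tallies might be computed; the sample responses, the hand-made word lists, and the crude sentence heuristic are placeholders, and a fuller analysis would use a real part-of-speech tagger:

import re
from collections import Counter
from statistics import mean

# Placeholder responses from one transcript; real data would be loaded from
# the saved AIM logs.
responses = [
    "ok so take the two laces and cross them over each other",
    "pull one lace under and through. tighten it.",
    "now make a loop with each lace and tie them together",
]

# Baseline measures: average response length in characters, words, and
# (roughly) sentences, where a "sentence" is just a run of terminal
# punctuation or, failing that, the whole message.
chars = [len(r) for r in responses]
words = [len(r.split()) for r in responses]
sents = [max(1, len(re.findall(r"[.!?]+", r))) for r in responses]
print(f"avg: {mean(chars):.1f} chars, {mean(words):.1f} words, "
      f"{mean(sents):.1f} sentences per response")

# Relative frequency of a few hand-picked categories as a share of all tokens.
categories = {
    "personal pronouns": {"i", "you", "he", "she", "we", "they", "it"},
    "action verbs":      {"take", "cross", "pull", "tighten", "loop", "tie", "make"},
    "descriptors":       {"left", "right", "tight", "loose"},
}
tokens = re.findall(r"[a-z']+", " ".join(responses).lower())
counts = Counter(tokens)
for name, wordset in categories.items():
    freq = sum(counts[w] for w in wordset)
    print(f"{name}: {freq}/{len(tokens)} tokens ({100 * freq / len(tokens):.1f}%)")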

Some weaker measures which still might be useful are:

- lexical differences: does one gender use a given term to describe
something which the other doesn't? (This is hampered by our small
sample sizes and a lack of responses which use very similar
language.)
- use of a particular mood or tense (This is also hampered by our
small sample size and the fact that there is probably
insufficient variation to obtain a significant measure.)

We can also do a speech act analysis of the transcripts and then see
if the prevalence of certain speech acts in the instructions means
anything, but that will probably not be terribly informative because
everyone is producing the same sort of speech acts. If we do not find
much in the analysis of the basic language used, we will do a speech
act analysis similar to that in the Nastri paper.

It would then be possible to check for correlation of certain speech
act patterns with one of the three measures mentioned above (actual
gender, perceived clarity, and perceived gender), but there may be too
much uniformity in the transcripts to get a significant result other
than, "More words (speech acts) means more clear."
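If we do go down this road, a correlation like the following could be run; the per-transcript speech-act counts and clarity means below are invented for illustration only:

# Hypothetical: per-transcript speech-act counts (coded by hand, in the spirit
# of the Nastri-style coding mentioned above) correlated with mean clarity
# ratings from the questionnaires.
from scipy import stats

directives   = [12, 9, 15, 7, 11, 14]   # instruction-like acts per transcript
assertives   = [4, 6, 3, 5, 4, 2]
clarity_mean = [5.1, 4.2, 5.8, 3.9, 4.7, 5.5]

for name, counts in (("directives", directives), ("assertives", assertives)):
    r, p = stats.pearsonr(counts, clarity_mean)
    print(f"{name} vs. clarity: r = {r:.2f}, p = {p:.3f}")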

Monday, April 24, 2006

#11: Private vs. Public – Results

For the results section, our group is going to analyze the three sets of data obtained from our experiment: task messages, historical messages, and questionnaires. We will be focusing on the language content, syntax, and choice of communication used in the messages.

Task and Historical Messages
Our group will be dividing the messages into two categories: wall posts and private messages. Then, we are going to compare the language usage and content of each group. Specifically, we will be using software that counts and reports statistics on the number of times certain words and punctuation marks appear in each list. Preliminary analysis indicates that private messages contain more “I” words while wall posts contain more “you” words. This may suggest that PMs focus more on the sender while wall messages focus more on the receiver. With private messages, communication is one-to-one, whereas with walling there are numerous people who can view the message. Thus, in such a public setting the sender may want to maintain face and focus more on the receiver. We also found more misspelled words and more slang in wall posts. This observation is interesting because common sense suggests that public posts would be written more carefully than private messages; perhaps this is not the case due to the informal nature of walling.
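For readers curious what the word-counting step might look like, here is a small Python sketch with made-up messages; a real run would read the de-identified notepad files, and the token lists here are only examples:

import re
from collections import Counter

wall_posts = ["hey you looked awesome in that photo!!", "are you going friday??"]
private_messages = ["i was wondering if i could borrow your notes", "i'll see you at 8"]

def tally(messages):
    # Lowercase, then pull out word-like tokens and sentence punctuation.
    tokens = re.findall(r"[a-z']+|[!?.]", " ".join(messages).lower())
    return Counter(tokens), len(tokens)

for label, msgs in (("wall", wall_posts), ("pm", private_messages)):
    counts, total = tally(msgs)
    i_words = counts["i"] + counts["i'll"] + counts["i'm"]
    you_words = counts["you"] + counts["your"]
    print(f"{label}: {i_words} 'I' words, {you_words} 'you' words, "
          f"{counts['!']} exclamation marks, out of {total} tokens")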

We also plan to examine the message content to see whether there is a correlation between the type of message and the choice of communication used. Our group expects wall posts to be more social-oriented and positive, and PMs to be more task-oriented and negative/neutral. For our six tasks, we designed the statements so that three of them were task-related and positive, while the other three were social-related and negative/neutral. From the data obtained so far, the choices of communication used for five out of the six tasks confirm our predictions. Thus, wall posts in general seem to be more informal and casual than private messages.

Questionnaire
For each set of statements in the questionnaire, we will evaluate which of the potential reasons listed were favored the most. Then we will relate those preferred statements back to the type of communication that was used for the particular task and see whether it matches our predictions. For example, one question for Task 1 asked participants to circle the degree to which they agreed with the following statement: “I did not want other people to see it (the message they wrote for Task 1)”. If they had initially chosen to use private messaging for Task 1 and also strongly agreed with the statement, then we can infer that audience plays a significant role in the choice of private versus public communication.
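A minimal sketch of that comparison, with invented ratings, might look like this (the statement, channel labels, and values are placeholders):

from statistics import mean

# (channel chosen for Task 1, agreement with "I did not want other people to
# see it" on a 1-7 scale) -- invented values
responses = [("pm", 6), ("pm", 7), ("wall", 2), ("pm", 5), ("wall", 3), ("wall", 1)]

for channel in ("pm", "wall"):
    scores = [score for ch, score in responses if ch == channel]
    print(f"{channel}: mean agreement = {mean(scores):.1f} (n = {len(scores)})")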

Tuesday, April 18, 2006

Emotions Group - Methods

Participants

We plan to recruit a total of 60 students from the Communications courses that offer class credit for participation in experiments. All the participants will be between the ages of 18 and 25. 30 people will be randomly assigned to the control group, and 30 people will be randomly assigned to the experimental group. In this way, both the control and experimental group will consist of 15 dyads. Each dyad will be of the same sex: males will be paired together and females will be paired together.

Our study was conducted on the second floor of Kennedy Hall in Professor Hancock’s laboratory. The two members of a dyad were asked to arrive at Kennedy Hall at separate locations so that they could not communicate prior to the experiment. Additionally, they were shown to different rooms in the laboratory so that they could not see their partner during the study.

Procedure

The subjects were first told that they were involved in a study that would test their ability to analyze movies. In the control group, both people were shown a neutral clip from the documentary Mammoths of the Ice Age. In the experimental group, one person was shown the same neutral clip from Mammoths of the Ice Age, while the other person was shown a sad clip from the movie Sophie’s Choice. Afterwards, they were asked to fill out a brief questionnaire about the movie and a PANAS emotion scale. We then deceived the subjects by asking them to participate in a second study that would analyze how people interact when they meet someone new online. They were given the task of getting to know their partner for twenty minutes through the medium of instant messaging. In addition, we requested that they not mention the previous study to their partner, as conversation about the movie could affect the mood of both subjects. Following this, both members of the dyad were given a questionnaire that included questions asking them to rate their own mood as well as that of their partner. The subjects also filled out another PANAS emotion scale. After the experiment, we debriefed the subjects on the true purposes of our study.

Materials

As our objective was to use films to induce certain emotions, we chose two different films for the study. The film intended to induce sadness was entitled Sophie’s Choice; the neutral film was a documentary entitled Mammoths of the Ice Age. Our group also used instruction sheets, which we did not show the participants. In addition, we used two computer terminals, one for each participant. We used the basic psychological scale, PANAS, twice in the experiment in order to assess the mood of each participant after the movie and after the conversation. Accompanying each part of our experiment (the film and the online chat) was a questionnaire. One questionnaire asked about the effects of movies on viewers; the second questionnaire asked about the feelings of each participant and his partner after conversing through instant messaging. We expect that both the scales and questionnaires will be crucial to gauge the emotions of our participants.

Since our experiment was divided into two sections, we used two consent forms to aid in deceiving the participants. To disguise the fact that we were running a single experiment, the participants were told that one form was for Professor Shapiro and the other was for Professor Hancock. Lastly, we had debriefing forms that the participants received after completing the experiment.

Coding

Following the experiment, we will use the Linguistic Inquiry and Word Count program to analyze the emotion expressed through instant messaging. This program will help us measure 1) emoticons and other CMC conventions, 2) length of responses, 3) use of affect language (divided into subcategories of positive and negative), 4) pronoun use, and 5) the number of negations and assents in the conversations. By hand, we can determine 1) the use of punctuation and 2) the content of the conversations. In addition to the transcripts, we will use the questionnaires from our experiment to determine the mood of the neutral participant and whether he or she could perceive the emotions of his or her partner.
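As a sketch of how the coded output might then be compared (this is not LIWC itself, and all the rates below are placeholders), each transcript could be reduced to category rates per 100 words and the control and experimental dyads compared:

from scipy import stats

# Negative-affect words per 100 words in the neutral partner's messages,
# one value per dyad (invented numbers).
control_neg_affect      = [1.2, 0.8, 1.5, 1.0, 0.9]
experimental_neg_affect = [2.4, 1.9, 2.8, 2.1, 1.7]

t, p = stats.ttest_ind(control_neg_affect, experimental_neg_affect)
print(f"negative-affect rate, control vs. experimental: t = {t:.2f}, p = {p:.3f}")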

#10 — Grounding/FOOK

Method


Participants


In our experiments, n students (y males, x females; average age z) from Communication courses at Cornell University participated in the study in exchange for course credit.


Materials & Procedure


At the start of each experiment, participants were given a brief overview about the premise of the experiment. They were told they would be participating in a study on short-term memory to prevent any expectations of the experiment from affecting the outcome. After this overview, each participant was asked to complete a consent form and a demographics form, the latter being used to collect statistics on age and gender.


The participants in each dyad were then separated; each participant was given an excerpt from a T.S. Eliot poem, "Burnt Norton", to read. Afterwards, they were asked to answer a short questionnaire that related to what they did and did not understand in the poem, and how they interpreted it. The final section of the questionnaire tested their recognition of public figures in a variety of fields as diverse as physics, art, entertainment, and crime. After each participant answered their questionnaire, the dyad was brought together either in an AIM
conversation or in a face-to-face setting and was asked to have a short conversation with each other about what they had read. After completing the conversation, the participants were again separated and asked to answer a second questionnaire. Many of the questions on this questionnaire corresponded directly to specific questions on the first questionnaire, but this time asked how they believed their partner in
the dyad had understood the poem. Similar to the first questionnaire, the second questionnaire included a public figure recognition section, which asked if a participant thought their partner would recognize the same people the participant was asked about in the first questionnaire.


Once both participants finished their questionnaires, they were brought together again for a short debriefing, during which they were told about the true premise of the experiment. Each participant received a copy of a standard debriefing document, as well as a copy of the consent form that they had signed before starting the experiment.


Coding


Once both questionnaires had been collected from all participants, the data from each dyad was entered into a spreadsheet and correlated. We plan to perform four types of correlations with these data:



  1. For each participant, compare his/her first questionnaire and second questionnaire;

  2. For each dyad, compare their first questionnaires;

  3. For each dyad, compare their second questionnaires;

  4. For each dyad, compare one’s first questionnaire to the other's second questionnaire, and vice versa.


Since many of the questions on the first questionnaire correspond to questions on the second questionnaire, these will be the focus of our numerical correlations. All of these corresponding questions have numerical values ranging from 1 to 7, except for the yes/no questions in the person recognition section. Our correlations will measure numerically how accurately participants intuited each other's knowledge and feelings, as well as how close each participant's knowledge and feelings were to the knowledge and feelings of their dyad partner.
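As an illustration of comparison type 4 above, a sketch with invented ratings might look like the following; the item values and the simple agreement score for the yes/no recognition items are assumptions about how we would operationalize it:

from scipy import stats

a_self_ratings    = [6, 3, 5, 7, 2, 4, 5]   # A's answers, first questionnaire
b_guesses_about_a = [5, 4, 5, 6, 3, 4, 6]   # B's answers about A, second questionnaire

r, p = stats.pearsonr(a_self_ratings, b_guesses_about_a)
print(f"how well B intuited A: r = {r:.2f}, p = {p:.3f}")

# Yes/no recognition items could instead be scored as simple agreement.
a_recognizes = [True, False, True, True, False]
b_predicts   = [True, False, False, True, True]
agreement = sum(x == y for x, y in zip(a_recognizes, b_predicts)) / len(a_recognizes)
print(f"recognition-item agreement: {agreement:.0%}")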

Monday, April 17, 2006

#10 Private vs. Public Communication

Participants
The group plans to recruit twenty to thirty participants for the study. So far, fifteen people have successfully completed the experiment. All of the participants were 18- to 22-year-old Cornell students with Facebook.com accounts. Each participant was recruited individually from among the group members’ friends.


Materials and Procedure
The experiment consisted of three main parts: a task section, a historical data section, and a questionnaire. Before the experiment began, the participants were first briefly told about what they would be expected to do in the study. Then, they were asked to sign a consent form to ensure that their data could be used for future analysis. To guarantee that any personal information would be kept confidential, they were assigned code numbers so that their names would not be tied to the data.

In the task section, the participants were given a procedure form containing six tasks to complete on Facebook. They were required to write messages to their friends using either wall posting or private messaging. The tasks were as follows:
1. Comment on a photo.
2. Ask someone what the homework or reading was for a class. (or ask about a prelim/final)
3. Make an inside joke with a positive connotation.
4. Ask a friend if they have a job or internship after the semester ends.
5. Compliment a friend.
6. Ask someone for more information about an organization that they are in.
After the participants were done with the tasks, they copied and pasted their six messages into a template notepad document that had been provided for them. They were told to replace all identifying information with XXX and to indicate for each message whether it was a wall post or a private message. They were then asked to sign in to a preexisting Yahoo account made specifically for the experiment. After attaching the notepad document and putting their assigned code number in the subject heading, the participants sent the email to another account made specifically for the study. Throughout the first part, the experimenter waited outside of the room. However, the participants were told that they could ask questions at any time if necessary.

The second part of the experiment was collecting historical data. Each participant was asked to provide five previous wall messages and five previous private messages that they had written in the past. They copied and pasted their messages into another provided template notepad document. Then, using the same procedure and format as in the first part, the participants attached and sent the document by email.

In the final part, the participants were asked to fill out a questionnaire. The first section consisted of a few general questions about their usage of Facebook – how often they use it and which method of communication they use more frequently. The questionnaire also contained six sets of questions pertaining to the tasks from part one of the experiment. Each set was the same and focused on the reasons why the participants chose a particular method of communication for that task. Within each section, they were first asked to reflect on how many people they anticipated would read and understand the message they wrote. Then, given a set of potential reasons for choosing each method, they were asked to rate their level of agreement with each statement. Additional space was provided for open responses.

At the end, the participants were debriefed about the purpose of the study and given candy bars for participating in the experiment. Overall, the entire process took around 40-50 minutes for each student.


Coding Scheme
Our group is still working on the coding scheme for our project, but we have a few preliminary ideas:
1. Length of messages – number of words, number of complete sentences
2. Number & type of events contained in the message – e.g., a trip, party, or meeting, and whether it is academic or personal (a keyword-tagging sketch follows)
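A very preliminary sketch of idea 2, using keyword lists to tag event types (the lists and the sample message are made up, and real coding would at least be checked by hand):

EVENT_KEYWORDS = {
    "trip":     {"trip", "travel", "drive", "flight"},
    "party":    {"party", "birthday"},
    "meeting":  {"meeting", "meet up", "study group"},
    "academic": {"homework", "prelim", "final", "reading", "class"},
}

def tag_events(message):
    # Return every event type whose keywords appear in the message.
    text = message.lower()
    return [event for event, words in EVENT_KEYWORDS.items()
            if any(w in text for w in words)]

msg = "are you coming to the study group before the prelim?"
print(len(msg.split()), "words; events:", tag_events(msg))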

Sunday, April 16, 2006

Assignment #10

Methods:

Phase 1:

Participants:

In this part, we chose Cornell University undergraduate students who were our acquaintances, between the ages of 18 and 22. Aiming for an even division between males and females, we selected 9 males and 11 females, all of whom spoke fluent English (defined as having spoken English for at least seven years). Additionally, familiarity with the instant messaging program AOL Instant Messenger (AIM) was required.

Procedure:

First, the participants signed an informed consent form. We then used a predetermined script to ask them, via AIM, how to tie the shoelaces on a typical untied shoe. The participants used their own messaging clients and personal computers and communicated with us from their own environment (generally dorm rooms). After they gave an indication that they were finished, the conversations were saved and the participants were debriefed. Some extraneous content was removed from the final transcripts; this consisted of unnecessary introductory information from the script as well as statements given after the conclusion of the instructions themselves.

Coding:

We coded each transcript for various linguistic features (exact details to be determined).

Phase 2:

Participants:

The participants for the next phase of the experiment were drawn from the same pool as those in Phase 1 and consisted of approximately 20 students from a communications class at Cornell University, most of whom were between the ages of 18 and 22. The same English-fluency requirement as in Phase 1 applied.

Phase 2 participants received credit in the communications courses they were taking.

Procedure:

Before beginning, the participants signed an informed consent form. They were then given a packet of the Phase 1 transcripts, which included a questionnaire for each one. They were asked to read each transcript and then to grade it on numerical scales as to whether it tended to be wordy or concise, sloppy or organized, confusing or straightforward, and vague or clear. After making these evaluations, the participants were asked to determine whether the directions in each transcript were written by a male or by a female. Finally, the participants were debriefed.

Tuesday, April 11, 2006

Blogs: shotgun mouthwash?

COMM 450 blog entry #9


It looks like my first submission of this entry was swallowed up somewhere, so here is a second one.



I find and believe that most blogs that pretend to any sort of content are repositories of semi-digested intellectual pap. This goes double for the popular ones. My impression is that there exist networks of blogs which are highly self-referential. They are not a source of valid scholarship, and most of the time they serve as little more than a timesink for everyone involved. Latour's theory of how scientific papers attain legitimacy has some bearing here: according to a paper of his I read in my freshman year, scientific papers gain legitimacy by having a large number of other papers reference them. This is the same principle behind Pagerank. A given blog can have a lot of legitimacy within a certain community because everyone else cites it, but it's a horse of a different color to have any kind of legitimacy in the larger world, whether it be another Web community or some community in real life.



As with most blog entries, this one makes vague, grandiose claims without any sort of proper citation except that of the anecdotal sort (which, as any good academic can tell you, is the best kind, as it can't be questioned). Am I lying about Latour? Who knows?! Here's a link to Wikipedia: hooray!



That said, there are some linkages between real life and blog posts which can lend blogs some legitimacy. When you get a large enough mass of people together, collaborative environments gain some sort of importance of their own: Daily Kos and Free Republic aren't blogs per se, but they have some kind of clout because a lot of vocal people read them and consider themselves part of their blogging communities. Since people like to congregate and think of themselves as members of groups, blogs and blogging communities won't go away. This is why we need to implement painful penalties for writing stupid things in any public media. I'm open to suggestions as to my own penance.

Assignment #9

I wish Pearls Before Swine was in the Cornell Sun, but at least we have Dilbert.


#9 - Barry Bonds

Wow. This is my first ever blog ... wow ...
So I'm going to write about Barry Bonds and steroids because this needs to be said. I was talking with the physical therapist/track team trainer about these subjects, and I learned some very interesting things that everyone should know.

Steroids 1) improve muscle recovery time and as a consequence stamina - allowing you to hit the ball hard April through August, 2) improve strength, potentially to the point of having excessive bulk (see Caminetti ~1995) depending on the drug and use, but not if taken properly (see Sosa, Bonds, Big Mac, etc...), 3) encourage muscular growth, putting excess pressure on joints, ligaments/tendons, muscles themselves, bone structure (see 50 Cent’s body and head) and decreasing flexibility.

It's easy to confuse muscular strength, stamina and recovery with injury prevention, but DON'T. Steroids can't prevent injuries; rather, they only help with fatigue - by recruiting muscular fibers faster. Therefore, steroids are WAY more likely to cause injuries. Bonds was more likely to be injured while on steroids. Also, people have said that as soon as he stopped taking steroids he started having injuries. But this is basically nonsense, especially when you consider his injuries have been to his knees - probably due to muscular imbalance/weight from steroid use and regular wear combined with his body. Remember, he has a family history of bad knees (Willie Mays and his father too), and what's more, wear and tear is from chronic stress (which is the second definition of the word "baseball" in the dictionary).

Many people think steroids lengthened Bonds' career, but this is not so. The problems and injuries he has been having probably resulted from long-term playing, and were only catalyzed by steroids. It seems to me Barry Bonds has only shortened his career by using steroids, despite rejuvenating it significantly over the last few years.

Assignment #9

This past Sunday, about 100 juniors from my high school descended on Cornell for their college trip. These annual trips are organized by Stuyvesant High School's college office and generally attempt to visit 2 or 3 schools in one day (after Cornell, they went on to Binghamton). This time the trip fell on Cornell Days, and Cornell was unwilling to provide them with official tours, so they called on Stuy alumni to offer the kids some sort of a Cornell experience. I showed up, but as I was feeling sick I did little other than talk a bit with the students and the chaperones. The trip leader was Ms. Archie, a school administrator I've known since even before high school, and it was nice that she recognized me.
Things like these make me reflect on my past, my future, and everything in between. As a senior I'm especially prone to this, I guess. These kids didn't seem particularly excited to be on the trip, but I'm sure they were thinking of the new great beyond that Cornell appears to be. I wanted to tell them that they are mistaken, but why should I? No one made it any easier for me. I'm starting to think that high school, even more than college, shaped me into who I am, and it seems that things are about to come full circle.

Monday, April 10, 2006

#9 — The Horror, The Horror

This post will be as boring as dirt and likely hard to understand unless you are a code geek. You have been warned.


I deal with a great deal of legacy code in my job as a systems developer: we have dozens of arcane PHP scripts which were never documented, and even their authors couldn't tell you how they work. Granted, that might be related to the fact that some of our alumni employees did a great deal of their work drunk. Our biggest app, which is used by the HelpDesk and all of the labs on campus to manage employees, is perhaps the most (or least, depending on how you look at it) auspicious example of these things in practice.


One of the most heavily used and poorly written pieces of this application is the scheduling functionality: managers create empty schedules with shift slots, and then people schedule themselves to work in empty slots. The resulting schedules are drawn in a big pretty grid with colors and everything so that people can see at a glance who's supposed to be where when. Unfortunately, the script responsible for actually drawing the grid is nigh incomprehensible, and huge at 1111 lines. In order to determine where it is supposed to actually draw a header cell in the grid, it doesn't use an HTML <table>, which would be simplest, nor does it use something simple like saying "well, the last cell was over here, so if this one is supposed to be right next to that one I should put it at here+something." Oh no! Either of these solutions would be far too sensible and simple for this little script. What it does do is this:


$day_name_position = ($num_slots_cumul*SLOT_WIDTH)+($realDayOfWeek)*TIME_WIDTH + ($realweek*7)*TIME_WIDTH + $day_name_offset;
if($day_name_position < 0)
{
$day_name_offset = -$day_name_position;
$day_name_position += $day_name_offset;
}


Let's ignore the complete lack of consistency in variable name style. Let's also ignore for a moment that the if statement there will always, when $day_name_position (which specifies where the left edge of a header cell should go) is negative, result in $day_name_position being zero. Now let's ignore that it manages to accomplish setting $day_name_position to zero in two lines when one would have suited just fine. Instead, let's look at the first line. What. The. Hell? What is this code even doing? Even if I told you what the values of all those other variables are, and how they're calculated, do you think it would make any sense? Hint: it wouldn't. But I'll tell you anyways, just for fun. $num_slots_cumul is the number of schedule "slots" drawn so far. So that, times the width of any single slot, seems fair enough. That should tell you about where the next one should be drawn, don't you think? Apparently not. $realDayOfWeek and $realWeek are, you guessed it, integer representations of the day of the week and the week of the year for which we are drawing a header cell. Why are these necessary? Why on earth should which week of the year we're drawing a schedule for affect how we draw said schedule?


Asking myself all those questions gave me a headache. So I decided not to even attempt to answer them, and instead nullify them by simply removing all the code that was there and replacing it with code that worked the way I thought it should. I figured that if there was some obscure reason that the week of the year actually mattered, my code wouldn't work and I would know. So I replaced the above (and some other stuff) with the following:



// Track the width and left edge of the previously drawn header cell.
$last_box_properties = array("width" => 0, "left" => 0);
...boring stuff...
// Each new header cell starts where the previous one ended.
$day_name_position = $last_box_properties["width"] + $last_box_properties["left"];
...boring stuff...
// Remember this cell's dimensions for positioning the next one.
$last_box_properties["width"] = $day_name_width;
$last_box_properties["left"] = $day_name_position;


Isn't that a bit nicer? Turns out it works perfectly too, at least so far. Of course, I still have the other 1105 lines of the script to try and understand and fix.

The Bob Loblaw Law Blog

If Gwen Stefani's song "Hollaback Girl" were written in German, it might sound a little something like this:

Einige Male bin ich um diese Schiene gewesen
So ist er nicht gerades Gehen, wie das zu geschehen
Ursache Ich ist nicht kein hollabackmädchen
Ich ist nicht kein hollabackmädchen
Think about that for a minute.

Assignment 9 - My weekend trip

I love traveling, but I haven’t gotten much of an opportunity to go to new places throughout my life, mostly because my parents aren’t too keen on traveling. But last weekend I got to go on a trip…lots of new places to see. My boyfriend Aaron’s cousin was getting married in Indiana, so we took the trip out there for the weekend.

We drove all the way, so it was a looooong trip. 10 hours in the car, one way. All the way West through New York, through a bit of Pennsylvania, through all of Ohio, then into Indiana. I had no idea just how flat Indiana is. I mean, it was something I wouldn’t have been able to comprehend until I actually saw it – nothing but land as far as the eye could see, to a far off horizon. The roads were all straight, completely level, no hills like in Ithaca. It was just a completely foreign experience. Luckily, we were going very close to the Ohio/Indiana border, so once we got into Indiana we were almost there, just traveling South for a bit to the small town of Warren (population about 1200) where our hotel was.


The wedding itself was in Fort Wayne, at a huge church that would accommodate the 400 guests. Basically we drove all day Friday (8am-6pm), then spent most of Saturday at the church for pictures and the ceremony, and then spent lots of time with the family at the reception. Then we got back in the car Sunday morning and drove all the way back (10am-8pm). We got back to Ithaca Sunday completely exhausted – I fell asleep at 10pm, and slept for almost 11 hours. But it was well worth it! A beautiful wedding, and a wonderful time with friends and family.


Picture Caption:
Myself with Aaron's family at the church (L to R: his sister-in-law, brother, mother, myself, and Aaron).

CDL Fashion Show!

I don't know if anyone in this class went to the CDL (Cornell Design League) fashion show, but it was spectacular. At least, in my opinion it was and that's because....I was a model at the show!! :) How did I get this lucky? One of my housemates is a designer for CDL! And she specifically picked me to model one of her designs this year! :)

Ok enough chit-chat, here are some pictures from behind-the-scenes:











Tuesday, April 04, 2006

Assignment #8 - Option 2

According to Kraut et al, people use visual information to obtain situational awareness and to help with conversational grounding. Situational awareness consists of seeing bodies and their environment, and using this information to maintain current knowledge of the status of the task. Visual information also helps with conversational grounding, the knowledge shared by speakers.

Kraut et al first acknowledge that it is too difficult for video systems to relay all of the visual information that we have in face-to-face settings. Instead, they wish to pinpoint the specific visual cues required to perform group tasks, acting under the assumption that if these important cues are presented through video, the group task will be more likely to succeed. Kraut et al chose bicycle repair for their experiments, an activity which falls into the category of a mentor task: a task in which a person performs the physical actions while guided by the speech of another. Kraut et al divided the visual information in the experiments into the categories of heads/faces, bodies/actions, task objects, and environment. They observed the ways in which subjects used these categories in keeping track of task status and others' actions, identifying the focus of others' attention, communicating successfully and quickly, and keeping track of others' understanding.

Kraut et al address the problems in the current video system of "talking heads" - videoconferencing that only provides the cue category of heads/faces. They suggest that this limited visual information requires that people in videoconferences use the same language as they would on the telephone. With this in mind, their experiment compares task performance across the following three media: face-to-face, audio, and videoconferencing with a view of hands, actions, objects, and environment, but not heads.

The conclusion from both their experiments was that while the video system did not affect the final performance, it facilitated understanding between the subjects. My questions from the experiments are the following: What was the reason behind the experimenters initially deciding that helpers could only view objects that were in the worker's field of vision, rather than providing a fixed camera in the room? Why do you think that factors such as eye gaze are unimportant in conversational grounding? Do you believe that further research is possible in testing different views in videoconferencing to see which is best?

Assignment #8 - Option 2

(I find it ironic that in an assignment discussing — at least partially — why no one uses videoconferencing, one person chose the option to use the medium).

In this article, Kraut et al. focus on the way people use visual information to enhance and improve communication in collaborative physical tasks. Webcams are rarely used compared to other forms of communication, and this paper looked at how video can improve to become a better — and potentially more popular — tool.

According to this paper, visual information serves two different functions: situational awareness and conversational grounding. Situational awareness is the monitoring of other participants’ actions in order to regulate the task; it involves thinking about the present and future direction of the communication based on the visual information. Conversational grounding is the idea that certain information becomes grounded over the course of the communication and that people tend to rely on this grounded information more. The study found that the video system did not improve performance (for completing a task), but did change its users’ actions and thoughts about the task. In terms of track 1 and track 2 signals, the system created more track 2 signals than expected, normally a sign of the establishment of grounding, but in this case likely because of the removal of certain cues through the technology. The research showed that communication combining audio and visual information did not result in better performance than audio-only communication.

I think there are a couple of things that could be further studied. First of all, the constraints of the camera — stationary, location in the room, zoom, etc… — could have a major impact on the type of information people gain and even what information is grounded. Secondly, I think gender was an important factor that was somewhat overlooked. Gender can play a major role in communication because it can affect how people communicate (across gender, with mixed-gender or same-gender groups, etc…). Males and females tend to have different speech patterns, not to mention very distinct audio and visual information.

I also have a few questions. Would the outcome of this experiment change if the participants didn’t have to complete a task but instead had to do something more “social,” such as talk about their lives (that topic was arbitrary)? Did/would gender have an effect on the outcomes? Could eye tracking be used (without being too invasive) to see if it affected collaboration?

I like bicycles

COMM 450 blog entry #8



This paper discusses two experiments that were run to assess aspects
of technologically mediated communication in goal-oriented tasks. The
variables that were controlled dealt with the shared media of two
participants, one of whom was a naive worker and the other of whom was
an "expert" instruction-giver.



Predictions were, essentially, that measured success at the tasks
would increase along a continuum of shared communication space ranging
from least successful with half-duplex shared audio to most successful
with person-to-person contact in a shared physical space. Between
these two extremes existed full-duplex shared audio and shared audio
and video wherein the expert could see a portion of the field of view
of the worker.



Surprisingly, there was little significant variation in success
between using full-duplex audio and video. Here I suggest a few
reasons for this which were not considered in the article, based on
the actions that an instruction-giver may take in a shared space that
may not be taken otherwise.



The article uses the opportunity to make physical gestures indicating
specific objects in a shared space in order to ground discussion of
tasks to be performed and the lack of an equivalent opportunity in a
remote shared-media environment as one reason that a side-by-side
environment is more effective. The availability of a shared visual
environment in the shared video setup helped by allowing deictic
references to items in the shared visual space. I suggest also that
it is easier to partake in interactive dialogue when the
instruction-giver has more than passive control over their field of
view. The experimenters have suggested that the limited field of view
of the video apparatus may have restricted its usefulness; I suggest
also that the fact that the instruction-giver was merely "along for
the ride" and not able to direct their field of view independently of
the worker was significant in the dialogue between instruction-giver
and worker.



One way of testing this would be to mount the video camera that the
worker wears in such a fashion that its field of view may be directed
remotely by the instruction-giver. This, coupled with the ability of
the worker to see the video image that the instruction-giver sees,
takes some of the burden off of the worker to position themself so
that the instruction-giver can see what they are talking about. It
also would provide some feedback to the worker as to what specifically
the instruction-giver is concerned with at a given moment; this would
permit a part of the shared physical environment circumstances to be
duplicated in the remote shared video environment.



Eye-tracking devices similar to those found in camcorders could also
be employed to focus the instruction-giver's camera (which would be
controlled by what the instruction-giver is looking at) or to "box" on
the instruction-giver's viewscreen what the worker is looking at and
to mark in a similar fashion on the worker's HUD what the
instruction-giver is looking at. This is, indeed, suggested as
important for design of such a system in the future within the paper.



This would introduce to the shared video environment further aspects
of some of the nonverbal, unspoken communication that is available to
those who are in a shared physical environment.



A further issue is that of the familiarity of participants with the
video equipment and its limitations. Greater facility at providing
good video input to the instruction-giver might be attained by giving
workers and instruction-givers a chance to experience the experimental
setup from each other's side as a part of a warm-up exercise. This
could help to defeat some of the asymmetry inherent in the remote
location of the instruction-giver.



Another suggestion I have would be to set up a similar scenario in a
fully virtual environment. It is common in multi-player first-person
computer games to allow players who are out of play to follow those
who are in play using different over-the-shoulder, through-the-eyes,
or roaming cameras. The issue of whether limitations in the video
feed were at fault for a lesser measure of success in the shared video
environment could be addressed by creating a task for a worker to
carry out in a virtual environment, such as running through a maze and
performing tasks at different points throughout it, with an "expert"
to help. Limiting the points of view which would be available to the
expert could provide further edification as to whether different
vantages on another's activities are more or less conducive to giving
instructions in completing a task.

#8 - option 2

In Kraut et al.'s article, the main focus was to study "the ways in which visual information is used as a conversational resource in the accomplishment of collaborative physical tasks." By varying the amount of visual information available in their experiment (which consisted of a worker and an expert helper completing a collaborative physical task--repairing a bicycle), Kraut et al. were able to study the part that visual information played in maintaining situational awareness as well as achieving conversational grounding (i.e. mutual understanding). In order to achieve situational awareness, both participants must be conscious of where they are in completing the task (i.e. how much more there is to do before the bicycle is repaired) and of what one another is doing at the moment. This enables them to coordinate their communication to the other's needs. Achieving conversational grounding requires 1) the helper to phrase their utterances so that the worker can understand them and their intended meaning, and 2) the worker to acknowledge that they have understood the helper.

The first prediction was that the worker-helper pairs who were co-present would achieve the highest performance on the task, followed by the pairs who used the video system devised by the experimenters, followed by the pairs who used audio cues alone. The second prediction was that the fewer the cues available to provide situational awareness, the more explicit the workers' requests to helpers would be.

Contrary to their predictions, it was actually found that the use of the video system (which consisted of mounting a camera on the worker's head so that the helper could see, from a first-person perspective, what the worker saw) did not improve performance significantly compared to the lack of the video system. The second hypothesis, however, was supported.

A concern I have is that the experimenters never really clarified what the shared visual space consisted of. Only by telling the participants what each of them can see will mutual knowledge and common ground be achieved. Perhaps this affected the outcome of the experiment?

Another concern that I have is that if the experimenters used a different kind of camera (e.g. one of higher quality with little bandwidth limitation), or positioned the camera differently, would that affect the outcome of the experiment?

Assignment #8- Option #2

Visual Information as a Conversational Resource in Collaborative Physical Tasks discusses videoconferencing and compares it to audio-only and physical presence scenarios. It describes two experiments with a collaborative task, bicycle repair, and analyzes performance and language issues as it varies across the different media.
Some ideas that were expected to be relevant in each situation are the need to comply with Gricean norms and the needs to maintain situational awareness and common ground. Situational awareness is directly influenced by the amount of visual information available because it relates to maintaining a person's mental model of the environment. Common ground is affected by both language and environment because all senses can gather information to ground two people.
The experiments evaluated the effectiveness of a head-mounted video system that allowed both the worker and the helper to see what the worker was looking at while repairing the bicycle. The first experiment compared solo performers and worker-helper pairs, with the latter varying between three media (audio-only and video with two types of audio). In this experiment, having video did not improve performance - either speed or accuracy - over having audio only. But the language used was different.
The second experiment combined the first one with a third state, that of physical co-presence. This side-by-side condition contained more efficient dialogue as well as better performance. It was again clear that video had shortcomings that canceled out most of its advantages.
To me, there seem to be two components to the problem of visual co-presence. One is the ability to maintain a full view of the environment that is rich enough and stable enough to closely mimic physical co-presence. The other, closely related, component is the interaction between the conversation partners, the worker and the helper. If there were lots of feedback between the two and this feedback were natural and easy to provide, the visual image of the environment would not need to be so rich and complex.
This brings me to the point of "affordances", which Kraut et al mention on p. 21. In the HCI sense of the term, affordance is an object's natural fit or function in its environment. For example, the most natural way to hold a cup is by its handle; that is its affordance. I'm not sure how related this is to the topic of videoconferencing, but it may be worthwhile for Kraut et al to expand on their discussion of affordances. How, in a very general sense, can a videoconferencing system be made to provide very natural, common-sense tools for its users? Maybe these systems should be highly specialized by task (such as bicycle repair) or maybe collaboration can be generalized.

#8 — option 2

The Kraut et al. paper revealed some interesting characteristics of visual co-presence, particularly as it operates as part of and separate from physical co-presence. Their results, which indicated that duplex video resulted in communication no more efficient than audio-only and still less efficient than FTF, suggest that there is something in physical co-presence that all existing implementations of visual co-presence are missing. Otherwise, the extra linguistic turns needed in visually co-present media to establish common ground would not be necessary. Their own observations also indicate that it may merely be an issue of finding the right implementation, such as systems that indicate data about focus or angle and context, rather than trying to approach the efficiency of FTF by just throwing more video at people, since their experiments and the experiments of others have shown that doesn't work. The problem with their video system (and other video systems) seems to be that people don't actually use visual information directly: they translate visual perceptions into implicit statements (e.g. "So-and-so is looking at the whoosit"), which can be much harder to deduce from video feeds than from physical co-presence (likely because of issues of depth, resolution, etc.), and so technological communication solutions should try to reproduce the types of data people deduce from their visual perceptions rather than the visual perceptions themselves.

Since the video system developed in the experiments appears to have not been a successful solution, what might be a more successful design? The conclusion of the paper lists some enticing suggestions and non-video oriented aspects of what might be successful, but how could these be synthesized into a real design? What would be a good balance of raw video data and data distilled from video or other input to present to users? How could such a system be robust enough to be useful, but still usable?

Monday, April 03, 2006

Assignment 8

Kraut et al's paper explores the function of visual information in communication that is focused on the completion of a collaborative physical task. First, visual information provides situational awareness - knowledge of the actions of the other participant and the progress of the task. This ability to draw conclusions about the state of the task allows people to plan out relevant utterances geared toward the completion of the task. Second, when people share the same visual information, they increase their common ground, allowing for more efficient communication. When no visual cues are present, one expects to observe many utterances which contain long, explicit descriptions and clarifications.

The researchers conducted two experiments to analyze the effect of visual technology on task-based communication, in which a worker was aided by an experienced helper with different technologies. Surprisingly, it was found that communication that utilized both audio and video did not result in substantially higher performance in the completion of the task than communication which just used audio. However, the types of utterances varied considerably, as predicted. The lack of visual information prompted workers to give more explicit descriptions and helpers to provide more statements of acknowledgement. Also, workers' descriptions were followed by help in the visual condition more often than in the audio condition because in the former, descriptions were viewed as implicit requests for help, while in the latter, they were interpreted as attempts to ground the helper.

One of my concerns (also mentioned by Kraut et al) is that the results may be too dependent on the specific type of video equipment used. How significantly might the data change if the camera were in a fixed position (on a tabletop, say) rather than attached to the worker's head? A constant view of the entire scene may allow for faster completion of the task. Could this be the reason why there was no notable difference between setups that used video and those that did not? Why did the experimenters choose to use a head-mounted camera rather than a fixed one?

I would also like to know whether gender played a role in the outcome of the experiment. For example, if the two experts in Experiment 1 consisted of one male and one female, was there a difference in how effectively they helped the workers? Similarly, were male or female workers more successful in completing the task efficiently?

Assignment 8, Option 2

The Kraut et al paper takes many ideas from Clark and considers them in the visual realm. Kraut et al are interested in seeing how different visual spaces affect performance and the efficiency of language use while doing a task. In all of their hypotheses they predicted that performance would be better and language more efficient in the spaces with the most visual cues – face to face being the “best,” then video conference, then audio conference. In general their hypotheses were correct, except that they sometimes found that the simple presence of visual cues made a difference regardless of whether participants were actually co-present or simply visually co-present. Based on all of their findings, the authors were able to make some great suggestions at the end as to how to improve video communication technologies.

One of the main points from Clark that Kraut et al consider is the effect of grounding. In the visual realm, grounding can be accomplished in many different ways. There can still be verbal input, in the form of acknowledgements and other utterances that show you share common ground. Grounding can also occur through the knowledge that you can see the same things. When the worker looked at something, the helper could also see it, and they shared that knowledge. Because the technology was not perfect, however, it sometimes needed to be clarified what could be seen and what was out of view. This is something the authors address in their implications.

I am a bit concerned about the whole premise behind the experiment – the participants were given a small benefit for participating, which is common practice, but they were also promised a $20 prize if they did the task the fastest and the best. I am curious whether you believe, or saw evidence, that this may have affected the participants in any way. Being motivated to go fast might have caused them to act in ways they wouldn’t have otherwise. In many collaborative discussions, there is no prize for getting done quickly, and participants are more at ease. Do you think the promise of reward/stress of going quickly affected the data or experiment in any way? (If allowed to take their time, might the results have been different?)

I agree with your concern about not being able to include any data about non-verbal communication. It is clear from this experiment how beneficial it is to be able to see each other, and part of this is due to the occurrence of non-verbal communication such as gestures or emotions. I believe that non-verbal communication is critical to understanding video communication. What do you think you might have found out if you could have incorporated data from their non-verbal communication? For example, if you could have included gestures, or studied how they reacted to emotion, how do you think this would have affected visual/non-visual communication.