Keywords: asynchronous, email, written comments, draft improvement, comment effectiveness, revision effectiveness, Faigley and Witte’s taxonomy, surface changes, text-base changes, research study, content analysis
One goal of writing centers is to improve student writing, but we often don’t have a paper trail of the results of a session. However, email tutoring allows a record of first drafts, comments, and revisions if a student resubmits their paper. Our research is designed to tap into the potential of this record to study the overall effectiveness of email comments. In this presentation, we will discuss our work examining asynchronous email tutoring and how we determined whether tutor comments on papers emailed to our writing center were effective.
In our research, we utilized papers that were submitted twice to the Writing Center, categorizing the tutor comments made on the first draft and the respective changes on the second draft using the taxonomy created by Faigley and Witte (1981). We used that taxonomy and Stay’s (1983) “improvement ratio” to gauge draft improvement: how do writers respond to tutors’ comments, and do papers improve overall? By examining the changes made (or not made), we can determine the comments’ effectiveness and which tutoring strategies elicit effective revisions.
Our research is especially relevant to any writing center engaged in email (and face-to-face) conferencing. Ideally our study can educate writing center directors and tutors and encourage them to implement our findings in their centers, allowing writers to take flight. We know you have many options when it comes to conferences today and so we thank you for soaring with us! Have an elevated day!
Type of Source: Conference Presentation
Presenters: Courtney Buck, Emily Nolan, Jamie Spallino
Year of Presentation: 2019
Title of Presentation: “Look Out Below!”: The Effectiveness of Email Comments
Conference: East Central Writing Centers Association (ECWCA)
Location of Conference: University of Dayton, Dayton, Ohio
Introduction [slide 1][Emily] Welcome Aboard! Have any of you ever wondered what happens to a student’s paper after a session? How many of a tutor’s suggestions will a student adhere to? How many changes will a student make? Will a student make any changes at all? Our research project, entitled “Look Out Below!: The Effectiveness of Email Comments,” strives to answer these questions and more.
My name is Emily Nolan, and this is Courtney Buck and Jamie Spallino. We are first-year English majors from Wittenberg University in Springfield, Ohio. This year we had the wonderful opportunity to conduct research with the guidance of the Director of the Wittenberg Writing Center. This opportunity was made possible through “First Year Research Award” (FYRA) scholarships. The purpose of these scholarships was to pair motivated students with faculty members to work passionately together on an exciting research project during a student’s first year at Wittenberg.
As many of you may have realized from our opening, one of the challenges in writing center practice is that often we don’t have a paper trail of the results of a session. However, email tutoring allows a record of first drafts, comments, and revisions if a student resubmits their paper. Our research is designed to tap into the potential of this record to study the overall effectiveness of email comments. In this presentation, we will discuss our work examining asynchronous email tutoring and how we determined whether tutor comments on papers emailed to our writing center were effective.
Relevant Scholarship [slides 2-3][Jamie] During the first semester of the FYRA program, we grounded ourselves in the writing center field by familiarizing ourselves with the literature. We decided that two articles had a direct relation to our research: Bowden’s (2018) “Comments on Student Papers: Student Perspectives” and Williams’ (2004) “Tutoring and Revision: Second Language Writers in the Writing Center.” Bowden (2018) explored an idea similar to ours, combining video recordings and post-session interviews to examine the comments students ignored. For example, why did a student implement certain suggestions from the tutor but not others? By extending Bowden’s (2018) work to asynchronous email tutoring, we examined not only which comments students heed but also the revisions they make in response.
We then drew from Williams’ (2004) article on the revisions made by L2 learners in the writing center. As she discovered, changes in a draft don’t always lead to an overall improvement for the paper.
Starting in January, we categorized the changes we saw between drafts using a combination of Faigley and Witte’s (1981) taxonomy and Stay’s (1983) “improvement ratio.” We analyzed the changes made (and not made) to gauge draft improvement. Which comments elicited change? How did those changes compare to the tutor’s initial request? Did the changes improve the paper? And, determined as a combination of those factors, were the comments ultimately effective?
Our Center [slides 4-5][Emily] Our study was conducted at Wittenberg University, a small private institution with roughly 2,000 students. The Writing Center employs an average of 30 tutors, who must complete a semester-long training course before they begin working. Our center runs through WCOnline, the website where students can schedule either a face-to-face or an email session with a tutor. When scheduling an email session, a student must choose an hour-long time slot and attach their paper to the website so a tutor can make comments directly on the document. [Courtney] Our email sessions are structured in three general sections: the front note, the side comments, and the end note. The front note is where the tutor introduces themselves to the writer, thanks the writer for sending in their paper, and addresses the concerns the writer identified. The side comments are where the tutor leaves suggestions and other comments meant to encourage the writer to improve the paper. These side comments were the target of our research. The tutor can also use Track Changes, a function in Microsoft Word. In the image on the screen, a consultant has used Track Changes within a student’s paper, and their changes appear in red within the text. With Track Changes, the tutor makes a generally small, in-text change that the writer can choose to either accept or decline. Lastly, the end note is where the tutor gives a summary of their comments, thanks the writer again for utilizing the Writing Center, and encourages the writer to schedule another session.
Methodology [slide 5][Emily] To begin our examination, we took a sample of applicable papers from students who utilized the Writing Center. Our complete sample set includes 43 folders of submitted papers. For this presentation, we decided to examine a portion of those papers, and this research includes statistics from 24 papers. We plan on categorizing and examining the rest of these papers for future presentations.
Because our research involved papers written by current students, we gained the approval of the IRB at Wittenberg by having our mentor replace the names of the students with letters of the alphabet to ensure confidentiality.
Faigley and Witte’s (1981) Taxonomy [slides 6-15][Emily] As we mentioned above, we utilized the taxonomy created by Faigley and Witte (1981).
Figure 1: A Taxonomy of Revision Changes
Figure 1 shows three levels of revision changes, as described below:
- Revision Changes (Level 1)
  - Surface Changes (Level 2)
    - Formal Changes (Level 3)
      - e.g., spelling; tense, number, and modality; format
    - Meaning-Preserving Changes (Level 3)
  - Text-Base Changes (Level 2)
    - Microstructure Changes (Level 3)
    - Macrostructure Changes (Level 3)
According to this taxonomy, there are two main types of revision changes: Surface Changes and Text-Base/Meaning Changes.
Surface Changes [slides 7-8][Emily] As the name implies, Surface Changes do not alter the meaning of a sentence or text. There are two subcategories under Surface Changes: Formal and Meaning-Preserving. Formal changes include spelling, tense, and format. Meaning-Preserving changes do not alter the meaning of a sentence. We have some examples of these on the PowerPoint above. [Courtney] In our Formal example, the tutor says, “I think your quotes may need to be set off using a colon instead of a comma.” They’re just making a simple sentence-level request of the writer by asking them to make a punctuation change. Our Meaning-Preserving example says, “I would suggest giving a definition to this word since many other musical terms are defined in this essay.” Here, all the tutor is asking for is a definition. The word is already there, so by providing a definition, no new meaning is being added to the sentence.
Text-Base Changes [slides 9-11][Emily] On the other side of the taxonomy, we have the changes that do alter the meaning of a sentence or text. Under Text-Base changes, there are two subcategories: Microstructure and Macrostructure. Microstructure changes alter the meaning of a sentence or small passage, but do not change the summary of the entire text. Macrostructure changes, on the other hand, do alter the meaning of the entire text. We have some examples of Microstructure and Macrostructure changes on the PowerPoint above. [Jamie] In this Microstructure example, the tutor says: “This quote has been used previously. Would it be effective to use another portion of that same article or to change this paragraph somewhat?” In this case, the student would be changing the support for a certain point but not altering the main point of the paragraph, or that of the paper as a whole.
In this Macrostructure example, on the other hand, the tutor says: “I still think you need a concluding paragraph to restate your thesis.” In this case, the tutor is asking the student to add an entire paragraph to their paper, therefore adding a new facet to their argument and changing the paper as a whole.
Additional Categories [slides 12-15][Emily] As you may have noticed, Formal, Meaning-Preserving, Microstructure, and Macrostructure are umbrella categories that are further broken down into different types. For this presentation, we elected to examine only the four main categories, both for simplicity and to get a broader look at our data.
Before we began categorizing the papers individually, we spent a few weeks norming the same paper to make sure we were all comfortable utilizing the taxonomy.
As our research progressed and we began categorizing the comments, there were a few things we decided to add to Faigley and Witte’s (1981) taxonomy. For one, we found that some of our comments did not fit into any of the four categories, so we added two categories of our own. We added Praise, which is a pretty self-explanatory category. We also added Sayback, the category for comments that were mere reader-response statements and were not asking for anything in particular. What is important to note about these two additional categories is that unlike the four main categories from Faigley and Witte (1981), neither Praise nor Sayback comments explicitly request a change from the writer. We have examples of both Praise and Sayback on the PowerPoint above. [Courtney] Praise is pretty straightforward. The tutor says, “I really like this idea of available resources!” They like what the writer is doing and offer some motivation.
For Sayback, the tutor says, “Then she does have her own kind of freedom it seems.” This comment is likely from a literary analysis, and the tutor isn’t directly asking for a revision here. They’re just restating what they think the writer’s argument is and saying it back. [Emily] Furthermore, we decided to split up comments that included more than one category. For example, if a comment led with Praise and then proceeded to ask for a Formal change like a change in tense, we split the comment into two parts: one for Praise and one for the Formal change. We also decided to categorize end notes if they included a suggestion that was not mentioned in any of the side comments. Additionally, after we categorized the comments, we categorized the revisions the students made as higher, lower, or even, based on Stay’s (1983) improvement ratio. Higher means the revision went beyond what the tutor was asking for, and lower means it fell short; even means the student made the same level of revision the tutor suggested.
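For readers who think in code, the coding scheme above can be sketched as follows. This is a minimal illustration only: the category names come from Faigley and Witte (1981) plus our two additions, and the higher/lower/even comparison follows Stay (1983), but the data structures and function names are hypothetical, not the actual coding instrument we used.

```python
# The four main comment categories from Faigley and Witte (1981),
# plus the two categories we added (Praise and Sayback).
CATEGORIES = {
    "Formal",              # surface change: spelling, tense, format
    "Meaning-Preserving",  # surface change: meaning unaltered
    "Microstructure",      # text-base change: local meaning
    "Macrostructure",      # text-base change: whole-text meaning
    "Praise",              # our addition: no change requested
    "Sayback",             # our addition: reader-response restatement
}

def classify_revision(requested_level: int, revised_level: int) -> str:
    """Compare the level of revision made against the level the tutor
    requested, per Stay's (1983) improvement ratio."""
    if revised_level > requested_level:
        return "higher"  # student went beyond the request
    if revised_level < requested_level:
        return "lower"   # student fell short of the request
    return "even"        # student matched the request
```

For example, a student who responds to a sentence-level request with a whole-paragraph rewrite would be coded "higher," while a student who ignores the request would be coded "lower."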
Results [slides 16-18][Jamie] So, with all the categorization finally complete, I’m sure you’re all just as excited as we were to see how the numbers came out.
First up, we have the average number of tutor comments per paper. Because there was significant variation in the length of the drafts we analyzed, the number of comments on each individual paper varied greatly as well, ranging from 4 to 47. Once we computed the average, however, we ended up with a nice, round 17.
We organized the data based on our categorizations using the modified version of Faigley and Witte’s (1981) taxonomy, and you can see the resulting graphs on your handout as well as on the PowerPoint. The largest proportion of comments (approximately 38.97%) requested Microstructure Changes, with Formal Changes the runner-up at approximately 17.16%. Though there were relatively few Sayback and Praise comments, and we can’t always assume the effects of those comments, we have reason to believe they are important categories to examine nevertheless, which we will discuss later.
Out of those approximately 17 comments per draft, an average of 9.083 directly elicited revisions. When we compared the change requested with the revision on the second draft, we discovered that a vast majority of these revisions—78.91% to be exact—were equal to the revision the tutor requested.
After all was said and done, we classified a comment as “effective” when it did all of the following: elicited a change, effected a revision higher or equal to its request, and caused an improvement in the paper. Because Praise and Sayback comments don’t request revisions, we didn’t include them in this number. Based on these criteria, our calculations show that 56.02% of comments that directly asked for change were effective.
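The effectiveness criterion above can be expressed as a short sketch. This is an illustration under stated assumptions: the `Comment` record and its field names are hypothetical, but the logic mirrors the three criteria we used (elicited a change, revision higher or even relative to the request, and the paper improved), with Praise and Sayback excluded from the denominator.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    category: str         # e.g. "Formal", "Microstructure", "Praise"
    elicited_change: bool  # did the student revise in response?
    revision_ratio: str    # "higher", "even", or "lower" (Stay, 1983)
    improved_paper: bool   # did the revision improve the paper?

def is_effective(c: Comment) -> bool:
    """A comment is effective if it elicited a change, the revision met
    or exceeded the request, and the paper improved as a result."""
    return (c.elicited_change
            and c.revision_ratio in ("higher", "even")
            and c.improved_paper)

def effectiveness_rate(comments: list[Comment]) -> float:
    """Percentage of change-requesting comments that were effective;
    Praise and Sayback comments are excluded, since they do not
    explicitly request a change."""
    requesting = [c for c in comments
                  if c.category not in ("Praise", "Sayback")]
    if not requesting:
        return 0.0
    return 100 * sum(is_effective(c) for c in requesting) / len(requesting)
```

Applied over all 24 papers, this is the calculation that yielded our 56.02% figure.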
So, while we’re glad to see the times when our tutors’ comments are effective, we have to wonder: what makes them ineffective? While our study is limited because we can’t infer causality, we did notice some interesting correlations. For example, many of the lower-level revisions occurred when a student deleted the entire sentence a comment referred to rather than simply making the change. In that case, the student not only declined to revise based on the tutor’s comment but also removed important information, negatively impacting the paper as a whole.
Another pattern we noticed was the effect of Sayback comments. We didn’t expect them to directly elicit revisions in the papers, but our data implies otherwise. Over the course of 24 papers, we noticed several instances in which Sayback caused writers to revise their papers, one of the largest and most interesting being several paragraphs added to a draft by Student O. The effects of Sayback comments support a theme that spans much of writing center literature: the idea of responding as a reader. Even though Sayback doesn’t directly ask for a revision, we assume that writers see it as interest and an opportunity to make sure their point is being properly conveyed.
Though our numerical data doesn’t yet extend to these patterns, our continuing research on this topic will focus on the exceptions as well as the commonalities to help guide the development of writing center practice.
What Does This Mean for Writing Centers? [slide 19][Courtney] Now that we have the data, we can ask the big question: What does this mean for Writing Centers? As mentioned before, about 56% of the comments that requested a revision were effective. In writing centers, we strive to help writers and want to provide them with the best services possible. Tutors are knowledgeable and give advice they believe will help not just the paper improve, but the writer improve their skills as well. If comments are going to help, we want writers to make the revisions tutors suggest. 56% effectiveness is clearly not where we want to be in that case. So, why are writers not making the changes?
There are many possible reasons for this. Confusion over what a comment is asking for, disagreement with the tutor’s request, and lack of knowledge about how to actually make a certain kind of revision could all contribute to changes not being made. In her study, Bowden (2018) found that confusion was one of the largest factors contributing to writers’ lack of revision. In addition, sometimes writers simply may not have the time or desire to put in the effort to make the suggested revision.
Confusion may not pertain solely to what a tutor’s comment is asking for. Technology is at the basis of email sessions. In Wittenberg’s Writing Center, we use Microsoft Word. If students don’t have Track Changes markup displayed, they may be unable to see the tutor’s comments; this is what we believe happened with Student A’s drafts. Student A made no revisions between the first and second draft and did not accept the tracked changes, we presume because they could not see the tutor’s comments. We don’t believe this is a particularly common issue, but it is still worth addressing for those of you who offer email sessions.
Limitations and Future Research [slides 20-21][Courtney] As with any research, there were limitations to our study. We reviewed about half of the papers from the sample, but as noted before, that means there are still around 20 papers with uncategorized and unanalyzed data. We believe our findings are still representative, though we do intend to go through the rest of the papers. The categories in Faigley and Witte’s (1981) taxonomy of revision changes were another limitation. Not every request for revision fell perfectly into one of the categories, so we had to use our judgment. Also, as mentioned before, we decided not to further categorize the comments into the lowest subcategories (spelling, addition, deletion, etc.). This is something we hope to do with both the categorized and uncategorized drafts. [Courtney] A few ideas for further research came about while examining the data and discussing trends we noticed in these email sessions, as well as in face-to-face sessions from our personal experience in the Wittenberg Writing Center. Based on what we have seen during our time in the center and our experience with our own papers, writers seemed to come in with less complete drafts for email sessions and more complete drafts for face-to-face sessions. We wondered whether one mode may be more conducive to productivity than the other, and whether the completeness of a draft may affect how much change is made. Now that we have a sense of which kinds of tutor comments are made most often in an email session, two questions arise: which kinds of comments are actually most effective, and how can we improve the percentage of comments that are effective? These are questions we plan to look into further and hope to have answers to when our research is complete.
If any of you are interested or will be attending, we hope to present more of our research at the National Conference on Peer Tutoring in Writing in Columbus this October and to write and publish a complete article about our findings sometime in the next year.
Research into email sessions is a very interesting new topic that gives writing centers potential for growth. We hope you’ve enjoyed our presentation, that it has been informative and helpful, and that it has given you ideas for your own writing center. We know you have many options when it comes to which presentations to attend, so we thank you for flying with us! Have an elevated day!
Bowden, D. (2018). Comments on student papers: Student perspectives. The Journal of Writing Assessment, 11(1).
Faigley, L., & Witte, S. (1981). Analyzing revision. College Composition and Communication, 32(4), 400-414.
Stay, B. (1983). When re-writing succeeds: An analysis of student revisions. The Writing Center Journal, 4(1), 15-28.
Williams, J. (2004). Tutoring and revision: Second language writers in the writing center. Journal of Second Language Writing, 13(3), 173-201.