
Assessing pragmatic aspects of L2 communication: Why, how and what for

1 Introduction

Assessment can be defined as expressing a value judgement on something or someone, that is, as explicitly indicating where a given person or thing stands in terms of their intrinsic and/or perceived qualities. It is a multi-faceted phenomenon not only because it may focus on the emotional reaction that the object of assessment may elicit, the properties it displays as a member of a given category, and/or its social-normative adequacy and appropriacy in a given context (these concepts are called Affect, Appreciation and Judgement in Appraisal Theory, within the framework of Systemic Functional Linguistics; see www.grammatics.com/appraisal), but also because it is an act of reflection and communication, combining a careful consideration of the object of assessment and the expression of the opinion formed and attitude developed as a result of that careful examination (Hunston 1994: 191).

Assessment is also a practice that affects interpersonal relationships. Since it consists in taking a favourable or unfavourable stand on what is being assessed, and thus conveying a positive or negative description of it, it may, respectively, enhance or threaten the positive face of the person whose behaviour or work is being assessed. Finally, assessment may impact the scope of action of the recipient of assessment. That is, positive assessment may entitle them to a given right and/or encourage them to take a future course of action, while negative assessment may involve depriving them of that right or discourage them from embarking on a given plan. Therefore, the outcome of assessment has implications for their negative face too.

In general terms, the rationale of assessment may be said to comprise at least three aspects: raising awareness (informing), affecting behaviour (determining future courses of action), and allocating resources (assigning rewards). First, assessment makes explicit what something is worth: it reveals or clarifies its value with regard to given standards. In this respect, it is an interpretive description of the object of assessment, which provides insights into its nature, strengths and weaknesses. Second, it is a way to determine how suitable or successful the object of assessment is with respect to the purposes it is supposed to serve. This can then serve as the basis for deciding whether to maintain the object of assessment in its present state or whether, and in what respects, to modify it. Third, the information gathered through assessment may be used to decide whether and how to reward or penalise the recipient of assessment. The outcome of assessment can thus serve as positive reinforcement or negative punishment.

Of these three aspects, the first, raising awareness, tends to be the focus of linguistic research as a means to the end of better accounting for patterns in language and language use. Indeed, as a source of information about a linguistic phenomenon, assessment involves defining (i.e. identifying by delimiting) its object (e.g. a genre); detailing the features that are most likely to accurately reveal its value (e.g. sequencing of topics); and establishing the criteria, or comparable objects of assessment, against which to assess those features (e.g. cohesion). But all three aspects of assessment are relevant to language education.
Kohn (2011), however, observes that academic assessment is essentially a two-part process, consisting in gathering and sharing information, and adds that neither part requires testing or grade assignment, the latter defined as a system of rewards and punishments. Indeed, first and foremost, assessment “serves to gather information about students’ understanding and skills” (i.e. for instructional purposes; Cheng and Fox 2017: 7). Second, as a way to highlight how successful learners’ and/or teachers’ performance may be, assessment is meant to monitor and influence behaviour (i.e. assessment of and for learning; Cheng and Fox 2017: 4) so that later study habits and pedagogical interventions may be suitably planned for future good, or even better, performance (Black and Wiliam 1998: 2; see, e.g. Ishihara 2010). This applies to formative or summative assessment, which takes place during or after pedagogical intervention, respectively, rather than to placement or diagnostic assessment, which is carried out before. Finally, the outcome of assessment of learners’ and teachers’ performance may serve to record and ratify the validity of the object of assessment (i.e. assessment for administrative purposes; Cheng and Fox 2017: 8), and to reward the behaviour of the stakeholders involved (e.g. good marks for students and good standing for teachers), thus having interpersonal and also social effects (Messick 1989).

Additional issues are involved in assessment in language education, such as selecting and training assessors; setting the threshold at which assessment criteria can be said to express a positive value; determining the feasibility of implementing assessment; establishing procedures for interpreting the findings from assessment practice; and determining how to report on the findings, how to use them in future courses of action, and how long their validity will last. These last two aspects are called assessment decisions in language education (Taylor and Nolen 2008); the other components of assessment activities are events, tools and processes.

Assessment in linguistic research and language education is a challenging enterprise. The main reason is that the object of assessment, namely language, is a composite construct, organised at several levels simultaneously (e.g. grammar, lexis, meaning, letters/sounds). The task becomes harder when it comes to assessing overall communication skills (i.e. language use), because additional variables come into play (e.g. structure, amount of content, rhetorical strategies) as relevant to the context of communication, and contribute to the degree of success of an interactional event.

The adequacy of an interactional event depends on the participants’ pragmatic skills, that is, “the ability to use language effectively in order to achieve a specific purpose and to understand a language in context” (Thomas 1983: 92). These skills are based on “knowledge of the appropriate contextual use of the particular language’s linguistic resources” (Barron 2003: 10), which is put into practice in social interaction in adherence to shared values and established practices. This goal-oriented receptive and productive interactional activity, which produces effects (e.g. (mis)understanding, social harmony/friction) that matter to communication participants, is shaped by socio-cultural conventions.
These are norms of interaction, which people are socialised into as members of given socio-cultural communities, and which often operate below the level of consciousness.

Assessment of pragmatic skills, therefore, involves describing and evaluating not only what language is used by interactants, but also how it is used, why and what for, with whom and when (cf. Bardovi-Harlig 2013: 68), how it is adapted across contexts, and with what effects. It is thus a way to determine in what ways and to what extent communication succeeds or fails from the point of view of language users (cf. Crystal 1997: 301) who are motivated by real-world interactional-transactional goals.

Although still relatively understudied (see Sydorenko et al. 2014: 20), the assessment of pragmatic skills is a growing area of research (e.g. Roever 2011) and pedagogy (e.g. Hudson, Detmer and Brown 1995), which has led to the design and development of test batteries of learners’ pragmatic competence (e.g. Roever 2005), as well as of methods for gauging pragmatic skills such as discourse completion tasks (DCTs), multiple choice tasks and retrospective verbal reports (e.g. Hinkel 1997; Cohen 2004). There are, however, at least three sub-fields that remain especially under-explored: the assessment of extensive discourse (but see, e.g. Sydorenko et al. 2014), teacher-based assessment of learners’ pragmatic skills in the classroom (but see, e.g. Ishihara 2009, 2010), and perception studies on the effects of discourse on the addressee (but see, e.g. Wolfe et al. 2016).
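For readers unfamiliar with DCTs, the following minimal sketch illustrates the kind of data such instruments yield: a scenario elicits a target speech act, and raters score each learner response on a Likert-type appropriateness scale. The scenario, scale and scores are invented for illustration only and are not drawn from any of the studies cited here.

# A minimal, hypothetical sketch of DCT-based assessment data.
from statistics import mean

dct_item = {
    "speech_act": "apology",
    "scenario": "You borrowed a classmate's lecture notes and returned them a week late.",
    "prompt": "You say to your classmate: ...",
}

# Three raters score each response from 1 (very inappropriate) to 5 (fully appropriate).
ratings = {
    "learner_01": [4, 5, 4],
    "learner_02": [2, 3, 2],
}

print(f"Scenario: {dct_item['scenario']}")
for learner, scores in ratings.items():
    print(f"{learner}: mean appropriateness = {mean(scores):.2f}")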
Given the vastness of the field of pragmatics, on the one hand, and the multi-facetedness of assessment on the other, each contribution is bound to be selective, that is, focused on specific pragmatic aspects. Thus, pragmatics assessment research may target different types of discursive behaviour, such as errors (e.g. Janopoulos 1992; Beason 2001; Wolfe et al. 2016) or speech acts, including apologies (e.g. Tajeddin and Alemi 2014), refusals (e.g. Alemi and Tajeddin 2013) and compliment responses (e.g. Alemi, Eslami and Rezanejad 2014). It can also be relevant to different competences, namely pragmatic-declarative knowledge (e.g. Bardovi-Harlig and Dörnyei 1998), and metapragmatic-“reflective” knowledge and pragmatic ability or procedural knowledge (e.g. Ishihara 2009). It may be oriented toward the analysis of language users’ productive and/or receptive communicative skills (e.g. Koike 1989), as well as toward their ability to judge the acceptability of given discursive events (e.g. Bardovi-Harlig and Dörnyei 1998). Also, it may explore the technical (de)merits of language production in terms of its linguistic and discursive features (e.g. Krulatz 2015; Taguchi 2006), and/or its contextual effects, that is, the cognitive, emotional and behavioural reactions it elicits (e.g. Janopoulos 1992), and/or the connection between the two (e.g. Scher and Darley 1997). Pragmatic assessment may consider the value of communicative practices from the point of view of researchers, who want to be able to account for discursive behaviour (research assessment; e.g. Bektas-Cetinkaya 2012), or that of teachers, who need to provide feedback to students at the end of a teaching-learning cycle (classroom assessment; e.g. Ishihara 2009). Alternatively, it may analyse the design, implementation, characteristics and effects of the assessment process itself (e.g. Alemi and Khanlarzadeh 2017). For example, it may examine the assessment practices of teachers (e.g. Alcón 2015), other experts (e.g. Härmälä 2010; Sirikhan and Prapphal 2011), ordinary language users (e.g. Culpeper et al. 2010; Schauer 2017; Chen and Liu 2016), or learners/trainees (e.g. Ishihara 2010). Finally, it may focus on the suitability and reliability of different types of rating instruments, such as rating scales (e.g. Youn 2018), comparisons of texts (e.g. Wolfe et al. 2016) and open-ended comments (e.g. Economidou-Kogetsidis 2015), as well as on the variety of rating criteria adopted: positive traits like appropriateness (e.g. Hacking 2008), negative traits like unacceptability (e.g. Bektas-Cetinkaya 2012) and neutral traits such as phrasing (e.g. Chen and Liu 2016).

To sum up, implementing and validating suitable assessment procedures for gauging learners’ pragmatic competence and performance is crucial for both research and teaching purposes, yet it is fraught with difficulties. Research on pragmatics assessment strives to maximise the accuracy, fairness, reliability, validity and usefulness of assessment instruments and methods for the benefit of all the stakeholders involved. This special issue of Lodz Papers in Pragmatics represents a small contribution to this strand of research.

2 On this special issue

Motivated by the above considerations, we held an international conference – Exploring and Assessing Pragmatic Aspects of L1 and L2 Communication: From Needs Analysis through Monitoring to Feedback (Dept. of Linguistic and Literary Studies, University of Padua, Italy, 25-27 July 2018) – with the goal of promoting a focused reflection on the description, exploration and assessment of pragmatic competence across registers, text types and contexts. The participants discussed topics ranging from how teacher (non)nativeness may influence the teaching of target-language pragmatics, through how to foster EFL teacher trainees’ pragmatic awareness, to how to approach the assessment of L2 learners’ pragmatic appropriateness. This issue of Lodz Papers in Pragmatics, titled Assessing pragmatic aspects of L2 communication: reflections and practices, includes four conference presentations as well as two papers authored by scholars who, being strongly interested in the conference themes, generously accepted to contribute to our publication project.

The issue opens with a paper by Andrew D. Cohen, “Issues in the assessment of L2 pragmatics”, which provides an overview of current issues in the assessment of pragmatics, an increasingly important yet not well-established area of investigation (Cohen 2019). The author discusses the abilities and communicative practices that should be assessed in L2 pragmatics (e.g. fluency, sociolinguistic skills), the factors that might influence pragmatic behaviour (e.g. L1 background, prosody, dysfluency), and the trade-off between the feasibility of obtaining pragmatic data by means of a given method (e.g. DCTs, oral production) and its relevance to pragmatic assessment. Cohen also draws the important distinction between assessing pragmatics for research purposes and assessing it for classroom instruction. With regard to the former, he examines the benefits of mixed methods (i.e. combining qualitative and quantitative approaches) and of different data elicitation procedures (e.g. naturalistic data, data elicited through DCTs), and the importance of choosing the norms against which to evaluate the appropriateness of a given pragmatic performance.
Choosing such norms involves identifying a target variety of English (e.g. British English, ELF), ensuring rater calibration and consistency, and drawing on the judgement of experts in a given domain (e.g. tourism). As regards the assessment of pragmatics for classroom instruction, Cohen discusses face validity, that is, the extent to which language learners perceive a given assessment method as valid and enjoyable, and the value of collecting verbal report data from respondents as a means of validating the assessment measures. The author concludes by calling for more collaboration between instructors and learners, with a view to giving more prominence to the assessment of pragmatics in the classroom.

Karen Glaser’s study “Assessing the L2 pragmatic awareness of non-native EFL teacher candidates: Is spotting a problem enough?” focuses on language learners’ awareness of grammatical (in)accuracies and pragmatic (in)felicities. Replicating and adapting Bardovi-Harlig and Dörnyei’s (1998) study, Glaser administered a metalinguistic judgement questionnaire to 84 advanced German EFL learners who were training to become primary school English instructors. The participants were presented with 15 scenarios, the last part of which might contain a pragmatically infelicitous item, a grammatically incorrect one or no problem at all. They were asked to indicate instances of incorrectness and/or inappropriateness, to identify the nature of the grammatical vs pragmatic violation, if present, and to suggest a repair. Applying Flöck and Pfingsthorn’s (2017) Signal Detection Matrix, Glaser reports participants’ Hits, Misses, False Alarms and Correct Rejections. The participants correctly identified inaccuracies, infelicities and unproblematic sentences 75% of the time, performing best at recognising unproblematic utterances and worst at recognising grammatical errors. On the other hand, while they successfully repaired most grammatical errors, they had difficulties repairing pragmatic infelicities, often creating new problems in the process. Her analysis shows that correct problem identification cannot necessarily be equated with adequate repair abilities, at least for pragmatic problems; that situations exemplifying excessive politeness and formality were particularly challenging; and that, for both the grammar and the pragmatics items, responses varied considerably across individual situations. The author argues that, when comparing ‘grammar’ to ‘pragmatics’ situations, it is crucial to examine the specific phenomena involved, since their respective, highly variable challenges may influence the overall findings. She also suggests that it may be useful to assess learners’ recognition and repair of overpolite/overformal utterances, which also illustrate pragmatic infelicities, and concludes that non-native English-speaking trainee teachers may benefit from focused training in pragmatic awareness and production.
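For readers unfamiliar with this framework, the sketch below illustrates the underlying cross-tabulation with invented judgements; only the four category labels come from the study as described above, everything else is hypothetical.

# A minimal sketch of the Signal Detection logic described above.
# An item either contains a problem or not, and a participant either
# flags a problem or not; crossing the two yields the four categories.
from collections import Counter

def sdt_category(has_problem: bool, flagged: bool) -> str:
    if has_problem:
        return "Hit" if flagged else "Miss"
    return "False Alarm" if flagged else "Correct Rejection"

# (item_has_problem, participant_flagged_problem) pairs -- invented data
judgements = [
    (True, True), (True, False), (False, True),
    (False, False), (True, True), (False, False),
]

counts = Counter(sdt_category(p, f) for p, f in judgements)
# Hits and Correct Rejections are the accurate responses.
accuracy = (counts["Hit"] + counts["Correct Rejection"]) / len(judgements)
print(counts)
print(f"overall accuracy: {accuracy:.0%}")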
In their paper “Rater variation in pragmatic assessment: The impact of the linguistic background on peer-assessment and self-assessment”, Sunni L. Sonnenburg-Winkler, Zohreh R. Eslami and Ali Derakhshan investigate the effect of language learners’ L1 backgrounds on both self-assessment and peer-assessment of pragmatic aspects of learner production (e.g. directness, politeness, formality). The authors had 10 MA-level students from different linguistic backgrounds studying ESL in the US complete two DCTs. The students were then asked to assess their own responses and those of their peers, and finally to provide an explanation for their decisions. Overall, the raters tended to give similar ratings to the same samples, and raters from the same language background showed a higher level of agreement than raters from different language backgrounds. When assessing their peers, most raters tended to evaluate samples by participants sharing their L1 in a similar way. When assessing themselves, the learners were sometimes more lenient than when assessing their peers, although findings were quite varied, showing no distinctive patterns. In line with previous research, this study indicates that there may be a link between linguistic background and rater scoring patterns. The authors encourage future research on the influence of raters’ personal characteristics on the reliability of their ratings.
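One simple way of probing such background effects, sketched below with invented scores, is to compare mean pairwise agreement within and across L1 groups; the groups, the rating scale and the distance-based agreement measure are illustrative assumptions, not the authors’ actual procedure.

# A minimal sketch of within- vs. between-group rater agreement,
# with invented 1-5 ratings of the same four DCT responses.
from itertools import combinations
from statistics import mean

raters = [
    ("L1_arabic",  [4, 3, 5, 2]),   # (L1 group, scores) -- hypothetical
    ("L1_arabic",  [4, 2, 5, 3]),
    ("L1_chinese", [3, 3, 4, 2]),
    ("L1_chinese", [2, 3, 4, 1]),
]

def agreement(s1, s2):
    """1.0 = identical ratings; 0.0 = maximally distant on a 1-5 scale."""
    return 1 - mean(abs(a - b) for a, b in zip(s1, s2)) / 4

within, between = [], []
for (g1, s1), (g2, s2) in combinations(raters, 2):
    (within if g1 == g2 else between).append(agreement(s1, s2))

print(f"mean within-group agreement:  {mean(within):.2f}")
print(f"mean between-group agreement: {mean(between):.2f}")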
Bárbara Eizaga-Rebollar and Cristina Heras-Ramírez’s contribution “Assessing pragmatic competence in oral proficiency interviews at the C1 level with the new CEFR descriptors” analyses how the updated CEFR descriptors define pragmatic competence at the C1 level. It then explores the extent to which the CEFR descriptions of pragmatic competence are operationalised in two popular Oral Proficiency Interviews (OPIs) at the C1 level, namely Cambridge’s Certificate in Advanced English (CAE) and Trinity’s Integrated Skills in English (ISE) III. In particular, CAE focuses mostly on discourse competence and fluency, thus aligning closely with the CEFR, while ISE III prioritises functional competence, which includes speaker meaning and propositional precision. The findings show that pragmatic competence is a recurring aspect in the descriptors of the scales of both OPIs, even though, in both cases, it does not feature as a distinct assessment criterion but as part of L2 speaking proficiency. At the same time, it appears that both tests fail to accommodate all aspects of pragmatic competence and that there is a mismatch between the task competences and the rating scale competences. Finally, sample analyses of assessment practice in both OPIs reveal that examiners’ ratings do not always appear to be directly motivated by the tests’ descriptors. The authors conclude with some recommendations regarding examiner training and construct validity. These include: identifying the aspects of pragmatic competence in the scales to which examiners give more weight in their ratings; checking whether these coincide with the aspects foregrounded in the descriptors; and defining the proficiency threshold required for test-takers to be considered pragmatically competent at the C1 level.

The last two articles in this issue turn the reader’s attention to the pragmatic competence of Chinese learners of English and that of English learners of Chinese. In her “Developing pragmatic competence in English academic discussions: An EAP classroom investigation”, Marcella Caprario investigates the development of pragmatic competence among advanced EAP students attending a semester-long EAP course at an English-medium university in China. The focus of the course was the academic discussion, and one of its overt objectives was developing the ability to interact with group members effectively and respectfully. An explicit-inductive approach was adopted for providing instruction in the sociopragmatics and pragmalinguistics of English-language academic discussions. Throughout the semester, the students engaged in ongoing reflective writing, which was meant to make them aware of their process of developing pragmatic competence. The reflective writing of five students was qualitatively examined through template analysis (Hanks 2017). The analysis revealed some key issues faced by the students (e.g. lack of clarity when speaking), their causes (e.g. limited linguistic competence), and the corrective steps taken (e.g. better time management). Content analysis also brought to the fore the impact of students’ emotional lives on their learning and performance, with negative emotions causing hesitation or avoidance of oral participation, but at times also acting as a catalyst for change after an unsatisfactory performance. The results show that self-reflection helped the students take ownership of their own learning process, and helped the instructor notice communal and individual needs to be addressed with targeted instruction. Caprario concludes that teaching pragmatic competence in academic discussions can foster collaborative teaching and learning, favour the development of students’ critical thinking skills, and empower learners to develop autonomy.

In their paper “Evaluating the appropriacy of Ritual Frame Indicating Expressions (RFIEs) – A case study of learners of Chinese and English”, Juliane House and Dániel Z. Kádár set out to study RFIEs, that is, conventionalised expressions by means of which the speaker expresses his/her awareness of rights and obligations (Goffman 1967). Specifically, they investigate the equivalence and contextual appropriateness of the Chinese RFIE 请 (qing) and its English counterpart please, as well as that of the Chinese RFIE 对不起 (duibuqi) and the corresponding English expression sorry. They administered a questionnaire to, and conducted follow-up interviews with, seven British learners of Mandarin Chinese and seven Chinese learners of British English. They asked the learners to evaluate a series of appropriate and inappropriate uses of these RFIEs in the target languages along dimensions such as formality and politeness. The results revealed linguacultural differences between the two groups. On the one hand, most British respondents were better at identifying the appropriate uses of the RFIEs than the inappropriate ones, and tended to be influenced by stereotypes in their answers. On the other hand, the Chinese respondents tended to apply their own cultural views to the evaluation of the target-language RFIEs. Implications are drawn for teaching and learning pragmatic aspects of the target languages and for successful intercultural communication.

The contributions to this issue illustrate some of the many directions in which the various aspects of pragmatic skills assessment can be explored. They show not only that various facets of assessment need to be investigated, but also that they can be approached by using a variety of quantitative and qualitative methods, which often fruitfully complement each other. Their findings, obtained following rigorous analytical procedures, lead us to a better understanding of assessment and raise new questions worth exploring in future studies.

Lodz Papers in Pragmatics, Volume 16 (1), published 1 July 2020 by de Gruyter. ISSN 1898-4436. DOI: 10.1515/lpp-2020-0001. © 2020 Walter de Gruyter GmbH, Berlin/Boston.