Tuesday 27 May 2008
TMA03
[The O’Donnell et al (2006) paper will be referred to as the U-study; the Rekkedal and Qvist-Eriksen (2004) paper will be referred to as the NKI-study.]
Settings and Theoretical Contexts
While both evaluations considered online support for learners, substantially different perspectives were evident. The U-study was based within a European HEI, focusing on two in-house online support packages available to all HE students. This study assumed ICT would enhance educational opportunities, with the content of the packages developed to assist new students through an induction programme and a support facility for those expected to use ICT. The induction package focused on orientation and study skills, including ‘learning to learn’ skills (such as time management), as well as confidence-building blocks; the support package featured ICT literacy components, including technical aspects and access issues. This was an evaluation study of the packages, but its purpose(s) remained unclear.
The NKI-study was set within an organisation driven by commercial needs to provide flexible learning solutions, allowing delivery to be taken at an individual pace; this was stated as their ‘philosophy’ for internet education. NKI’s previous research (for example, Rekkedal and Paulsen, 1998) confirmed flexibility was a key driver of their strategy, and this study intended to evaluate whether customer demands were being fulfilled. The intention was twofold: to examine students’ support needs and to assess their satisfaction with the support available.
The U-study suggested online packages would support learners and that this could enhance their educational opportunities; reference is made to Salmon (2000) in relation to this. However, while there is evidence that new opportunities could exist to support learning through online means, the use of Salmon’s theory was perhaps irrelevant, as this model of online learning stages was aimed at the development of collaboration and moderation skills, not at packages to support the initial stages of student learning. The implications of students being unable to learn online effectively were of greater relevance, and perhaps the views of, for example, Tyler-Smith (2006) in relation to developing ‘learn to learn’ skills would have provided greater weight to the rationale; indeed, the importance of preparing students for online study appears to be crucial to this research. The study assumed that students would need to develop some ICT skills, and there was also an awareness that HEI providers may assume a certain level of IT literacy exists when this is often not the case (Somekh, 2000).
The NKI-study suggested the student group could be an important social and academic support in learning; however, it was not clear how this was prompted, with the emphasis being on individual student learning. While forums are encouraged, these are not used to any large extent as part of the learning experience, so theories of collaboration cannot realistically be applied.
The products were originally developed along the lines of didactic teaching, where the learner is guided through input models; although literature on developing theories has been reviewed, particularly relating to the possible importance of collaboration and social learning, the solutions continue to be student-led, based on personal knowledge construction. Online interaction was not considered important by many, and one might concur with this if agreeing with the evidence of Davies and Graff (2005), who established that improved performance was not necessarily achieved by those who interacted more (although a greater volume of online activity could lead to higher grades, and interaction may support ‘potential failers’). The NKI-study agreed that a social constructivist perspective may appear ideal as a social theory of learning; however, if online collaboration is not taking place and students have already stated that individual learning is of prime importance, the NKI-study’s perspective is more greatly dictated by Piagetian roots.
Strengths of Methodological approach
The NKI-study was easy to follow and logically structured. As a commercially aware organisation, NKI clearly based much of their study on their own previous research and are attempting to continually assess the viability of their internet learning packages. With access to paying clients as a sample population, their market research is a valuable method of assessing the product value; the study was targeted at the same audience, within the same context. A key strength is the clarity of the link between the need to review clients’ perceptions of learner support and satisfaction, and the method of data collection. The use of open-ended questions, for depth of response and to follow up previous market research, ensured that the data would be of relevance and current value to the organisation.
In relation to what the report uncovered, it declared that clients were, overall, satisfied with the support elements of the learning packages; however, students’ need for support services might have been more meaningfully measured through greater quantitative analysis.
The U-study provided a clear overview of the content and purpose of the packages (if not the intention of the study), and this was justified using a wide range of sources. The study was aided by access to the whole student population within the HEI and the technical ability to offer online questionnaires, which would encourage those already accessing online resources to respond. Some free response questions were available and both pre- and post-questionnaires were used, a method also used by Bos et al (2002), which allows comparative analysis.
Limitations and weaknesses
For the NKI-study, the intention was for the study support elements (classified as the parts of the course additional to the course content) to be analysed; however, might some students have difficulty in separating these components? A key criticism would be the sampling: the use of statements such as ‘…nearly at random…’, the lack of clarity in explaining the relevance of a ‘student number’ in the selection process, and the variety of selection judgements allowed by the interviewers all bring the validity of the study into doubt. The quantitative data collated were not analysed in any detail and were only ‘revealed’ in the final summary; the qualitative statements were not summarised.
While NKI established that the first phase of study was important, there seemed to be little emphasis on research into developing trust in the learning technology and learner interface; Bos et al (2002) suggest that it is difficult to establish this online trust, so it may be crucial to explore this aspect and possible links to the lack of study support use, as an assumption is being made of ‘trust’ in the NKI systems.
The main weakness of the U-study was the lack of clarity of purpose; the aim was not stated until the end of the report, under the discussion. Only at this point does it become known that the study was to evaluate effectiveness and ‘…if it could be delivered appropriately online…’. There was no scale defined for measuring effectiveness, and one would also question the meaning of ‘appropriately’ here; this should be explicitly defined.
In the U-study, demographics were discussed, but not stated as a relevant issue at the outset; nor was it clear how this discussion was linked to the support packages. It may have aided the reader to have a clear classification of what the authors meant by a ‘Primer’, as familiarity with the terminology was assumed. Another weakness of this study is the amount of time spent on research, analysis and summary of the location of IT access and the connection type; this appeared irrelevant to the issue of support and was not stated as a central strand of the study. In fact, there was a volume of quantitative data which did not appear to ‘answer the question’; for example, satisfaction is not a measure of effectiveness, unless a scale defining this term is clearly stated. While it was acknowledged that packages ‘improved confidence’, this again is not an indication of ‘being effective’. Additionally, the authors did not respond to negative free responses regarding time consumption and volume of information; this could have been linked with the issues Davies and Graff (2005) considered, in relation to time pressures in students’ studies being a major component of the online learning experience.
In relation to validity, those who accessed and completed the packages are likely to be more IT competent anyway, and would also be more likely to complete online questionnaires, thus skewing the data.
Lastly, related to ethical issues, it is unclear how students could remain ‘anonymous’ when clearly a logon to the system would be required, where this could be traced back to an individual.
Proposals
The key proposals for the NKI-study would be:
Offer alternative feedback methods
Focus questions on specific elements to aid development of course components
Specify sampling selection methods
For the NKI-study, as learners are perceived as competent online users, providing online questionnaires may have allowed a higher response rate. It may also have been of higher value to the organisation if some focus had been placed on the usefulness which students attached to support elements; perhaps specific questions on tutor support and/or feedback are required. However, the recommendation would be for responses to be provided in a different format; the personal interaction of telephone interviews may leave some respondents uncomfortable in providing negative statements, particularly regarding tutors. The selection of questions from the personal choice of the interviewers could also be addressed in this focusing of content, and NKI could also have targeted questioning at the reasons for the lack of collaboration; if learners can be encouraged to ‘think by learning’ through online activities (as Nardi et al, 2004, discuss), and this is seen as a support mechanism of value, then learning as a social process may be encouraged.
While any student may provide reliable data, the method of sample selection should be stated; this would increase the validity of the study’s results.
The key proposals for the U-study would be:
Clarify reasons for evaluation
Control group completing package as compulsory
Analysis of free response questions
Collection of qualitative data through interviews
Greater clarity of graphical representations
Feedback from non-student agents
Aims must be stated at the outset and should be clearly defined, including the boundaries of the study and the terminology used; this would ensure the design matched the intended purpose, as well as allowing the reader to ‘see where it’s going’.
If a control group were required to complete the package, judgements could be made about the support components most used and valued by students, rather than making assumptions based on information provided by those who freely chose to participate (as these may well have higher ICT skills anyway). If all members of the control group participated, it may be possible to determine what additional support was actually required, much as was evidenced in the Cox (2007) study, where individual differences in learning were made more transparent. In relation to the use of the data, free response answers should be analysed and fully summarised; researchers cannot ‘ignore’ negative input or aspects brought to light which were perhaps not anticipated at the start of the study.
Qualitative data could be collected through interviews, to encourage depth of response; it would be particularly important to gain insight into why some students felt their support was ‘effective’. As Bell (2005) suggests, a guided interview could be used, so that questions would be focused around support package issues, but with greater allowance made for issues not covered by the researchers’ initial ideas on support and its effectiveness. If data are collected only from those who freely access the packages, the results would be unreliable when applied to the whole HEI population.
Best use should be made of the data collated, and simplicity of graphical representation would support the discussions more strongly; for example, presenting the ‘attitudes’ data as a graph, rather than in table format, would have more clearly illustrated the differences between the ‘pre’ and ‘post’ results.
Feedback from non-student agents would be an integral part of a well-balanced study where the ‘effectiveness’ of the packages is being measured; quantitative data may express grades, but showing a true correlation with use of support may be challenging. Insight into actual performance could be gained by asking tutors whether weaknesses were still apparent in those who had completed the packages; from experience, it is not the taking part in the learning that provides the results, but the ability to transfer the skills acquired.
Overall, the NKI-study is seen as ‘better’, due to its intentions being stated and, to a large extent, met. However, as Tolmie (2001) suggests, perhaps the complete online experience needs to be ‘…examined, not just the use of the technology in isolation…’ (p. 237).
(Word count 2112)
References
Bell, J. (2005) Doing Your Research Project: A guide for first-time researchers in education, health and social science, (4th edn), Buckingham, Open University Press.
Bos, N., Olson, J., Gergle, D., Olson, G. and Wright, Z. (2002) ‘Effects of four computer-mediated communications channels on trust development’ in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Changing Our World, Changing Ourselves, Minneapolis, Minnesota, 2002, New York, NY, ACM; also available online at http://libezproxy.open.ac.uk/login?url=http://doi.acm.org/10.1145/503376.503401 (Accessed 3 May 2008).
Cox, R. (2007) ‘Technology-enhanced research: educational ICT systems as research instruments’, Technology, Pedagogy and Education, vol. 16, no. 3, pp. 337–56; also available online at http://libezproxy.open.ac.uk/login?url=http://dx.doi.org/10.1080/14759390701614470 (Accessed 19 May 2008).
Davies, J. and Graff, M. (2005) ‘Performance in e-learning: online participation and student grades’, British Journal of Educational Technology, vol. 36, no. 4, pp. 657–63; also available online at http://libezproxy.open.ac.uk/login?url=http://dx.doi.org/10.1111/j.1467-8535.2005.00542.x (Accessed 5 May 2008).
Nardi, B., Schiano, D., and Gumbrecht, M. (2004) ‘Blogging as social activity, or, would you let 900 million people read your diary?’ in Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work, Chicago, Illinois, New York, NY, ACM; also available online at http://libezproxy.open.ac.uk/login?url=http://doi.acm.org/10.1145/1031607.1031643 (Accessed 12 May 2008).
O’Donnell, C.M., Sloan, D.J. and Mulholland, C.W. (2006) ‘Evaluation of an online student induction and support package for online learners’, European Journal of Open, Distance and E-Learning [online], 2006/I, http://www.eurodl.org/materials/contrib/2006/Catherine_M_O_Donnell.htm (Accessed 3 May 2008).
Rekkedal, T. and Paulsen, M.F. (1998) NKI Internet Students – Experiences and Attitudes. The Third Generation NKI Electronic College: An Evaluation Report Written for the Leonardo Online Training Project [online], http://nettskolen.nki.no/forskning/33/evaluati.htm (Accessed 20 May 2008).
Rekkedal, T. and Qvist-Eriksen, S. (2004) ‘Support services in e-learning – an evaluation study of students’ needs and satisfaction’, European Journal of Open, Distance and E-Learning [online], 2004/I, http://www.eurodl.org/materials/contrib/2004/Rekkedal_Qvist-Eriksen.htm (Accessed 1 May 2008).
Tolmie, A. (2001) ‘Examining learning in relation to the contexts of use of ICT’, Journal of Computer Assisted Learning, vol. 17, no. 3, pp. 235–41; also available online at http://libezproxy.open.ac.uk/login?url=http://www.blackwell-synergy.com/links/doi/10.1046/j.0266-4909.2001.00178.x (Accessed 2 April 2008).
Salmon, G. (2000) E-moderating: The key to teaching and learning online, London, Kogan Page.
Somekh, B. (2000) ‘New Technology and Learning: Policy and Practice in the UK, 1980–2010’, Education and Information Technologies, vol. 5, no. 1, Netherlands, Springer; also available online at http://www.springerlink.com/content/m632j4315m210154/ (Accessed 10 May 2008).
Tyler-Smith, K. (2006) ‘Early Attrition among First Time eLearners’, Journal of Online Learning and Teaching [online], vol. 2, no. 2, http://jolt.merlot.org/Vol2_No2_TylerSmith.htm (Accessed 15 May 2008).
TMA02
Part 1
There was no personal experience of wikis prior to participating in the activity; a posting and a comment were made, but virtually no interaction took place. This would imply that learning was independent – learning as a personal construction of knowledge (Von Glasersfeld, 1995). This appeared to be at odds with my comprehension of a potential strength of the wiki, that it could enable learning from a socio-cultural perspective (Cobb, 1994), through knowledge sharing. Was there, then, a lack of motivation to complete this task?
If, as many suggest (Roschelle, 1992; Cole, 1996; Rogoff, 1999), learning is achieved through some form of social collaboration, a number of theoretical positions support this; for example, the observation of learning as a social activity is the foundation of Activity Theory (Vygotsky, 1974), with learning being created over time, through tasks. This theory is explicit in its use of cultural tools (whether language or physical tools), which may have implications for who has access to the learning space, if it is technologically determined. However, it is made clear that it is the activity which contributes to learning, not merely the presence of others; more recent work has also viewed activity and collaboration as key aspects of online learning (Salmon, 2000). There are, therefore, a range of issues to consider from an activity theory perspective.
Some may view the wiki as a Community of Practice (Lave and Wenger, 2002), but I would oppose this analysis, due to the lack of interaction, little evidence of ‘learning from masters’ and the fact that ‘situated opportunities’ did not occur; additionally, as Wenger suggests, Communities of Practice (CoP) cannot be created as such, but develop from the practice itself. The wiki could, however, be seen as correlating more strongly with Jones and Preece’s (2006) concept of ‘Communities of Interest’ (CoI). While I do not necessarily agree with their assumption that CoP are more organisationally led, the view of the wiki as a CoI may prevail, as it has the valid component of a common interest (in carrying out the course task), whether or not the learning goal is achieved. In opposition to this, though, the wiki did not fit other suggested criteria of a CoI, as it did not develop in an organic way and was not ‘open to all’, but determined by an educational institution’s constraints. Jones and Preece (2006) also discussed ‘usability’ (p118) - the ‘features and functions’ of technological access - and this may have particular relevance to the wiki issues, where the level of IT competence and the accessibility issues may have determined the low level of participation, rather than the task itself.
Tolmie’s (2001) research on software prompts was of interest; an initial view was taken that the software element of the technology tools was not part of the learning process. In the case of the wiki, however, participants may have had to negotiate a learning path with new technological tools; learning to use these would be very much part of the learning experience as a whole. Some participants may have learned much from this, although these would have been unintended outcomes, more consonant with the usability of learning tools and collaboration through IT (Jones and Preece, 2006).
Another key matter is narrative, which is raised regularly in educational research and appears to be a crucial component of learning; from Bruner’s tenets (1996), where narratives were a way of explaining how things happened, to more recent discussions of narrative in non-linear form (Laurillard et al, 2000). This may be one component missing from the wiki, and possibly why it was found difficult to ‘learn from’; there appeared to be no narrative thread, no order, nor any easy way to follow the narrative line of a particular topic. To use the idea of Crook and Dymott (2005), in comparing reading practices between page and screen text, the wiki does not afford the opportunity to reflect easily on the narrative thread; as Greeno et al (1999, as cited in McCormick and Paechter) suggested, readers use many ‘repair strategies’ to aid comprehension and these may not be as accessible in an online format. Additionally, as these technological tools did not support this ‘narrative thread’ facility, and being a fairly visual learner, I found it difficult to create an overall picture of a topic; perhaps diagrams such as a concept map would have aided knowledge construction.
If using the wiki for educational research purposes, one would need to be clear on which aspect was being presented; for example, if the wiki was viewed purely as a technological tool, then this would involve very different research to viewing it as an activity system, as Jonassen and Rohrer-Murphy (1999) discuss. Using any of the socio-cultural theories in education would imply designing the learning structure to aid the construction of knowledge through interactions and activity, but this would also be dependent on the context in which the learning is set (Tolmie, 2001). Crook and Dymott (2005) state that ‘…computers are a technology for collaboration…’ (p106); in linking this with activity theory, wikis could be identified as a meeting point between technological tools and theories of learning through collaboration.
To organise these ideas, two classifications from Conole et al (2004) have been used; ‘Activity based’ and ‘Socially situated’ theories (see appendix 1 for a personal knowledge construction of this).
Activity-based theories appear to imply that methodologies which analyse the social interactions during tasks, and/or the resultant changes in individual knowledge, would be appropriate, suggesting some statistical analysis. However, the activities are bound up with the context in which the learning takes place, as Jonassen and Rohrer-Murphy (1999) suggest, and therefore qualitative methods would be just as appropriate.
Socially situated theories are generally dependent on selected variables, specific not only to the learner but also to the artefacts, setting, culture and so on; taking this theoretical stance suggests that qualitative methods may be more appropriate, to gather the necessary depth of data.
Wiki research raises such questions as:
Are some learning subjects more fitting to wiki use?
How often are specific artefacts used; when and where?
How could usability features be assessed?
Did learners find it easy to negotiate the material/course?
Considering the first two questions, if quantitative methods were used, this could take the form of analysing statistical results from previous research, gathered from a range of faculties and institutions, or tracking the level of usage of different artefacts. While Jonassen and Rohrer-Murphy (1999) suggest a qualitative approach is required to analyse activity theory, numerical data on, for example, the number of interactions in a particular subject’s collaboration space would be of value in assessing trends of usage. One could also track how often the various parts of courses were used (such as case studies) and this could feed forward to future course design. A clear disadvantage of these methods would be the lack of context: a loss of any temporal significance and of the nuances of group dynamics, which may be part of the learning development. Using quantitative methods to research what is viewed as a collaborative learning theory would need to be approached from the perspective of addressing a stakeholder’s need for ‘real evidence’ in the form of numerical data.
From the perspective of socially situated theories, the most appropriate research methods would be qualitative, to analyse existing teaching and learning structures and to gain insights into the thoughts of learners; for example, Laurillard et al (2000) saw ‘real learning’ observations and transcripts of the dialogue as crucial to their investigation. Ethnographic observation and transcription are, of course, time-consuming activities; however, a key benefit would be in using the results to identify what could be better supported, much as suggested by Conole et al (2004) when using a framework such as the learning design model to analyse current practices. There may be limitations to this approach, as the researcher’s own assumptions can greatly influence objective analysis; there may also be pressures of an institutional or organisational nature. A key weakness may be the narrowness of any conclusions drawn and the problem of trying to make theoretical ideas or models ‘fit’ the evidence; for example, in fitting the PGCE case to their framework, Conole et al did not appear to have considered how important external input to this course may be: what perhaps needed to be changed were the tasks in relation to this visit, not the entire component.
It would be appropriate to state that any theoretical perspective and any methodology could be used in research, providing that this was explicitly stated and clarified within the context of the investigation or evaluation being carried out. The wiki use overall raises too many questions; the research questions would, therefore, be ultimately determined by what and who wanted the answers.
Part 2
Laurillard’s (2002) Conversational Framework (CF) was a model developed to make the process of learning, through teacher and learner interactions, more transparent. These ‘narrative lines’ were seen as operating discursively and interactively, suggesting two levels; however, the process appears cyclical in nature, much like any system with a feedback loop mechanism (Fleming and Levie, 1993).
A strength of the CF is that it is a clear, simple model, applicable to almost any learning landscape; this may be one reason researchers have appropriated the framework as an introductory point, to emphasise the role dialogue has in building knowledge through interactions (for example, McAndrew et al, 2002). A weakness of the framework is the difficulty in viewing the quantity and quality of the interactions; a three-dimensional model would be necessary to analyse this depth. Another weakness, cited by researchers such as Lee (2006), was the incompleteness of the research; this was then used as a platform to extend the framework to their specific area of interest, particularly the use of technological dialogue.
Conole et al (2004) introduced the CF as a model of learning and suggest that such models support specific theoretical views; this is used as a starting point to consider how most pedagogical theories were not designed to deal with e-learning. In an attempt to highlight theory features and match these to appropriate e-learning methods, a range of theories was categorised, with the CF allocated to the literature of ‘socially situated’ learning, which is dependent on language and culture; as these variables are closely aligned with conversation, this seemed appropriate. However, Conole et al suggest that all aspects of their model are used except ‘non reflection’, so this appears to indicate that the CF is a ‘good’ theory for e-learning.
McAndrew et al (2002) used the CF to suggest a change in the power dynamics of learning, but it was not clear if the statement of ‘…weakening the image of a teacher in the system…’ (p155) was a positive reinforcement of increasing facilitative skills, which are perhaps more greatly needed in online communities; or referring to a general move towards a greater level of collaboration in learning environments.
Weller et al (2005) appropriated the CF to analyse audio conferencing, and it was shown to be a useful model for clarifying the interactions of the narrative lines. However, it is debatable whether the discussions taking place were actually ‘…in Laurillard’s (2002) conversational framework…’ (p70); this is where the researchers have placed the dialogue, but it is not a known fact.
Lee (2006) is keen to use the CF to highlight the issues of both cyclical learning and the use of an activity-based theoretical framework. This is taken further to suggest the model also distinguishes between academic and experiential knowledge, but his analysis may only be valid in particular learning contexts.
It is accepted that the assessment of the CF here is very brief, due to time constraints; further research would have allowed much greater depth of evaluation.
(Word count 2000)
Appendix 1
(Unable to load the diagram here - apologies)
(Needs to have link with subject matter
Dependent on cultural context; changes over time
Which subject or discipline is most appropriate?
No of times different tools used; where and when?
Collect quantitative data; primary statistical methods, secondary references to surveys
Usability factors (such as access)
Requires clear structure and navigability
How could usability features be assessed? Was it easy?
Did learners make sense of it; was it logical?
Collect qualitative data; observational, dialogue analysis
Either could be an evaluation of previous research)
Reference list
Bruner, J. (1996) The culture of education, Cambridge, Harvard University Press.
Cobb, P. (1994) ‘Where Is the Mind? Constructivist and Sociocultural Perspectives on Mathematical Development’, Educational Researcher, vol. 23, no. 7, pp. 13–20.
Cole, M. (1996) Cultural Psychology: a once and future discipline, Cambridge, Harvard University Press.
Conole, G., Dyke, M., Oliver, M. and Seale, J. (2004) ‘Mapping pedagogy and tools for effective learning design’, Computers & Education, vol. 43, nos. 1–2, pp. 17–33. Available online at http://libezproxy.open.ac.uk/login?url=http://dx.doi.org/10.1016/j.compedu.2003.12.018 (Accessed 5 February 2008).
Crook, C. and Dymott, R. (2005) ‘ICT and the literacy practices of student writing’ in Monteith, M. (ed.) Teaching Secondary School Literacies with ICT, Maidenhead, Open University Press.
Fleming, M. and Levie, W.H. (1993) Instructional Message Design: Principles from the Behavioral and Cognitive Sciences (2nd edn), Englewood Cliffs, NJ, Educational Technology Publications.
Greeno, J.G., Pearson, P.D. and Schoenfeld, A.H. (1999) ‘Achievement and Theories of Knowing and Learning’ in McCormick, R. and Paechter, C. (eds) Learning and Knowing, London, Sage Publishing.
Jonassen, D. and Rohrer-Murphy, L. (1999) ‘Activity theory as a framework for designing constructivist learning environments’, Educational Technology Research and Development, vol. 47, no. 1, pp. 61–79. Available online at http://libezproxy.open.ac.uk/login?url=http://dx.doi.org/10.1007/BF02299477 (Accessed 5 February 2008).
Jones, A. and Preece, J. (2006) ‘Online communities for teachers and lifelong learners: a framework for comparing similarities and identifying differences in communities of practice and communities of interest’, International Journal of Learning Technology, vol. 2, no. 2–3, pp. 112–37.
Laurillard, D. (2002) Rethinking University Teaching: A Conversational Framework for the Effective Use of Learning Technologies (2nd edn), London, RoutledgeFalmer.
Laurillard, D., Stratfold, M., Luckin, R., Plowman, L. and Taylor, J. (2000) ‘Affordances for learning in a non-linear narrative medium’, Journal of Interactive Media in Education (JIME), vol. 2. Available online at http://www-jime.open.ac.uk/00/2/ (Accessed 1 April 2008).
Lave, J. and Wenger, E. (2002) ‘Legitimate peripheral participation in communities of practice’ in Harrison, R., Reeve, F., Hanson, A. and Clarke, J. (eds) Supporting Lifelong Learning, Volume 1: Perspectives on Learning, London, RoutledgeFalmer in association with the Open University.
Lee, N. (2006) ‘Design as a learning cycle: a conversational experience’, Studies in Learning, Evaluation, Innovation and Development [online], vol. 3, no. 2, pp. 12–22. Available at http://sleid.cqu.edu.au (Accessed 5 April 2008).
McAndrew, P., Mackinnon, L. and Rist, R. (2002) ‘A framework for Work-Based Networked Learning’, Journal of Interactive Learning Research, vol. 13, no. 1–2, pp. 151–68. Available at http://kn.open.ac.uk/document.cfm?documentid=6503 (Accessed 9 April 2008).
Rogoff, B. (1999) ‘Cognitive Development through Social Interaction: Vygotsky and Piaget’ in Murphy, P. (ed.) Learners, Learning and Assessment, London, Sage Publishing.
Roschelle, J. (1992) ‘Learning by collaborating: convergent conceptual change’, Journal of the Learning Sciences, vol. 2, no. 3, pp. 235–76.
Salmon, G. (2000) E-moderating: the key to teaching and learning online, London, Kogan Page.
Tolmie, A. (2001) ‘Examining learning in relation to the contexts of use of ICT’, Journal of Computer Assisted Learning, vol. 17, no. 3, pp. 235–41.
Von Glasersfeld, E. (1995) Radical constructivism: A way of knowing and learning. London, Falmer Press.
Vygotsky, L.S. (1974) Mind in Society, Cambridge, Harvard University Press.
Weller, M., Pegler C. and Mason, R. (2005) ‘Use of innovative technologies on an e-learning course’, Internet and Higher Education, vol. 8, pp. 61–71.
There was no personal experience of wikis prior to participating in the activity; a posting and a comment were made, but virtually no interaction took place. This would imply that learning was independent – learning as a personal construction of knowledge (Von Glasersfeld, 1995). This appeared to be at odds with my comprehension of a potential strength of the wiki, that it could enable learning from a socio-cultural perspective (Cobb, 1994), through knowledge sharing. Was there, then, a lack of motivation to complete this task?
If, as many suggest (Roschelle, 1992; Cole, 1996; Rogoff, 1999), learning is achieved through some form of social collaboration, a number of theoretical positions support this; for example, the observation of learning as a social activity is the foundation of Activity Theory (Vygotsky, 1974), with the focus of learning being creating over time, through tasks. This theory is explicit in its use of cultural tools (whether through language or use of physical tools), which may have implications for who has access to the learning space, if it is technological determined. However, it is made clear that it is the activity, which contributes to learning, not merely the presence of others; more recent suggestions have also viewed that this is activity and collaboration is a key aspect of on-line learning (Salmon, 2000). There are, therefore, a range of issues to consider from an activity theory perspective.
Some may view the wiki as a Community of Practice (Lave and Wenger, 2002), but I would oppose this analysis, due to the lack of interaction, little evidence of ‘learning from masters’ and the fact that ‘situated opportunities’ did not occur; additionally, as Wenger suggests, Communities of Practice (CoP) cannot be created as such, but are developed from the practice itself. The wiki could, however, be seen as correlating more strongly with the Jones and Preece (2006) concept of ‘Communities of Interest’ (CoI). While I do not necessarily agree with their assumption of CoP being more organisationally-led, the view of the wiki being a CoI may prevail, as it has the valid component of a common interest (in carrying out the course task), whether or not the learning goal occurs. In opposition to this, though, the wiki did not fit other suggested criteria of a CoI, as it did not develop in an organic way and was not ‘open to all’, but determined by an educational institution’s constraints. Jones and Preece (2006) also discussed ‘usability’ (p118) - the ‘features and functions’ of technological access - and this may have particular relevance to the wiki issues, where the level of IT cognition and the accessibility issues may have determined the low level of participation, rather than the task itself.
The Tolmie (2001) research on software prompts was of interest; an initial view was taken that the software part of the technology tools was not part of the learning process. In the case of the wiki, participants may have had to negotiate a learning path with new technological tools; learning to use these would be very much a part of the learning experience as a whole. Some participants may have learned much from this, although these would have been unintended outcomes, more concurrent with the usability of learning tools and collaboration through IT (Jones and Preece, 2006).
Another key matter is narratives, which is raised regularly in educational research and appears to be a crucial component of learning; from Bruner’s tenets (1996), where narratives were a way of explaining things, (how they happened); to more recent discussions of narratives in a non-linear form (Laurilland et al, 2000). This may be one component that is missing from the wiki and possibly why it was found difficult to ‘learn from’; there appeared to be no narrative thread, no order, nor any method to easily follow a narrative line of a particular topic. To use the idea of Crook and Dymott (2005), in comparing reading practices between page and screen text, the wiki does not afford the opportunity to easily reflect on the narrative thread; as Greeno et al (1999, as cited in McCormick and Paechter) suggested, readers use many ‘repair strategies’ to aid comprehension and these may not be as accessible in an on-line format. Additionally, as these technological tools did not support this ‘narrative thread’ facility and, being a fairly visual learner, it was difficult to create an overall picture of a topic; perhaps diagrams such as a concept map would have aided knowledge construction.
If using the wiki for educational research purposes, one would need to be clear on which aspect was being presented; for example, if the wiki was viewed purely as a technological tool, then this would involve very different research to the view of a wiki as an activity system, as Jonassen and Rohrer-Murphy (1999) discuss. Using any of the socio-cultural theories in education would imply designing the learning structure to aid the construction of knowledge through interactions and activity, but this would also be dependent on the context (Tolmie, 2001), in which the learning is set. Crook and Dymott (2005), state that ‘…computers are a technology for collaboration…’ (p106); in linking this with activity theory, wikis could be identified as a meeting point of the technological tools and theories of learning through collaboration.
To organise these ideas, two classifications from Conole et al (2004) have been used; ‘Activity based’ and ‘Socially situated’ theories (see appendix 1 for a personal knowledge construction of this).
Activity based theories appear to imply that the use of methodologies which analyse the social interactions during tasks and/or the resultant changes in independent knowledge, would be appropriate, suggesting some statistical analysis. However, the activities are bound in with the context in which the learning takes place, as Jonassen and Rohrer-Murphy (1999) suggest, and therefore qualitative methods would be just as appropriate.
As socially situated theories are generally dependent on selected variables, specific not only the learner, but also to the artefacts, setting, culture, etc.; taking this theoretical stand could assume that qualitative methods may be more appropriate, gather the depth of data.
Wiki research raises such questions as
Are some learning subjects more fitting to wiki use?
How often are specific artefacts used; when and where?
How could usability features be assessed?
Did learners find it easy to negotiate the material/course?
Considering the first two questions, if quantitative methods were used, this could be in the form of analysing statistical results from previous research, gathered from a range of faculties and institutions; or tracking the level of usage of different artefacts. While Jonassen and Rohrer-Murphy (1999) suggest a qualitative approach is required to analyse activity theory, numerical data on, for example, the number of interactions in a particular subject’s collaboration space, would be of value in assessing trends of usage. One could also look at tracking how often the various parts of courses were used (such as case studies) and this could feed forward to future course design. A clear disadvantage in using these methods would be the lack of context; a loss of any temporal significance and the nuances of group dynamics, which may be a part of the learning development. Using quantitative methods to research what is viewed as a collaborative learning theory, would need to be taken from the perspective of addressing a need for a stakeholder to have ‘real evidence’, in the form of numerical data.
From the perspective of socially situated theories, the most appropriate research methods would be qualitative, to analyse existing teaching and learning structures and to gain insights to the thoughts of learners; for example, Laurilland et al (2000) saw ‘real learning’ observations and transcripts of the dialogue as crucial to their investigation. Ethnographic observations and transcribing are, of course, time consuming activities; however, a key benefit would be in using results to identify what could be better supported, much as suggested by Conole et al (2004), when using a framework such as the learning design model, to analyse current practices. There may be limitations of this approach, as the researcher has great influence in making assumptions and objective analysis; there may also be pressures of an institutional or organisational nature. A key weakness may be in the narrowness of any conclusions drawn and the problem of trying to make theoretical ideas or models ‘fit’ the evidence; for example, in using the PCGE case and fitting this to their framework, Conole et al did not appear to have considered the vital issue of how important it may be to have external input to this course and what perhaps needed to be changed were the tasks in relation to this visit, not the entire component.
It would be appropriate to state that any theoretical perspective and any methodology could be used in research, providing that this was explicitly stated and clarified within the context of the investigation or evaluation being carried out. The wiki use overall raises too many questions; the research questions would, therefore, be ultimately determined by what and who wanted the answers.
Part 2
Laurilland’s (2002) Conversational Framework (CF) was a model developed to make the process of learning, through teacher and learner interactions, more transparent. These ‘narrative lines’ were seen as operating discursively and interactively, suggesting two levels; however, the process appears cyclical in nature, much as any system with a feedback loop mechanism (Fleming and Levie, 1993).
A strength of the CF is that it is a clear, simple model, applicable to almost any learning landscape; this may be one reason researchers have appropriated the framework as an introductory point, to emphasise the role dialogue has in building knowledge through interactions (for example, McAndrew et al, 2002). A weakness of the framework is the difficulty in viewing the quantity and quality of the interactions; a three dimensional model would be necessary to analyse this depth. Another weakness, cited by researchers (such as Lee, 2006) was the incompleteness of the research; this was then used as a platform to extend the framework to their specific area of interest, particularly in the use of technological dialogue.
Conole et al (2004) introduced the CF as a model of learning and suggests that such models support specific theoretical views; this is used as a starting point to consider how most pedagogical theories are not designed to deal with e-learning. In an attempt to highlight theory features and match these to appropriate e-learning methods, a range of theories were categorised, with the CF allocated as literature of ‘socially situated’ learning, which is dependent on language and culture; as these variables are very closely aligned with conversations, this seemed appropriate. However, Conole et al suggests that all aspects of their model are used, except ‘non reflection’, so this appears to indicate that the CF is a ‘good’ theory for online e-learning.
McAndrew et al (2002) used the CF to suggest a change in the power dynamics of learning, but it was not clear if the statement of ‘…weakening the image of a teacher in the system…’ (p155) was a positive reinforcement of increasing facilitative skills, which are perhaps more greatly needed in online communities; or referring to a general move towards a greater level of collaboration in learning environments.
Weller et al (2005) appropriated the CF to analyse audio conferencing and it was shown that this was a useful model to clarify the interactions of the narrative lines. However, it would be debatable if the discussions taking place were actually ‘……in Laurilland’s (2002) conversational framework….’ (P.70); this is where the researcher has placed the dialogue, but is not a known fact.
Lee (2006) is keen to use the CF to highlight the issues of both cyclical learning and the use of activity-based theory framework. This is taken further to suggest the model also distinguishes between academic and experiential knowledge, but his analysis may only be valid against particular learning contexts.
It is accepted that assessment of the CF here is very brief, due to time constraints and further research would have allowed much greater depth of evaluation.
(Word count 2000)
Appendix 1
(Unable to load the diagram here - apologies)
C:\Documents and Settings\Carole\Desktop\Educational technology research Management\H809 TMAs\H809 TMA02 diagram.mht
(Needs to have link with subject matter
Dependent on cultural context; changes over time
Which subject or discipline is most appropriate?
No of times different tools used; where and when?
Collect quantitative data; primary statistical methods, secondary references to surveys
Usability factors (such as access)
Requires clear structure and navigability
How could usability features be assessed? Was it easy?
Did learners make sense of it; was it logical?
Collect qualitative data; observational, dialogue analysis
Either could be an evaluation of previous research)
Reference list
Bruner, J. (1996) The culture of education, Cambridge, Harvard University Press.
Cobb, P. (1994) ‘Where Is the Mind? Constructivist and Sociocultural Perspectives on Mathematical Development’, Educational Researcher, vol. 23, no. 7, pp. 13–20.
Cole, M. (1996) Cultural Psychology: a once and future discipline, Cambridge, Harvard University Press.
Conole, G., Dyke, M., Oliver, M. and Seale, J. (2004) ‘Mapping pedagogy and tools for effective learning design’, Computers & Education, vol. 43, nos. 1–2, pp. 17–33. Available online at http://libezproxy.open.ac.uk/login?url=http://dx.doi.org/10.1016/j.compedu.2003.12.018 (Accessed 5 February 2008).
Crook, C. and Dymott, R. (2005) ‘ICT and the literacy practices of student writing’ in Monteith, M. (ed.) Teaching Secondary School Literacies with ICT, Maidenhead, Open University Press.
Fleming, M. and Levie, W.H. (1993) Instructional Message Design: Principles from the Behavioral and Cognitive Sciences (2nd edn), Englewood Cliffs, NJ, Educational Technology Publications.
Greeno, J.G., Pearson, P.D. and Schoenfeld, A.H. (1999) ‘Achievement and Theories of Knowing and Learning’ in McCormick, R. and Paechter, C. (eds) Learning and Knowing, London, Sage Publishing.
Jonassen, D. and Rohrer-Murphy, L. (1999) ‘Activity theory as a framework for designing constructivist learning environments’, Educational Technology Research and Development, vol. 47, no. 1, pp. 61–79. Available online at http://libezproxy.open.ac.uk/login?url=http://dx.doi.org/10.1007/BF02299477 (Accessed 5 February 2008).
Jones, A. and Preece, J. (2006) ‘Online communities for teachers and lifelong learners: a framework for comparing similarities and identifying differences in communities of practice and communities of interest’, International Journal of Learning Technology, vol. 2, no. 2–3, pp. 112–37.
Lave, J. and Wenger, E. (2002) ‘Legitimate peripheral participation in communities of practice’ in Harrison, R., Reeve, F., Hanson, A. and Clarke, J. (eds) Supporting Lifelong Learning, Volume 1: Perspectives on Learning, London, RoutledgeFalmer in association with the Open University.
Laurillard, D. (2002) Rethinking University Teaching: A Conversational Framework for the Effective Use of Learning Technologies (2nd edn), London, RoutledgeFalmer.
Laurillard, D., Stratfold, M., Luckin, R., Plowman, L. and Taylor, J. (2000) ‘Affordances for learning in a non-linear narrative medium’, Journal of Interactive Media in Education (JIME), vol. 2. Available online at http://www-jime.open.ac.uk/00/2/ (Accessed 1 April 2008).
Lee, N. (2006) ‘Design as a learning cycle: A conversational experience’, Studies in Learning, Evaluation, Innovation and Development [online], vol. 3, no. 2, pp. 12–22. Available online at http://sleid.cqu.edu.au (Accessed 5 April 2008).
McAndrew, P., Mackinnon, L. and Rist, R. (2002) ‘A framework for Work-Based Networked Learning’, Journal of Interactive Learning Research, vol. 13, nos. 1–2, pp. 151–168. Available online at http://kn.open.ac.uk/document.cfm?documentid=6503 (Accessed 9 April 2008).
Rogoff, B. (1999) ‘Cognitive Development through Social Interaction: Vygotsky and Piaget’ in Murphy, P. (ed.) Learners, Learning and Assessment, London, Sage Publishing.
Roschelle, J. (1992) ‘Learning by collaborating: convergent conceptual change’, Journal of the Learning Sciences, vol. 2, no. 3, pp. 235–76.
Salmon, G. (2000) E-moderating: the key to teaching and learning online, London, Kogan Page.
Tolmie, A. (2001) ‘Examining learning in relation to the contexts of use of ICT’, Journal of Computer Assisted Learning, vol. 17, no. 3, pp. 235–41.
Von Glasersfeld, E. (1995) Radical constructivism: A way of knowing and learning. London, Falmer Press.
Vygotsky, L.S. (1978) Mind in Society, Cambridge, Harvard University Press.
Weller, M., Pegler C. and Mason, R. (2005) ‘Use of innovative technologies on an e-learning course’, Internet and Higher Education, vol. 8, pp. 61–71.
The end of the blog....
Well, it's now some time since I posted anything to this blog. As far as the H809 course goes, I seemed to do so much for Block 2 of the course (readings, input to wikis and TGF discussions, my own research, as well as this blog) that I felt quite exhausted by the time TMA02 was submitted. I didn't seem to have the same motivation and energy, and was probably suffering from the 'blog burnout' that Nardi et al. discuss.
I am going to post TMA02 and TMA03 here, so that this blog reflects the course up to this date. I am then planning to start a new blog for the ECA component, as I think this could be a 'working document' and be continued after the course end, as I further research blog use. I'll post the 'new' blog address here, as soon as I have started this and will, of course, be open to comments on the development of my project.
Thursday 3 April 2008
Laurillard et al. (2000)
Affordances for learning in a non-linear narrative medium
The first thing I had to ‘get my head round’ was the idea of ‘affordances’; how did I perceive this concept? Well…as how we can use something; what we can ‘take’ from an available set of tools; the value or equity of the way we use something. The affordance could also be a suggestion of how something could be used, or the method most suitable to lend itself to the task. I also saw here one of the key issues of a previous reading I had reviewed (Lave and Wenger, a ‘teaching’ versus a ‘learning’ curriculum): the teaching may be designed in a certain way, but does this mean it is accessible (in all dimensions) and of most use to the student?
In reading this paper, I was also struck by the fact that I was carrying out a reading task as discussed by Crook and Dymott; I printed the reading out and viewed the diagram on page 4 alongside the text on page 5. If I had read this online, I would have been distracted by the technology and used a lot of time trying to keep these pages side by side on the screen. The diagram certainly made more sense when viewed here - http://www2.smumn.edu/deptpages/~instructtech/lol/laurillard/ (interactive CF diagram).
1. What are the central arguments? How do the authors intend to link these to the framework described?
The central argument is that learning is a process carried out between teacher and learner and a ‘narrative line’ is required for the learning to take place. The process of interaction and feedback is shown diagrammatically, as a conversational framework (although this seems to be cyclical in nature – ‘cycles within a sequence’, p5).
2. What are the key strengths of the CF in terms of what it highlights about the learning process? Can you see any limitations in this approach?
The strength of the CF is that it is a clear, simple model that could be applicable to almost any learning situation. From Figure 1 on p. 4, the limitations are that the quantity and depth of interactions cannot be seen (the interactive version assists this understanding).
3. Re the sequence of teacher–student interactions; can you think of an alternative sequence, perhaps from your own experience, which could be mapped using the CF?
There is, of course, the problem of lack of interaction, which can occur at times. The teacher may ask for help from ‘stronger’ students and peer interventions in feedback have also aided learning. Some subject matter lends itself to greater group discussions and peer feedback; also, problem solving activities create different lines of feedback and interaction.
Data collection and analytical techniques – limitations, alternative methods, benefits?
The data collected included dialogue of lesson interactions, observational notes and transcripts of dialogue (video), assignment reviews and feedback on work carried out. Short dialogue clips can be used to highlight project issues, but there may be a danger of bias, as the whole context is not available to the reader (e.g. prior experience of this type of learning, level of IT skills). It may have been useful to record the number of task/process interactions; does this have implications for whether learning takes place, or takes place at a deeper level?
The pros and cons of the two-stage design process?
If wider data is collected and analysed in the second design, how will it be possible to compare the ‘results’? The second design highlighted ‘testing’ as an integral component, which was not evident in the first design; however, the original design was used to see where improvement of the design could take place, so there must be an argument of why ‘waste time’ carrying out the same process when the potential design benefits have already been identified.
I had some other thoughts on this paper.
On page 8, it appears that the authors are almost ‘blaming the technology’ when the students did not seem to have goals or progress towards them; is this not just poor teaching practice, in that the students were not completely clear on the session objectives? This is apparent on page 11, where the new design included ‘…a clear statement of an overall goal…’. I was unclear why the use of Notepad was included – was there no access to Word or a similar method of creating notes? Why use this? Did students already have the skills to use it? Would this impact on how students used this method of recording the task? I considered the use of an audio summary (p. 12) a much greater affordance for constructing conceptual knowledge than ‘writing’ tasks. Will ‘creating affordances’ mean that greater interaction, and therefore learning, will actually occur?
Tolmie, Crook and Dymott
While Tolmie seemed to take a psychological stance on learning, viewing it through a constructivist perspective ('in the head') and looking at how this learning was achieved through technology, the issues raised appeared to lie more strongly with a socio-cultural perspective. I saw this through the issues of difference raised, which were culturally determined, such as gender and background, as well as being set within a pre-existing educational framework. Crook and Dymott appeared to be more transparent in their context setting and stated that the research has its roots in activity theory, where the complex relationship of person, system and culture determines the learning and how it may be achieved. It was clear that one could not view the technology, and the various ways in which it might be used, as separate from the society and culture of which it is a part.
The Tolmie reading certainly made allowances for differences and while this may be seen as 'good practice', it led to research evidence which was 'unseen' and this meant it was difficult to draw any solid conclusions - just build awareness of differences. Crook and Dymott appeared to have more visible and recordable evidence, but there still appeared to be an assumption that technology was changing the writing activities, with little input from other aspects of changes in society.
The style of the Tolmie research would be appropriate when there may be identifiable differences between groups to be included in a project, but the measuring tools would need to be reviewed to clarify exactly what is being measured. Crook and Dymott's methods could be realistically used to review other applications, such as the use of software in the financial sector of learning; their narrative style seemed very appropriate for this type of observational study.
Wednesday 26 March 2008
Crook and Dymott (2005)
Crook and Dymott (2005) ICT and the literacy practices of student writing
Crook and Dymott adopt a different theoretical approach to learning and context and propose that writing and technology are mutually constitutive; the context is not one which is a ‘surround’, as discussed by Tolmie, but a part of the whole learning.
The current theoretical school of Activity Theory is a modern descendant of Vygotsky’s approach to learning; Vygotsky suggested things are ‘done through’ (mediated by) cultural artefacts, such as language, man-made tools, social categorisation and institutionalisation. Cultural psychologists define learning as becoming expert in the use of these cultural artefacts, and therefore learning is classified as a social activity (even if this involves reading in physical isolation, as the reading, the text and the book are culturally determined). Vygotsky (1981) wrote that ‘by being included in the process of behavior, the tool [artefact] alters the entire flow and structure of mental functions’ (p. 137). This is taken further by those who see learning as participation, using artefacts which are a fundamental part of the learning (‘distributed cognition’, where the team and artefacts work together). If this is the view, then situated learning is specifically dependent on the context (culture and artefacts) within which the setting is taking place (Rogoff, for example); our learning is dependent on the situation we are present in and the ‘way of doing things round here’, in a social world.
The seminal text in situated cognition was Brown, Collins and Duguid’s article in Educational Researcher (1989). The authors described an apprenticeship model of learning closely related to the work of Lave (1988) and the idea of ‘legitimate peripheral participation’. Socio-cultural theory does not focus on measuring learning with ICT in terms of what is in an individual’s head or in terms of learning outcomes. Instead the focus is on individuals-using-technology-in-settings (Crook, 1994). Learning is conceptualised as a social endeavour and qualitative research methodologies have been adopted to investigate this area because the focus of inquiry is on the processes of learning and on meaning making in social settings.
As much of the above is very relevant to the reading, this (from the H809 course text) has been included, with some adaptations.
Thoughts on the reading:
The five aspects of writing play very specific parts in describing the activity of writing, ensuring the reader can create a rich picture of the relationship between the technology and how the processes related to writing are carried out. I think it would be fair to say that I had not considered the different ways in which we use text on the screen as opposed to text on a page. The point that ‘…pages…could be turned without distracting visual attention from reading…’ (p. 102) is very relevant and may be an insight into the difficulty I have with ‘screen reading’; there are too many potential distractions, and the ease of moving to other tasks does not support reading in the way a physical book does. The other issue I considered of high importance was the socialising aspect of using the PC: the use of technology in collaborative activity, but, interestingly, not in creating essays collaboratively. Perhaps this is still an ‘in the head’ task, whether through exam, personal task or within the classroom context. Overall, the five faces of writing as stated here offer opportunities to develop ideas of writing as a shared, culture-based activity; activity theory that is technologically driven?
The different aspects will necessarily have an ‘effect’ on the writing; this may be to some extent linked with the skills of the learner and the range of tools (technological artefacts) that the individual engages with. The processes carried out, however, may not ‘constitute’ writing; this will depend on the end product and whether the text is formatted in a coherent and logical fashion (as dictated by the cultural setting, perhaps?). It appears that, while the information-gathering tasks can be viewed as situated, possibly collaborative (distributive, if consensus is agreed?), the actual writing is a task carried out ‘from the head’ to the ‘paper’. As students used the resources in such diverse ways, we should first ask what learning it is that we are considering. Is it the production of a piece of work, or is it learning the cultural construction of this (a ‘learning to learn’ task)? The assignment produced will be mediated through a selected range of artefacts; the prescribed marking scheme for this piece of work may dictate that some marks are achieved by specifically referring to, or explicitly using, some of these (for example, online research references or a document formatting expectation), so in this sense the learning is mediated. If I were to assess (I am assuming ‘mark’ and not ‘assess for research’?) some of the students’ assignments described in this chapter, I would be guided by what the assessment criteria were, so necessarily by the end product and not the process.