Tuesday 27 May 2008

TMA03

[The O’Donnell et al (2006) paper will be referred to as the U-study; the Rekkedal and Qvist-Eriksen (2004) paper will be referred to as the NKI-study.]

Settings and Theoretical Contexts
While both evaluations considered online support for learners, substantially different perspectives were evident. The U-study was based within a European HEI and focused on two in-house online support packages available to all HE students. The study assumed ICT would enhance educational opportunities; the packages were developed to assist new students through an induction programme and a support facility for those expected to use ICT. The induction package focused on orientation and study skills, including ‘learning to learn’ skills (such as time management) and confidence building; the support package featured ICT literacy components, including technical aspects and access issues. This was an evaluation study of the packages, but the purpose(s) of the evaluation remained unclear.

The NKI-study was set within an organisation driven by commercial needs to provide flexible learning solutions that allowed delivery to be taken at an individual pace; this was stated as their ‘philosophy’ for internet education. NKI’s previous research (for example, Rekkedal and Paulsen, 1998) confirmed flexibility was a key driver of their strategy, and this study intended to evaluate whether customer demands were being fulfilled. The intention was two-fold: to examine students’ support needs and to assess their satisfaction with the support available.

The U-study suggested online packages would support learners and could thereby enhance their educational opportunities; reference is made to Salmon (2000) in relation to this. However, while there is evidence that new opportunities could exist to support learning through online means, the use of Salmon’s theory was perhaps irrelevant, as this model of online learning stages was aimed at the development of collaboration and moderation skills, not at packages to support the initial stages of student learning. The implications of students being unable to learn online effectively were of greater relevance, and perhaps the views of, for example, Tyler-Smith (2006) on developing ‘learning to learn’ skills would have given greater weight to the rationale; indeed, the importance of preparing students for online study appears to be crucial to this research. The study assumed that students would need to develop some ICT skills, and there was also an awareness that HEI providers may assume a certain level of IT literacy exists when this is often not the case (Somekh, 2000).

The NKI-study suggested the student group could be an important source of social and academic support in learning; however, it was not clear how this support was to be prompted, given that the emphasis was on individual student learning. While forums are encouraged, they are not used to any large extent as part of the learning experience, so theories of collaboration cannot realistically be applied.
The products were originally developed along the lines of didactic teaching, where the learner is guided through input models; although literature on developing theories has been reviewed, particularly relating to the possible importance of collaboration and social learning, the solutions continue to be student-led and based on personal knowledge construction. Online interaction was not considered important by many students, and one might concur with this if accepting the evidence of Davies and Graff (2005), who established that improved performance was not necessarily achieved by those who interacted more (although a greater volume of online activity could lead to higher grades, and interaction may support ‘potential failers’). The NKI-study agreed that a social constructivist perspective may appear ideal as a social theory of learning; however, if online collaboration is not taking place and students have already stated that individual learning is of prime importance, the NKI-study’s perspective is more strongly dictated by Piagetian roots.

Strengths of Methodological Approach
The NKI-study was easy to follow and logically structured. As a commercially aware organisation, NKI clearly based much of their study on their own previous research and are attempting to continually assess the viability of their internet learning packages. With access to paying clients as a sample population, their market research is a valuable method of assessing product value; the study was targeted at the same audience, within the same context. A key strength is the clarity of the link between the need to review clients’ perceptions of learner support and satisfaction and the method of data collection. The use of open-ended questions, for depth of response and to follow up previous market research, ensured that the data would be of relevance and current value to the organisation.
In relation to its findings, the report declared that clients were, overall, satisfied with the support elements of the learning packages; however, students’ need for support services might have been more meaningfully measured through greater quantitative analysis.

The U-study provided a clear overview of the content and purpose of the packages (if not the intention of the study), and this was justified using a wide range of sources. The study was aided by access to the whole student population within the HEI and the technical ability to offer online questionnaires, which would encourage those already accessing online resources to respond. Some free response questions were available, and both pre- and post-questionnaires were used, a method also used by Bos et al (2002) that allows comparative analysis.

Limitations and Weaknesses
For the NKI-study, the intention was for the study support elements (classified as the parts of the course additional to the course content) to be analysed; however, might some students have had difficulty in separating these components? A key criticism would be the sampling: the use of statements such as ‘…nearly at random…’, the lack of clarity in explaining the relevance of a ‘student number’ in the selection process, and the variety of selection judgements allowed to the interviewers all bring the validity of the study into doubt. The quantitative data collated was not analysed in any detail and was only ‘revealed’ in the final summary; the qualitative statements were not summarised.
While NKI established that the first phase of study was important, there seemed to be little emphasis on developing trust in the learning technology and learner interface; Bos et al (2002) suggest that it is difficult to establish this online trust, so it may be crucial to explore this aspect and its possible links to the lack of study support use, since trust in the NKI systems is being assumed.

The main weakness of the U-study was the lack of clarity of purpose; the aim was not stated until the end of the report, under the discussion, and only at this point does it become known that the study was intended to evaluate effectiveness and ‘…if it could be delivered appropriately online…’. There was no scale defined for measuring effectiveness, and one would also question the meaning of ‘appropriately’ here; this should be explicitly defined.
In the U-study, demographics were discussed, but not stated as a relevant issue at the outset; nor was it clear how this discussion was linked to the support packages. It may have aided the reader to have a clear definition of what the authors meant by a ‘Primer’, as familiarity with the terminology was assumed. Another weakness of this study is the amount of time spent on research, analysis and summary of the locations of IT access and the connection types; this appeared irrelevant to the issue of support and was not stated as a main strand of the study. In fact, there was a volume of quantitative data which did not appear to ‘answer the question’; for example, satisfaction is not a measure of effectiveness unless a scale defining the term is clearly stated. While it was acknowledged that the packages ‘improved confidence’, this again is not an indication of ‘being effective’. Additionally, the authors did not respond to negative free response comments regarding time consumption and volume of information; these could have been linked with the issues Davies and Graff (2005) considered in relation to time pressures in students’ studies being a major component of the online learning experience.
In relation to validity, those who accessed and completed the packages are likely to be more IT-competent anyway and would also be more likely to complete online questionnaires, thus skewing the data.
Lastly, in relation to ethical issues, it is unclear how students could remain ‘anonymous’ when a logon to the system would clearly be required, which could be traced back to an individual.

Proposals
The key proposals for the NKI-study would be:
Offer alternative feedback methods
Focus questions on specific elements to aid development of course components
Specify sampling selection methods

For the NKI-study, as learners are perceived as competent online users, providing online questionnaires might have allowed a higher response rate. It may also have been of greater value to the organisation if some focus had been placed on the usefulness students attached to the support elements; perhaps specific questions on tutor support and/or feedback are required. However, the recommendation would be for responses to be provided in a different format; the personal interaction of telephone interviews may lead to some respondents being uncomfortable about providing negative statements, particularly regarding tutors. The selection of questions at the personal choice of the interviewers could also be addressed in this focusing of content, and NKI could also have targeted questioning at the reasons for the lack of collaboration; if learners can be encouraged to ‘think by learning’ through online activities (as Nardi et al, 2004, discuss), and this is seen as a support mechanism of value, then learning as a social process may be encouraged.
While any group of students may provide reliable data, the method of selecting the sample should be stated; this would increase the validity of the study results.

The key proposals for the U-study would be:
Clarify reasons for the evaluation
Require a control group to complete the package
Analyse free response questions
Collect qualitative data through interviews
Improve clarity of graphical representations
Gather feedback from non-student agents

Aims must be stated at the outset and clearly defined, including the boundaries of the study and the terminology used; this would ensure the design matched the intended purpose, as well as allowing the reader to ‘see where it’s going’.
If a control group were required to complete the package, judgements could be made about which support components were most used and valued by students, rather than making assumptions based on information provided by those who freely chose to participate (as these may well have higher ICT skills anyway). If all members of the control group participated, it may be possible to determine what additional support was actually required, much as was evidenced in the Cox (2007) study, where individual differences in learning were made more transparent. In relation to the use of the data, free response answers should be analysed and fully summarised; researchers cannot ‘ignore’ negative input or aspects brought to light which were perhaps not anticipated at the start of the study.
Qualitative data could be collected through interviews, to encourage depth of response; it would be particularly important to gain insight into why some students felt the support was ‘effective’. As Bell (2005) suggests, a guided interview could be used, so that questions would be focused around support package issues, but with greater allowance made for issues not covered by the researchers’ initial ideas on support and its effectiveness. If data were collected only from those who freely access the packages, the results would be unreliable when applied to the whole HEI population.
Best use should be made of the data collated, and simple graphical representations would support the discussions more strongly; for example, presenting the ‘attitudes’ data as a graph, rather than in table format, would have illustrated the differences between the ‘pre’ and ‘post’ results more clearly.
Feedback from non-student agents would be an integral part of a well-balanced study where the ‘effectiveness’ of the packages is being measured; quantitative data may express grades, but showing a true correlation with the use of support may be challenging. Insight into actual performance could be gained by asking tutors whether weaknesses were still apparent in those who had completed the packages; from experience, it is not taking part in the learning that provides the results, but the ability to transfer the skills acquired.

Overall, the NKI-study is seen as ‘better’, due to its intentions being stated and, to a large extent, met. However, as Tolmie (2001) suggests, perhaps the complete online experience needs to be ‘…examined, not just the use of the technology in isolation…’ (p. 237).

(Word count 2112)

References

Bell, J. (2005) Doing Your Research Project: A guide for first-time researchers in education, health and social science (4th edn), Buckingham, Open University Press.

Bos, N., Olson, J., Gergle, D., Olson, G. and Wright, Z. (2002) ‘Effects of four computer-mediated communications channels on trust development’ in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Changing Our World, Changing Ourselves, Minneapolis, Minnesota, 2002, New York, NY, ACM; also available online at http://libezproxy.open.ac.uk/login?url=http://doi.acm.org/10.1145/503376.503401 (Accessed 3 May 2008).

Cox, R. (2007) ‘Technology-enhanced research: educational ICT systems as research instruments’, Technology, Pedagogy and Education, vol. 16, no. 3, pp. 337–56; also available online at http://libezproxy.open.ac.uk/login?url=http://dx.doi.org/10.1080/14759390701614470 (Accessed 19 May 2008).

Davies, J. and Graff, M. (2005) ‘Performance in e-learning: online participation and student grades’, British Journal of Educational Technology, vol. 36, no. 4, pp. 657–63; also available online at http://libezproxy.open.ac.uk/login?url=http://dx.doi.org/10.1111/j.1467-8535.2005.00542.x (Accessed 5 May 2008).

Nardi, B., Schiano, D. and Gumbrecht, M. (2004) ‘Blogging as social activity, or, would you let 900 million people read your diary?’ in Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work, Chicago, Illinois, New York, NY, ACM; also available online at http://libezproxy.open.ac.uk/login?url=http://doi.acm.org/10.1145/1031607.1031643 (Accessed 12 May 2008).

O’Donnell, C.M., Sloan, D.J. and Mulholland, C.W. (2006) ‘Evaluation of an online student induction and support package for online learners’, European Journal of Open, Distance and E-Learning [online], 2006/I, http://www.eurodl.org/materials/contrib/2006/Catherine_M_O_Donnell.htm (Accessed 3 May 2008).

Rekkedal, T. and Paulsen, M.F. (1998) NKI Internet Students – Experiences and Attitudes. The Third Generation NKI Electronic College: An Evaluation Report Written for the Leonardo Online Training Project [online], http://nettskolen.nki.no/forskning/33/evaluati.htm (Accessed 20 May 2008).

Rekkedal, T. and Qvist-Eriksen, S. (2004) ‘Support services in e-learning – an evaluation study of students’ needs and satisfaction’, European Journal of Open, Distance and E-Learning [online], 2004/I, http://www.eurodl.org/materials/contrib/2004/Rekkedal_Qvist-Eriksen.htm (Accessed 1 May 2008).

Salmon, G. (2000) E-moderating: The key to teaching and learning online, London, Kogan Page.

Somekh, B. (2000) ‘New technology and learning: policy and practice in the UK, 1980–2010’, Education and Information Technologies, vol. 5, no. 1, Netherlands, Springer; also available online at http://www.springerlink.com/content/m632j4315m210154/ (Accessed 10 May 2008).

Tolmie, A. (2001) ‘Examining learning in relation to the contexts of use of ICT’, Journal of Computer Assisted Learning, vol. 17, no. 3, pp. 235–41; also available online at http://libezproxy.open.ac.uk/login?url=http://www.blackwell-synergy.com/links/doi/10.1046/j.0266-4909.2001.00178.x (Accessed 2 April 2008).

Tyler-Smith, K. (2006) ‘Early attrition among first time eLearners’, Journal of Online Learning and Teaching [online], vol. 2, no. 2, http://jolt.merlot.org/Vol2_No2_TylerSmith.htm (Accessed 15 May 2008).
