Friday 29 February 2008

The last bit for week 4...

(Roschelle, 1992)
The claims in the evaluation are fair, given that the author is clear in stating the assumptions made (for example, which theoretical schools of thought are valued).
While the idea that ‘…conceptual change occurred…’ appears valid, I still find it difficult to justify that ‘…individual interpretations converged toward shared knowledge…’. I say this because I am still unclear whether a shared understanding can actually be viewed as ‘knowledge’ if the participants are unable to explain the concept in scientific terms and unable to transfer the concept to other scenarios – an inability to apply which, in most teaching and learning contexts, would be considered poor practice. The ‘reconstructing…to converge on the meanings shared by the…community…’ could surely not be guaranteed by collaboration; are ‘experts’ not needed to ensure understanding is ‘correct’? This brings me on to the issue of the Lave and Wenger citation in relation to social constructivism. I had viewed this in the context of the concept of ‘legitimate peripheral participation’, which does not seem to apply at all here, as the task is set up as an ‘experiment’ rather than within a community of practice; who is the master, and what is the experiment peripheral to?
The proposal of ‘…progressively higher standards of evidence for convergence…’ seems dependent on the construction being equally shared; but there is no evidence of this, and one contributor may well have been the ‘leader’ in constructing the ‘knowledge’. Would one additionally require comparisons of the level and number of interactions from each participant? I found this an interesting (if challenging) paper to read, as it clarifies that any research is very dependent on one’s views and theoretical preferences. Perhaps any reported research could be justified?

Thursday 28 February 2008

Still reading...

On reading Roschelle (1992), up to the evaluation
The argument being made is that it is possible to construct ‘an integrated approach to collaboration and conceptual change’ (p235). If you accept that the collaboration will occur in, for example, problem-solving activities (and that this is not just unconnected conversations), then it would be fair to research the specific ‘collaborative conversational interaction’. Roschelle’s proposal that the ‘crux of collaboration is the problem of convergence’ (p236) seems to assume collaboration is a meeting of joint understanding, whereas it could just as easily be viewed as an agreement to complete or accomplish a given task; however, if we concur with the assumption, the type of research required would indeed necessitate analysis of detailed conversational interactions – ‘conversational analysis’. The research discussions provided are very detailed, with specific notes on the situational activities, such as use of hand gestures, which seems very relevant to setting the context.
The results of the research are presented in a narrative format, with supporting diagrams; some use is made of statistical evidence (e.g. p242, ‘…the median point…’), but much less so than in other papers. The evidence included observer comments and analysis, transcriptions of conversations and their associated analysis, as well as post-experiment interviews. ‘Evidence’ is considered to be the conclusions reached by the participants; specifically, a ‘non-technical’ conclusion (in a language sense), which approximates the prized ‘correct’ scientific understanding, is considered acceptable. Is this sufficient ‘evidence’? The researcher also seems to fill in a lot of gaps with assumptions; for example, ‘It is plausible that her intention was…’ (p250) can hardly be considered evidence?
If the research question is to consider the construction of an integrated approach, then there appears to be much more ‘evidence’ for the collaboration than for the conceptual change. However, the small matter of highlighting the comment ‘…the use of we’, is perhaps a solid indication that the participants were indeed collaborating and did experience conceptual change.
The article used the concept of ‘deep features’ (of situations) as a crucial component of the research framework; however, this definition did not appear to be clarified at the outset and was therefore dependent on personal interpretation. Given the numerous references to these ‘deep features’, it seems difficult to guarantee that one has grasped the researcher’s meaning without further reading of one of the cited authors (such as Anzai and Yokoyama, 1984); I plan to check this out before continuing with the paper and considering Roschelle’s evaluation.

Tuesday 26 February 2008

Issues of access and moving on from week 3

In the past week, those two great institutions of BT and Sky have conspired against me, resulting in no internet access at home (and none for another week – help!!). Never mind, thought I; there is always the trusty local library. Wrong! This institution is also beset by technology problems and is down to two slow-running PCs for the entire local community. Increased internet access in the UK? The evidence suggests not, for the ‘man/woman in the street’.
This is also the point where I realised that I should store a list of ‘My favourites’ not just on my PC, but using a web-based source.
Due to these little hurdles, I have not managed to carry out any of the tasks that required access and downloading; for example, using Google Scholar or creating lists with RefWorks. I will have to catch up on these tasks next week, along with viewing others’ blogs and comments.
Good to know I can go back to the old-fashioned method of printing articles, reading them and highlighting/making notes by hand (is this perhaps what I should have done earlier?).

Now, on to the main point at this time – the Oliver et al reading.
I found this to be an interesting paper, as I have studied some of the theoretical issues raised, such as constructivism, on previous courses.
I was interested to review the approaches to knowledge, as well as approaches to learning; this seemed to set the context well for the discussion on methodological approaches, and it may be difficult to separate these aspects. For example, if we were to discuss ‘positivism’ as an approach to knowledge, this may assume that we ‘see things’ ‘out there’ and can investigate and discuss these clearly when viewing the external actions or behaviours, to agree what is ‘true’. However, this makes the assumption of similarly held values and beliefs and doesn’t explain the ‘in your head’ reasons behind the actions, nor the assumption that repetition makes something true for all cases. This approach to knowledge may indicate that the associative and cognitive approaches to learning, to which Beetham (2005) refers, are most applicable. If this is the case, the methodologies which involve measuring behaviour and practices – ‘technical in action’ research – are likely to be most suitable, as the evaluation of the findings will be dictated by a set of values (theories and models) applied to the practice occurring. This research could then be posited as ‘true’, with an awareness of the limitations.
If the approach taken to knowledge is a social perspective, whether constructivism or critical theory, then this may lead to assumptions that the social constructivist and situativist approach to learning should be dominant. The methodologies likely to be used here would necessarily be leaning towards interactions and/or the evidence of interactions that have already taken place; for example, interviews and focus group feedback sessions. The use of activity theory would also sit well with these approaches, as an analysis could be made which was dependent on the situation and its occurrences.

One aspect of this paper I should further consider is the issue of ‘…tacit communitarian or post-theoretical perspectives…’ (Roberts and Huggins, 2004), as I feel this may relate to the subject of management teaching and learning; so much is ‘in the heads’ of managers, and this needs to be made more explicit in order to link theory and practice.

I feel this paper sums up a number of the ‘big questions’ I have in relation to e-learning and its possible use; one needs to be clear of ‘where you’re coming from’ in relation to approaches to knowledge and learning (as well as – as I personally feel strongly on this – approaches to assessment). One thought which strikes me comes from a previously read book by Patricia Murphy, related to collaboration – ‘…learning to collaborate versus learning through collaboration…’ – how will it be possible to achieve on-line collaboration between unconnected managers? How will they learn to do this?

I must read on…

Thursday 21 February 2008

Week 3

I have been having internet access issues this week, due to the lack of a broadband connection at home, which has not helped the speed or ease of working on this course. Reflecting on this student experience, as educators we perhaps have to build in more 'flexi-time' for online courses.
The task on Using Academic Search Engines was, therefore, challenging, but having found internet access elsewhere, I was frustrated to find that the link from the course guide did not take me to the ISI Web of Knowledge or to Google Scholar. However, I have previously used the OU on-line Library resources for my MEd and carried out research using both subject papers and journal databases. When I came to use the link for the 'demonstration of citation searches', this did not follow through either, so I will have to come back to the online tasks when more on-line time is available, in order to review the search engines (ACM Digital Library is not one I have used before).
What I have had time to do this week is review reading 4 (Laurillard). The main argument appears to be one of concurring with the view that previous studies continually arrive at a 'maybe' answer to the research question. Having the word 'improve' in the research questions implies that whatever technology is available is 'bound to' improve the learning experience. However, are there just too many assumptions being made here? Evaluation may be being made by those who are biased towards using and implementing these technologies.
I have posted this to the TGF -
'For me, this reading seemed to highlight a number of issues that are relevant to educational research as a whole - not just technology. For example, in relation to conclusions of '...well perhaps..' or '...if X...' or '...depending on Y...'; why is this so common in educational research? Yes, often because we are dealing with 'people', a range of boundaries and the context are stated, to take into consideration the variability of any study, but in disciplines such as psychology or sociology, 'results' are often taken as 'true', regardless of the limitations of the study group. Does this mean that educational research has 'lower status', in some way? I was struck by the lack of any 'new' information being uncovered in this paper; the second half of the paper seemed to be dominated by stating practices, which I would view as basic teaching skills, regardless of using any technology. Also, the aspect of assessment highlighted - '...to re-think the assessment...' - I see as being driven by the widening participation agenda, rather than by technological capabilities or access'

Tuesday 19 February 2008

Finishing week 2 a little late...

Reading 3: OECD (2005) E-learning in tertiary education: where do we stand?
Activity 2.8: Reflecting on methods (2 hours) - considering the report:
Did the survey elucidate both good practice and international trends?
The study uncovered a wide range of issues, but not necessarily good practice, as it was difficult to benchmark institutions against one another; what constitutes ‘good practice’ would depend on the degree of e-learning in place, technological capabilities, etc., so the practice depends heavily on the context in which it is placed.
While geographically diverse institutions were mentioned, it is not apparent that any trends were revealed, as much of the information related to the correlation between western culture and level of engagement, rather than to global indications of development.
How could the questionnaires be improved?
The questionnaires were difficult to follow, due to the variety of questioning styles used. The quality of the data collated would be questionable, as much was derived from ranking variables and providing estimates; the results of these were personal perspectives, liable to great variability due to bias and preferences.
What alternative sampling strategies might be considered, and why?
Out of the total population of institutions within the OECD, a ‘fairer’ sample may have been obtained by ensuring the sample was selected to balance the range, size, status, locality, etc. of institutions. This would mean that any overall trend may have been easier to extract from the data, providing a clearer picture for the OECD as a whole.
If you were to do this research, what might you do differently? Would these alternative methods have disadvantages?
Any statistical data should be clearly derived from secondary data streams, to ensure the most realistic figures (not ‘guestimates’, as asked for in the survey). It may be difficult to ensure there are current and valid data available, but comparable years could be ‘index-linked’, for example. A decision should be made early on as to whether the study was to produce findings for e-learning in general or for the OECD area, as this did not seem clear; the survey is perhaps ‘too big’ to draw any generalisations from. However, if an overall OECD survey is what is required, the sample selected would be vital to the research and this would be a key consideration. Also, the selection of who was to complete any questionnaires needs to be considered, as this was another variable in the collection process, which could perhaps have been more controlled. Greater standardisation in the style and wording of questions would need to be addressed, to try to reduce personal preferences; I would additionally look at using e-forms/on-line methods for distribution, as the collation and analysis of data may be helped by PC software.

Friday 15 February 2008

The OECD report

Well, this reading (OECD report) was very interesting. I enjoyed this, as it seemed more relevant to my educational setting, used research methods and commentary styles with which I am more familiar and I could also envisage how some of the report may feed forward to researching and reporting on my own practice; I must earmark time to read the full report, though.
The researchers wanted to look at how e-learning was 'living up to' its historical claims. This was an investigation into a wide range of variables, such as who uses it, what methods are used, and what the cost and pricing implications are; as well as considering both the teaching and the learning perspectives.
The content of the overall report was clearly stated, as was the intention of the two studies.
The limitations stated were namely:
- the difficulty in generalising qualitative data across such a diverse spread of groups and variables; indeed, it '...cannot be said to give a representative overview...' in the OECD group
- likely bias in relation to 'self selection', as respondents who had an interest in elearning would be more likely to respond to the survey anyway
On the last point, though: is this not likely to be true of any research in education? It can often seem as if there is 'too much going on' and we need to be very selective in what we allocate time and other resources to.

Thursday 14 February 2008

Later on in Week 2....

I was reading postings in the H809 TGF and found that there are some aspects/terminology that I was just not sure about; for example, ‘Web 2.0’ – checked online and found - http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html
Perhaps I should reconsider what blogs and wikis are; why do people use them; what benefits are there? But have I time to carry out research on this now? Maybe I should have looked at this more fully before the course started.
Thoughts on reading 2: Wegerif and Mercer (1997) Using computer-based text analysis to integrate qualitative and quantitative methods in research on collaborative learning
· The key research question was whether computer-based text analysis would be of great value in research on child talk and knowledge construction in the classroom
· The setting is formal ‘taught’ learning, within primary school education
· Theories and concepts – collaborative learning; discourse analysis; coding
· Methods – use of small groups; pre and post questionnaires; analysis of word use
· Findings – inconclusive?
· Limitations – once groups have received ‘talk lessons’ how would future experiments be measured?; threads difficult to follow; likely to depend on intelligence level; prior skills in problem solving likely to impact on results for specific groups
· Ethics – should young children be used as ‘guinea pigs’? Specific ‘brands’ of software were used; was this the researcher’s selection, or was the research dependent on funding from manufacturers?
· Implications – need investment in technology and interpretative skills; further research may still not provide ‘evidence’
Considering positivist versus interpretive research methods while reading the text, I reflected that the strengths were the use of coding as a method of dealing with a large amount of data, and of discourse analysis to gain a depth of insight; the weaknesses were that, in coding, the construction thread of the knowledge creation was not evident or visible, and that much of the data analysis would depend on the interpreter (‘intuitive understanding’ (Barnes) would raise concerns).
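As a side note to the coding discussion: the kind of computer-based keyword counting that this sort of text analysis relies on could be sketched along the following lines. The marker words, function name and sample utterances below are my own illustrative assumptions, not the authors' actual coding scheme.

```python
from collections import Counter
import re

# Hypothetical marker words often associated with 'exploratory talk'
# (an assumption for illustration, not the researchers' exact list).
MARKERS = {"because", "if", "so", "why", "think"}

def marker_counts(transcript: str) -> Counter:
    """Count occurrences of reasoning-marker words in a transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return Counter(w for w in words if w in MARKERS)

# Invented sample utterances, standing in for pre/post transcripts.
pre = "I put it there. No. Yes. Put it there."
post = "I think it goes there because the key fits, so if we try it..."
print(marker_counts(pre))   # no marker words in the pre-intervention talk
print(marker_counts(post))  # each marker word counted once
```

Counts like these give the quantitative side; the qualitative side still requires reading the surrounding talk, which is exactly the interpreter-dependence noted above.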
Further reflections on reading
1. The transcript data would be preferable, as it would be easier to ‘count’ and make comparisons of both volume and time; whereas in video data, replays may become unmanageable to create useful, relevant data
2. The assumption of preconceived categories, rather than letting findings ‘emerge’ from the data, was evident in the comment that ‘…much of their talk is off-task…’; this talk is being dismissed as unimportant to the construction of the problem-solving knowledge, but how can this be judged? It would probably not be possible to avoid the use of preconceived categories when analysing this data, but this does not mean that categories may not still be constructed from the emerging data; for example, if all talk had to be categorised and a rule created that words classified as ‘other’ were not allowed to exceed x% of the total transcript, then new categories would likely be necessary, which would re-focus the researchers on the talk content.
3. Evidence that might support the claim ‘In the context of John’s vocal objections to previous assertions made by his two partners his silence at this point implies a tacit agreement with their decision.’ – John was happy to voice his opinion and was unafraid of ‘being wrong’, so the lack of disagreement could be viewed as agreement, otherwise he would have voiced another disagreement.
4. Re the claim of the increase in group scores, I did not ask myself if the researchers had looked at this in relation to the control group. I had made an assumption that the researchers would have done so; why, I am not sure.
5. In the post-intervention talk it is plausible that John is giving a reason. This may be where video evidence might show a clearer understanding of what John is stating as ‘fact’, rather than the questioning statements provided by others, who perhaps do not feel their ‘evidence’ is strong enough. The video data may help substantiate this claim.
6. I am not convinced that the study effectively demonstrates that using PC methods on the study of talk necessarily ‘combines strengths of quantitative and qualitative methods of discourse analysis while overcoming some of their main weaknesses’. There still seems to be much ‘…but if…’ and dependence on the interpretation of the qualitative data.
7. The computer may add to the analysis in terms of visible trends and ease of manipulation of data.
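The re-categorisation rule imagined in point 2 above could be sketched as a simple check; the category labels, function name and 10% threshold here are illustrative assumptions of mine, not anything taken from the paper.

```python
def needs_new_categories(codes: list[str], threshold: float = 0.10) -> bool:
    """Flag the coding scheme for revision if the share of
    'other'-coded utterances exceeds the chosen threshold."""
    if not codes:
        return False
    share = codes.count("other") / len(codes)
    return share > threshold

# Invented coded transcript: 3 of 10 utterances fell into 'other'.
codes = ["on-task", "on-task", "other", "on-task", "other", "other",
         "on-task", "on-task", "on-task", "on-task"]
print(needs_new_categories(codes))  # → True (3/10 = 30% > 10%)
```

A rule like this would not tell the researchers what the new categories should be, only that the ‘other’ bucket has grown large enough that the dismissed talk deserves another look.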
This week is passing so fast - need to move on to the next reading.

Tuesday 12 February 2008

Week 2 - still having doubts...

Well, the reading side of this course has been fine, but I did feel I needed to add a comment to the TGF, which is self-explanatory -

'On the subject of online teaching being more difficult and more time consuming, I would like to bring in the student's perspective and make the perhaps controversial note that this may also be the case for students. I use VLEs and firstclass/TGFs in my work and regularly use the internet; not a 'techie', but quite okay with my level of proficiency. I have taken a number of 'traditional' OU courses at degree and PG level and have always studied the course books (highlighting and making margin notes where required), attended tutorials where I could manage and submitted all TMAs on time, without a problem.
Now, this course poses a complete mindshift for me...the readings themselves, fine, but - I am expected to find the time for entries into some sort of learning log (okay - a blog, if I can ever get access to it); I need to access others' blogs and read these AND I have to find additional time to comment on these blogs, as well as participate in the TGFs.
Also, I am 'missing' having 'real' books - is it just psychological that I don't want to look at everything on a screen, as I feel it takes longer? - the use of the highlighter pen doesn't 'feel' quite the same in a virtual world! There is a lot of 'jargon' in others' postings, which I am having to look up, whereas for a 'taught' course a glossary would be provided.
Help!
I have not posted this to be negative or just to have a good old moan; I wanted to point out that I am probably experiencing exactly what some students experience at the start of their course. And perhaps a student's view is also that online learning can be '...more work, stressful and more time consuming too'.

We are to develop a 'technology timeline' -
1. When did different technologies become available?
2. When did their use become widespread in education?
Hmmm...without going back to cave paintings, as I noted some have, I would think along the lines of the following (but not sure of dates)
Television - 1950s (re government education/advertising) - 1960s in schools (but not widespread?)
OHTs - when?
PCs - 1970s - 1990s
Internet - 1980s - 1990s
Whiteboard/SMART board - not sure when available, but only noticed in last 5-7 years
VLEs - not sure again, but notice in last 5 years or so
Will check the wiki and see what others have noted.

Should I really be doing this course? (H809), Week 1

This blog was to have started last week, when I started this course, but due to technology problems.......anyway, I did write down my thoughts then and will post these below. But I am now thinking - why am I doing this course? It seemed a good idea at the time - I do see a number of benefits to using elements of online learning as supplementary or flexible components of courses; whether I will be converted to completely online study remains to be seen. I want to know enough to be able to carry out valid research to see how online components could be used to develop traditional classroom-based management courses.
07/02/08

Initial thoughts – start of course - Help!
Reading postings from others, I am feeling that my technology knowledge may be lacking.
Felt a little lost without a box of books, etc. arriving; have had to print out the calendar and assignment book, as a security blanket!
Problems setting up the blog – will leave it a few days.
Found the first reading – Hiltz and Meinke – understandable and easy to follow.
Phoned IT for blog help – will get back to me within 3 working days.

Have decided to type up reflections, so they are ready to post when I do get the blog set up; so, for the first questions
The key research questions were stated on the first page. But I think asking if ‘VCs are viable’ is not the same as ‘outcomes being as good’ – this should have perhaps been clarified.
The setting is HE, so formal ‘taught’ learning, within an arts college and a technology college
Theories and concepts – VC; CMC; active participation; collaborative learning; cognitive maturity; expanding access; key skills; mastery of subject; motivation
Methods – 107 students - questionnaires (pre and post); statistical data collection (behaviour, grades); personal interviews; case reports; participant observation
Findings – inconclusive; being a viable option depends on the range of variables; need motivated students, access and key skills (the same as you would for good outcomes using other delivery methods?)
Limitations – small sample; dictated to by available technology and data collection system; assumption of typical participation
Ethics – monitoring of participants; use of pen names; using participants’ statements (had they been asked?); poor access = poor results (unfair?)
Implications – need investment in technology access, study/key skills, teaching and learning via VCs; also teaching/facilitation skills; further research for higher level course or look at embedding skills earlier in life?
Accessed TGF and decided to post Activity 1.5, as someone else had been brave enough to do this and also because I can’t get the blog sorted out.
So, for Reading 1…
What counts as evidence?
I see this as the samples provided of the interactions and the qualitative statements from those taking part; this is key evidence used for the analysis. Also the statistical data on performance and the questionnaires, as any data generated is still ‘evidence’ – even if it is backing up the fact that conclusions cannot be made (?).

How do the two explicit research questions relate to the design of the research?
I think the first research question is a little unclear – I have a view that a ‘viable option’ (subjective? how do we measure it?) is not the same as ‘good outcomes’ (measured against poorer ones, which sounds reasonable). That said, the use of statistics was appropriate here, and the collection of qualitative statements contributed to the assessment of variables (where I agree – there seemed to be too many).

In what ways is the wider literature used in the paper?
This was used to introduce existing views and relevant ideas, such as collaborative learning, to ‘back up’ the approach taken or explain behaviour, such as ‘cognitive maturity’.

What views of education and learning underpin the research?
The view that making ‘more learning’ available, through technology, means that it is more accessible to a wider audience and could increase participation; but this may not be the case.
Is there also an assumption that teachers are still required to ‘deliver’ this online learning?