Final Reflection

This final reflection on my learning in EDEM630 is a chance to synthesise what I have learnt about technology-driven change in education and how that new knowledge can be applied in my own educational context, now and in the future. EDEM630 has been an asynchronous e-Learning course on Change with Digital Technology in Education. One of the most valuable aspects has been the practical context of actually experiencing that change for myself as I used various new technologies to enhance my own learning. For example, creating and maintaining my own blog was challenging initially, but the value of the tool for sharing and reflecting on learning soon became apparent. My blog and those of my fellow learners are now resources I find myself returning to repeatedly to help clarify my thinking.

I enjoyed the flexibility of the online course. It suited my learning style, as I like time to ponder the material and what has been said by others. The asynchronous format allowed me to think more deeply about the issues, and writing in the public forum forced me to consider my responses more carefully. I also liked the variety of resources, from written to visual and audio, and what I thought was a well-scaffolded range of activities. So, as I read about the impact of technology innovations on teachers and students as part of the course work and for my research article, I find myself critically reflecting on some of the findings. For example, I share Hrastinski’s (2008) view that asynchronous online learning is more appropriate for cognitive participation, including individual reflection and critical assessment of peers’ ideas.

We started by looking at change with technology from an ecological perspective. It was a good starting point, as I immediately understood how complex innovation adoption is and the range of stakeholders that affect and are affected by it. I have seen time and again the truth of the teacher as the ‘keystone species’ (Davis, 2008). In the tertiary environment, the institutional ecosystem also exerts a powerful influence, as technology-enhanced learning is driven as a strategic goal. As I have refined my topic and reading, it has been reassuring to find empirical evidence that supports some of my own observations, for example that IT stakeholders often have the power to promote or restrict technology adoption (Davis, 2008).

We also looked at theories of change with technology in the personal context. We looked at conceptual frameworks that stress the process of innovation adoption and the individual concerns and priorities at different stages of the process. I focused mostly on the Concerns-Based Adoption Model (CBAM) and, to a lesser extent, its refinement in the Learning Adoption Trajectory (LAT). I currently find myself at the beginning of an innovation project with Microsoft tablets (nothing to do with my research topic). As I prepare a staff capability plan, the CBAM has been helpful in reminding me to identify personal concerns and implement professional development and support that directly addresses those concerns (Evans & Chauvin, 1993). It’ll be interesting to see how the theoretical framework plays out in reality.

One of the most rewarding aspects of the course for me was the scenario planning. This was because it was also the most challenging, stretching my limited creativity to the max. After a false start with my matrix, I revised my axes to better effect. I enjoyed focusing on my own vocational tertiary context as it forced me to examine the key drivers of change in the region. I think scenario planning is a very useful tool for forcing decision makers to look past current trends and plan for longer-term, possible futures.

The final topic of learning was Marshall’s (2007) e-Learning Maturity Model. It is a self-critiquing and reflection tool that can be used by tertiary institutions to evaluate their levels of maturity in five dimensions that categorise e-Learning. It is based on assessments of current normal practices, so is highly reliable. However, it is a very complex assessment which would require considerable resourcing and time commitment. One ITP had been using it for six years to benchmark its capability (Marshall, 2012).

Most of all, what this EDEM630 course has given me is the ability to be a knowledgeable participant in my workplace. I have been able to synthesise my learning about change theories, technology trends, scenario planning and the eMM and feel I can now make a more informed contribution to the direction, implementation and assessment of our Technology Enhanced Learning Strategy.

References

Davis, N. (2008). How may teacher learning be promoted for educational renewal with IT? In J. Voogt & G. Knezek (Eds.), International handbook of information technology in primary and secondary education (pp. 507–520). Amsterdam, Netherlands: Springer.

Evans, L., & Chauvin, S. (1993). Faculty Developers as Change Facilitators: The Concerns-Based Adoption Model. To Improve the Academy. Paper 278. Retrieved from http://digitalcommons.unl.edu/podimproveacad/278

Hrastinski, S. (2008). Asynchronous and synchronous e-learning. Educause Quarterly, 4, 51-55. Retrieved from http://net.educause.edu/ir/library/pdf/EQM0848.pdf

Marshall, S. (2012). E-learning and higher education: Understanding and supporting organisational change [Case study report]. Wellington, New Zealand: Ako Aotearoa National Centre for Tertiary Teaching Excellence.

Marshall, S. (2007). E-Learning Maturity Model: Process descriptions [draft report]. Retrieved from http://learn.canterbury.ac.nz/mod/page/view.php?id=186287

Reflection #3

eMM

Way back in weeks 9-10 of this course, we looked at Stephen Marshall’s e-Learning Maturity Model (eMM). It is a self-critiquing and reflection tool that can be used by tertiary institutions to evaluate their levels of maturity in five dimensions that categorise e-Learning. When assessing an organisation’s capability using the eMM, the practice is to select a number of actual courses (e.g. three or four). The selected courses should be representative of organisational practice rather than exceptional, because the purpose of the eMM is to assess actual activity rather than intended future activity. Ratings on a four-point scale are rendered as colours that give an overall picture (‘carpet’) of capability in a particular dimension, visually identifying areas for action and guiding an institution towards maturity.
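As a purely illustrative sketch (the process labels, ratings and colour mapping below are my own hypothetical choices, not taken from Marshall's actual toolkit), the idea of turning per-process ratings into a colour 'carpet' might look something like this:

```python
# The four-point eMM capability scale, mapped to the colours
# that make up the 'carpet' view. (Colour choices are hypothetical.)
RATING_COLOURS = {
    "NA": "red",     # Not Adequate
    "PA": "orange",  # Partially Adequate
    "LA": "yellow",  # Largely Adequate
    "FA": "green",   # Fully Adequate
}

def carpet_row(process_name, ratings):
    """Render one process's ratings as a row of colour cells."""
    cells = [RATING_COLOURS[r] for r in ratings]
    return f"{process_name}: {' | '.join(cells)}"

# Hypothetical ratings for three processes in one dimension
delivery = {
    "L1": ["LA", "LA", "FA", "FA"],
    "L2": ["FA", "FA", "LA"],
    "L3": ["FA", "LA", "PA", "FA"],
}

for name, ratings in delivery.items():
    print(carpet_row(name, ratings))
```

Laid out with one row per process and one column per characteristic statement, the rows of colours give the at-a-glance picture of strengths and weak spots that the 'carpet' metaphor describes.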

I did an intuitive assessment of an online distance postgraduate course from a New Zealand university, focusing on just one process dimension (Delivery). The activity helped me to understand the comprehensive nature of the assessments, in this case the institution’s capability around the course’s learning outcomes, documentation and student support. I also created a mindmap to highlight the relationship between the different structural components of the eMM. I found that quite difficult to do, maybe because I’m a more linear thinker, or perhaps because I was already distracted by the looming prospect of the research article.

I do think this is a useful tool for tertiary institutions to use to self-critique their own e-Learning capability or maturity. I’ve mentioned several times that I’m on my institution’s Learning Technology Steering Group. On our social media site we have recently been discussing Marshall’s case study of a mid-size ITP with e-Learning goals very similar to our own (Marshall, 2012). Interestingly, the ITP had been using the eMM for six years to benchmark its capability, with the biggest gains coming when it gave responsibility to the teaching staff and focused on developing staff capability and new teaching models. Rather than following Rogers’ model of innovation, focused on early adopters, systems were put in place for adoption by all staff. That’s quite a commitment, requiring considerable resourcing to engage all staff.

Research assignment

I am struggling somewhat to clarify my research topic, finding it difficult to move from the review essay to a research focus. I have no primary research or case study to analyse, which means I’m relying on limited secondary research on the topic I chose – innovating with online synchronous technologies to develop students’ academic literacies. My reading and thinking have moved back and forth from Niki’s ecological perspective, looking at all the stakeholders that affect and are affected by adopting the technology, to a focus primarily on the tutor and student. While web-conferencing provides a viable, flexible alternative to face-to-face academic support in individual consultations, the literature suggests real issues around group or class engagement when the tutor is not the regular class teacher and has little time or opportunity to build rapport and trust. I’ve been reading about Michael Moore’s Theory of Transactional Distance and wondering if that is a relevant concept to use to analyse online learning practices. Moore’s theory examines three factors that influence the teacher/student relationship in distance learning: dialogue, structure and learner autonomy (Falloon, 2011).

So, anyway, I digress. I’m not sure how many theoretical concepts to use. Is three too many? (I want to include the CBAM.) What I’m thinking is that I need to turn this around and consider what guidance other tertiary learning advisors would appreciate if asked to teach and support distance students online. What information and skills do they need? What support do they need? How do they engage students in a meaningful and timely way? What are the barriers and pitfalls to look out for? And that’s where I’m at: purpose identified and the article still in the conceptual stage.

All that’s left to do is write it… time

References

Falloon, G. (2011). Making the connection: Moore’s theory of transactional distance and its relevance to the use of a virtual classroom in postgraduate online teacher education. Journal of Research on Technology in Education, 43(3), 187–209.

Marshall, S. (2012). E-learning and higher education: Understanding and supporting organisational change [Case study report]. Wellington, New Zealand: Ako Aotearoa National Centre for Tertiary Teaching Excellence.

Marshall, S. (2007). E-Learning Maturity Model: Process descriptions [draft report]. Retrieved from http://learn.canterbury.ac.nz/mod/page/view.php?id=186287

eMM Mindmap and Mini-Evaluation

The following is a visual representation of the eMM model and its structural components in relation to my context. Apologies for the difficulty in reading the detail – I’ll work on improving that.

eMM Mindmap

Strengths

The dimensions contain a number of process categories that examine performance based on examples of normal practice, so that the capability of the institution as a whole is examined, rather than just best practice. That means it looks at what the majority are doing, not just the star innovators.
The model aims to focus on standard, reproducible e-Learning processes, regardless of the different technologies and teaching practices in individual institutions, so that meaningful comparisons can be made across the educational sector.

Weaknesses

It is a very complex set of processes for an institution to follow, involving the examination of large amounts of detail, which requires considerable time and resources.
There is still an element of personal judgement in determining capability so the evidence that supports the assessment is critical.

Recommendation for using the eMM in my personal context

Yes, I think that, with appropriate resourcing, the eMM would provide a comprehensive, detailed assessment of the e-Learning capability in my tertiary context. For an institution in which change is the only constant, it would provide a ‘big picture’ understanding of how well the institution is achieving its goals, how it shapes up against comparable institutions, and how and where it needs to improve.

Marshall, S. (2007). E-Learning Maturity Model: Process descriptions [draft report]. Retrieved from http://www.utdc.vuw.ac.nz/research/emm/documents/versiontwothree/20070620ProcessDescriptions.pdf

Victoria University. (2008). E-Learning Maturity Model: Version two [Website]. Retrieved September 15, 2013, from http://www.utdc.vuw.ac.nz/research/emm/VersionTwo.shtml

eMM Process Intuitive Assessment

I have chosen an online distance postgraduate course from a New Zealand university for this activity. I conducted an E-Learning Maturity Model (eMM) intuitive assessment of the processes L1, L2 and L3 restricted to the Delivery dimension. The university’s current eLearning approach is characterised by blended and distance learning approaches that provide flexible learning opportunities through a Learning Management System (LMS), particularly at postgraduate level.

Assessments of the Delivery dimension aim to determine how effectively the process outcomes are delivered within the institution. I have used the eMM capability assessment scale of Not Adequate (NA), Partially Adequate (PA), Largely Adequate (LA) or Fully Adequate (FA).

The intuitive assessment results for each process in the category for the delivery dimension are:

L1. Learning objectives are apparent in the design and implementation of courses

LA – Formally stated learning objectives provided to a limited extent, either as narrative descriptions of the course outcomes or only in documentation provided after enrolment.
LA – Most, but not all, assessments and learning activities contain explicit linkages to course learning objectives or restate learning objectives using different wording.
FA – Learning objectives formally and systematically address a range of student outcomes beyond the recall of information.
FA – Learning objectives are formally and systematically linked with course workload and assessment design and development.

Comments: The course has clearly defined learning outcomes, which are available online in pre-course documentation. The learning objectives refer to the assessment dimensions of Master’s programmes and emphasise critique and reflection, while the learning outcomes systematically refer to abilities such as analysing, critically evaluating and applying.

L2. Students are provided with mechanisms for interaction with teaching staff and other students

FA – Interaction between staff and students provided formally through multiple complementary communication channels.
FA – Course documentation contains clear and consistently presented lists of teaching staff email addresses repeated in suitable places.
LA – Technical support is provided to students to assist them in making effective use of the available communication channels, but support is not actively promoted or provided to all students.

Comments: Interaction between tutors and students occurs through multiple media such as email, discussion forums and blog comments. Staff details appear on the LMS homepage, under course information and in the list of participants. Information on effective use of the communication channels is available, but support with technology issues is on an ad hoc basis.

L3. Students are provided with e-learning skill development

FA – The relationships between all key course components and activities are conveyed to students formally and consistently.
LA – Formal opportunities for students to practice with e-learning technologies and pedagogies provided after commencement of courses, or only cover some technologies and pedagogies or some courses.
PA – E-learning skills support and training is provided informally and depends on the teaching staff skills and availability.
FA – Formal opportunities for feedback beyond the marks assigned for assessed work provided during all major course activities to all students.

Comments: There is a clear progression and link from teaching components to writing activities. Opportunities to practise with e-learning technologies are embedded formally in the course, but support and training are provided informally online. There are formal opportunities to provide feedback on all activities via discussion forums and personal reflection.

Value judgement: The accuracy of the intuitive assessment is enhanced by clear and explicit documentation that is easily accessible on the LMS. However, the assessment is based on one course only and may not be representative of institutional practice. Some issues around accuracy may arise from a limited understanding of the characteristic statements for the Delivery dimension.

Disclaimer: This eMM assessment is based entirely on an intuitive assessment constructed on limited evidence and knowledge of the eLearning course in question, and completed purely as part of a formal course activity for EDEM630.

Reference:
Marshall, S. (2007). E-Learning Maturity Model: Process descriptions [draft report]. Retrieved from http://learn.canterbury.ac.nz/mod/page/view.php?id=186287

More Reading around the Virtual Classroom Environment

Karaman, S., Aydemir, M., Kucuk, S., & Yildirim, G. (2013). Virtual classroom participants’ views for effective synchronous education process. Turkish Online Journal of Distance Education, 14(1), 290-301. Retrieved from http://tojde.anadolu.edu.tr/tojde50/index.htm

Description:

In this case study, 20 participants associated with a theology degree programme were interviewed about their virtual classroom (VC) experiences. The aim of the study was to identify the key components that make the VC environment and teaching and learning methods effective.

In this study the common features of the VC included file presentation and screen sharing, chat, audio and video conversation, and whiteboard capabilities. All the VC sessions were supported by technical staff, who installed video and audio materials prior to the sessions and supported the instructor throughout the lesson. All sessions were recorded and subsequently published for students by technical staff.

Analysis of the interviews identified four aspects of the VC environment that were essential to success. Different communication formats were important for interaction between instructor and student, especially for motivation and when clarification was required. A lack of technical stability diminished effectiveness, and motivation was linked to the amount and immediacy of technical support. Scheduling of classes must suit students, and instructors noted that the typical one-hour classes were not long enough. Finally, students were more motivated when learning materials different from those on their Learning Management System were used, and in different formats; students wanted video summaries and problems to solve.

The other key component of success is the teaching method. Active participation, through questioning techniques as well as problem solving, was an effective motivator. However, some participants wanted specific times for questions so as not to disrupt flow. Even though online, the instructor should express enthusiasm through voice and body movements but should not engage students in ways that could be distracting to others. The material should be related to real-life issues and situations wherever possible to increase motivation. The degree to which students are prepared before the lesson (i.e. have previewed the reading material) greatly affects engagement in the VC session.

Evaluation:

Semi-structured interviews were conducted with only 20 participants (8 instructors, 10 students and 2 technical staff), which reduces the ability to generalise the findings. Another limitation was that participants all came from one programme. The focus on the VC teaching environment and methods has identified key concerns for both instructors and students, as well as highlighting the importance of comprehensive and timely technical support. While the article was light on detail, the qualitative interview approach gives me some insight into the thoughts and feelings of those involved in VC instruction.

Personal Reflection #2

This reflection is emerging in the wee small hours out of an inability to shut my mind off and let sleep embrace me. That, and having earlier read Lyn’s comments on procrastination…

The SP4Ed mOOC was a mixed experience for me; I found the topic of scenario planning fascinating, but the timeline challenging. For the two weeks I was part of it, I felt like I was in a parallel world, not quite in sync with this one. For unavoidable personal reasons, I missed the first four days of the mOOC. I didn’t follow the advice NOT to do every activity, choosing instead to work my way through the variety of activities I felt were giving me a scaffolded understanding of scenario planning. As a result, I was always a couple of days behind the timetable of activities, always wanting to catch up and contribute but never quite able to. The result was that my microblogging was sporadic, and I feel I missed timely opportunities to connect and network.

It wasn’t that there was too much to do (though the workload was intensive over such a short period) but rather that the selection of readings and videos was interesting enough to invite prolonged exploration. I compliment Wayne and Niki on the variety of stimulating resources and activities.

I really enjoyed delving into the processes and ‘worlds’ of scenario planning. I’ve learnt that scenario planning is not future prediction (although I’ve used that phrase in an early blog post) but rather a tool for planning for possible futures. It has already affected my perspective on planning in the Learning Technologies Steering Group I’m on at work. It’s reassuring that we have been looking at documents like the Horizon Reports in our strategic planning, but there is also value in extending the scope and looking beyond the next five years.

However, I did not find my own scenario planning easy. I don’t think I’m a naturally creative thinker or writer and creating my matrix was a difficult process. I can see why scenario planning is undertaken mostly by groups of creative experts rather than individuals. I took on board what Wayne said about needing to test the scenarios against your drivers and found that I did indeed need to change one of my original key drivers. That was a lightbulb moment for me. I approached the article with trepidation (and a lot of procrastination). What really helped was identifying major events on a timeline. Once I had the bones sorted, I found that I enjoyed the writing process a lot.

Also interesting, and very instructive, are the blogs of my peers, whose posts I find creative, insightful and often amusing. It’s great to have that resource at hand to provide an interesting range of perspectives, and they contribute significantly to my own understanding. They can also be a bit daunting, the high quality challenging me to maintain a certain level of writing. I see that as a good thing to strive for!

It’s been difficult this week to switch focus back to my research topic and review essay. Getting a couple more annotated bibliographies under my belt has helped. I find I’m looking forward to the challenge, happy in the knowledge that I’ve completed part of the course assessment, and ecstatic about the extended deadline!
Right, off to bed and, hopefully, to sweet dreams!

New CE Shows Reliance on Kiwi Innovation

Luke Brenton, originally from Rangiora, New Zealand, is set to become the highest paid company employee in history. Sources suggest that a salary in excess of $500 million has guaranteed the services of Reliance Group’s new Chief Executive. Brenton was previously the managing director of Reliance’s Technology Division and masterminded the volatile takeover of Microsoft seven years ago. Since then he has made his mark by aggressively taking on competitors in the technology industry like China Mobile and Apple.

Brenton expressed ironic satisfaction at the appointment, stating, “Of course I’m thrilled as the one chosen to put his head on the chopping block.” He didn’t elaborate, but this was probably a reference to his predecessor, Max Mengstrom, who ‘resigned’ earlier this month after a record US$27 billion loss last year for the world’s third largest multinational conglomerate. Brenton, by contrast, can claim to have headed the only Reliance division that posted a profit in the last financial year.

Much of that success can be attributed to the technological innovations coming out of Brenton’s home city of Christchurch, New Zealand. The Indian-owned Reliance Group have owned 87% of the technology and entertainment precincts in Christchurch since 2017, when the then National government was forced to accept foreign investment after the 2011 earthquake rebuild stalled. Reliance now have a 69% stake in the technology, textiles and retail industries across New Zealand, compared to a 43% stake worldwide.

Brenton attributes his own success to his education and early opportunities in Christchurch. He completed a degree in Business and Technological Innovation at the Christchurch Polytechnic Institute of Technology (CPIT) in 2024. This was four years after Reliance first started funding CPIT to create courses specifically designed to educate and train current and prospective employees for their business enterprises in Christchurch and elsewhere in New Zealand.

“I think it was a no-brainer for CPIT at the time,” remarks Brenton. “The rebuild was pretty much finished and they were looking for other stakeholders and areas of revenue to replace trades courses.”

Brenton reminisces on his time in New Zealand with fondness. “Reliance really looked after me…looked after all its employees actually. They paid for me to continue to study courses specific to my work at the time. I did short, block courses at CPIT. Later, when I moved to the New Zealand head office in Wellington, I did online courses through CPIT. The learning materials were very focused on the Reliance business model, but I was able to do some activities on the job, which suited me. They even gave me one of their first branded mobiles to use. I’ve had an Individual Learning Plan all the way through and access to personal mentors. They’re a great company!”

Of course, not everyone agrees that the Reliance Group is a great company. The announcement of Brenton’s promotion to CE has initiated another wave of protests, both here in Delhi and in New Zealand. Protesters are angry at Reliance’s latest takeover of the online shopping colossus Trade Me.

One protester in Delhi, who did not want to be named, complained, “They already have a monopoly on manufacturing and retail shopping – look at how many malls they own! Now they’ll control online shopping too. They’ll shut out the competition so they can keep prices high.”

Thousands of miles away in Christchurch, Juan Rodriguez was also protesting against Reliance’s use of technology. He worked in Reliance’s technology innovation division in New Mexico before being relocated to Christchurch two years ago. Ten months ago he lost his job after complaining about breaches of the Privacy Act arising from the information gathered through learning analytics used at CPIT.

“I worked on the learning analytic programme Reliance uses to collect data to support educational progress. It’s a great way to analyse learner interactions with content and identify issues early. However, I believe that Reliance were being unethical and exploiting the data for commercial purposes.” Rodriguez took a personal grievance to court but unfortunately lost his case.

Back in Delhi, Brenton makes no apologies for using analytics and sees the potential of data mining as the key to turning around the fortunes of the Reliance Group. He also sees an opportunity to help his native city and learning institution.

“When I was last home, I talked to the Chief Exec at CPIT about the future of the institution. They are now developing approved programmes of study for Reliance in a wide range of subjects. I’m thinking of developing that aspect, getting expert educators to collaborate to produce resources that can be taught in blended or online courses. CPIT could become a hub of educational innovation… within the values and control of the corporation of course.”
