Tanya M. Joosten
University of Wisconsin-Milwaukee
This article provides a series of practical considerations for planning an evaluation of blended learning. In order to provide a clear and meaningful guide to evaluating blended learning, this article uses the why, who, what, how, and when of the evaluation process as a planning framework. Why is blended learning being evaluated? What variables will be examined in the evaluation? How will the data be collected, analyzed and presented? Who will be responsible for gathering and analyzing the data? Who will participate in the study and be the primary source of data? When will the evaluation be conducted and completed? Examples will be provided from the University of Wisconsin-Milwaukee’s evaluation planning as part of its Sloan-C Localness initiative. Finally, this article identifies potential challenges in the evaluation of blended learning and recommends strategies to overcome these challenges.
Practical Considerations in Planning the Evaluation of Blended Learning
Blended learning is growing rapidly and becoming increasingly popular on campuses. According to Allen, Seaman, and Garrett (2007), “overall, 36 percent of schools offer at least one blended program” (p. 36). As the demand for blended learning opportunities increases, so does the need to evaluate the impact of blended courses and programs in order to provide a greater understanding of blended learning.
Blended learning, which is sometimes called hybrid learning, is a combination of face-to-face and online learning activities. These courses “blend” the two mediums so that instructors can match the most effective teaching methods to the characteristics of each medium. As Garnham and Kaleta illustrate (March 20, 2002), “Hybrid courses are courses in which a significant portion of the learning activities have been moved online, and time traditionally spent in the classroom is reduced but not eliminated” (para 1). More specifically, Picciano (2006) describes the accepted definition of blended learning developed and adopted by the participants of the 2005 Sloan-C Workshop on Blended Learning as:
1) courses that integrate online with traditional face-to-face class activities in a planned, pedagogically valuable manner; and,
2) where a portion (institutionally defined) of face-to-face time is replaced by online activity (p. 97).
The benefits of the blended model lie in its potential to provide flexibility in learning and additional opportunities for instructors to meet their pedagogical goals.
There are several reasons for the growing interest in blended learning and for the increasing number of blended learning initiatives undertaken on campuses. Faculty implement the blended model in order to take advantage of the pedagogical rewards of using two mediums, online and face-to-face (Godambe, Picciano, Schroeder, & Schweber, 2004), which include the opportunity to make student learning more active. For example, Kaleta, Skibba, and Joosten (2007) describe that “faculty decided to try the hybrid model because of the many teaching and learning benefits…including the ability to provide more ‘active learning’ and ‘engage’ students by using technology” (p. 136). Other often cited reasons for the increased interest in blended learning relate to opportunities for improving student learning (Dziuban, Hartman, & Moskal, 2004), increasing student satisfaction (Dziuban & Moskal, 2001), and increasing retention and access (Picciano, 2006). For instance, Picciano (2006) explains that “well-designed blended learning environments have the potential of increasing access to a higher education because they improve retention” (p. 100). Due to the potential of the blended model to improve student engagement, learning outcomes, student satisfaction, retention, and access to courses and programs, it is not surprising to see the increasing acceptance and adoption of blended learning.
The purpose of this article is to help individuals with the planning of their evaluation by discussing the fundamental issues that need to be considered when developing an evaluation plan for blended learning. The ideas presented in this article are an extension of a presentation at the Sloan-C Workshop on Blended Learning in Higher Education.
When any new pedagogical or technological implementation is integrated into a course or program, an evaluation of its impact is necessary. As faculty change their pedagogical model to include online activities and design a blend of the online and face-to-face environments, educators are looking to examine the impact of blended learning. They want to be able to determine whether or not blended learning is having the expected impact at the course, program, and campus levels. Given this need across campuses, evaluation planning is quickly becoming a priority.
In planning for the evaluation of blended initiatives, there are some broad principles that are key to successful preparation. First, evaluation should be an integral part of any planning for a blended learning initiative. It should be integrated into the initiative plan from the beginning to give it the focus needed to measure the changes taking place and to understand the impact that blended learning is having on students, faculty, and the campus. Next, keep the evaluation plan simple. If not controlled, focused, and organized, an evaluation can quickly evolve into an unmanageable project. Try to focus on the high priority goals and needs reflected in the initiative’s plan. Finally, the implementation of the evaluation needs to commence early. Evaluation can be intimidating and, therefore, people procrastinate. Also, because institutions are initially focused on preparing faculty and students for blended learning, they often fail to consider evaluation until the first semester or first year has passed. Early planning and implementation result in effective and efficient evaluation. Evaluation does not lend itself to a “just in time” approach.
This article will now address the specific issues that should be considered when developing an evaluation plan. In order to provide a useful guide to evaluating blended learning efforts, this article uses the why, what, how, who, and when of the evaluation process as a planning framework. Further, to make the process more clear and meaningful, the article will provide insight from one institution’s evaluation of blended learning.
Why is blended learning being evaluated?
In developing the evaluation plan, the purpose of the evaluation is to determine whether or not the goals of the initiative were achieved. What were the intended outcomes of the initiative? One common intended outcome of blended learning initiatives is to increase access for students. Specifically, Lorenzo and Moore (2002) identified Sloan’s pillars of excellence, which are goals for blended learning, as learning effectiveness, student satisfaction, faculty satisfaction, cost effectiveness, and access. Also, the intended audience, or who will receive and use the evaluation findings, impacts the purpose of the evaluation. The audience can be students, faculty, or administrators on campus. It can also be outside grantors who funded the initiative. What does the audience want to know about the blended learning initiative? For example, does the audience want to know how the blended model impacted student learning, attracted new students (remote, non-traditional, minority students), or affected retention rates? The needs of the audience will also be affected by the administration’s support of blended programs, other initiatives on campus, the campus culture surrounding blended programs, and the general climate in education surrounding the pedagogical model or technology.
At the University of Wisconsin-Milwaukee (UWM), we are evaluating blended learning as part of our localness project. The project, Blending Life and Learning (http://blended.uwm.edu), was funded by the Alfred P. Sloan Foundation Anytime, Anyplace Learning, Trustee grant with the goal of changing the institution and its relationship with the community by increasing working adults’ access to academic programs and reaching new markets of students in the broader metropolitan area. Mayadas and Picciano (April, 2007) describe the concept of localness as “focusing on connections of higher education institutions to their local communities and/or radii of influence” (para 6). Specifically, they describe the idea of localness as one where “educational institutions [strengthen] their positions within their local regions by expanding their [asynchronous learning network] ALN and blended programs…A strong ALN or blended effort [permits] institutions to extend and expand their effective core constituent bases” (para 6). One goal of our initiative at UWM is to expand our access to local residents through our blended learning initiative.
UWM is not new to blended learning: its 1999 to 2001 Wisconsin System Curricular Redesign Grant program focused on the implementation of blended courses and the evaluation of instructors’ and students’ experiences (see Garnham and Kaleta, March, 2002), resulting in the internationally recognized web repository for hybrid and blended learning (http://hybrid.uwm.edu). UWM is a doctoral institution primarily serving the population of over two million in the seven-county Milwaukee metropolitan area, with a current enrollment of over 29,000 students and a 93-acre campus that offers few options for physical expansion. UWM therefore implemented the Blending Life and Learning initiative to increase the possibility of adding new enrollments from its local population by offering additional blended courses and programs. The purpose of the UWM evaluation of blended learning, then, is to determine whether an increase in the availability of blended programs and courses attracted local, non-traditional students.
What variables will be examined?
In addition to defining the purpose of the evaluation based on the goals of the initiative and the audience, the variables that will be examined to address the purpose of the evaluation need to be isolated. Variables are a “characteristic or attribute…that researchers can measure or observe” (Creswell, 2008, p. 123). The why question will largely drive which factors, such as grades, satisfaction, performance, retention, or student status, will be measured.
At UWM, where we are evaluating blended learning in order to determine whether an increase in the availability of programs and courses attracted local, non-traditional students, we must determine which variables will assist us in answering this question. First, there are characteristics of the curriculum that will assist us in addressing our evaluation (e.g., number of new courses, new programs, new enrollments). Next, there is student demographic information that will assist us (e.g., number of commuter, part-time, working, and local students; time to degree; locale) as well as student satisfaction rates. Finally, there is supporting services information surrounding the program that will help us answer our question (e.g., marketing efforts, student support services). Remember, each of the variables selected as part of the evaluation needs to be able to be isolated and measured.
Once the variables are defined, a clear research question can be written. The evaluation may have multiple research questions. As illustrated previously, the purpose may be to examine how student achievement was impacted by blended programming. The grade point average of students who completed the program before it was blended can be compared to that of students who completed the blended program to illustrate a relationship (increase, decrease, or no change). The research question needs to be specific and concise. It will identify the variables and a relationship. For example, do students who complete blended programs on our campus achieve at levels similar to those who completed traditional programs? At UWM, we developed questions such as: What percentage of new programs, new courses, and new enrollments were the results of the BLL initiative? What is the increase in student “local” enrollments? To what degree did the radius of student enrollments expand? Examining the purpose of the evaluation and the audience can assist in defining variables that are measurable. Then, incorporate the variables into a clear and concise statement, or research question, which shows a relationship between the variables to be measured. By completing these steps, a clear evaluation plan will develop.
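To make the before-and-after comparison concrete, the following is a minimal sketch in Python using entirely hypothetical GPA figures (the numbers and group sizes are invented for illustration and are not UWM data):

```python
from statistics import mean, stdev

# Hypothetical grade point averages for students who completed the
# program before it was blended versus after (illustrative numbers only).
traditional_gpas = [2.9, 3.1, 3.4, 2.7, 3.0, 3.3, 2.8]
blended_gpas = [3.0, 3.2, 3.5, 2.9, 3.1, 3.4, 3.0]

def summarize(label, gpas):
    """Report the descriptive statistics most audiences can interpret."""
    return {"group": label, "n": len(gpas),
            "mean": round(mean(gpas), 2), "sd": round(stdev(gpas), 2)}

before = summarize("traditional", traditional_gpas)
after = summarize("blended", blended_gpas)
change = round(after["mean"] - before["mean"], 2)  # increase, decrease, or no change
print(before, after, {"mean_change": change})
```

A simple descriptive comparison like this shows the direction of the relationship without requiring the audience to understand inferential statistics.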
How will the data be collected, analyzed, and presented?
The collection, analysis, and presentation of the data will be important in effectively answering the research questions and clearly communicating those answers to the intended audience, illustrating the achievement (or lack of achievement) of the initiative’s goals. Each campus will have different reasons for evaluating blended learning, and the audience will be unique to each evaluation. The audience greatly influences the data collected, the method of analysis, and the form in which the results are presented. For instance, a faculty member wishing to evaluate the effectiveness of his or her class will have a different approach than an evaluation of a campus-wide blended learning initiative funded by the provost’s office. A faculty member might desire to show his or her colleagues that implementing blended learning resulted in a more effective means of achieving learning objectives through a case study approach. This qualitative approach is very detailed and time consuming. It also only gives a glimpse into one contained phenomenon, one course, so its generalizability can be limited. The audience, other faculty members, can use this case study in supporting their own decisions in their courses; however, it will not be particularly useful to show programmatic impacts on a campus, which would be necessary in the latter scenario.
At UWM, our current evaluation focuses on whether our recent implementation of blended learning has increased our ability to attract local, non-traditional students as part of our Blending Life and Learning initiative. We will demonstrate to our funders that their investment made an impact on our campus. Specifically, the objective of our evaluation is to show whether or not the BLL initiative resulted in the predicted outcomes, increasing access for these students. The funding for our project was received from the Alfred P. Sloan Foundation and a matching donation from the University of Wisconsin-Milwaukee. Our specific audience is the project investors. We need to provide quantitative evidence that the funded project resulted in an increase in localness.
Although our stakeholders might find the case study approach interesting, our audience’s questions could not be fully addressed using this methodology. UWM’s evaluation focuses on collecting and analyzing data surrounding the change in student characteristics after our new blended programming was implemented. This is our necessary data. In our evaluation we may decide that it is nice to know how the faculty perceive the impact of blended learning on their course, but that is not a requirement for this evaluation. In order to effectively manage the timeline and resources, an evaluation must stay centered on the initial questions: why is blended learning being evaluated and who is the intended audience?
As we briefly described previously, a case study analysis of an individual blended course would not be appropriate to demonstrate the institutional impact of a localness initiative. Also, a multivariate analysis of several variables would not necessarily be understood by a general audience, so those statistical methods would be reserved for audiences of faculty and scholars. Therefore, in planning an evaluation, how the data needs to be presented, to whom, and what question it is answering will greatly impact how the data is collected and analyzed.
In determining the research method, it is important to revisit the evaluation’s purpose and the audience. What is the knowledge base of the audience when it comes to understanding the reporting of the findings? In most cases when it comes to evaluating the impact of blended learning, descriptive statistics can be used to answer the outlined research questions. Descriptive statistics can be understood by most general audiences. For instance, in one study we ran a multiple regression looking at student engagement, performance, learning, and some other variables, only to realize later that the audience was not familiar with the method, nor did the method address the primary research question. In addition to descriptive statistics, qualitative findings can provide rich meaning to accompany the statistics or can provide webs of significance when implemented independently. So, once the audience and how the findings could be presented to them have been considered, the methodology can be determined. Individual perceptions can be gathered using Likert surveys, narratives, or focus groups. Also, data mining can be used to gather archived institutional data (e.g., grade and retention data, course evaluations). In determining the methodology, it is important to consider ease of administration (e.g., web-based surveys) as well. Collecting richer qualitative data is a possibility, but will greatly depend on the size of the implementation and the resources available.
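As a minimal sketch of the kind of descriptive summary a general audience can read, the following tabulates hypothetical responses to a single 5-point Likert satisfaction item (both the data and the item wording are invented for illustration):

```python
from collections import Counter
from statistics import mean, median

# Hypothetical responses to "I was satisfied with the blended format"
# on a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree).
responses = [5, 4, 4, 3, 5, 4, 2, 5, 4, 3, 4, 5]

distribution = Counter(responses)  # frequency of each rating
pct_agree = sum(1 for r in responses if r >= 4) / len(responses)

summary = {
    "n": len(responses),
    "mean": round(mean(responses), 2),
    "median": median(responses),
    "percent_agree": round(pct_agree * 100, 1),  # rated 4 or 5
}
print(summary, dict(sorted(distribution.items())))
```

Frequencies, a mean, and a “percent agree” figure like these can be reported directly or turned into a simple bar chart for a non-specialist audience.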
Who will be responsible for gathering and analyzing the data? Who will participate in the study and be the primary source of data?
The data resources and research support available will impact the type of methodology that is feasible to accomplish in a given time frame. For instance, the evaluation may be conducted or supported by a campus research unit or may be the responsibility of an individual faculty member. Some evaluations will be at the course level and others will be at the program level. At UWM, the evaluation of the BLL project is being conducted by those responsible for the management and coordination of the initiative rather than by an institutional teaching research unit. Because the responsibility is not that of any one unit or even one individual, the resources for completing the project are scarcer than at some other institutions, making it even more important that the evaluation stay focused and precise.
In addition, it is important to harness campus resources to assist in collecting and analyzing the data. For example, in the UWM evaluation the tasks were outlined and each individual is responsible for a step in the process. One person is responsible for gathering certain programmatic information from the unit contacts (e.g., courses offered blended). Another person is responsible for contacting the data warehouse and gathering the needed student information from the newly offered programs and courses, as well as harnessing campus resources (e.g., institutional research) to assist in the analysis of the data. A third person is gathering campus service information (marketing, tutoring, library) that is needed to illustrate the implementation efforts. If there is no teaching research unit, discover which units on campus can assist in gathering and analyzing data.
Ask the following questions in discovering these resources:
What unit (e.g., data warehouse) houses the student data (e.g., grades, gender, etc.)?
What unit traditionally analyzes and reports student outcomes (e.g., institutional research unit)?
What data are available through the institutional research unit (e.g., teacher evaluations, grade performance, student characteristics, retention rates)?
What faculty or graduate students would be interested in assisting in the gathering (conducting focus groups) or analyzing (running SPSS descriptive tables or bar charts) of the data?
What unit on campus can assist in web-based survey administration?
What quantitative and/or qualitative research methods training does the unit’s staff have that can be used in gathering and analyzing data?
Meaningful data can be gathered through existing institutional data (e.g., data mining) or can be collected using various quantitative or qualitative methods (e.g., surveys, focus groups, narratives), which are conditional upon the purpose of the research. Again, the approach used will be greatly determined by the resources that are available for the evaluation.
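As a small, purely illustrative sketch of mining existing institutional data, the following filters hypothetical enrollment records for local, part-time students in blended courses. The field names, ZIP-prefix rule, and credit threshold are all assumptions made for the example, not an actual data warehouse schema:

```python
# Hypothetical enrollment records; field names are illustrative only.
enrollments = [
    {"student_id": 1, "zip": "53211", "credits": 6,  "format": "blended"},
    {"student_id": 2, "zip": "53202", "credits": 12, "format": "blended"},
    {"student_id": 3, "zip": "60614", "credits": 15, "format": "traditional"},
    {"student_id": 4, "zip": "53211", "credits": 9,  "format": "blended"},
]

LOCAL_ZIP_PREFIX = "53"  # assume metro-area ZIP codes share this prefix
PART_TIME_MAX = 11       # fewer than 12 credits treated as part-time here

blended = [e for e in enrollments if e["format"] == "blended"]
local = [e for e in blended if e["zip"].startswith(LOCAL_ZIP_PREFIX)]
part_time_local = [e for e in local if e["credits"] <= PART_TIME_MAX]

print(len(blended), len(local), len(part_time_local))
```

In practice these counts would come from queries against the institution’s data warehouse, but the logic of isolating one measurable variable at a time is the same.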
Along with determining who will conduct the evaluation, it is important to determine who will be the subjects of the study. Most frequently, only student data is considered in evaluating the impact of blended learning. However, in developing an evaluation plan, it is important to emphasize multiple perspectives (students, faculty, institutional data, support staff, administration) when appropriate. For example, when it comes to student achievement, students and faculty may have different perspectives on the impact of the blended model on their learning and performance in a course. The institutional grade data may show yet another perspective on the same research question regarding student achievement. Depending on the time and resources, consider what perspectives are important to fully understanding the outcomes to the research questions.
When will the evaluation be conducted and completed?
It is imperative that a realistic timeline for the design, data collection, and analysis be developed. This timeline will be affected by a number of factors including the size and scope of the study, the methodologies used, and the campus resources available. First, the evaluation needs to contain a detailed timeline with much flexibility for unexpected delays. In creating this timeline, the tasks need to be as detailed as possible. For example, these tasks could include the following:
1.) Completing IRB forms and receiving IRB approval.
2.) Contacting individuals who will be gathering and analyzing data (research support).
3.) Developing instruments (surveys, interview schedules, focus group schedules).
4.) Gathering course- and program-level data from contacts (advisors, chairs, deans, instructors, student support services, faculty support services).
5.) Developing a complete list of courses and programs and dates of delivery.
6.) Gathering data on the blended courses from the institution’s data warehouse.
7.) Administering data collection (surveys, focus groups).
8.) Cleaning the data, recoding variables from the data warehouse and survey, and coding qualitative data.
9.) Analyzing the data.
10.) Developing the results.
11.) Developing a presentable written form of the results.
12.) Developing graphic representations of the results.
Next, once each specific task is listed, identify the person or unit responsible for the task. Finally, identify the date by which the task will be completed. There will be many unexpected challenges, so make sure to build some flexibility into the timeline.
Beyond the creation of the timeline itself, there are a couple of tips for completing an evaluation on time. The evaluation needs to stay focused on the purpose and variables identified. Clearly identify the individuals who can assist in the evaluation and obtain their buy-in early. Have a point person or project manager who can receive updated status reports from the responsible parties at regular intervals (e.g., weekly). Highlight potential weaknesses in the evaluation process and begin troubleshooting early. Task completion will be impacted by the academic schedule. For instance, if collecting student perception data (e.g., course or program satisfaction) is part of the evaluation, it is best to accomplish this task before the students complete the semester. During the summer, many individuals leave campus for vacation, making it challenging to gather information and data from them. However, since some faculty and graduate students have time off during the summer, more human resources may be available to assist in gathering and analyzing data. In planning the timeline for the evaluation, define all tasks and responsible parties carefully, build in extra time, and consider academic calendars and workload.
We have discussed the why, what, how, who, and when of evaluation from the UWM perspective; we would now like to discuss potential challenges in the evaluation of blended learning and strategies previously used to overcome them. Time is one of the most valuable resources. Create a timeline that is realistic and has some flexibility. At UWM, a timeline was developed, and before it was approved by all parties, due dates for some tasks had already passed. Being realistic about the time it takes to complete tasks, and allowing flexibility as to when those tasks will be completed, is important both for keeping the timeline feasible and for managing the audience’s expectations as to when they can receive the findings. Also, if requesting information from other units, let the contacts know beforehand that an information request will be sent. At the time of the request, allow additional time for that information to be received. Everyone has extremely demanding schedules, and it may take several weeks for them to gather the data and information. If individuals are prepared for the request, they are more likely to complete it in a timely manner when the official request is received.
Next, it is easy to conjure endless ideas for evaluation. Develop a clear purpose and stay on track. As scholars, researchers, and practitioners, we can start to consider more variables and more relationships, which can stall an evaluation. Many interesting and thought-provoking ideas will arise; nevertheless, continue pursuing the initial evaluation plan. Once that is complete, a new evaluation plan can be developed to explore the other relationships.
Finally, anticipate challenges in the evaluation process. There are a lot of unknown variables when trying to complete an evaluation. IRB approval could take longer than anticipated. The subjects (students, instructors) could fail to participate. The course and student data could be incomplete when received from the database. The survey instruments could turn out to be unreliable. There could be no effects, or unintended effects. These are just a few examples.
References
Allen, I. E., Seaman, J., & Garrett, R. (2007). Blending in: The extent and promise of blended education in the United States. Needham, MA: Sloan Consortium.
Creswell, J. W. (2008). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. Upper Saddle River, NJ: Pearson.
Dziuban, C., Hartman, J., & Moskal, P. (2004, March 30). Blended learning.
Dziuban, C., & Moskal, P. (2001). Emerging research issues in distributed learning. Presented at the 7th Sloan-C International Conference on Asynchronous Learning Networks.
Garnham, C., & Kaleta, R. (2002, March). Introduction to hybrid courses. Teaching with Technology Today, 8(6). Retrieved from http://www.uwsa.edu/ttt/articles/garnham.htm
Godambe, D., Picciano, A. G., Schroeder, R., & Schweber, C. (2004). Faculty perspectives. Presented at the 2004 Sloan-C Workshop on Blended Learning.
Kaleta, R., Skibba, K. A., & Joosten, T. (2007). Discovering, designing, and delivering hybrid courses. In A. G. Picciano & C. D. Dziuban (Eds.), Blended learning: Research perspectives. Needham, MA: The Sloan Consortium.
Laster, S., Otte, G., Picciano, A. G., & Sorg, S. (2005). Redefining blended learning. Presentation at the 2005 Sloan-C Workshop on Blended Learning.
Lorenzo, G., & Moore, J. C. (2002, November). The Sloan Consortium report to the nation: Five pillars of quality online education. Retrieved from http://www.sloan-c.org/effectivepractices/pillarreport1.pdf
Mayadas, A. F., & Picciano, A. G. (2007, April). Blended learning and localness: The means and the end. Journal of Asynchronous Learning Networks, 11(1).
Picciano, A. G. (2006). Blended learning: Implications for growth and access. Journal of Asynchronous Learning Networks, 10(3), 95-102. Retrieved from http://www.sloan-c.org/publications/jaln/v10n3/pdf/v10n3_8picciano.pdf
Picciano, A. G., & Dziuban, C. D. (Eds.). (2007). Blended learning: Research perspectives. Needham, MA: The Sloan Consortium.