
A Comparative Analysis of Learning Experience in A Tradition

Source: 杂志发表网 (Journal Publishing Network); Date: 2015-12-20; Category: Computer Networks

  

Research Design:

To create a similar learning environment for both the control (traditional) and experimental (online) groups, similar requirements were instituted. All the materials for the Financial Management course were placed on two different Web sites. The course material for the online group was placed on the WebCT platform (commercially available courseware). Additionally, a separate Web site containing the same course material as the WebCT site was created from scratch on the University server. The course requires two proctored tests; therefore, the online students were required to come to campus on specified dates to take those tests alongside the traditional students. The only difference between the two sites was that the WebCT site allowed asynchronous communication among the students and between students and the instructor, whereas this feature (online interaction) was not available on the site accessed by the traditional students.
For the traditional group, most of the interaction took place face-to-face, while all instructional interaction for the online students occurred through the asynchronous bulletin board and email. Both groups were required to interact with the course material over the Internet (read or download lecture material, assigned readings, solutions to homework problems, etc.). As a result, the novelty effect (students reacting to working with something different: the Internet) was eliminated (Merisotis & Phipps, 1999). Thus, except for the method of delivery of the subject matter, the two groups were exposed to identical course materials, instruction, and examinations.

Data and Research Methodology:

Beginning in Spring 1999, a separate section of the Financial Management course was offered online for the first time. A series of questions (Appendix A) was developed to measure the following four major criteria:

1. Learning environment [Web utility] (three questions, numbered 1-3);
2. Interactivity (three questions, numbered 4-6);
3. Contribution of course materials and requirements to learning (nine questions, numbered 7-15); and
4. Students' overall satisfaction with the course (four questions, numbered 16-19).

Appendix A was distributed to both groups (online and traditional) over the Spring, Summer, and Fall semesters of 1999. The class sizes for the online sections were comparatively smaller, as indicated in Exhibit I.

Exhibit I: Class Size

              Traditional Students   Online Students
Spring 1999            18                   8
Summer 1999            22                   7
Fall 1999              28                   6

The results of the students' surveys are shown in Tables 1-6. These data were then reorganized and compiled (Exhibit II) according to the four major criteria explained above. Exhibit II shows the aggregate responses for the variables (questions) under each criterion. For example, in Spring 1999, eight students took the Financial Management course online. The first and second criteria, "Web utility" and "interactivity," are each explained by three variables (questions); therefore, there are 3 x 8 = 24 potential responses for each of those two criteria.

Using a Likert-scale-based survey, each criterion was quantitatively measured. In other words, using all the variables under each criterion, an index representing students' opinions about that criterion was computed. If both groups generate similar indices for a given criterion, it can be inferred that the mode of course content delivery has no significant impact on that criterion. The Likert technique presents a set of attitude statements; each student is asked to express agreement or disagreement with each statement using the scales shown in Appendix A. For example, if all online students respond to the three questions under the "Web utility" criterion by choosing the "strongly agree" scale, then 100% of the respondents strongly agree that the Web as a learning environment is a useful component of the course. Exhibit III shows the compilation of such data for the three semesters and the four major criteria. The score for each criterion, as an index, represents the students' overall opinion of how effectively each mode of course delivery met that criterion. By analyzing and comparing the data in Exhibit III, the study draws conclusions regarding the three stated hypotheses.
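The paper does not spell out the index computation. As an illustration only, the pooling described above (all responses to all questions under a criterion, expressed as a percentage per Likert category) might be sketched as follows; the function name, data layout, and sample responses are hypothetical, not taken from the study.

```python
# Illustrative sketch (not the study's actual code): a criterion index
# computed as the percentage of pooled responses in each Likert category.
from collections import Counter

SCALES = ["strongly agree", "agree", "neutral", "disagree", "strongly disagree"]

def criterion_index(responses_per_question):
    """responses_per_question: one list per question, each holding one
    Likert response per student. Returns percent of responses per scale."""
    pooled = [r for question in responses_per_question for r in question]
    counts = Counter(pooled)
    total = len(pooled)
    return {scale: 100.0 * counts[scale] / total for scale in SCALES}

# Hypothetical example: three "Web utility" questions answered by the
# eight Spring 1999 online students (3 x 8 = 24 pooled responses).
web_utility = [
    ["strongly agree"] * 6 + ["agree"] * 2,
    ["strongly agree"] * 5 + ["agree"] * 3,
    ["strongly agree"] * 7 + ["agree"] * 1,
]
index = criterion_index(web_utility)
```

Comparing such per-category percentages between the online and traditional groups is one straightforward way to operationalize the pair-wise comparison described above.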

Results:

Exhibit III reports the computed indices representing students' opinions regarding each of the four major criteria. From this Exhibit, it can be seen that the majority of students (80%-90%) in both groups, over all three semesters, evaluated the course and its various attributes positively.
Specifically, with reference to the first hypothesis, the results in Exhibit III clearly demonstrate that both groups (over the three semesters) believed that the Web-based information (the Web as a learning environment) was a valuable component of the course. Based on these results, a case could be made in favor of providing Web-based materials for courses taught in a traditional format.
The students' learning experience (the subject of the second hypothesis), captured by the "course materials and requirements" criterion (index) as reported in Exhibit III, also indicates that a majority in both groups (over the three semesters) thought both media provided sufficient materials and requirements to enhance and contribute to their learning.
With regard to the third hypothesis, Exhibit III again shows that a majority of students in both groups (over the three semesters) agreed that both modes of course delivery allowed for effective interaction among students and between students and the instructor.
Finally, the overall satisfaction criterion reflects the students' experience and impression of the course. This criterion, like the others, was computed and tabulated in Exhibit III. Over the three semesters, a majority of students in both groups indicated that they were equally satisfied with the rigor and usefulness of the course.
Furthermore, this study compared the final grades of the two groups (online and traditional) for each semester. The average final grade for both groups in the Spring and Summer semesters was B. In the Fall semester, however, the online students' average final grade was A, while that of the traditional students was B. Based on this information, this study supports the findings of studies such as Schulman & Sims (1999) and Smeaton & Keogh (1999), which used grades as measures of performance and found no significant differences between the performance of groups taking courses through different modes of delivery. Therefore, online courses have the potential to provide a comparable learning experience for students regardless of the mode of course delivery.


Conclusions:

This study collected data on four major criteria (Web utility, interactivity, course materials and requirements, and overall satisfaction), representing nineteen attributes (variables), that addressed different concerns of two groups of students (online and traditional) taking a graduate Financial Management course over three semesters. Using the proposed research methodology, the study calculated four indices as measures of the four major criteria. These indices were compared pair-wise between the two groups for each semester. From this comparison it was concluded that there were no significant differences between the two groups' opinions regarding Web utility, interactivity (student/student and student/instructor), learning experience, and overall satisfaction for the Financial Management course, whether delivered on-site or online.

References

Barr, D. (1990). A Solution in Search of a Problem: The Role of Technology in Educational Reform. Journal for the Education of the Gifted, 14(1), 79-95.

Clarke, D. (1999). Getting Results with Distance Education. The American Journal of Distance Education, 12(1).

Dobrin, J. (1999, June 22). Who's Teaching Online. ITPE News, 2(12).

Dutton, J., Dutton, M., & Perry, J. (1999). Do Online Students Perform As Well As Traditional Students? Submitted for publication, North Carolina State University.

Hoffman, K. M. (1999, May). What Are Faculty Saying? eCollege.com.

Keegan, D. (1990). Foundations of Distance Education. New York: Routledge.

Linke, R., et al. (1984). Report of a Study Group on the Measurement of Quality and Efficiency in Australian Higher Education. Canberra: CTEC, p. 19.

Merisotis, J. P., & Phipps, R. A. (1999, April). What's The Difference? Outcomes of Distance vs. Traditional Classroom-Based Learning. The Institute for Higher Education Policy.

Navarro, P., & Shoemaker, J. (1999). The Power of Cyber Learning: An Empirical Test. Journal of Computing in Higher Education.

Russell, T. L. (1999). The No Significant Difference Phenomenon. Chapel Hill, NC: Office of Instructional Telecommunications, North Carolina State University.

Schulman, A. H., & Sims, R. L. (1999, June). T.H.E. Journal, 26(11).

Sherry, L. (1996). Issues in Distance Learning. International Journal of Distance Education.

Smeaton, A., & Keogh, G. (1999). An Analysis of the Use of Virtual Delivery of Undergraduate Lectures. Computers and Education, 32.

U.S. Department of Education, National Center for Education Statistics (1999, December). Distance Education at Postsecondary Education Institutions: 1997-1998.

Wade, W. (1999, October). Assessment in Distance Learning: What Do Students Know and How Do We Know that They Know It? T.H.E. Journal, 27(3).

Wagner, E. D. (1997). Interactivity: From Agents to Outcomes. New Directions for Teaching and Learning, 71, 19-26.

