Enschede, 29 August 2008

Supporting Data Interpretation and Model Evaluation during Scientific Discovery Learning

Bachelor's thesis by Frank Leenaars

First supervisor: Dr. W.R. van Joolingen

Second supervisor: Dr. A.H. Gijlers

Universiteit Twente



Supporting Data Interpretation and Model Evaluation during Scientific Discovery Learning

Frank Leenaars

f.a.j.leenaars@student.utwente.nl

ABSTRACT

Scientific discovery learning has proven to be a useful application of constructivist theory to science education, but students encounter many difficulties in the process and need support. This paper describes research concerning the problems learners experience during the data interpretation and model evaluation phases of scientific discovery learning. Students tried to solve a number of problems that required both data interpretation and model evaluation. Their reasoning steps were recorded using both verbal reports and written answers. The results of this experiment gave an indication of the sort of reasoning steps with which students had the most difficulties. These results were used to provide suggestions for cognitive tools that could support the learner during these tasks.

Keywords


Data Interpretation, Model Evaluation, Scientific Discovery Learning, Cognitive Tools.
  1. INTRODUCTION

Constructivism


Since the second half of the twentieth century the constructivist view of knowledge has become popular among cognitive scientists. The essence of the constructivist model is that knowledge is constructed in the mind of the learner (Bodner, 1986). Knowledge, as the term is used by constructivists, does not refer to a representation of the real world. Instead it is seen as a “collection of conceptual structures that […] are viable within the knowing subject’s range of experience” (Von Glasersfeld, 1989, p.125). Knowledge in this sense does not have to match reality, but it has to fit it, in the way a key fits a lock (Bodner, 1986): it has to be compatible with it.

An acceptance of the constructivist view of knowledge has important implications for education. Von Glasersfeld (1989) differentiates between training and teaching. Examples of the former are such activities as getting students to throw a ball in a specific way or to perform a multiplication algorithm, whereas the latter has to do with getting students to understand a certain concept. Von Glasersfeld argues that classical instruction methods such as rote learning and repeated practice are useful in training, but will not bring about the understanding in students that teaching aims to effect. While critics have much to say about the problems of constructivism as a theory, even they acknowledge the value of its applications in education. Staver (1998) presents a summary of the criticisms of constructivist theory and counters the critics’ arguments.

The constructivist approach to education seems especially well suited to science education. Bodner (1986) shows how the misconceptions that students bring with them to new science classes can be explained by the constructivist model. This paper deals with a specific application of constructivist theory to science education: scientific discovery learning.

Scientific discovery learning


Scientific discovery learning, also known as inquiry learning, refers to a form of learning where students are “exposed to particular questions and experiences in such a way that they ‘discover’ for themselves the intended concepts” (Hammer, 1997, p.489). De Jong & Van Joolingen (1998) describe several problems students encounter when engaging in scientific discovery learning. They categorize these problems based on the stage of the discovery learning process in which they occur: hypothesis generation, design of experiments, interpretation of data, or regulation of learning. De Jong & Van Joolingen subsequently discuss a number of methods of supporting learners that have been researched. Possible support methods are discussed for all of the stages in the discovery learning process except the data interpretation stage. Learners have, however, been found to experience difficulty in this stage, e.g. while interpreting and comparing graphs (Linn, Layman, & Nachmias, 1987, cited in De Jong & Van Joolingen, 1998). This paper will describe research concerning the difficulties students have during the closely related tasks of data interpretation and model evaluation.

Computer modeling and simulations


Computers can be used by students during scientific discovery learning to enable them to create models of the situation or concept they are learning about. Students can then use these models to run simulations and compare the results with “real world” data either gathered from experiments or provided by teachers. Used in this way, computers enable students to create models and test their validity relatively easily. Van Joolingen, De Jong, and Dimitrakopoulou (2007) discuss several ways in which computers can be used to assist students during the process of discovery learning. One of these ways is to offer tools that help the learner analyze the data, specifically graphs, generated by running simulations. Based on the results of the research concerning students’ difficulties with data interpretation and model evaluation, suggestions for computer assistance during these tasks will be made.

Data interpretation


The data to be interpreted in scientific discovery learning tasks is usually available in tables and graphs, e.g. in the Co-Lab environment (Van Joolingen, De Jong, Lazonder, Savelsbergh, & Manlove, 2005). This article will focus on data that is displayed in the form of graphs.

A considerable amount of research on graph interpretation has already been done. Friel, Curcio, and Bright (2001) describe the critical factors that influence graph comprehension: the purpose for using graphs, task characteristics, discipline characteristics, and reader characteristics. Shah, Mayer, and Hegarty (1999) and Shah & Hoeffner (2002) discuss problems students have with the interpretation of graphs in texts, particularly in social science textbooks. The result of their research is a list of implications for the design of textbook graphs and data displays, such as whether to use line or bar graphs, which colors to use, and what scales to use for the axes. These findings are useful for designing the layout and style of graphs generated by computer simulations during the scientific discovery learning process. However, students also face other problems when interpreting graphs, e.g. when contrasting the results of an experiment with predictions based on simulations of a model.

Research by Leinhardt, Zaslavsky, and Stein (1990) might be more relevant for the sorts of graph interpretation tasks students have to do during inquiry learning. They describe four typical tasks when working with graphs:


  • Prediction (e.g. where will other points, not explicitly plotted, be located?)

  • Classification (e.g. what sort of function does a graph describe?)

  • Translation (e.g. how would a time-distance graph of a function described by a time-speed graph look?)

  • Scaling (e.g. what does a unit on each axis represent?)

These tasks are then classified based on four properties:

  • Action (interpretation or construction of the graph)

  • Situation (setting and context of the graph)

  • Variables used (categorical, ordinal or interval)

  • Focus (either on local or global features of the graph)

Furthermore, they discuss the problems that students experience during graph interpretation and find that the most important problems fall into three categories: (1) a desire for regularity; (2) a pointwise focus (as opposed to a more global focus); and (3) difficulty with abstractions of the graphical world. These are the sorts of problems students are likely to run into during graph interpretation in the context of discovery learning.

Beichner (1994) has developed a test that assesses students’ proficiency in working with kinematics graphs and has used it to identify a list of common difficulties students experience when working with such graphs. These difficulties were classified as follows:



  • Graph as picture errors (the graph is considered to be a photograph of the situation)

  • Variable confusion (no distinction is made between distance, velocity and acceleration)

  • Nonorigin slope errors (students have difficulty determining the slope of a line that does not pass through the origin)

  • Area ignorance (the meaning of areas under kinematic graph curves is not recognized)

  • Area/slope/height confusion (axis values are used or slopes are calculated when the area under the graph is relevant)
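To make the distinction behind the last two categories concrete, a minimal sketch is given below; the sample data and variable names are illustrative only and are not taken from the TUG-K. The slope of a velocity-time graph gives the acceleration, while the area under it gives the displacement.

# Illustrative sketch: slope versus area under a velocity-time graph.
# An object accelerates uniformly from 2 m/s to 10 m/s over 4 seconds.
times = [0.0, 1.0, 2.0, 3.0, 4.0]        # seconds
velocities = [2.0, 4.0, 6.0, 8.0, 10.0]  # m/s

# Slope of the velocity-time graph: the (constant) acceleration, 2 m/s^2 here.
acceleration = (velocities[-1] - velocities[0]) / (times[-1] - times[0])

# Area under the velocity-time graph (trapezoidal rule): the displacement, 24 m here.
displacement = sum(
    0.5 * (velocities[i] + velocities[i + 1]) * (times[i + 1] - times[i])
    for i in range(len(times) - 1)
)

print(acceleration, displacement)  # 2.0 24.0

Confusing these two operations, or reading off an axis value where the area is needed, produces exactly the last two error categories above.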

It is worth noting that all of the research on graphs of functions done so far has focused on interpreting a single graph at a time. No research seems to have been done yet on comparing two different graphs, an activity that is integral to scientific discovery learning. This means that such a test will have to be developed specifically for this research.

Model evaluation


The value of dynamic modeling in education is widely recognized. Sins, Savelsbergh & Van Joolingen (2005) give an overview of the many different ways in which models are considered useful in an educational setting.

In a scientific discovery learning setting, model evaluation is closely related to data interpretation. Sins et al. (2005) describe students as engaging in model evaluation when they “determine whether their model is consistent with their own beliefs, with data obtained from experiments and/or with descriptions of behavior about the phenomenon being modeled”. The second part of this description, determining whether their model is consistent with data obtained from experiments, requires data interpretation. Hogan & Thomas (2001) describe this part of model evaluation as model interpretation.

Although the value of dynamic models in education is recognized by many, until recently no research had been done that examines how students systematically test their models (Doerr, 1996). Since then, Löhner, Van Joolingen, Savelsbergh, and Van Hout-Wolters (2005) have analyzed students’ reasoning during inquiry modeling tasks. However, even in recent research the model evaluation part of a modeling task has received relatively little attention.

Scaffolding or distributed intelligence


Before design of a supporting tool can begin, an important question has to be answered: is this tool intended to be a scaffold or a part of a distributed intelligence network? Or, as Salomon, Perkins & Globerson (1991) state this distinction: are the effects of the tool or the effects with the tool most important? If the former is the case, the support the tool offers should fade as learners become more skilled at interpreting and comparing graphs. The goal of this tool is primarily to help teach students the skill of interpreting graphs and evaluating models. If the tool is regarded as part of a distributed intelligence network, consisting of the student and a number of cognitive tools, its primary goal is different. In this situation the most important task of the tool is to help the learner with the task of data interpretation and model evaluation. This should result in the learner being more skilled at interpreting graphs and evaluating models with the tool’s assistance. The student does not necessarily become better at these tasks in the absence of the tool’s support.

Because graph interpretation and model evaluation are useful skills in many different contexts, the primary goal of the tool should eventually be to teach these skills to the learner. Therefore the tool will be regarded as a scaffold, its assistance fading as the learner becomes more proficient at interpreting graphs and evaluating models. However, before much time is invested in the design of such a tool, an experiment should be done to find out what sorts of difficulties students experience during graph interpretation and model evaluation. This is the goal of the experiment described in this paper.


Research questions


The aim of the research described in this paper is to answer two questions:

  1. What sort of problems do students experience when interpreting graphs and evaluating models during an inquiry learning task?

  2. What sort of assistance can a cognitive tool offer students during the tasks of graph interpretation and model evaluation?
  2. METHOD

Participants


Eighteen students (average age 21.2 years; 11 men and 7 women), following 15 different academic majors, participated in the experiment. All participants had completed physics courses in high school; this was a requirement for participation because some of the tasks required a basic knowledge of physics.

Materials


The participants used a laptop to view web pages. The experiment was divided into two parts. The first part consisted of 21 multiple choice questions in the mechanics domain. The second part of the experiment introduced a simple modeling language that students used to solve a number of problems similar to those encountered during inquiry learning situations. This is the same modeling language that is used in the Co-Lab learning environment discussed in Van Joolingen et al. (2005). The materials used in both parts of the experiment are discussed below and can be found in the appendices.

Multiple choice questions


The multiple choice questions used in the experiment were those created by Beichner (1994) for the Test of Understanding Graphs in Kinematics (TUG-K). All questions were translated from English to Dutch and presented on a single web page, which allowed participants to easily return to previously answered questions and change their answers if they wished. When they were done, their answers were saved in a database. The web page with these translated questions can be found in appendix A.

Models and simulations


The second part of the experiment began with an introduction that made clear to participants what they could expect. Next, the simple modeling language used during the second part was introduced: the different symbols were shown in the context of an example and their meanings were explained. After this explanation, two example cases were shown, which were structured in the same way as the real cases. Finally, the five real cases were shown to the participant. Participants were asked to think aloud during the cases, and audio recording software was used to record their speech.

All cases dealt with the data interpretation and model evaluation phase of the scientific discovery learning process and were structured in the same basic way: (1) a situation was described; (2) a (correct or incorrect) model of this situation was presented; (3) the results of a simulation with this model were shown in a graph; and (4) a specific task was given. In the two example cases this task was replaced with a conclusion. All of the web pages used in the second part of the experiment can be found in appendix B. Answer forms, used by participants for drawing model implementations and doing calculations, are included in appendix C.

The following subsections discuss design considerations and goals of the model explanation and the different cases.

Explanation


The different symbols and arrows that make up the modeling language were introduced in a simple example. A legend provided the names and meanings of these symbols and briefly described their use in the example model.

A second model, and the results of a simulation with this model, showed how a constant acceleration leads to linearly increasing speed and quadratically growing distance.
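A minimal numerical sketch of this behavior is given below; the variable names and the Euler time step are assumptions for illustration and do not reproduce the actual simulation engine used in the experiment.

dt = 0.1            # assumed time step in seconds
acceleration = 2.0  # constant, in m/s^2
speed = 0.0
distance = 0.0

for step in range(50):           # simulate 5 seconds
    speed += acceleration * dt   # speed is the integral of acceleration: linear growth
    distance += speed * dt       # distance is the integral of speed: quadratic growth

print(speed, distance)  # roughly 10.0 m/s and 25.5 m (analytically 0.5 * a * t^2 = 25 m)

Doubling the simulated time doubles the final speed but roughly quadruples the final distance, which is the relationship the explanation model was meant to convey.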


First example case


A simple situation and an extremely simple, but incorrect, model of this situation were described. The error in the model was obvious and the goal of this example was mainly to reiterate the meanings of all the different symbols and arrows in the modeling language.

Second example case


The main goal of the second example case was to show that ‘invisible’ errors can exist in a model. This is possible when constants have wrong values (as is the case in the example) or when variables are incorrectly calculated from correct inputs.

Both example cases also served to familiarize the participants with thinking aloud and the situation-model-simulation-task structure used in the ‘real’ cases.


First case


The model used in this first real case contained one fairly obvious error. This error could be discovered by comparing the model with the described situation or by comparing the results of a simulation with this model to data from the described situation.

Second case


Participants were asked to answer three questions about the results of a simulation, which required them to read and interpret a graph with two different plots. An important goal of this case was to familiarize participants with different properties of a rubber band (length, elasticity constant and length when in a relaxed position), because the same rubber bands were used in the third and fourth cases.

Third case


Although the model in this case was more complicated than those the participants had encountered thus far, it was expected that participants would be able to understand it, because they had already worked with parts of the model in previous cases. To successfully complete the task in this case, participants had to combine their knowledge of the situation, the model and the results of a simulation with the model.

Fourth case


The situation and model used were the same as those in the third case. The only difference was in the results of the simulation.

Fifth case


The last case was unique, because no situation was described. Therefore, participants had to reason about the model without being able to use specific domain knowledge about the situation being modeled.

Procedure


Participants sat down in front of a laptop and were given a short explanation of what they could expect during the experiment. Before the experiment started they answered questions about their age, their sex and their academic major.

During the first part of the experiment, the multiple choice questions, there was no interaction between the participant and the experimenter. Participants were asked to notify the experimenter when they had answered all the questions and were ready to continue.

The experimenter sat next to the participant during the second part of the experiment and monitored their progress. Before the participant began with the first example case, the audio recording software was turned on. To stimulate the participant to think out loud, the experimenter repeated the suggestion in the introduction to read the texts out loud and further suggested that participants describe the things they noticed or looked for when studying models or graphs. When participants were silent for more than a minute, the experimenter reminded them to think out loud by asking what they were thinking. When participants indicated they did not know how to proceed with a task or were not making progress for more than a few minutes, the experimenter provided hints to help them complete the task. A list of these hints was available for each case and can be found in appendix D.

Analysis


For each case in the second part of the experiment, a list of the observations and reasoning steps necessary to complete the task was created. The necessity of most of these steps followed from the available information and the nature of the task. For instance, to calculate the (constant) speed of an object based on its distance-time graph, it is necessary to find the slope of this graph. During the experiment some participants found unexpected ways to successfully complete certain tasks. In these cases their solutions were added to the lists as alternative steps. These lists can be found in appendix E.

Each observation or reasoning step was also classified according to the scientific reasoning activities distinguished by Löhner et al. (2005). Because participants did not have to hypothesize or design their own experiments during this experiment, the only categories used were data interpretation, model evaluation and model implementation. To be able to classify steps more precisely, subcategories were created. For data interpretation these subcategories were observation, calculation and conclusion. For model evaluation, the subcategories were ‘based on domain knowledge’, ‘based on modeling knowledge’ and ‘based on mathematical knowledge’.

The audio recordings of the participants, in combination with the calculations and answers they wrote down on their answer sheets, were used to score each participant on each reasoning step. There were 4 possible scores:


  • 1 point, if they used a step correctly;

  • 0.5 points, if they used a step incorrectly (e.g. made a mistake in a calculation);

  • 0.25 points, if they used a step, but only after the experimenter gave a hint (e.g. ‘Which forces are in balance when the car has come to a halt?’ in case 3);

  • 0 points, if they completely failed to use a step or the experimenter had to explicitly inform them of a step (e.g. ‘When the car is no longer moving, the force exerted by its motor and by the rubber band are equal.’).

It was hypothesized that participants’ score on the multiple choice questions (Test of Understanding Graphs in Kinematics) would be a better predictor of their score on the reasoning steps that were classified as ‘data interpretation’ than of their score on the other items. To test this hypothesis a ‘data interpretation score’ was calculated for each participant by summing the scores for all the data interpretation items and a ‘modeling score’ was calculated by summing the scores for the model evaluation and model implementation steps. The Pearson correlation of both of these scores and the score on the multiple choice test was then calculated.

To get an indication of the type of steps participants had the most difficulty with, the score of each participant was summed per step to generate a ‘step score’. The ten steps with the lowest scores were then examined.
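A minimal sketch of this scoring and correlation analysis is given below; the participant data and step labels are hypothetical, and the use of scipy for the Pearson correlation is an assumption rather than a description of the actual analysis software.

from scipy.stats import pearsonr

# Scores per participant per reasoning step (1, 0.5, 0.25 or 0), with each step
# classified as data interpretation or modeling; labels and values are hypothetical.
step_class = {"3.3": "data", "3.4": "model", "5.1": "model"}
scores = {
    "p1": {"3.3": 1.0, "3.4": 0.25, "5.1": 0.0},
    "p2": {"3.3": 0.5, "3.4": 1.0, "5.1": 0.25},
    "p3": {"3.3": 1.0, "3.4": 1.0, "5.1": 1.0},
    "p4": {"3.3": 0.0, "3.4": 0.5, "5.1": 0.25},
}
tugk = {"p1": 15, "p2": 17, "p3": 21, "p4": 14}  # TUG-K scores (maximum 21)

participants = sorted(scores)

def summed_score(pid, wanted):
    # the 'data interpretation score' or 'modeling score' of one participant
    return sum(v for step, v in scores[pid].items() if step_class[step] == wanted)

data_scores = [summed_score(p, "data") for p in participants]
model_scores = [summed_score(p, "model") for p in participants]
tugk_scores = [tugk[p] for p in participants]

# Pearson correlations of the TUG-K score with both summed scores
r_data, p_data = pearsonr(tugk_scores, data_scores)
r_model, p_model = pearsonr(tugk_scores, model_scores)

# 'step score': the sum over participants, used to rank the most difficult steps
step_scores = {s: sum(scores[p][s] for p in participants) for s in step_class}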


  3. RESULTS

Predictive power of the TUG-K


The mean score of the participants on the Test of Understanding Graphs in Kinematics was 17 out of a maximum of 21. The Pearson correlation between participants’ score on the TUG-K and their data interpretation score was r = 0.555 (p < 0.05). The Pearson correlation between their score on the TUG-K and their modeling score was r = 0.508 (p < 0.05).

The most problematic steps


Table 1 shows the ten steps with the lowest scores and their classification. The maximum step score is 18, which would mean every participant correctly used the step. A description of these steps can be found in appendix E. A closer look at these ten steps is taken in the following sections.
Table 1. Scores and classifications of the most difficult steps.

Step    Score    Classification
3.3     4.0      Data interpretation (observation)
3.4     7.75     Model evaluation (domain knowledge)
3.6     5.5      Model evaluation (domain knowledge)
4.5     2.0      Data interpretation (observation)
4.6a    2.5      Model evaluation (domain knowledge)
4.6b    3.25     Model evaluation (domain knowledge)
5.1     2.25     Model evaluation (modeling knowledge)
5.2     6.75     Model evaluation (modeling knowledge)
5.3     8.5      Data interpretation (observation)
5.5     6.5      Model evaluation (mathematical knowledge)


Data interpretation steps


All of the data interpretation steps that scored in the ‘bottom 10’ were subclassified as observation steps. Looking at these steps (see appendix E), it is clear that the steps are not difficult per se. Step 3.3 consists of noticing that two graphs have equal values for the first few seconds of a simulation; step 4.5 consists of noticing that one graph has an asymptote above another graph; and step 5.3 consists of noticing that the decrease of a graph accelerates at the start of the simulation, then slows down, until the graph eventually starts to increase.

The problem with these steps seems to be that it was not apparent to most participants that these features of the graphs were important for the problem they were solving. This is in stark contrast with data interpretation steps with higher scores, such as those in the second case. It was clear to almost all participants what features of a graph were relevant when a slope had to be calculated or a value of a graph at a certain time had to be found. These findings indicate that knowing which features of a graph are important during a certain task is an important skill in scientific discovery learning.


Model evaluation steps


Most of the difficult model evaluation steps were subclassified as model evaluation based on domain knowledge. A closer look at these steps (appendix E) reveals that the domain knowledge required is quite modest for students who have completed physics courses in high school. During the experiment, when these steps were explained to participants who did not successfully use them on their own, almost all responded with phrases like ‘ah yes’ and ‘of course’. This indicates that most participants did in fact possess the necessary domain knowledge, but were not able to use it at the appropriate time during the task.

Steps 5.1 and 5.2 were classified as model evaluation using modeling knowledge. Very few participants successfully used these steps on their own. A likely explanation for the problems participants had with these steps is their inexperience with dynamic modeling in general and with the specific modeling language used in particular. The majority of the participants reasoned at first that M (step 5.2, also see appendix B) remained either constant or increased linearly, because it was only dependent on a constant. This was somewhat surprising, because participants had no problem recognizing in earlier models that distance increased quadratically, even though it was only dependent on a constant acceleration. The lack of a domain in which to place the different parts of the model seems to be partially responsible for this mistake. After participants were asked to draw diagrams for a, K, L & M, all except one figured out the effects of the integration steps in the model. It seems likely that most participants would not have had as much difficulty with steps 5.1 and 5.2 if they had had more experience with this dynamic modeling language.
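A short worked derivation of the analogous acceleration example (not the actual a, K, L and M relations, which are only given in appendix B) makes the effect of the integration steps explicit. With a constant acceleration $a$,

$v(t) = v_0 + a t, \qquad x(t) = x_0 + v_0 t + \tfrac{1}{2} a t^2,$

so one integration of a constant produces linear growth and a second integration produces quadratic growth; a variable that depends only on a constant therefore need not remain constant itself.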

The final step that many participants found difficult, step 5.5, is subclassified as model evaluation based on mathematical knowledge. This step is not straightforward and requires some mathematical intuition, so it is not surprising that many participants struggled with it.

  4. CONCLUSION

More than understanding graphs


The average score of participants on the TUG-K, 17 out of 21, was very high. The most common difficulties Beichner (1994) describes (graphs as picture errors, variable confusion, nonorigin slope errors, area ignorance and area/slope/height confusion) seemed to cause few problems for most participants. These findings show that most participants were quite good at reading and understanding graphs in the kinematics domain. Despite these high scores on the TUG-K, many participants encountered problems during the data interpretation steps in the second part of the experiment.

It was found that the correlation of participants’ TUG-K scores with their data interpretation scores was hardly stronger than the correlation between their TUG-K scores and their modeling scores. This rather surprising finding indicates that the data interpretation steps in the scientific discovery learning process are not particularly similar to the reasoning steps required during the more classical graph comprehension tasks in the TUG-K.

These results suggest that more is required from students during the data interpretation stage of scientific discovery learning than ‘just’ normal graph comprehension abilities.

Difficulties during data interpretation


The most difficult data interpretation steps were those subclassified as observation steps. These consisted of noticing and recognizing as important a certain feature of a graph or a certain difference between two graphs. The steps in the other two subclasses of data interpretation, calculation and conclusion, caused very few problems for the participants. It seems clear that for many students the most difficult aspect of data interpretation is knowing which features of a graph are relevant for the task at hand.

Difficulties during model evaluation


Model evaluation based on domain knowledge was the subclass that caused the most difficulties. A common theme in these difficulties was that participants did not lack the domain knowledge necessary to successfully complete the steps, but failed to use this knowledge. They did not recognize this knowledge as being relevant for the task they were performing. It is understandable that the ‘based on domain knowledge’ subclass caused the most problems, because it required participants to integrate knowledge about the model, knowledge about the results and knowledge about the domain during a reasoning step.

Most of the problems participants experienced with the ‘based on modeling knowledge’ steps seem to be due to inexperience with the dynamic modeling language used. It is likely that these problems would occur less frequently as students become more familiar with the modeling language.


Suggestions for cognitive tools


The research described in this paper clearly shows two aspects of data interpretation and model evaluation that students find difficult: (1) recognizing which features of graphs are important during data interpretation; and (2) using domain knowledge at the right time during model evaluation. The following sections offer suggestions for support during these tasks that could help students with these two problems.

Support for data interpretation


One of the most common data interpretation tasks during scientific discovery learning is comparing results from a simulation with experimental results, usually both displayed in a graph. Tools could be developed that recognize differences between the two graphs and point these out to the student, but as models become more complicated it will be very hard for automated tools to give meaningful feedback about the differences between two graphs. Moreover, this would only help students interpret data with the tool’s support; there would be an effect with the cognitive tool’s support, but no effect of the tool as Salomon et al. (1991) describe it.

Quintana et al. (2004) provide a number of scaffolding guidelines, of which the fourth, ‘provide structure for complex tasks and functionality’, is especially relevant for the task of data interpretation. A cognitive tool could structure the task of comparing two graphs, for example by asking the learner a set of questions about the differences between the graphs. This could teach learners to think of graph comparison as a structured process and help them find the relevant differences between two graphs.
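As an illustration only, a minimal sketch of such a scaffold is given below; the difference checks and the wording of the questions are assumptions and do not describe an existing tool.

def comparison_prompts(sim, exp, tolerance=0.05):
    # sim and exp are equally sampled series of y-values (simulation vs. experiment)
    prompts = []
    if abs(sim[0] - exp[0]) > tolerance * max(abs(exp[0]), 1e-9):
        prompts.append("The graphs start at different values. Which initial value "
                       "or constant in your model could explain this?")
    if abs(sim[-1] - exp[-1]) > tolerance * max(abs(exp[-1]), 1e-9):
        prompts.append("The graphs end at different values. Does your model approach "
                       "the same final state as the data?")
    if (sim[-1] - sim[0]) * (exp[-1] - exp[0]) < 0:
        prompts.append("One graph increases where the other decreases. Which relation "
                       "in your model determines this direction?")
    if not prompts:
        prompts.append("The graphs look similar overall. Where do they differ the most, "
                       "and does that difference matter?")
    return prompts

# Example: the simulated distance grows faster than the measured distance.
print(comparison_prompts([0, 2, 5, 9, 14], [0, 2, 4, 6, 8]))

Questions of this kind structure the comparison without giving away the answer, which keeps the tool a scaffold rather than a substitute for the learner’s own reasoning.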


Support for model evaluation


The biggest problem during model evaluation was that many participants did not use the domain knowledge they possessed at the appropriate times. This result is in line with the finding by Sins et al. (2005) that students constructed better models when they used their own knowledge, and with their recommendation that scaffolds should encourage learners to activate their prior knowledge, both before and during the modeling task.

Although it would be difficult to develop a cognitive tool that could assist learners in activating domain knowledge exactly when needed, it seems plausible that a tool could at least activate some prior knowledge. Such a tool would encourage the learner to think more deeply about the different components and relations in their model and the relation between their model and the real world.


  5. DISCUSSION

Explorative nature of this research


Because little prior research had been done regarding data interpretation and model evaluation in the context of scientific discovery learning, the nature of this research was very explorative. Therefore the results of the experiment, a number of difficulties students experience in a specific scientific discovery learning setting, should be regarded as pointers in the direction of possibly valuable further research rather than a definitive list of difficult aspects of data interpretation and model evaluation.

Ceiling effect on TUG-K scores


The average score of the participants on the TUG-K was so close to the maximum possible score that the spread of scores was likely restricted by a ceiling effect. It is possible that this had consequences for the correlations between participants’ TUG-K score and their data interpretation and modeling scores. In further research with equally skilled participants, a more challenging test of the understanding of graphs should be used.

Possible problems with verbal reports


Although verbal reports were used as an indication of the reasoning steps participants used when trying to solve the cases in the second part of the experiment, strict guidelines were not used for participant-experimenter interaction. Ericsson & Simon (1980) caution that verbalizing can change participants’ cognitive processes when they are required to verbalize information that would not otherwise be attended to. During the experiment participants were specifically asked to mention the features of models or graphs they noticed, and it is possible that this affected their reasoning.

The other side of this problem is that some features of graphs or models were perhaps noticed by participants but not explicitly verbalized. This could cause the scores on the data interpretation (observation) steps to be lower than they should be.


Suggestions for further research


The research described in this paper gives an indication of some of the problems experienced by many learners during data interpretation and model evaluation. Further research could focus on characteristics of learners that successfully overcome these problems and the methods they use. This could give insight into new ways of supporting students during these tasks.

The suggestions for cognitive tools in this paper are very general and abstract. It would be interesting to further specify the requirements and goals of these tools and develop prototypes. These could be used with the material created for this experiment and their effectiveness could be examined.

Perhaps most important of all, more quantitative research should be done to study the cause of the difficulties the learners experience. This will hopefully confirm that the causes for the difficulties found in the explorative, qualitative research described here are in fact important and significant.

REFERENCES


Beichner, R. (1994). Testing student interpretation of kinematics graphs. American Journal of Physics, 62, 750-762.

Bodner, G.M. (1986). Constructivism: a theory of knowledge. Journal of Chemical Education, 63 (10), 873-878.

De Jong, T. & Van Joolingen, W.R. (1998). Scientific discovery learning with computer simulations of conceptual domains. Review of Educational Research, 68 (2), 179-201.

Doerr, H.M., (1996). STELLA ten years later: a review of the literature. International Journal of Computers for Mathematical Learning, 1, 201-224.

Ericsson, K.A. & Simon, H.A. (1980). Verbal reports as data. Psychological Review, 87 (3), 215-251.

Friel, S.N., Curcio, F.R. & Bright, G.W. (2001). Making sense of graphs: critical factors influencing comprehension and instructional implications. Journal for Research in Mathematics Education, 32 (2), 124-158.

Hammer, D. (1997). Discovery learning and discovery teaching. Cognition and Instruction, 15 (4), 485-523.

Hogan, K. & Thomas, D. (2001). Cognitive comparisons of students’ systems modeling in ecology. Journal of Science Education and Technology, 10 (4), 319-345.

Leinhardt, G., Zaslavsky, O. & Stein, M.K. (1990). Functions, graphs, and graphing: tasks, learning, and teaching. Review of Educational Research, 60 (1), 1-64.

Linn, M.C., Layman, J. & Nachmias, R. (1987). Cognitive consequences of microcomputer-based laboratories: graphing skills development. Contemporary Educational Psychology, 12, 244-253.

Löhner, S., Van Joolingen, W.R., Savelsbergh, E.R. & Van Hout-Wolters, B. (2005). Students’ reasoning during modeling in an inquiry learning environment. Computers in Human Behavior, 21, 441-461.

Quintana, C., Reiser, B.J., Davis, E.A., Krajcik, J., Fretz, E., Duncan, R.G., Kyza, E., Edelson D. & Soloway, E. (2004). A scaffolding design framework for software to support science inquiry. Journal of the Learning Sciences, 13 (3), 337-386.

Salomon, G., Perkins, D.N. & Globerson, T. (1991). Partners in cognition: extending human intelligence with intelligent technologies. Educational Researcher, 20 (3), 2-9.

Shah, P. & Hoeffner, J. (2002). Review of graph comprehension research: implications for instruction. Educational Psychology Review, 14 (1), 47-69.

Shah, P., Mayer, R.E. & Hegarty, M. (1999). Graphs as aids to knowledge construction: signaling techniques for guiding the process of graph comprehension. Journal of Educational Psychology, 91 (4), 690-702.

Sins, P.H.M., Savelsbergh, E.R. & Van Joolingen, W.R. (2005). The difficult process of scientific modelling: an analysis of novices’ reasoning during computer-based modelling. International Journal of Science Education, 27 (14), 1695-1721.

Staver, J.R. (1998). Constructivism: sound theory for explicating the practice of science and science teaching. Journal of Research in Science Teaching, 35 (5), 501-520.

Van Joolingen, W.R., De Jong, T., Lazonder, A.W., Savelsbergh, E.R. & Manlove, S. (2005). Co-Lab: research and development of an online learning environment for collaborative scientific discovery learning. Computers in Human Behavior, 21, 671-688.

Van Joolingen, W.R., De Jong, T. & Dimitrakopoulou, A. (2007). Issues in computer supported inquiry learning in science. Journal of Computer Assisted Learning, 23 (2), 111-119.

Von Glasersfeld, E. (1989). Cognition, construction of knowledge, and teaching. Synthese, 80 (1), 121-140.




Appendix A

