I
really enjoyed both readings this week, as they highlighted the communicative
element of scientific practice, which I think is integral and often
undervalued. Both
Reiser et al. and Sampson and Gleim offered powerful examples of learning
communities and of the importance of developing and revising explanations with the
help of peers, the teacher, and diverse resources. The ability to create sound explanations, which articulate
causal mechanisms and account for observed data, is a powerful tool in all
subjects. By supporting our
students in developing strong logic in the sciences, we can help them become
better writers and more critical consumers of information, goals which I think
we all have as teachers. To
develop this logic, Reiser et al. suggest highlighting the distinction between
observation and interpretation. I
think this is an astute approach, as it gets to the root of students'
misunderstandings of what is 'knowable' from an observation and gently steers
them away from assuming, projecting, or anthropomorphizing by encouraging
metacognition. The
ExplanationConstructor journal is a valuable tool in this metacognition as it
allows both the student and the teacher to trace the student’s thought process
through notes and the decision tree.
This journal reminds me of the lab notebooks I kept while working in marine
science labs, and I hope to offer my students a similar space, either through
BGuILE or through written observation journals, where they can work through
inquiry and data and revisit the experience later, gleaning more
sophisticated and personal correlations and theories.
I
was also very excited that Reiser et al. envisioned their program fitting
seamlessly into existing curricula, side by side with physical
representations and experiences. I
think that traditional and computational modeling and experiences can and
should be complementary, to utilize the power and scope of computational models
as well as the familiarity and experiential nature of physical models. Reiser et al. make an excellent point
about unfamiliarity: you have to be aware of the degree of transfer a
student will experience when using a new tool, and prepare them accordingly,
scaffolding with the familiar to make the experience less dissonant.
I
am concerned that it is very challenging to know what to pull from a large dataset,
even for professional scientists, and that this could limit students'
experience. Reiser et al. suggest
doing one exercise beforehand with a smaller data set and then having 'strategic
conversations with students about what they are trying to achieve and what they
will learn from particular queries of the data'. However, I think these conversations would need to be
heavily scaffolded, perhaps with a whole-class lesson or discussion on what it
is possible to ask, as I can imagine the blank stares when asking students
what they would learn from different statistical queries.
I
am disappointed with the teaching practice section of Reiser et al., as it did not
really indicate good practices for 'creating and sustaining a climate of
inquiry', but rather just said to 'augment' the software, which I think minimizes
the role the teacher plays in prompting inquiry.
Questions:
Is
the timeline presented by Reiser et al. realistic? With 36 total school weeks and 6-7 week units, you could
cover at most six concepts.
What
is the best way to scaffold students before/while introducing them to large
data sets where correlations may not be readily apparent?
Is Sampson and Gleim's
suggestion to allow students to invent their own methods too open-ended?
I think the timeline would probably not be very realistic for an introductory class with many new concepts. Six concepts might be a little too much depth and not enough breadth (though that is better than too much breadth and no depth). However, if it were a more specialized class focused on biostatistics or something of that nature, you could use 6-7 week units pretty effectively.

The best way to scaffold students before or while introducing them to large data sets where correlations may not be readily apparent might be frontloading, or some modeling (by the teacher) of what might be good to look for, or a brief lesson on biostatistics, considering all of the histograms the paper mentioned. The sheer number of possible graphs, and all of the data from the plethora of variables, could be very overwhelming to someone who has never really analyzed data. Some group activities and explicit modeling of how to analyze and interpret graphs would probably go a long way, in my opinion.