For me, one of the most consistently confusing things about reading empirical science journal articles is the abundance of references to statistical methods and models with which I have little familiarity. This is generally more a product of my relatively elementary understanding of statistics than any fault of the researchers, though I suspect that even if they were in error, I wouldn't be able to tell unless the mistake were especially egregious. The only statistics course I've taken is AP Statistics in high school, so once the subject matter goes beyond p-values, Bayes' Theorem, and the like, I'm mostly lost; I found many of the models proposed in this week's articles difficult to understand, for example. I feel that I have a good grasp of the conceptual side of basic statistics, but there is certainly more for me to learn.
Furthermore, the manner in which most journal articles are written can occasionally interfere with my initial understanding of technical points, even though I am usually adept at grasping the main features of an experimental design. I understand that the purpose of such papers is information rather than entertainment, but the pervasiveness of the passive voice and the density of field-specific jargon do much to obscure the more technical and subtle points in my mind. Again, this has less to do with the authors, since these characteristics are largely matters of scholarly convention, and more to do with the way I am accustomed to reading. Still, the disconnect between the thrill of scientific discovery and the occasional drudgery of reading journal articles is somewhat disappointing: I often wish that science articles read as excitingly as the discoveries they describe, though this fancy is hardly important in light of the enlightenment these papers can provide.
