Learners everywhere make extensive use of the Internet, but their success is threatened in two ways. First, providers of online information rarely apply learning science to help learners learn, so learners acutely need skills for learning on their own. Second, too many learners have too few learning skills and, moreover, exercise poor judgment in regulating the skills they do have. Our novel, interdisciplinary project will address this problem.
Today’s gold standard for research on effective learning skills is a rigorous experiment that yields an effect “on average” in a randomly sampled group of learners. On finding a positive effect, researchers recommend that individuals adopt the study’s intervention. This logic falters in four ways. First, almost no one is “average”; what works on average has only a small probability of working the same way for any particular individual. Second, an experiment usually investigates one skill, or one factor that affects how a learner uses a skill; what learners experience, and what effects emerge, when they use multiple skills together is unknown. Third, hardly any research acknowledges that, with experience, learners modify a skill researched under highly controlled conditions; these personal adaptations further weaken the generalizability of a study’s findings. Finally, materials used in studies are brief and chosen to fit an experiment’s needs, e.g., to be clear; the resources learners find in real life are far more variable.
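The first objection can be made concrete with a small simulation. The numbers below are purely illustrative assumptions, not project data: each learner is assigned an individual treatment effect drawn from a wide distribution whose mean is positive, showing how an intervention can “work on average” while still failing a large share of individuals.

```python
import random

random.seed(0)

# Illustrative assumption: individual effects are normally distributed
# with a small positive mean (+0.2) but wide spread (sd 1.0).
n = 10_000
effects = [random.gauss(0.2, 1.0) for _ in range(n)]

mean_effect = sum(effects) / n
share_negative = sum(e < 0 for e in effects) / n

print(f"average effect: {mean_effect:+.2f}")          # positive on average
print(f"learners with a negative effect: {share_negative:.0%}")  # yet roughly 4 in 10 are worse off
```

Under these assumed parameters, the group-level conclusion (“the intervention helps”) is true, yet it misdescribes a large minority of individual learners.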
The project will use two state-of-the-art toolkits to tackle these issues. (1) nStudy is software learners use to study online. It offers multiple “tools” for tagging content, taking notes, interlinking content and viewing the resulting network of information, concept mapping, and more. As a learner works, nStudy logs fine-grained, time-stamped data that trace every piece of content a learner operates on and every operation applied to that content. These data trace in detail how a learner goes about learning. (2) Learning analytics comprise various quantitative methods for discovering patterns in trace data, testing how well a pattern matches a model or another pattern, and identifying how patterns differ.
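A minimal sketch of how trace data and learning analytics fit together, under stated assumptions: the record format below (timestamp, operation, content id) and the operation names are hypothetical stand-ins for nStudy’s actual log schema, and the analysis shown, counting adjacent operation pairs, is just one simple instance of pattern discovery in trace data.

```python
from collections import Counter

# Hypothetical time-stamped trace in the spirit described above:
# each record is (seconds_elapsed, operation, content_id).
trace = [
    (0.0,  "open", "article-1"),
    (4.2,  "tag",  "para-3"),
    (9.8,  "note", "para-3"),
    (15.1, "link", "para-3"),
    (21.7, "tag",  "para-7"),
    (25.0, "note", "para-7"),
]

# A basic learning-analytics pass: count adjacent operation pairs
# (bigrams) to surface recurring study patterns, e.g., tag -> note.
ops = [op for _, op, _ in trace]
bigrams = Counter(zip(ops, ops[1:]))

for (a, b), count in bigrams.most_common():
    print(f"{a} -> {b}: {count}")
```

Comparing such bigram profiles across learners, or against a model of effective studying, is one way the pattern-matching and pattern-difference analyses mentioned above could operate on logged traces.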