Upgrade Algorithm to SM-??
As a recent user pointed out, SM-2 has been used in Mnemosyne for a while now. I'm not sure how much of the data is academically usable (read: publishable), but I'd be surprised if there wasn't enough to draw at least a few conclusions. Is it time to upgrade to a more recent algorithm that more accurately predicts the time interval between repetitions?
Personally, what initially drew me away from SM and toward Mnemosyne was the user interface. At this point, I've become quite comfortable with the concept of creating and reviewing cards. As a result, SM's algorithm is starting to draw more of my attention, and the complicated user interface is starting to appear less daunting. Being in academia myself, I like the idea of contributing to academic endeavors, so I'd prefer to stick with Mnemosyne given the option.
Please correct me if I'm wrong, but I believe the focus of this academic endeavor - the reason Mnemosyne exists - is to academically evaluate spaced repetition algorithms. It sounds like many of the algorithms after SM-2 were more tweaks than complete revamps. Does this mean we might be able to learn from SM's mistakes, skip a number of those iterations, and jump ahead to one of the most recent algorithms? From an academic perspective, would there be enough data in a year or two to statistically compare SM-2 with SM-15 if the algorithm were updated in the near future?
The usability of Mnemosyne 2.0 is pretty fantastic. It's clean and simple, yet it has what I would consider to be most of the important functionality. Is it time to tune up the engine under the hood?
Food for thought.
I agree that the Mnemosyne database may be used for observations, but I think it's dangerous to draw conclusions from it and change the algorithm based on it. There are so many variables that are not controlled. Here are some thoughts: (1) after some months the user will probably adapt their answers to the given algorithm, (2) the type of information learned may vary greatly, (3) the way of learning (card type) varies, (4) the amount of information a user puts in one card may vary greatly, (5) users may grade a card 5 just because there is no way of skipping a card that is very hard or not that important to learn, (6) there is no clear definition of what counts as "not memorized" - for example, should I grade a card 2 if I recall 2/3 of the information on it?
Instead, changes should be based on quality studies, such as controlled studies in a student setting or with a group of enrolled participants studying the same cards over the internet.
Not really, no. SM-11 seems to take the view that this is a well-posed computational problem which you can solve. I'm of the opinion that optimal scheduling is inherently very fuzzy and depends on lots of external factors which you can't capture in an algorithm, so a simple heuristic like SM-2 is probably the best you can do.
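For readers unfamiliar with just how simple the SM-2 heuristic is, here is a sketch of the published, textbook SM-2 update rule. Note this is the generic published formula, not Mnemosyne's exact implementation (Mnemosyne uses a modified variant); the function name and signature are illustrative only.

```python
def sm2_update(ef, repetitions, interval, quality):
    """One textbook SM-2 review step (illustrative, not Mnemosyne's variant).

    ef: easiness factor (floor of 1.3), quality: self-grade 0..5.
    Returns (new_ef, new_repetitions, new_interval_in_days).
    """
    if quality < 3:
        # Failed recall: restart the repetition sequence; EF is unchanged.
        return ef, 0, 1
    # Nudge the easiness factor up or down based on how easy recall felt.
    ef = max(1.3, ef + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    repetitions += 1
    if repetitions == 1:
        interval = 1        # first successful review: see it again tomorrow
    elif repetitions == 2:
        interval = 6        # second: six days later
    else:
        interval = round(interval * ef)  # thereafter: multiply by EF
    return ef, repetitions, interval
```

The entire scheduler is a per-card multiplier plus two fixed starting intervals, which is the point of the argument above: there is very little machinery here to tune, and the later SM algorithms mostly add complexity on top of this core.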
Personally, I'm not convinced a higher series number is a true improvement in this case. Don't forget that SM is a commercial endeavour, so they might have other incentives to boost version numbers beyond a true improvement.
Valid point regarding the monetary incentive to boost version numbers. Having said that, are there (in your opinion) any aspects of the new algorithms that seem potentially beneficial? Parts that may have caught your attention for future versions of Mnemosyne?
We could certainly learn something from studying the statistical data, but at the moment I believe there are other features which would be of more benefit.