There is a common misunderstanding about how probabilities (dice, coin tossing, roulette) work. Randomness with equal probabilities does not guarantee that we will see an even distribution in a game. In the REALLY LONG run (many thousands of trials) we would expect to see a virtually even distribution, but not in the short run.
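If anyone wants to see this for themselves, here is a quick sketch in plain Python (just an illustration, nothing to do with SMG's actual code) comparing short-run and long-run frequencies of a fair die:

```python
# Sketch: empirical face frequencies of a fair six-sided die
# after a short run vs. a long run of rolls.
import random
from collections import Counter

def frequencies(n_rolls: int) -> dict:
    counts = Counter(random.randint(1, 6) for _ in range(n_rolls))
    return {face: counts[face] / n_rolls for face in range(1, 7)}

print("after 30 rolls:     ", frequencies(30))       # usually lumpy
print("after 100,000 rolls:", frequencies(100_000))  # close to 1/6 (~0.167) per face
```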
The dice have no memory. They do not know what numbers came up on the previous toss. They don't take notes and monitor the history of tosses over time so they can tell the 1 to show up more often. They just get what they get. You can count cards in blackjack, but that's because there is a fixed number of cards (52) and the probabilities change based on what has been dealt and what remains in the deck. In cards, this is "sampling without replacement". In dice, it's "sampling with replacement". If you roll a 1, that doesn't change the probability of getting a 1 on the next roll. It's still about 17%.
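The "no memory" part is easy to check empirically. Again, this is just an illustration in Python, not the game's code:

```python
# Sketch: the conditional frequency of rolling a 1 immediately after a 1
# is still about 1/6, because each roll is independent of the last.
import random

rolls = [random.randint(1, 6) for _ in range(200_000)]
after_one = [curr for prev, curr in zip(rolls, rolls[1:]) if prev == 1]
print("P(1 right after a 1) ~", sum(1 for r in after_one if r == 1) / len(after_one))
```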
It's like drawing the king of hearts, putting it back, and reshuffling. The probability of drawing the king of hearts the first time was about 2% (1 out of 52). If you put the card back in and reshuffle, the chance of drawing the king of hearts again is still about 2%. The odds don't change.
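And here is the with-replacement versus without-replacement difference as a sketch: the chance of pulling a specific card on a second draw, given the first draw missed it (card 0 just stands in for the king of hearts):

```python
# Sketch: second-draw probability of a specific card,
# with replacement vs. without replacement,
# conditioned on the first draw missing it.
import random

deck = list(range(52))   # card 0 stands in for the king of hearts
trials = 300_000
total = with_hits = without_hits = 0

for _ in range(trials):
    first = random.choice(deck)
    if first == 0:
        continue  # only count trials where the first draw missed the target card
    total += 1
    with_hits += random.choice(deck) == 0                 # deck fully restored
    remaining = [c for c in deck if c != first]           # first card kept out
    without_hits += random.choice(remaining) == 0

print("with replacement:   ", with_hits / total)     # stays ~1/52 (~0.019)
print("without replacement:", without_hits / total)  # rises to ~1/51 (~0.020)
```

With replacement the odds never move; without replacement they shift as the deck shrinks, which is exactly why cards and dice behave differently.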
Thanks for the life lesson, Justin.
My post was in response to Richard's post immediately before mine. It has nothing to do with your demand to see the algorithm. There is more than one issue here. But I appreciate the feedback.
The only idiots here are the ones "explaining" that probabilities don't preclude long-tail scenarios… You also going to tell us the sky is blue and poop smells bad?
The issue is that SMG changed the algorithm (hence this thread) and they should explain how and why, and open-source the code. I have actual data demonstrating that the "fix" does indeed produce unrealistic results:
I agree with Glenn's observation that the "fix" is likely a modification that intentionally disadvantages rolls for small armies being attacked by dramatically larger ones, far beyond the attacker's advantage inherent in the natural probabilities. When I have seen large armies attack post-update, they almost always win, and usually by a gigantic margin. You would expect large armies to have a completely failed attack on occasion, even against much smaller forces (see the simulation sketch below).
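To make this concrete, here is a rough blitz simulation under what I assume are the intended classic Risk dice rules (attacker rolls up to 3 dice, defender up to 2, highest vs highest, second vs second, ties to the defender, attacker leaves one army behind). I obviously don't know what SMG's post-"fix" code actually does, which is the whole point; run this and compare the complete-failure rate with what you actually see in-game:

```python
# Rough blitz simulation under classic Risk dice rules (my assumption about
# the intended behaviour, not SMG's actual code).
import random

def blitz(attackers: int, defenders: int):
    """Fight until the defender is wiped out or the attacker can no longer roll."""
    while defenders > 0 and attackers > 1:
        a = sorted((random.randint(1, 6) for _ in range(min(3, attackers - 1))), reverse=True)
        d = sorted((random.randint(1, 6) for _ in range(min(2, defenders))), reverse=True)
        for ar, dr in zip(a, d):           # compare highest vs highest, second vs second
            if ar > dr:
                defenders -= 1
            else:                          # ties go to the defender
                attackers -= 1
    return attackers, defenders

def summarize(attackers, defenders, trials=100_000):
    wins = failures = 0
    for _ in range(trials):
        a_left, d_left = blitz(attackers, defenders)
        wins += d_left == 0
        failures += d_left > 0             # attacker ground down to a single army
    print(f"{attackers} vs {defenders}: attacker wins {wins / trials:.3%}, "
          f"fails completely {failures / trials:.3%}")

summarize(20, 10)
summarize(30, 10)
```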
This was probably done to shut up people complaining that the algorithm was broken in the first place (which my data suggests was never the case, as the statistics were perfectly even pre-update) just because they had large armies destroyed while blitz-attacking smaller ones on a few occasions.
Also, any programmer knows that pseudo-random number generation is built into the standard library of the programming language itself. The "roll algorithm" is therefore a very simple matter of mapping that output onto a 1/6-per-face probability; there should be absolutely no way for such a simple thing to be flawed in the first place. Ergo, no "fix" should have been required, suggesting that the "fix" was actually a modification that made the probabilities far LESS realistic so that people wouldn't get as upset when chance did not favor them.
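For reference, this is roughly all an unbiased roll needs to be. A sketch in Python purely for illustration; the game is presumably not written in Python, but every mainstream language has an equivalent built-in generator:

```python
# Sketch: an unbiased six-sided roll is a one-liner on top of the
# language's built-in (pseudo-)random number generator.
import random

def roll_die() -> int:
    return random.randint(1, 6)   # uniform over 1..6

def roll_dice(n: int) -> list:
    return [roll_die() for _ in range(n)]

print(roll_dice(3))
```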
So there you have it: I have presented very strong empirical, logical, and anecdotal evidence that the algorithm was never flawed to begin with, and is now flawed as of the "fix." People should stop arguing about probabilities and demand that SMG explain what they have done, why they have done it, and make that portion of the code open source already so people can stop with the asinine complaining.
Good news guys. I have been in contact with support about the heavily biased 1's issue and they have acknowledged that this was a glitch in the 1.9.36 release. They say they have remedied it as of the latest release that came out today, so hopefully this should no longer be a concern.
Steve Clements