What is _wrong_ with Python? a.k.a. PoR probabilities
#11
That didn't do it.
#12
Good news/bad news.

Good news: I got it to work. Kind of. I got an index-out-of-range error on an 11-die pool, meaning a chain came up with six successes where at most five should be possible.

Bad news: the successes are skewed downward, and that's bad. At only 5 dice a 5% shift shows up: there should be a 50% chance of two successes, but only 45% is calculated. Also, as predicted, this really sucks up the RAM. On my little laptop with 3 GB it used up 95% of RAM as well as 50% of swap, which, I don't need to tell you, slows processing to a crawl. The RAM suckage happens at ten- and eleven-die pools, and at eight-die pools once they start filling with 10- and 12-siders.

So, what can be done? First, dump the remainder method and go back to the recursive method of determining the number of successes. That will add processing time, so what can be cut to make up the difference? One thing that comes to mind is the repetitive rebuilding of die rolls for each target number. How about building a die roll once, then applying each target number to that?

The other issue is RAM. One thought I have: let's say we have a pool of [8,8,6,4,4]. Right now we're storing [8], [8,8], [8,8,6], [8,8,6,4], and [8,8,6,4,4] in memory, which means all 8+64+384+1536+6144 different permutations of die rolls. Now, obviously, we need to keep the 6144; it's what's being processed currently and it's what will be passed to [8,8,6,4,4,4]. But do we need the 1536? Its information has already been processed, passed on, and won't be passed on anymore. We definitely need the 384, because after all the 4s have been added the pool will go to [8,8,6,6] and that 384 will be passed on, but we won't need it after that. So if oldpool.pool[-1] == newdie, we can oldpool.rolls.empty() in Pools. This isn't a panacea; in this example we've only freed 1544 "units" of the 8136, but when we're dealing with the swap partition, every little bit helps.
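Here's a minimal sketch of that idea. Pool, pool, and rolls mirror the names used above, not necessarily the real script, and the roll-building step is elided.
Code:
class Pool:
    def __init__(self, dice):
        self.pool = dice          # e.g. [8, 8, 6, 4, 4]
        self.rolls = {}           # roll data, built from the parent pool

    def extend(self, newdie):
        child = Pool(self.pool + [newdie])
        # ... build child.rolls from self.rolls here ...
        if self.pool and self.pool[-1] == newdie:
            # [8,8,6,4] is never consulted again once [8,8,6,4,4]
            # exists, whereas [8,8,6] still has to seed [8,8,6,6].
            self.rolls.clear()    # free the dead permutations
        return child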


Attached Files
.txt   iterative additive.py.txt (Size: 3.16 KB / Downloads: 467)
#13
Finally got it right. All it took was to stop being an idiot amongst the pillocks.


Attached Files
.txt   iterative additive.2.py.txt (Size: 3.77 KB / Downloads: 450)
#14
What did I get right? Glad I pretended that someone asked. Speed combined with low memory usage. Well, depending on your definitions: the script runs in terms of hours, but that's better than the days my first efforts would have taken, and RAM usage was ~750 MB with a pool of 11 10-siders, which beats the bleeding into the swap partition that was going on. I've had open browsers sucking up more RAM.

Improved speed came from using the iterative method of building a pool and then applying all target numbers to it, so each pool is built only once, and from only running calculations for the target numbers that are applicable: if a pool has a d6 or larger, there's no need to process TN = 5.
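A minimal sketch of the build-once idea, assuming a success is a single die beating the TN (which matches the {2d4} example in a later post); the real script's rule and names may differ.
Code:
from itertools import product

def tally(pool, target_numbers):
    # Build every roll of the pool once; score all TNs in the same pass.
    counts = {tn: {} for tn in target_numbers}
    for roll in product(*(range(1, sides + 1) for sides in pool)):
        for tn in target_numbers:
            s = sum(1 for die in roll if die > tn)
            counts[tn][s] = counts[tn].get(s, 0) + 1
    return counts

print(tally([4, 4], [2]))   # {2: {0: 4, 1: 8, 2: 4}} of 16 rolls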

Another change improved both the processing time and the memory usage. Let's say we have {2d4,3d6} as a pool. One result of a roll would be [2,3,4,1,4], another [3,4,1,4,2], and a third [4,2,3,4,1]. In each roll each die had a different result, but when we look at just the results and not which die rolled them, they are all equivalent to [1,2,3,4,4] (or [1,1,1,2,0,0,0,0,0,0,0,0] when stored as the number of dice that rolled each face). Why keep each different roll result in memory for a particular pool when I can store it once and keep track of how many times it occurs, cutting down on memory? This also cuts down on time: the successes of [1,1,1,2,0,0,0,0,0,0,0,0] need counting only once per target number (one success for TN 7), then weighting by the number of equivalent rolls.
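As a sketch of the multiset trick (the names are mine, and the tuple indexes faces 1 through 12): fold the dice in one at a time, key on how many dice rolled each face, and keep a multiplicity instead of every permutation.
Code:
from collections import Counter

def add_die(dist, sides):
    # dist maps a face-count tuple to how many permutations collapse onto it.
    new = Counter()
    for counts, mult in dist.items():
        for face in range(sides):          # face values 1..sides
            bumped = list(counts)
            bumped[face] += 1
            new[tuple(bumped)] += mult
    return new

dist = Counter({(0,) * 12: 1})             # empty pool
for sides in (4, 4, 6, 6, 6):              # the {2d4,3d6} pool
    dist = add_die(dist, sides)

print(sum(dist.values()), "permutations in", len(dist), "multisets")
# All 3456 permutations collapse into far fewer distinct multisets.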

I also cut down memory usage by getting rid of a list variable that had been hanging around since the first, list-comprehension attempts at this.

I can think of a couple of things that could improve the process in minor ways, but why? It works (the first criterion of quality in a program), it does so reasonably well, and it's a single-purpose program that took me too long to get right enough.

Oh god, I hope I got it right this time.
#15
And it all comes down to this. Using formatted output from the two previous efforts, we combine the probabilities to find the odds of getting each possible number of successes for every possible dice pool. A note on 0 probability: usually it means the result is impossible, like getting 4 successes with 3 dice; sometimes it means the probability is so small (<10^-9) that it rounded away.

Hmm. .txt attachments cannot be larger than 200 KB and .pdfs no larger than 2048 KB, so the output won't fit. Well, here's the Python script anyway.


Attached Files
.txt   PoR Probabilities.py.txt (Size: 4.76 KB / Downloads: 457)
#16
Further note about the above script: it tells us the probability of getting _precisely_ n successes for each possible die pool and target number in a single roll of the pool, not the probability of at least n successes. If the pool is {2d4}, there is a 50% chance of getting exactly 1 success when TN=2, but a 75% chance of a success, since a roll with 2 successes still contains a success. So to calculate the probability of at least n successes in a single roll, which is what is normally wanted, add the probabilities of all success counts >= the desired number. (I can't think of any dice pool systems that want an exact number of successes. Well, one, but that uses a roll-m-keep-n system, which this isn't. It's also an additive pool system, not a comparative one (PoR is a hybrid), so it wasn't the number of successes but the degree of success that mattered, in a magical subsystem for Blood Magic, if I recall: to cast a spell you want n dice to add up to a target number, but if the sum exceeded some other number above that you'd start accumulating dark energy, so you wanted to roll well but not too well, lest bad stuff start happening. But that's neither here nor there. We want at least n successes.)
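In code terms that's one line; this tiny sketch (my names) uses the {2d4} numbers from above.
Code:
def at_least(probs, n):
    # probs[k] holds P(exactly k successes) for one pool and TN.
    return sum(probs[n:])

# {2d4}, TN=2: P(exactly 0, 1, 2 successes) = 0.25, 0.50, 0.25
print(at_least([0.25, 0.50, 0.25], 1))     # 0.75, matching the example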

But what about situations that allow multiple attempts to achieve n successes? As a reminder: as long as at least one success is rolled, reroll the pool minus any dice that rolled 1, until the requisite number of successes is met (not matched). Simple. Take the probability of an outright single-roll win; add the probability of n-1 successes times the probability that rerolling each possible subset of the pool supplies the last one; add the probability of n-2 successes times the same over all subsets; and so on. Easy.

If you read sarcasm into that last statement, you shouldn't have. It really is easy. Tedious, definitely, but easy. One just needs to be careful and maybe a little bit clever. At least, I think using a binary matrix to create the subpools is a little clever. (Not that I've coded anything yet, but I've an idea of how to implement things.)

Small binary matrix:
Code:
1 1 1
1 1 0
1 0 1
0 1 1
1 0 0
0 1 0
0 0 1
Each row tells you which dice of the pool are used in the subpool; the first row sends all dice, the last row sends only the last die. The careful part is that not every subpool is equally likely. Also, if you're sending two dice as a subpool to be checked, there's no way the current roll could have had three successes.
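Since I haven't coded anything yet, here's only a sketch of how the mask idea might look (the names are mine): each row of the matrix picks which dice go into a subpool.
Code:
from itertools import product

def subpools(pool):
    for mask in product((1, 0), repeat=len(pool)):   # rows of the matrix
        sub = [die for die, keep in zip(pool, mask) if keep]
        if sub:                                      # skip the all-zero row
            yield mask, sub

for mask, sub in subpools([8, 8, 6]):
    print(mask, sub)    # (1, 1, 1) [8, 8, 6] ... (0, 0, 1) [6]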