
Pseudo-Random Distribution

Status
Not open for further replies.
Hello, reader. This is about coding a pseudo-random distribution. I would like to know how to improve this feature further; what are its pros and cons, so to speak.

Below is the formula that shapes the pseudo-random distribution:

(x * (1 - x)^(x)) / e ^ ((1 - x)^Sqrt(e))
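As a quick sanity check, the same shaping formula can be evaluated in Python (a sketch only; the function name is mine, and the 4/3 factor mirrors the AMPLITUDE constant in the JASS library below):

```python
import math

def shaped_chance(x):
    """Author's shaping formula:
    (4/3) * x * (1 - x)^x / e^((1 - x)^sqrt(e))."""
    return (4.0 / 3.0) * (x * (1.0 - x) ** x) / math.exp((1.0 - x) ** math.sqrt(math.e))

# The resulting per-roll coefficient is 0 at x = 0 and grows with the
# target chance x over the range the library cares about.
```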

Credits


JASS:
library PRD

    struct PRD
        readonly static constant real MAX_POINT = 0.612
        readonly static constant real AMPLITUDE = 4/3.
       
        readonly real chance
        readonly integer counts
       
        private real base
       
        private boolean absolute
        private boolean result
       
        method operator percent takes nothing returns real
            return chance
        endmethod
       
        method operator percent= takes real perc returns nothing
            set absolute = (perc <= 0.) or (perc >= 1.)
           
            if absolute then
                set result = (perc >= 1.)
            else
                set base = perc
               
                //  A weird formula that somehow gives me a table suitable for computation...
                set chance = AMPLITUDE * (perc * Pow(1 - perc, perc)) / Pow(bj_E, Pow(1 - perc, SquareRoot(bj_E)))
                set counts = 0
            endif
        endmethod
       
        method test takes nothing returns boolean
            if absolute then
                return result
            endif
            if (chance * I2R(counts + 1)) >= GetRandomReal(0., 1.) then
                set counts = 0
                return base <= MAX_POINT
            endif
            set counts = counts + 1
            return base > MAX_POINT
        endmethod
       
        static method create takes real perc returns thistype
            local thistype this = allocate()
            set this.percent = perc
            return this
        endmethod
    endstruct
endlibrary

- Zwiebelchen, for the table of coefficients.
- The creators of GeoGebra, for visualizing the graph.


If only I knew how to make a table for this:

If percentage <= 0.5
    set new_percentage to 1 - (1 - prev_percentage) * (1 - percentage)
Else
    set new_percentage to (1 - prev_percentage) * (1 - percentage)

For a base value of 0.01:

1st - 0.0058...
2nd - 0.0199
3rd - 0.029701
4th - 0.03940399
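The multiplicative update above amounts to computing 1 - (1 - p)^n, the chance of at least one success in n rolls. A short Python sketch (names are mine) reproduces the 2nd through 4th listed values for a 1% base chance (the 1st listed value, 0.0058, appears to come from the shaping formula instead):

```python
def accumulated_chance(p, attempts):
    """Chance of at least one success after `attempts` rolls at base chance p."""
    return 1.0 - (1.0 - p) ** attempts

for n in range(1, 5):
    # 2nd through 4th values: 0.0199, 0.029701, 0.03940399
    print(n, round(accumulated_chance(0.01, n), 10))
```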


EDIT:

What joy! I learned how to use hooks. More handle clearing will be added.

EDIT 2:

I have learned over the past 6 months that hooks are to be avoided. Rewrote the code to use structs instead.

Made it so that it no longer uses a hashtable.

EDIT 3:

It can be made to support hashtables again (on request only).
 
I didn't get the part where the function is named GetPseudoRandomReal, yet it returns a boolean.

I also can't read the algorithm at all; it looks like something random you made up.

If you want random distribution, use a random number generator, and use the random numbers to distribute your stuff however you want.
 
To GhostWolf:
The reason GetPseudoRandomReal returns a boolean is that it takes the index of the probability that you give and handles the comparison there.
The code handles percentages above 50% and at most 50% differently. That is to say, the probability of an event which occurs 2% of the time expands multiplicatively: it becomes 1 - (1 - 0.02) * (1 - 0.02) = 0.0396 after one failure. The process continues until the event returns true; by that time, CHANCE_DUR resets to the original percentage it was based upon.

To DracoL1ch:
On a side note, I am not sure how to attach datasheets, PRDs, or innate random stuff.

Anyways, thanks for the reply.

EDIT:

Here's the sample code I used to test the pseudo-random function.

JASS:
scope UnfairStrike initializer Init

    //rawcode 'A000' is based on Critical Strike. It registers on attack as to not complicate things further than they should.

    private function Cond takes nothing returns boolean
        if GetUnitAbilityLevel(GetAttacker(), 'A000') != 0 then
            if GetPseudoRandomReal(0.07) then
                call UnitDamageTarget(GetAttacker(), GetTriggerUnit(), 50, true, false, ATTACK_TYPE_CHAOS, DAMAGE_TYPE_UNIVERSAL, WEAPON_TYPE_WHOKNOWS)
                call BJDebugMsg("was enhanced")
            endif
        endif
        return false
    endfunction
 
    private function Init takes nothing returns nothing
        local trigger t = CreateTrigger()
        call TriggerRegisterAnyUnitEventBJ(t, EVENT_PLAYER_UNIT_ATTACKED)
        call TriggerAddCondition(t, Condition(function Cond))
        set t = null
    endfunction
 
endscope

On another note, I just realized that the percentage parameters behave globally. That could mean that a pseudo-random 15%, when the event occurs a lot, will not equal a truly random 15%, because of the discrepancy in the random sample it could generate.

EDIT: (2nd time)

Thanks for the tip, Zwiebelchen

I adjusted the code so that it computes the percentages on initialization. (However, I fear that introducing the O(n^2) operation would be a cause for disdain.
Then again, it only runs on initialization.) The code just updates the number of attempts made and resets it to 0 on success.

P.S.

I coded it based on my observations of the pseudo-random distribution used by the Dota 2 engine. (The added benefit of the pseudo-random approach (take 2%) is that it relies somewhat less on luck and more on competitive skill.)
(Too bad I don't know how to add a strikethrough tag)
It behaves exponentially now.
 
You should take a look at how pseudo-random distribution is implemented in Dota 2, and at the critical strike mechanic in WC3, to make it more reliable.

The idea is basically that you increase the probability of an event every time it doesn't happen, until it is almost guaranteed to occur. When it occurs, the counter and probability are reset.
The added probability with each roll can be taken from a table that you can find online.

Read this article for an example:
Pseudo-random distribution - Dota 2 Wiki

I suppose what you are doing is similar to that? If so, then the optimization in terms of performance is terrible. The probability modifiers should be stored in a table indexed by target probability and number of attempts, as those are constants and don't have to be recalculated every time.
 
//I'm sorry, this will be O[n^2]
I'm sorry, but that one is O(n)... O(MAX_ATTEMPTS), to be exact.

I was actually hoping for an actual RNG.

One thing that crosses my mind though... how are you going to keep track of the number of failures?
Aka, if the critical strike fails, it should have a higher chance next time (if I understand it correctly), but if this is used by multiple effects, it won't work properly.
 
Here is how an effective pseudo-random algorithm works:

On Attacks:
- Increment integer "NumberOfAttacks" for that unit by one (use a unitindexer or table for that)
- multiply your proc percentage modifier (see table below) by the number of attacks, and check whether GetRandomReal(0, 1) is smaller than the result
- if yes: proc attack and reset NumberOfAttacks to 0 for that unit

-> You can get the proc percentages based on the average proc percentage from this table:

[Attachment: TargetPercentagesAndRandomMultipliers.png — table of target percentages and c-value multipliers]

The c-value is the number you multiply with the number of consecutive attacks without a proc. The Max N column tells you after how many attacks a proc is guaranteed for that target percentage.


C-values between two data-points can be calculated by linear regression.


This is how you write a pseudo-random algorithm in O(1) complexity.
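The steps above can be sketched in Python (the class and variable names are mine; the constant c would come from the attached table, e.g. roughly 0.0038 for a 5% target chance):

```python
import random

class PseudoRandom:
    """O(1) PRD test: the success chance grows linearly by c per failed roll."""

    def __init__(self, c):
        self.c = c          # per-attempt coefficient from the C-value table
        self.attempts = 0   # consecutive failed tests so far

    def test(self):
        self.attempts += 1
        if random.random() < self.c * self.attempts:
            self.attempts = 0   # proc: reset the counter
            return True
        return False

# Rough check: with c around 0.0038 the long-run proc rate is close to 5%.
prd = PseudoRandom(0.0038)
procs = sum(prd.test() for _ in range(100000))
```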
 
Thanks for that suggestion. For now, I'm reading the table and trying to figure out the nature of those values so that I can incorporate them.

My implementation of the pseudo-random algorithm has these values when generated:

Attempt:  1     2       3         4           5
Chance:   0.01  0.0199  0.029701  0.03940399  0.0490099501

EDIT:

What kind of algorithm did you use for the table above?

Appendix:
The system (now) treats each one as an instance, so they are independent of each other.
 
Depends on your percentages for the 5% target chance. You gave me one for 1%. But as far as I can tell, it looks similar to mine, except that yours is not entirely linear while mine is.

I'm not quite sure I understood your implementation because of the terrible variable and function naming, but from what I see, it looks unnecessarily complicated with a manual linked list implementation.


You just need a struct created for each random instance (that's what you're doing, so this step is fine) and increment the instance counter by 1 for each test. To succeed in the test, simply retrieve the c value from the table (which could be stored in an array) and multiply it with the instance counter and check against that.

That's 10 lines of code + the array data table. No linked lists, no node handling.
 
Depends on your percentages for the 5% target chance. You gave me one for 1%. But as far as I can tell, it looks similar to mine, except that yours is not entirely linear while mine is.

Ah, so your algorithm's table behaves linearly. My guess is that it involves the normal distribution curve; doesn't it?
Good thing I have Geogebra to visualize that. Nevertheless, you have my thanks for the suggestion. I'll add you to the credits list, if I get this moved to Submissions.

I'm not quite sure I understood your implementation because of the terrible variable and function naming, but from what I see, it looks unnecessarily complicated with a manual linked list implementation.

You just need a struct created for each random instance (that's what you're doing, so this step is fine) and increment the instance counter by 1 for each test. To succeed in the test, simply retrieve the c value from the table (which could be stored in an array) and multiply it with the instance counter and check against that.

That's 10 lines of code + the array data table. No linked lists, no node handling.

Yeah, but I think it would be around 10 - 15 lines, if coded properly.
 
That would still be a lot shorter than your current system. ;)

And no, that table does not behave linearly; only the coefficient multiplied by your number of tests remains constant.

So for example, if we take a 5% proc you would have:

1st attempt: 1 * 0.0038 = 0.38%
2nd attempt: 2 * 0.0038 = 0.76%
3rd attempt: 3 * 0.0038 = 1.14%
...

... which results in an average of 20 attempts to succeed (which mimics the target percentage of 5%) and a theoretical worst case of 264 attempts (since 264 * 0.0038 > 1).
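Both numbers can be verified with a short Python sketch (names are mine; it assumes the n-th consecutive test succeeds with probability min(c·n, 1)):

```python
import math

def expected_attempts(c):
    """Expected number of tests until a proc, when the n-th test
    succeeds with probability min(c * n, 1)."""
    still_failing = 1.0   # probability that no proc has happened yet
    expectation = 0.0
    n = 1
    while still_failing > 1e-15:
        p = min(c * n, 1.0)
        expectation += n * still_failing * p
        still_failing *= 1.0 - p
        n += 1
    return expectation

# c = 0.0038: an average of roughly 20 attempts (about a 5% effective rate),
# and a guaranteed proc by attempt 264, since 264 * 0.0038 > 1.
```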
 

Dr Super Good

I am not sure what the point of this is. I thought WC3 critical strike and evasion operated by limiting the number of consecutive rolls when not at extreme percentages, forcing a failure if the limit is reached. This is why with very high proc chances of critical strike or evasion (only applied to one of them, I forget which but possibly Evasion) the actual statistical proc chance is much lower as it is being forced down by the consecutive proc limit. At 95% one gets a proc rate of like 83% odd, but at 100% it is correctly 100% (limit disabled).
 
I am not sure what the point of this is. I thought WC3 critical strike and evasion operated by limiting the number of consecutive rolls when not at extreme percentages, forcing a failure if the limit is reached. This is why with very high proc chances of critical strike or evasion (only applied to one of them, I forget which but possibly Evasion) the actual statistical proc chance is much lower as it is being forced down by the consecutive proc limit. At 95% one gets a proc rate of like 83% odd, but at 100% it is correctly 100% (limit disabled).
The point is to make proc abilities with a low proc chance more reliable.

Pretty much every MOBA game with proc passives uses pseudo-randomness to get rid of absurd streaks of luck in which the proc happens 3 times in a row despite having only a low chance of happening.
 

Dr Super Good

Pretty much every MOBA game with proc passives uses pseudo-randomness to get rid of absurd streaks of luck in which the proc happens 3 times in a row despite having only a low chance of happening.
Most good MOBAs do not have randomness in their passives. E.g., Critical Strike in Heroes of the Storm will always hit every 4 attacks (3 with a level 1 talent), or on the next attack for Samuro and illusions after activating the ability.
 
Most good MOBAs do not have randomness in their passives. E.g., Critical Strike in Heroes of the Storm will always hit every 4 attacks (3 with a level 1 talent), or on the next attack for Samuro and illusions after activating the ability.
Not true. DOTA 2 uses pseudo-randomness, and it has advantages over fixed-count passives in that you cannot exactly predict when the proc happens (= fewer opportunities to exploit the fixed count by attacking trash to stack up the count, then opening with a guaranteed critical strike on a hero).
 

Dr Super Good

Not true. DOTA 2 uses...
That is why I said...
Most good MOBAs

it has advantages over fixed-count passives in that you cannot exactly predict when the proc happens (= fewer opportunities to exploit the fixed count by attacking trash to stack up the count, then opening with a guaranteed critical strike on a hero).
Which is actually a disadvantage in competitive games, because you are then relying on RNG rather than skill. This is one of the reasons why Gul'dan's Rain of Destruction is one of the least chosen ultimate abilities in HotS.
 
Needless to say, some things that are expected to happen after a certain number of strikes do not require randomness at all. In competitive games this matters a lot, since the players are assumed to be of monumental skill; hence your supposition.

Pseudo-randomness in games helps even out the odds between all sorts of players, pro or casual, by eliminating certainties, granting a more neutral game mechanic for all chance-based abilities.

What I thought of as an approach is that, in the long run, a small percentage that keeps failing will have its probability increase towards 100%, while a big percentage that keeps succeeding will have its probability go lower. By succeeding, the initially small percentage resets, and by failing, the initially big percentage resets as well.
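That symmetric idea can be illustrated with a hypothetical Python sketch (not the library's code; it mirrors a chance above 50% onto its complement, similar in spirit to the MAX_POINT branch in the struct above):

```python
import random

class MirroredPRD:
    """Low chances ramp up toward 100% as they fail; chances above 50% are
    handled as the mirrored complement, so they ramp down as they succeed."""

    def __init__(self, p, c):
        self.high = p > 0.5   # mirror chances above 50%
        self.c = c            # per-attempt increment for the low-chance side
        self.attempts = 0

    def test(self):
        self.attempts += 1
        hit = random.random() < self.c * self.attempts
        if hit:
            self.attempts = 0
        # On the mirrored side, a "hit" of the complement means a failure.
        return not hit if self.high else hit

random.seed(1)
low = MirroredPRD(0.05, 0.0038)     # behaves like a ~5% proc
rate_low = sum(low.test() for _ in range(100000)) / 100000.0
high = MirroredPRD(0.95, 0.0038)    # mirrored: behaves like a ~95% proc
rate_high = sum(high.test() for _ in range(100000)) / 100000.0
```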
 

Dr Super Good

Pseudo-randomness in games helps even out the odds between all sorts of players, pro or casual, by eliminating certainties, granting a more neutral game mechanic for all chance-based abilities.
Randomness is mostly an artefact of old DnD games, where dice were used to decide damage, whether procs worked, etc.

Problem with these systems is that it is hard to convey the chances to players. Technically any probability you mention in the tooltip is not correct.
 
That is why I said...
Are we done being an edgelord now?

Which is actually a disadvantage in competitive games, because you are then relying on RNG rather than skill. This is one of the reasons why Gul'dan's Rain of Destruction is one of the least chosen ultimate abilities in HotS.
There are many reasons why a game designer might want a certain amount of randomness in their game, the most important being that it sometimes creates exciting moments and ensures that the better player will not always win.

I'm not a fan of chance based abilities either. In fact, I don't even like passives like critical strike - but that is not because of unreliability, but because an active skill is usually more fun to use.

But that is not the point of this topic, so back to discussing algorithms here, shall we?
 
@Zwiebelchen
Yes, we shall.

I thought of another way to deal with this, and it involves the constant e.

To compute the probability, we start from a base percentage.
Let's take 0.01, for example (a percentage, just in decimal form :p).

On the first attempt, the chance of the event occurring is 1%.
When it fails, the chance is multiplied by e^(1%).

Right now, I haven't yet implemented a counter for how many attempts must fail before the event is forced to occur, but should I implement it, it would be quite heavy code-wise.

Back to the 0.01: after about 99 failures, it reaches a maximum value of 1% * e^(99%). If it ever breaches e^(100%), the system makes that individual instance behave as an absolute, returning true on the next attempt. However, the resulting average would be at least 50% greater than the original percentage (0.01), so I have to calculate the average beforehand.

To show what I mean, here is an example instead:

Percentage of Success:
For 1%

Attempt 1 - 1%
Attempt 2 - 1% * e^(1%)
Attempt 3 - 1% * e^(2%)
Attempt 4 - 1% * e^(3%)
...
Attempt 100 - 1% * e^(99%)
Attempt 101 - 100% (override)
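The schedule above can be written as a tiny Python sketch (the function name is mine, and generalizing the per-failure exponent step from the 1% example to the base chance p is my assumption):

```python
import math

def chance_on_attempt(p, n):
    """Chance on the n-th consecutive attempt: p * e^((n - 1) * p),
    overridden to 100% once the accumulated exponent reaches 1."""
    if (n - 1) * p >= 1.0:
        return 1.0   # the e^(100%) override: guaranteed success
    return p * math.exp((n - 1) * p)
```

For p = 0.01 this reproduces the list above: 1% on the first attempt, 1% * e^(99%) on the 100th, and the 100% override once the exponent reaches 1.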
 