> if I frequently change the input based on a deterministic but unknown method, that seems like it would make the patterns more chaotic. if you can coherently explain why it wouldn't be like this please do.

So, I've actually thought about this for a bit and come up with an example.

Let's imagine a very bad RNG that returns numbers in the range [0..2^32-1] (unsigned 32-bit integers):

1. Every 7th number is divisible by 5

2. Every 3rd number is divisible by 2

3. Other numbers have uniform distribution

It'll generate sequences of numbers like this:

x x 2 x x 2 5 x 2 x x 2 x 5 2 x x 2 x x 10

Where "10", "5", and "2" indicate numbers that are divisible by 10, 5, and 2 respectively.

Let's say we now reseed the RNG with itself 11 times (which is basically just generating 11 random numbers and discarding them).

The pattern looks like this:

x 2 x x 2 5 x 2 x x 2 x 5 2 x x 2 x x 10 x

What about 13?

x 5 2 x x 2 x x 10 x x 2 x x 2 5 x 2 x x 2

What about 131?

x 2 x x 10 x x 2 x x 2 5 x 2 x x 2 x 5 2 x

What about 50123?

x 2 x x 10 x x 2 x x 2 5 x 2 x x 2 x 5 2 x

What about 21?

x x x x x x x x x x x x x x x x x x x x x x

Now we're getting somewhere. We've eliminated this statistical bias, but to do it we needed to know the pattern's period - 21, the least common multiple of 3 and 7. Otherwise we're just throwing things at the wall to see what sticks.
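To make this concrete, here's a sketch of such a flawed generator in Rust. The LCG step standing in for the "uniform" part, and the seed, are arbitrary choices of mine - only the periodic divisibility flaws matter.

```rust
// Toy flawed RNG: every 3rd output is forced divisible by 2, every
// 7th divisible by 5, so every 21st is divisible by 10 - mirroring
// the pattern above.
struct FlawedRng {
    counter: u64,
    state: u64,
}

impl FlawedRng {
    fn new(seed: u64) -> Self {
        FlawedRng { counter: 0, state: seed }
    }

    fn next(&mut self) -> u64 {
        self.counter += 1;
        // an ordinary LCG step provides the "uniform" part
        self.state = self
            .state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        let mut n = self.state >> 33;
        // inject the periodic divisibility flaws
        let mut divisor = 1;
        if self.counter % 3 == 0 { divisor *= 2; }
        if self.counter % 7 == 0 { divisor *= 5; }
        n -= n % divisor;
        n
    }
}

fn main() {
    let mut rng = FlawedRng::new(42);
    let v: Vec<u64> = (0..21).map(|_| rng.next()).collect();
    assert_eq!(v[2] % 2, 0);   // 3rd number: divisible by 2
    assert_eq!(v[6] % 5, 0);   // 7th number: divisible by 5
    assert_eq!(v[20] % 10, 0); // 21st number: divisible by 10
    // Discarding any number of outputs that isn't a multiple of 21
    // merely shifts these positions; only a multiple of 21 keeps them.
    println!("flaw positions confirmed");
}
```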

This is a really simple, and perhaps unrealistic, example, but I think it illustrates **why** reseeding an RNG with itself doesn't necessarily make it better. It **can**, though, if you know the exact underlying faults, but it's not straightforward.

The specific flaws I've used in this example aren't that important - the important part is that they're **recurring**, and most RNG flaws are, in one way or another, some kind of pattern that repeats itself, thus reducing the actual randomness of the output. You can substitute the flaws I used for something else, and chances are, you'll see new patterns emerge anyway.

Closing thoughts:

The reason why re-seeding (or skipping) an RNG doesn't make the output more "chaotic" is that RNG output isn't really chaotic in the first place. You're just swapping one pattern for another, not eliminating it (unless you know exactly what the pattern is and specifically compensate for it, but without the source code for the RNG that's pretty hard).

EDIT:

I've kinda gotten interested in this myself, and decided to run some tests. I made a very simple, and very bad RNG:

(this is Rust, but should give the general idea)

Code:

```
fn generate(&mut self) -> u32 {
    // fold the state into a smaller range, then multiply it by a
    // reduced copy of itself - this self-multiplication is the main weakness
    self.state %= 16255027;
    self.state = self.state.wrapping_mul(self.state % 4256233);
    self.state
}
```

If asked to generate numbers in the [0..9] range, the result is abysmal:

Code:

```
0 - 14%
1 - 4%
2 - 15%
3 - 7%
4 - 14%
5 - 5%
6 - 15%
7 - 5%
8 - 15%
9 - 6%
```
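For reference, the tally above can be reproduced with a sketch like this. The seed and sample count are arbitrary choices of mine, so the exact percentages will vary:

```rust
// The weak RNG from above, with outputs bucketed into [0..9].
struct BadRng {
    state: u32,
}

impl BadRng {
    fn generate(&mut self) -> u32 {
        self.state %= 16255027;
        self.state = self.state.wrapping_mul(self.state % 4256233);
        self.state
    }
}

// Tally how often each digit 0..9 comes up over `samples` draws.
fn histogram(seed: u32, samples: u32) -> [u32; 10] {
    let mut rng = BadRng { state: seed };
    let mut counts = [0u32; 10];
    for _ in 0..samples {
        counts[(rng.generate() % 10) as usize] += 1;
    }
    counts
}

fn main() {
    let counts = histogram(12345, 100_000);
    for (digit, count) in counts.iter().enumerate() {
        println!("{} - {:.0}%", digit, 100.0 * *count as f64 / 100_000.0);
    }
}
```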

Let's try fixing it.

Variant A: Skip 1000 numbers each time.

Code:

```
0 - 13%
1 - 5%
2 - 16%
3 - 5%
4 - 14%
5 - 6%
6 - 16%
7 - 5%
8 - 14%
9 - 4%
```

Still bad.
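"Skip 1000 numbers" can be sketched as a thin wrapper over the same generator (the skip count matches Variant A; the seed and sample count here are arbitrary):

```rust
struct BadRng {
    state: u32,
}

impl BadRng {
    fn generate(&mut self) -> u32 {
        self.state %= 16255027;
        self.state = self.state.wrapping_mul(self.state % 4256233);
        self.state
    }

    // Variant A: discard `skip` outputs, then take one.
    fn generate_skipping(&mut self, skip: u32) -> u32 {
        for _ in 0..skip {
            self.generate();
        }
        self.generate()
    }
}

fn skipped_histogram(seed: u32, samples: u32, skip: u32) -> [u32; 10] {
    let mut rng = BadRng { state: seed };
    let mut counts = [0u32; 10];
    for _ in 0..samples {
        counts[(rng.generate_skipping(skip) % 10) as usize] += 1;
    }
    counts
}

fn main() {
    // Skipping only changes *where* in the sequence we sample;
    // the underlying bias pattern is still there.
    println!("{:?}", skipped_histogram(987_654_321, 2_000, 1000));
}
```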

Let's try re-seeding it each step with "random() * 80123"

Code:

```
0 - 17%
1 - 3%
2 - 15%
3 - 5%
4 - 15%
5 - 5%
6 - 15%
7 - 6%
8 - 13%
9 - 5%
```

Re-seed with "random() + random() % 15485863" (15485863 is a prime number, and primes are good for this kinda stuff).

Code:

```
0 - 15%
1 - 6%
2 - 13%
3 - 5%
4 - 15%
5 - 5%
6 - 17%
7 - 5%
8 - 14%
9 - 5%
```

Still bad.
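Per-step self-reseeding (the `random() * 80123` variant) can be sketched like this - note I'm assuming wrapping multiplication for the scaling, and the seed is an arbitrary choice:

```rust
struct BadRng {
    state: u32,
}

impl BadRng {
    fn generate(&mut self) -> u32 {
        self.state %= 16255027;
        self.state = self.state.wrapping_mul(self.state % 4256233);
        self.state
    }
}

fn self_reseeded_histogram(seed: u32, samples: u32) -> [u32; 10] {
    let mut rng = BadRng { state: seed };
    let mut counts = [0u32; 10];
    for _ in 0..samples {
        let n = rng.generate();
        // re-seed each step with the RNG's own output, scaled:
        // the new seed is still a deterministic function of the old state
        rng.state = n.wrapping_mul(80123);
        counts[(n % 10) as usize] += 1;
    }
    counts
}

fn main() {
    println!("{:?}", self_reseeded_histogram(31_337, 100_000));
}
```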

Finally, let's try reseeding it each step with a number from an **actually good RNG**.

Code:

```
0 - 15%
1 - 5%
2 - 15%
3 - 5%
4 - 14%
5 - 5%
6 - 15%
7 - 5%
8 - 15%
9 - 5%
```

Doesn't help at all.

**It isn't fixable.**

Let's try a slightly better RNG:

Code:

```
fn generate(&mut self) -> u32 {
    // fold the state into a smaller range, then multiply by a
    // fixed constant - no self-multiplication this time
    self.state %= 16255025;
    self.state = self.state.wrapping_mul(910385619);
    self.state
}
```

This one, when seeded with a **good** RNG at each step, produces numbers in a uniform distribution. So that's a start.
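That setup can be sketched like this - I'm assuming xorshift64 as the stand-in "good" RNG, and the seed and sample count are arbitrary choices:

```rust
// xorshift64: a simple but decent-quality generator, used here only
// as the external seed source.
fn xorshift64(x: &mut u64) -> u64 {
    *x ^= *x << 13;
    *x ^= *x >> 7;
    *x ^= *x << 17;
    *x
}

// The "slightly better" RNG from above, reseeded before every output.
fn reseeded_histogram(samples: u32) -> [u32; 10] {
    let mut good = 0x9E37_79B9_7F4A_7C15u64; // arbitrary nonzero seed
    let mut counts = [0u32; 10];
    for _ in 0..samples {
        let mut state = xorshift64(&mut good) as u32; // fresh seed each step
        state %= 16255025;
        state = state.wrapping_mul(910385619);
        counts[(state % 10) as usize] += 1;
    }
    counts
}

fn main() {
    let counts = reseeded_histogram(100_000);
    // each digit should land close to 10%
    for (digit, count) in counts.iter().enumerate() {
        println!("{} - {:.1}%", digit, 100.0 * *count as f64 / 100_000.0);
    }
}
```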

But, let's remove the good RNG from the equation:

Code:

```
0 - 13%
1 - 9%
2 - 15%
3 - 13%
4 - 10%
5 - 11%
6 - 9%
7 - 9%
8 - 4%
9 - 9%
```

Welp.

Let's re-seed it with itself 100 times:

Code:

```
// first run
0 - 13%
1 - 9%
2 - 16%
3 - 13%
4 - 10%
5 - 11%
6 - 8%
7 - 8%
8 - 3%
9 - 8%
// second run
0 - 10%
1 - 10%
2 - 9%
3 - 9%
4 - 11%
5 - 10%
6 - 11%
7 - 10%
8 - 11%
9 - 10%
```

Why are the two runs different? Each started from a different initial seed value. In fact, this algorithm is highly dependent on having a quality seed available at the start.

Let's try re-seeding it with this each run: "((rng.generate() ^ 3279046249) ^ 273646861) ^ 1939583911" (all of these are primes, too)
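One small observation about that formula: XOR is associative, so chaining several XOR constants collapses into a single XOR with their combined value - the three primes effectively act as one fixed mask applied to the RNG's output:

```rust
fn main() {
    let x: u32 = 0xDEAD_BEEF; // stand-in for rng.generate()'s output
    let chained = ((x ^ 3279046249) ^ 273646861) ^ 1939583911;
    let combined = x ^ (3279046249 ^ 273646861 ^ 1939583911);
    // both forms always agree, for any x
    assert_eq!(chained, combined);
    println!("equivalent");
}
```

So this re-seed step amounts to "XOR the output with one constant", which reshuffles the state rather than adding any new randomness.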

Code:

```
0 - 9%
1 - 11%
2 - 10%
3 - 9%
4 - 11%
5 - 10%
6 - 10%
7 - 9%
8 - 9%
9 - 11%
```

Still some bias - with some seeds, certain numbers have only a 7% chance of appearing. But it seems much less dependent on the initial seed now, so that's a plus.

So, what I think you can do is regularly re-seed the WC3 RNG with a better RNG, and you'll actually get better output - though it won't be as good as using a good RNG to begin with, and it may still depend on how good or bad the initial seed is.

My own conclusion from this is that this whole topic is rather complicated. Some RNGs marginally improved with the techniques you were proposing; others were just unfixable. It depends a lot on the technique you're using and the nature of the bias. But you can try throwing a xorshift into WC3's RNG once in a while - it might make things better.