
Best algorithm for timer system?

Status
Not open for further replies.
Skip everything and read this post instead

The timer system algorithm I came up with is a binary heap of lists.


The heads of the lists are stored in the binary heap and include timestamps, so you know precisely how much time is left on a timer. The only running timer at any given time is the root of the heap.

The lists on the heap are grouped by remaining time: each one holds timers sharing the same constant timeout. For example, if a timer were to expire and repeat, it'd move to the back of the list it's on.
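As a sketch of this heap-of-lists layout (Python used purely for illustration; the names `TimerQueue` and `TimerHeap` are hypothetical, not from the actual system):

```python
import heapq

# Hypothetical sketch: one queue per distinct timeout, keyed into a binary
# heap by the absolute expiration time of its front timer.
class TimerQueue:
    def __init__(self, timeout):
        self.timeout = timeout      # constant for every timer in this queue
        self.timers = []            # FIFO list of timers sharing this timeout

class TimerHeap:
    def __init__(self):
        self.heap = []              # (next_expiration, tiebreak, queue) entries
        self.by_timeout = {}        # timeout -> TimerQueue

    def start(self, now, timeout, callback):
        q = self.by_timeout.get(timeout)
        if q is None:
            q = self.by_timeout[timeout] = TimerQueue(timeout)
            # id(q) breaks ties so queues never get compared directly
            heapq.heappush(self.heap, (now + timeout, id(q), q))
        q.timers.append(callback)

    def next_expiration(self):
        # only the root of the heap ever needs a running timer
        return self.heap[0][0] if self.heap else None
```

Repeating a timer just means moving it to the back of its queue; the heap itself only ever tracks one node per timeout.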



So... when allocation occurs, it first takes the timer's timeout and checks whether a timer queue already exists for it. If there isn't one, it creates that queue.

The timer queue is a segmented list (timers can be started at different times). It looks at the last list element to see if its timestamp matches the created timer's stamp. If they match, the created timer is added to that segment. If not, a new segment is made and the timer is added to it.

Each segment contains a trigger. When a timer is added to a segment, its boolean expression code is added to that segment's trigger. When a segment expires, the trigger for that segment is evaluated.
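The segment lookup described above might look like this in outline (a Python sketch with a hypothetical `add_to_queue` helper, not the actual JASS):

```python
# Hypothetical sketch of the segment lookup: timers started on the same tick
# (same timestamp) share one segment; in the JASS version each segment would
# also own one trigger that collects the timers' boolean expressions.
def add_to_queue(segments, timestamp, timer):
    """segments: list of dicts {'stamp': float, 'timers': list}, oldest first."""
    if segments and segments[-1]['stamp'] == timestamp:
        # same start tick as the newest segment: join it
        segments[-1]['timers'].append(timer)
    else:
        # different start tick: open a new segment
        segments.append({'stamp': timestamp, 'timers': [timer]})
```

Only the last segment ever needs checking, since new timers always carry the newest timestamp.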

You don't loop through timers on a segment. Data retrieval is just this:
JASS:
set currentNode=currentNode.next
return currentNode

What this means is that when a boolean expression on the trigger fires and retrieves data, it ends up preparing the next node. When the trigger has finished being evaluated, currentNode is the last node on the segment.
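The same cursor trick, sketched in Python for clarity (the `Segment` class here is hypothetical):

```python
# Sketch of the cursor trick: each handler's first act is to read its data,
# and that read advances the shared cursor to the next node. After all
# handlers have run, the cursor rests on the segment's last node, with no
# explicit loop over the timers themselves.
class Segment:
    def __init__(self, nodes):
        self.nodes = nodes
        self.cursor = -1            # sits just before the first node

    def data(self):
        self.cursor += 1            # advance, then hand back the current node
        return self.nodes[self.cursor]

    def fire(self, handlers):
        for h in handlers:          # stands in for the trigger evaluation
            h(self.data())          # each evaluation pulls the next node
```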

If a timer is destroyed, it is added to a destruction list (for recycling and removing from trigger). If a timer is paused or moved to another segment, it is added to a removal list (for removing itself from trigger only).


When a segment is finished firing (after trigger evaluation), the cursor will always rest on the last node of that segment (destruction/pause/movement handles moving the cursor back to the previous node if the removed node was the current expiration node). If that last node happens to be the segment head, the expired segment is empty and can be deallocated. If not, a range operation can be performed using the head of the segment and that last node, pushing the range behind the segment's head (the node inside the heap). The heap node's timestamp is then updated and it is cycled through the heap to its new position.

In the case of list removal, the heap node's last node is then checked to see if it is the head. If it is, the heap node is empty and can be deallocated; in that case, it is removed from the timer heap.

Triggers continue to be evaluated until the remaining time of the current heap root is no longer 0 (accurate to 6 decimal places).
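The expiration loop can be sketched as follows (Python for illustration; the `expire` function and heap entry layout are my assumptions):

```python
import heapq

# Sketch of the expiration loop: keep firing segments while the root of the
# heap has, to six decimal places, no time remaining.
def expire(heap, now, fire):
    """heap: list of (expiration_time, queue) tuples kept as a heapq."""
    while heap and round(heap[0][0] - now, 6) <= 0:
        stamp, queue = heapq.heappop(heap)
        fire(queue)                 # evaluate that segment's trigger
        # a repeating queue would be pushed back with an updated timestamp
        # and thus cycled to its new position in the heap
```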



So... this makes it so that there is only 1 node in the heap per timeout, 1 trigger per set of expiring timers (segments), minimal loops, and minimal building/clearing/destruction of triggers. And of course, all of this runs on 1 timer.


In most cases, there will only be 1 binary heap operation (cycle/delete) and 1 list operation ^^.



So, anyone know of any better algorithms, or is this the best one for timers?


There is also the timer wheel algorithm, but that only seems useful in other languages where you depend on a sleep command and ticks. Really, I almost think the algorithm I came up with is better than the timer wheel, as it takes up far less memory and is only minimally slower (binary heap operations).
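For comparison, a hashed timer wheel in its simplest form (a generic sketch, not any particular implementation; the memory cost mentioned above is the slot array):

```python
# Minimal hashed timer wheel sketch: time is divided into fixed ticks and
# each slot holds the timers due on that tick (mod the wheel size). Memory
# is proportional to the slot count, regardless of how many timers exist.
class TimerWheel:
    def __init__(self, slots, tick):
        self.slots = [[] for _ in range(slots)]
        self.tick = tick            # seconds per wheel tick
        self.current = 0            # index of the slot for "now"

    def start(self, delay, callback):
        ticks = int(round(delay / self.tick))
        slot = (self.current + ticks) % len(self.slots)
        self.slots[slot].append(callback)

    def advance(self):
        # called once per tick; fires everything in the slot we land on
        self.current = (self.current + 1) % len(self.slots)
        due, self.slots[self.current] = self.slots[self.current], []
        for cb in due:
            cb()
```

The trade-off is visible here: insertion and expiration are O(1), but precision is capped at the tick size and the wheel must be advanced every tick even when empty.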

edit
The one thing that I don't like in this design is the use of timestamps. If anyone can think up a way to do this w/o the use of timestamps, that'd be amazing ; D.
 
No idea; it probably is the best.

Here are some worse alternatives:

1. Firing 10 timers at once and taking an average after every 5 seconds, repeating

2. Doing a loop
Event: Elapsed game time = 0.1 seconds
Action:
Set X = X + 0.1
Add to this trigger the event "Elapsed game time = X seconds" (or use this trigger's execution count, thus removing the variable)


3. Changing game constants so that a day passes every 0.1 seconds, then checking ticks via that

4. Using your system but in a physical form, by making a unit cast separate abilities at the timers' intervals, instead of an actual timer


As a side note, wouldn't this be best suited for the Triggers & Scripts forum instead of the War3 Help Zone?
 
why would anyone need timers which don't run every 0.03/0.025?
(except some very slow timers which would not run often enough to be a problem)
you could reduce microlags by spreading the actions a little so it's not like

0.01: 0 actions
0.02: 0 actions
0.03: 666 actions
...

but

0.01: 222 actions
0.02: 222 actions
0.03: 222 actions
...

instead
(or smaller intervals)
maybe it's easier for the cpu to have a constant load rather than working in pulsed mode
less data -> more of the total amount of data can be worked on in the faster but smaller caches
though I think the pc should be able to buffer 0.03 sec of data
at least it would reduce the electricity costs :p
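The spreading idea above can be sketched as round-robin assignment of handlers to sub-ticks (Python, with a hypothetical `spread` helper):

```python
# Sketch of the load-spreading idea: instead of running all handlers on one
# tick (666, 0, 0, ...), assign them round-robin across sub-ticks so every
# tick carries roughly 1/n of the work (222, 222, 222, ...).
def spread(handlers, subticks):
    buckets = [[] for _ in range(subticks)]
    for i, h in enumerate(handlers):
        buckets[i % subticks].append(h)   # round-robin assignment
    return buckets
```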

and why would timers ever be a problem?
do they require so much performance compared to the code they execute?
 
Many people ask what the point of timer systems is. Timers perform well enough and they are light; handle id retrieval and hashtable reads are also extremely fast. In fact, none of the timer systems submitted anywhere are truly stable. They can easily crash wc3 in certain scenarios, like timers started at different times and many timers of different timeouts. Most of them also perform very poorly for timers with large timeouts.

I now examine Merged Timers, KT2, and natives in a scenario that starts timers off at intervals of .00125 (to hurt KT2 and Merged Timers as much as possible) with timeouts at intervals of .00125 (again to hurt them both as much as possible).

Data Retrieval:
Merged Timers
JASS:
static method operator data takes nothing returns integer
    set en=tn[en]
    return lv[en]
endmethod

vs

Native
local thistype this=LoadInteger(h,GetHandleId(GetExpiredTimer()),0)

Code:
6500 Timers
Timeouts: 640
Range: .00125 to .8

KT2: Crash
Merged Timers 2: 33-37
Merged Timers 3: 39-42
Merged Timers 3 Data: 20-21
Native: 39-42
Native Data: 20-21

What can immediately be seen is that KT2 crashed, which was to be expected. Its only merges took place between .00125 and .3, and those were constant merges. This made up only about 2437 of the 6500 timers, meaning the remaining 4063 timers ran solo with extra trigger evaluations and overhead, which caused KT2 to crash in a major way. Furthermore, it ran timers above .3 on triggers; 4063 expiring triggers are much slower than 4063 expiring timers, which contributed to the crash.

Merged timers performed better because it uses a different merging algorithm than KT2 and it does not use any timer trigger events.

The difference between merged timers 2 and merged timers 3 is the tick merging algorithm. 3 has an improved merge algorithm over 2 (relative to timeout rather than constant).

What can be seen in this example is that first, the timeout merge rate is 6500/640 or 10.15625. This means that there are only 640 active timer queues from .00125 to .8 in intervals of .00125. From here, of these, many of them share the same tick. It isn't until about .61 that timeouts can even have 2 ticks in merged timers 3. This means that there are only around 304 timers in the upper range (.61 to .8) and 487 timers in the lower range, making for about 791 timers total. The ratio between the native timers and merged timers is 1 merged timer for every 8.22 native timers in this scenario. Given that they are virtually identical in this case, this ratio of 1 merged timer to 8.22 native timers is about the minimum ratio for this timer system to compete with natives. Furthermore, all of the fast timers are merged together, meaning that the timers that expire the most often expire about as often as timers of larger timeouts (since everything here is in a focused area) (.61 has 2 ticks where .3 has 1 tick, so .3 expires 2x just like .61).
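The merge arithmetic quoted above can be checked directly:

```python
# Verifying the figures quoted above: 6500 timers spread over 640 distinct
# timeouts, with 304 merged timers in the upper range (.61 to .8) and 487
# in the lower range.
timers, timeouts = 6500, 640
merge_rate = timers / timeouts      # timers sharing each timeout: 10.15625
merged = 304 + 487                  # total merged timers: 791
ratio = timers / merged             # native timers per merged timer: ~8.22
```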

On to the next example
Code:
5000 Timers
Timeouts: 320
Range: .00125 to .4

KT2: Crash
Merged Timers 3: 20-21
Merged Timers 3 Data: 10-13
Native: 20-21
Native Data: 1

In this case, there are only 320 timeouts and all of them share the same ticks. In this example, the ratio is 1 to 15.625. While this isn't very noticeable in the timer expirations, it becomes very noticeable in the data retrieval.

Another interesting fact is that in the data retrieval test, merged timers 3 slowly climbed from 1 fps to 10-13 fps. This suggests that globals run on a treap.

In another scenario
Code:
250 Timers
Timeouts: 8
Range: .00125 to .01

Merged Timers 3: 36-42
Native: 36-41

There are only 8 timeouts, meaning that the ratio is 1:31.25. In this case, merged timers perform slightly better than natives.

It is perhaps only below a ratio of about 1:2 that merged timers would perform quite a bit worse than natives.

In the next scenario, I put a max merge rate into the system to see the minimum number of merges before the system can actually perform as well as natives.

Recall that the merge rate is not constant across all timeouts (<.61 will auto merge everything), so the ratio required will actually be different from the ratios above. Also keep in mind that because the smallest timeouts always merged, Merged Timers could get away with a smaller ratio.

The system requires a ratio of at least 1:15 to perform relatively well (30 fps on merged timers vs 40 fps on natives). This merge rate is across all timers in the system. Again, recall that a timer that expires much more often costs much more than a timer that expires much less often. Merged Timers lowers the % leniency as timeouts grow larger, meaning fewer timers are merged into large timeouts from small timeouts. The leniency at .3 is 100% and at .6 is 50%, but that 50% effectively becomes 100%, since a given timer is only ever at most .3 seconds away from .6 (the case where .6 is right in the middle of its expiration, with .3 seconds left). So the real leniency for a timer is its % leniency * 2, meaning .3 is 200% (100% on each side) and .6 is 100% (50% on each side). 1 second is around 62% (31% on each side).
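A hypothetical model that reproduces the quoted leniency figures (assuming one-sided leniency falls off as 0.3/timeout; this curve is my guess fitted to the numbers above, not taken from the system's code):

```python
# Hypothetical model of the quoted leniency figures: one-sided leniency
# shrinks in proportion to the timeout (0.3 -> 100%, 0.6 -> 50%, 1.0 -> ~31%),
# and the effective merge window is twice that, since a timer can sit on
# either side of the expiration.
def leniency(timeout, base=0.3):
    return base / timeout           # one-sided leniency as a fraction

def effective(timeout):
    return 2 * leniency(timeout)    # both sides of the expiration
```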

When examining timers further, the worst case scenario for natives is much more dire than that for merged timers and the best case scenario for merged timers is much better than that of natives.

For example, if 6500 timers were to all start at the exact same time with the same timeout, then that'd translate to 1 timer with 1 trigger evaluation on it for Merged Timers, which would of course drastically outdo natives.

If there were 6500 timers of all different timeouts dispersed, the timeouts would have to be far enough apart not to merge together (as timeouts grow, so does the tolerance for merges), and they'd have to start at far enough intervals not to merge (as timeouts grow, so does the tolerance for tick merges). What this means is that it's pretty much impossible for merged timers to lag in a case where natives do not: in any scenario, merged timers will perform as well as natives or better, whether timeouts are small or large. This point was proven in the above scenarios, which aimed to put merged timers at the greatest possible disadvantage while keeping the difference noticeable in the fps.
 
(quoting D4RK_G4ND4LF's post above about spreading actions across ticks)

Wow, is it true? So shorter timer = less lag? Nestharus, any comment on this?


for example, I have a big periodic event:
Trigger: every 3 seconds
Condition: None
Action:
- select all xx_unit as temp_group
-- if healer-unit has full mana
-- then cast spell (e.g. create dummy unit, etc.)

destroy temp_group
- select all yy_unit as temp_group
..... blah blah again

should I change the above trigger to every 0.3 seconds?


I have another similar periodic event:
trigger every 3 seconds
check every unit in Creep_Group (several hundred units)
if they are doing nothing, send them to the next checkpoint
check every unit in Fighter_Group (a few hundred units)
if they are doing nothing, send them to the next checkpoint

should I change it to 0.3 seconds too?
 
Actually HuanAk, that has nothing to do with what he said :/

D4RK_G4ND4LF was just saying that it's better to divide the "load" over time than to force the cpu to do everything at once.

For example:

- 10000 iterations
- 0 iterations
- 0 iterations

- 3333 iterations
- 3333 iterations
- 3333 iterations

The second one is better than the first (not saying it's any good).
 