How you implement an AI all depends on what sort of map you are making.
If it is an AoS (which, since you reference DotA Allstars, I am guessing it is) then you need a trigger system to control the AI player's hero.
Fundamentally there is a periodic "think" trigger which is run for each AI player. This should decide the player's overall goals, update the AI state and issue orders when appropriate.
A simple example for an AoS might look at gold and health:
- If state is "idle" and health > 80%, order the hero to lane and set state to "going to lane".
- If at the lane and state is "going to lane", set state to "laning".
- If state is "laning" and gold > 1,000, set state to "going to shop" and order the hero to the shop.
- If state is "going to shop" and the unit is at the shop, run some shopping logic (the AI could simply buy everything instantly) and set state to "idle".
- If state is "laning" and health < 25%, set state to "retreating" and order the hero to the health fountain.
- If state is "retreating" and at the fountain, set state to "idle".
As one can guess, this thinking process is a simple state machine. It will certainly do something, and be harder to kill than the "beginner" AI in Heroes of the Storm, but it will still be easy to kill and will show no map awareness at all.
While in each state you make sure the hero progresses towards another state. For example, if it is in the "retreating" state you check that it is actually retreating every time the think process runs, and if it is not then you re-issue the order to run back to base.
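In Python-flavoured pseudocode (purely for illustration; an actual map would do this with periodic GUI/JASS triggers, and names like order_move and health_pct are invented stand-ins for whatever your map actually provides) that think trigger might look roughly like this:

    from dataclasses import dataclass

    LANE, SHOP, FOUNTAIN = "lane", "shop", "fountain"

    @dataclass
    class AiPlayer:
        state: str = "idle"
        health_pct: float = 1.0   # health fraction, 0.0 - 1.0
        gold: int = 0
        position: str = FOUNTAIN

    def order_move(ai, where):
        # Stand-in for an actual move order; a real hero would path there over time.
        ai.position = where

    def do_shopping(ai):
        ai.gold = 0               # the AI could simply buy its whole build instantly

    def global_think(ai):
        # One pass of the state machine; run this every few seconds per AI player.
        if ai.state == "idle" and ai.health_pct > 0.80:
            order_move(ai, LANE); ai.state = "going to lane"
        elif ai.state == "going to lane" and ai.position == LANE:
            ai.state = "laning"
        elif ai.state == "laning" and ai.gold > 1000:
            order_move(ai, SHOP); ai.state = "going to shop"
        elif ai.state == "going to shop" and ai.position == SHOP:
            do_shopping(ai); ai.state = "idle"
        elif ai.state == "laning" and ai.health_pct < 0.25:
            order_move(ai, FOUNTAIN); ai.state = "retreating"
        elif ai.state == "retreating":
            if ai.position == FOUNTAIN:
                ai.state = "idle"
            else:
                order_move(ai, FOUNTAIN)   # re-issue the order so it keeps progressing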
One could expand it by adding more states to deal with more cases. For example you might add an "off-lane" state for dealing with creeps, or a "prowling" state for hunting down wandering or low health heroes. One could add states unique to each hero to represent that hero's capabilities, e.g. a pushing hero might not have a "prowling" state as it is terrible at chasing and killing other heroes. You could also add extra break-out transitions, for example if "retreating" and the hero wanders across an enemy hero which is as good as dead then you might want to switch to "prowling". You could also add extra pieces of state, such as a "victim" unit, so that when "prowling" the hero can focus on and chase down a specific target.
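If you go down that route the AI state grows beyond a single word. A minimal sketch, with invented activity names and fields, might be:

    from dataclasses import dataclass, field
    from typing import Optional

    ACTIVITIES = {"idle", "going to lane", "laning", "off-lane", "prowling",
                  "going to shop", "retreating", "evading"}

    @dataclass
    class AiState:
        activity: str = "idle"
        victim: Optional[str] = None     # unit being hunted while "prowling"
        allowed: set = field(default_factory=lambda: set(ACTIVITIES))  # per-hero capabilities

    def try_switch_to_prowl(state, target):
        # Break-out transition: e.g. while "retreating" the hero stumbles across
        # a nearly dead enemy hero, so it starts hunting that unit instead.
        if "prowling" in state.allowed:
            state.activity = "prowling"
            state.victim = target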
At some stage you will have to deal with deciding which state to move into from the current one. For example when "retreating" that low health hero could be an easy victim, but it could also still be strong enough to turn the tables on the player. In this case you might want to consider which hero it is, its current health and its item values, compare them with yours, and work out a probability of how successful attacking it would be. If there is a good success rate then switch to "prowling", otherwise keep "retreating". If the success rate is very bad, to the point that the enemy is likely to turn around and kill your hero, you might want to move into another state, "evading", during which you use escape skills and avoid running into it. This is a sort of fuzzy logic and a simple form of decision network. You could refine the results by adding even more factors such as enemy reinforcements, allied reinforcements, self-sacrifice bonuses (e.g. if dying can stop them from killing two other heroes) and so on.
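A very rough sketch of that success estimate, with completely made-up weights and thresholds that you would tune for your own map:

    def attack_success_chance(my_hp, my_item_value, foe_hp, foe_item_value,
                              allies_nearby=0, enemies_nearby=0):
        # Compare rough strength (health percentage plus a crude item worth),
        # nudged by reinforcements on either side.  Returns a value in 0..1.
        my_power  = my_hp + my_item_value / 100.0 + 10 * allies_nearby
        foe_power = foe_hp + foe_item_value / 100.0 + 10 * enemies_nearby
        return my_power / (my_power + foe_power)

    def next_state_when_retreating(chance):
        if chance > 0.7:
            return "prowling"    # probably an easy victim, hunt it down
        if chance < 0.3:
            return "evading"     # it will likely turn the tables, actively escape
        return "retreating"      # not worth the risk, keep heading home

    # Example: 40% health with 2000 gold of items versus a 10% health hero
    # carrying 1500 gold of items.
    chance = attack_success_chance(my_hp=40, my_item_value=2000,
                                   foe_hp=10, foe_item_value=1500)
    print(chance, next_state_when_retreating(chance))   # roughly 0.71 -> "prowling"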
The main "states" can be thought of as activities. In a true AI the states would be much more abstract and actuator commands issued as the result of the command choosen being the most favourable to send based on previous states and the current sensor input. Such an implementation will potentially create an AI with the capabilities of a human however it is extremely complex, to the point it is not viable with current technology (hence why terminators have not killed us all, yet). Instead you use the activity abstraction to make decision making much easier and simpler to implement. The entire AI then becomes selecting and configuring the right activities based on sensor input and current activity state, which is the sort of system I described above. It still makes intelligent decisions, but at a high level as opposed to a truly intelligent system such as yourself which has to make decisions ultimately at a very low level (such as move muscles to press hotkey, move mouse to move cursor to right place of screen etc).
Alongside this global think loop you have a "tactical" think loop. Whereas the global think loop is responsible for macro (overall game strategy), the tactical think loop is responsible for micro (individual unit use). It is responsible for getting the hero to interact with what is around it, e.g. casting abilities, avoiding enemy abilities or last hitting. This can be implemented much more simply than the global think loop as it might not even need persistent state, making all decisions purely from the percepts available to it at any given tick.
An example tactical AI loop could be as follows:
- If the hero is closer to the enemy core than the nearby minions, order it a little way back towards the friendly core (stay behind the minions).
- If 7 or more units are bunched together, or 2 or more heroes, cast Blizzard on them.
- If the hero's health is below 50%, cast Death Pact on the lowest health friendly minion.
- If a friendly minion has less than 10% health, attack it (deny the kill).
- If an enemy minion has less than 10% health, attack it (last hit).
- If there is an AoE danger marker (you will need to make such a system yourself) and the unit is inside its area, order it to move (or Blink) out of the marked area (avoid AoE abilities).
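A sketch of that tactical tick, evaluated top to bottom so earlier rules win. Positions are simplified to one dimension and every helper, record and threshold here is an invented stand-in; the AoE danger markers in particular are a system you would have to build yourself:

    from dataclasses import dataclass

    @dataclass
    class Unit:
        x: float             # position along the lane (1D for simplicity)
        hp: float            # health fraction, 0.0 - 1.0
        friendly: bool
        is_hero: bool = False

    def tactical_think(hero, units, enemy_core_x, friendly_core_x, danger_zones):
        friendly_minions = [u for u in units if u.friendly and not u.is_hero]
        enemy_minions    = [u for u in units if not u.friendly and not u.is_hero]
        enemy_units      = [u for u in units if not u.friendly]

        # Stay behind your own minion line.
        if friendly_minions and all(abs(hero.x - enemy_core_x) < abs(m.x - enemy_core_x)
                                    for m in friendly_minions):
            return ("move", friendly_core_x)

        # Big AoE on a clump of units or a pair of heroes (clump check simplified).
        nearby = [u for u in enemy_units if abs(u.x - hero.x) < 300]
        if len(nearby) >= 7 or sum(u.is_hero for u in nearby) >= 2:
            return ("cast", "Blizzard", nearby)

        # Sustain by consuming the weakest friendly minion.
        if hero.hp < 0.50 and friendly_minions:
            return ("cast", "Death Pact", min(friendly_minions, key=lambda u: u.hp))

        # Denies first, then last hits.
        for pool, label in ((friendly_minions, "deny"), (enemy_minions, "last hit")):
            dying = [u for u in pool if u.hp < 0.10]
            if dying:
                return ("attack", label, dying[0])

        # Step (or Blink) out of any marked AoE danger zone the hero is inside.
        for lo, hi in danger_zones:
            if lo <= hero.x <= hi:
                return ("move", lo - 1 if hero.x - lo < hi - hero.x else hi + 1)

        return None              # nothing worth doing this tick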
Your tactical decision loop can also vary based on the current activity the global loop has decided on. For example, in the "retreating" state you might want to hit Windwalk if enemies are nearby, or Blink over enemies if they walk at you. In the "prowling" state you might want to save kill-securing moves until the victim is on low health. While "retreating" you might also be more liberal with ability use, casting expensive area moves on single targets chasing you, since your mana will be replenished when you heal.
One could make a simple list of tests, ordered by priority of execution, for the tactical AI. I think this is how Heroes of the Storm implements it and it is how StarCraft II units work (so the idea is pretty solid and should work in WC3). Of course one could use a decision network to decide which action is best, which could technically be more correct, however it is also a lot more complicated and pretty solid results can be obtained from a simple sequential list of tests.
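One way to structure such an ordered list of tests, again only a sketch with stand-in conditions and actions, is a plain list of (condition, action) pairs walked in order until one fires; activity-specific rules (covering the earlier point about the "retreating" state) can simply be placed in front of the shared ones:

    # Walk the rules in priority order and perform the first action whose
    # condition holds.  The context dict keys here are invented placeholders.
    def first_matching_action(ctx, rules):
        for condition, action in rules:
            if condition(ctx):
                return action(ctx)
        return None

    # Shared rules, highest priority first.
    BASE_RULES = [
        (lambda c: c["hp"] < 0.5 and c["weak_friendly_minion"], lambda c: "cast Death Pact"),
        (lambda c: c["weak_enemy_minion"],                       lambda c: "last hit"),
    ]

    # Extra rules only used while the global loop says we are retreating.
    ACTIVITY_RULES = {
        "retreating": [
            (lambda c: c["enemies_nearby"] > 0, lambda c: "cast Windwalk"),
        ],
    }

    def tactical_tick(activity, ctx):
        rules = ACTIVITY_RULES.get(activity, []) + BASE_RULES
        return first_matching_action(ctx, rules)

    # Example: while retreating with an enemy on our tail, Windwalk wins out.
    print(tactical_tick("retreating", {"hp": 0.3, "weak_friendly_minion": True,
                                       "weak_enemy_minion": False, "enemies_nearby": 1}))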
There are several reasons why it is good to divide the thinking process like this. The first is modularity: the two loops can be developed separately from each other. Then there is reuse, since global think loops can generally be recycled between heroes whereas each hero likely needs its own tactical think loop to use its abilities appropriately. Then there is reduced overhead, since global think loops can have a period of 1 to 10 seconds whereas a tactical think loop might have a period of 0.1 to 1 second. Finally there is the object relationship, since there is only ever one global loop per player, but depending on the game there may be multiple tactical loops per player, for example in a dual-hero mode (one for each hero) or when a summon is involved (one for the hero and one for the summon).
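As a sketch of how the pieces fit together (the periods are just example values, and the timers are simulated here rather than using real map triggers):

    GLOBAL_PERIOD = 2.0      # seconds between strategic decisions
    TACTICAL_PERIOD = 0.25   # seconds between micro decisions

    class AiController:
        def __init__(self, units):
            self.units = units            # one tactical pass per controlled unit
            self.activity = "idle"

        def global_think(self):
            print("picking an activity for", self.units)

        def tactical_think(self, unit):
            print("micro for", unit, "while", self.activity)

    def run(ai, total_time=1.0, dt=0.05):
        # Fixed-step simulation of two timers with different periods; an actual
        # map would just register two periodic triggers.
        next_global = next_tactical = t = 0.0
        while t <= total_time:
            if t >= next_global:
                ai.global_think(); next_global += GLOBAL_PERIOD
            if t >= next_tactical:
                for unit in ai.units:
                    ai.tactical_think(unit)
                next_tactical += TACTICAL_PERIOD
            t += dt

    run(AiController(["hero", "summon"]))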
In the end the AI will never be perfect. It will always be exploitable, and act stupid at times. However it can still be good enough to give newbies a run for their money which probably justifies all the work.