
Feature view design decisions

Level 26
Joined
Aug 18, 2009
Messages
4,097
I have been pondering some design decisions concerning my method of mapmaking. I have used the basic layout for my major project for some time now and find it very handy. But when I try to generalize it, I have to think about what I can and still want to assume, in order to strike a balance between convenience and universality.

So first, an explanation of the current situation:

In the World Editor you face a technical view. There is an Object Editor where you create objects, a Trigger Editor where you write triggers, an Import Manager where you import resources, and so on. When you try to realize some feature for your map, like a custom ability, it usually consists of multiple technical components: one or more script pages, corresponding objects, textures, sounds, models etc., which you need to integrate. A disadvantage of this paradigm is that you would normally work on one feature at a time, and the technical view makes that troublesome because the parts are mixed in with the components of different features. There are also no strong references in the World Editor: deleting your custom ability does not erase all its components. The concept of your custom ability is virtual to begin with; there is no technical mechanism tying the parts together, so you have to do it manually.

What I deem rather effective, although that also depends on the map, is an organization in features. You have a directory structure like:

  • featureA
  • featureB
    • featureB_sub

and each feature can possess multiple components or even a sub-feature, as depicted above:

  • featureA
    • some unit for featureA
    • some script for featureA
  • featureB
    • some item for featureB
    • some comment for featureB
    • featureB_sub
      • some whatever for featureB_sub

That makes your imaginary feature compounds tangible under a label and provides a scoped view.

I display this structure on the operating system: each feature corresponds to a folder, each component is a file. An IDE like Eclipse can import the file system by linking to the project folder. Using the operating system is good for integrity, too, since you won't be able to corrupt everything at once, and a main desire in general is to be able to edit multiple components in parallel.
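As a minimal sketch of that mapping, assuming nothing beyond "every directory is a feature, every file a component" (the function name collect_components is made up for illustration):

Code:
import os

def collect_components(root):
    # Walk the feature tree: every directory is a feature,
    # every file inside it is a component of that feature.
    features = {}
    for dirpath, dirnames, filenames in os.walk(root):
        feature = os.path.relpath(dirpath, root)
        features[feature] = [os.path.join(dirpath, f) for f in filenames]
    return features

# e.g. collect_components('project')['featureB/featureB_sub']
# -> ['project/featureB/featureB_sub/some_whatever']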

A compiler with a couple of tools translates and merges the data for wc3. Currently, there are two relevant component types: one is script code in textual form (like vjass, wurst), the other is object definitions in the form of data tables. The latter provide additional script code @compiletime, which must be imported in the local scope. Example:

featureA
  • unitA
    • id = 'PlGn'
    • name = "Placid Gnoll"
    • life = 20
    • moveSpeed = 270
  • scriptA
    • Code:
      scopeA
          import unitA_gen.script
      
          func createLocalUnit(player owner, real x, real y, real ang)
              CreateUnit(owner, unitA.id, x, y, ang)

The reverse is not necessary: object definitions do not need to know about scripts. If they depend on some variable, that variable is exported into an object, too. Objects can reference each other; the current syntax for that is '<pathToObject:field>'.
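To illustrate how the compiler could resolve such references, here is a rough sketch in Python; the flat objects dictionary and the resolve helper are assumptions for illustration, not the actual tool:

Code:
import re

# Hypothetical flat view of two parsed object tables; in the real
# pipeline these would come from the component files.
objects = {
    'featureA/unitA': {'id': 'PlGn', 'name': 'Placid Gnoll',
                       'life': 20, 'moveSpeed': 270},
    'featureB/itemB': {'name': "Club of the <featureA/unitA:name>"},
}

REF = re.compile(r"<([^:>]+):([^>]+)>")

def resolve(value):
    # Replace every '<pathToObject:field>' reference with the
    # referenced object's field value (no cycle detection here).
    if not isinstance(value, str):
        return value
    return REF.sub(lambda m: str(resolve(objects[m.group(1)][m.group(2)])), value)

print(resolve(objects['featureB/itemB']['name']))  # Club of the Placid Gnoll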

Issue 1:
How to integrate the object data into the script? The problem is that the script files span their own network of scopes, which may not be aligned with the one I described above. However, unless you want the highest level of explicitness, the object data should be bound to a script scope.

  • option A:
    state the target file/script scope in the object as a special field -> bad for copy-paste, adds a coupling 'object -> script', contradicting what I just said, that the object should not need to know the script
  • option B:
    write the import line into the script like in the example above -> kind of inconvenient, redundant, more explicit in showing what objects are available, not too good for copy-paste since there are usually multiple objects that need to be imported
  • option C:
    auto-collect all the objects' scripts into a single one/under a single fixed label and import that one via a single line (see the sketch after this list) -> more convenient, less explicit, better for copy-paste, does not work with script files which are on the same feature level unless all of them are targeted
  • option D:
    auto-import the objects into all of the local script files' scopes and have the compiler build wrappers to avoid duplicate imports, because e.g. initializers must be unique -> most convenient and copy-paste friendly, least explicit, additional language dependency and requirements on the compiler (need to transform all the references or create a clusterf*ck of wrappers)
  • option E:
    force the condition of having only one script scope per folder and auto-import everything therein -> less flexibility, highest implicitness, requirement on the scripts
  • option F:
    do not use the script scopes as the direct receiver of the object data but instead invent some extra syntax, so the script can call the object outputs while maintaining the short, local paths of the feature the script file is in -> special syntax on every reference (probably just some $ prefix plus a relative path in case of subfeatures); the severe objection here is that, because the object data is not strongly tied to the script scope, foreign script scopes have to call the object data directly or use the special syntax, too, and therefore need additional information about the implementation
  • option G (currently in use):
    have an extra dedicated folder per script scope inside the feature folder, so all the object data belonging to the script scope gets dumped there -> very implicit, redundant; the folder must be specifically marked, which grants script files a special status; it also raises the question whether objects referencing each other should account for the supplementary encapsulation in the path declaration that this script scoping induced or not
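To make option C more concrete, a sketch of what the auto-collection step could look like, assuming the generated object scripts follow a *_gen.script naming convention and the wrapper uses the scope/import syntax from the example above (both are assumptions for illustration):

Code:
import os

def build_objects_wrapper(feature_dir, label='objects_gen'):
    # Gather every generated object script in the feature folder into
    # one wrapper scope, so user scripts only need 'import objects_gen'.
    gen_scripts = sorted(f for f in os.listdir(feature_dir)
                         if f.endswith('_gen.script'))
    lines = ['scope ' + label]
    lines += ['    import ' + f[:-len('.script')] for f in gen_scripts]
    path = os.path.join(feature_dir, label + '.script')
    with open(path, 'w') as out:
        out.write('\n'.join(lines) + '\n')
    return path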

That's enough for starters. More questions maybe later on.
 
Level 15
Joined
Aug 7, 2013
Messages
1,337
Hello WaterKnight

In the World Editor you face a technical view. There is an Object Editor where you create objects, a Trigger Editor where you write triggers, an Import Manager where you import resources, and so on. When you try to realize some feature for your map, like a custom ability, it usually consists of multiple technical components.

This is a very good point and a weakness in the standard style of making maps. I handle this by doing all scripting, importing, and object generation in a single sandbox in .j files. That way the Jass has access to the objects to reference and vice versa. This is done via a collection of (ad hoc) Python scripts, but it does the job, hiding the low-level issues.
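A minimal sketch of what one such build step might look like (the symbolic-name substitution and the object_ids table are assumptions for illustration, not my actual scripts):

Code:
import glob

# Rawcodes assigned during object generation; in the real pipeline
# this table would be filled in by the object-generation step.
object_ids = {'OBJ_PLACID_GNOLL': "'PlGn'"}

# Merge all .j sources into the map script and substitute the
# symbolic object names with their rawcodes.
merged = '\n'.join(open(p).read() for p in sorted(glob.glob('src/*.j')))
for symbol, rawcode in object_ids.items():
    merged = merged.replace(symbol, rawcode)

with open('war3map.j', 'w') as out:
    out.write(merged)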

I think you are discussing a more "professional" way to resolve this issue. It would be nice!
 
Level 26
Joined
Aug 18, 2009
Messages
4,097
Yeah, we had this with a combination of textmacros and grimext external calls before. The issue is that this is, first, very slow and, second, very ugly. You would not want to edit every kind of information as script code. Also, stuffing everything into a single file is messy and non-modular. A solution would be to generate your Python scripts/macro lines from external files @compiletime. wurst does have compiletime functions, but it seems it can only create wc3 object data at the moment.
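A rough sketch of that generation step, assuming the external file is a simple key=value table and the output is one runtextmacro line per field (the file names and the DefineField macro are made up for illustration):

Code:
# Read a simple 'key=value' object table and emit one textmacro
# invocation per field, to be included in the script @compiletime.
fields = {}
with open('featureA/unitA.txt') as f:
    for line in f:
        if '=' in line:
            key, value = line.split('=', 1)
            fields[key.strip()] = value.strip()

with open('featureA/unitA_gen.j', 'w') as out:
    for key, value in fields.items():
        out.write('//! runtextmacro DefineField("%s", "%s")\n' % (key, value))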

Anyway, when you do not embed the data directly in the code, you need some interface (or implicit mechanism) that determines where to place which external data. That is one question I raise in this thread. Though I would also like to know the right way to organize the data. Imagine, for example, that your project is divided into different chapters and each chapter can have a set of spells and units. Now, do you pick this structure:

  • ChapterA
    • Spells
      • SpellA1
      • SpellA2
    • Units
      • UnitA1
      • UnitA2
  • ChapterB
    • Spells
      • SpellB1
      • SpellB2
    • Units
      • UnitB1
      • UnitB2
  • ChapterC
    • Spells
      • SpellC1
      • SpellC2
    • Units
      • UnitC1
      • UnitC2

or do you go with:

  • Spells
    • ChapterA
      • SpellA1
      • SpellA2
    • ChapterB
      • SpellB1
      • SpellB2
    • ChapterC
      • SpellC1
      • SpellC2
  • Units
    • ChapterA
      • UnitA1
      • UnitA2
    • ChapterB
      • UnitB1
      • UnitB2
    • ChapterC
      • UnitC1
      • UnitC2

That type of tree structure is ambiguous to design once there is more than one classification type. There are times when it's better to see a chapter as a scope and other times when it's better to see it from the spells/units perspective.

Furthermore, a question is exactly how well you can encapsulate the individual atomic data (file). Since part of it may be used by multiple places, you probably do not want to tie it to a single, deeper directory, nor would you copy the file to all directories using it. So one obvious idea would be to create a new abstract label (directory) and dump the data therein, yet this must be classified as well, and it would be a problem to know when all the references to this shared object have vanished.
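That last problem could be mitigated by having the tooling index references. A rough sketch, reusing the '<pathToObject:field>' reference syntax from the first post; build_reference_index and orphaned are made-up names:

Code:
import os
import re
from collections import defaultdict

REF = re.compile(r"<([^:>]+):[^>]+>")

def build_reference_index(root):
    # Scan every component file for '<pathToObject:field>' references:
    # maps object path -> set of files that reference it.
    index = defaultdict(set)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, errors='ignore') as f:
                for target in REF.findall(f.read()):
                    index[target].add(path)
    return index

def orphaned(shared_objects, index):
    # Shared objects with zero remaining references; candidates for removal.
    return [obj for obj in shared_objects if not index.get(obj)]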
 