
wc3 models broken

Status
Not open for further replies.
Level 4
Joined
Aug 2, 2015
Messages
63
Does anyone know why my models look like this after I updated to patch 1.30.2?
 

Attachments

  • Untitled.png
    1.3 MB · Views: 159
I have written some MDX model editing software in the past and might be able to help. Can you point me to which particular model that is?
I'm not aware of any particular changes between 1.29 and 1.30.2 that would have affected how models are loaded and rendered, but I have seen this kind of issue with custom models regularly over the last 15 years. When geometry is attached to a nonexistent "Bone" node (or to a node that the game temporarily unloads as a performance optimization), it stretches out and appears to attach to nearby models.

Nothing in the original game does this, because all of its "Bone" nodes are tagged with the correct performance-optimization settings (GeosetId, GeosetAnimId). When a custom model gets these tags wrong, the geometry doesn't notify the game that it requires that particular node, so the game temporarily "forgets" the node to save space, and the geometry ends up attaching to the nearest node from other nearby models.
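For reference, a correctly tagged bone in MDL text looks something like this (the bone name and indices here are made up for illustration, not taken from any particular model):

```
Bone "Bone_Turret" {
	ObjectId 12,
	Parent 3,
	GeosetId 0,      // this bone drives geoset 0, so the game knows to keep it loaded
	GeosetAnimId 0,  // index of the matching GeosetAnim entry
}
```

A bone that drives several geosets would instead use `GeosetId Multiple`, and `GeosetAnimId None` when it has no geoset animation.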


Edit: Or is that red stuff a ribbon emitter? Can you point me to the file we are actually looking at?
 
I am not sure why I did not come back and reply here before. Somehow I missed the notification.

Did you solve this yet?

This model works on patch 1.26 but does not work on patch 1.30, as you say. I do not know why yet, but I would like to find out: I work on my own model editor, and it would be good to be able to warn about models that only work on patch 1.26, or something like that.

Edit:

This video explains what is wrong with your model. The groups limit was reduced from 256 to 255, so you need to delete a group.
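For anyone wanting that kind of warning in an editor, a minimal sketch could look like this (the `MdlModel` type and function name are hypothetical, not from any real editor):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical parsed-model structure: one entry per "Matrices { ... }" group.
struct MdlModel {
    std::vector<std::vector<int>> matrixGroups;
};

// Patch 1.30 appears to reject the 256th group (index 255),
// so warn whenever a model has more than 255 matrix groups.
std::string checkGroupLimit(const MdlModel& model) {
    const std::size_t count = model.matrixGroups.size();
    if (count > 255) {
        return "Warning: " + std::to_string(count) +
               " matrix groups; models with more than 255 groups may break on patch 1.30+";
    }
    return "";
}
```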

 
Last edited:
Level 4
Joined
Aug 2, 2015
Messages
63
Did you solve this yet?
Yes, I did, by deleting some geosets in the model.
This video explains what is wrong with your model. The groups limit was reduced from 256 to 255, so you need to delete a group.
I'm interested in how to delete or reduce a group like in the video you provided, but I don't get how to do it.
Could you explain it in text, if you don't mind? Or should I just watch the video more carefully?
Anyway, thanks for your reply.
 
Basically I just opened the list of groups in the MDL in a text-based format, selected one and pressed delete, then changed every reference that used group 255 to use group 254 instead.

upload_2019-2-22_0-49-12.png
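The re-pointing step can be sketched in code as well; this is a hypothetical helper (the name and the byte-array representation are assumptions, not actual editor code), operating on the per-vertex group indices from a geoset's VertexGroup block:

```cpp
#include <cstdint>
#include <vector>

// After deleting matrix group 255 from the MDL, every vertex that pointed at
// group 255 must be re-pointed at a surviving group (here, 254).
// vertexGroups holds one group index per vertex, as in the VertexGroup block.
void remapVertexGroups(std::vector<std::uint8_t>& vertexGroups,
                       std::uint8_t from, std::uint8_t to) {
    for (auto& g : vertexGroups) {
        if (g == from) g = to;
    }
}
```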


Does this one look any good? It's really hard for me to test without the texture:
 

Attachments

  • BadTower255Groups.mdx
    974.6 KB · Views: 36
Level 4
Joined
Aug 2, 2015
Messages
63
Does this one look any good? It's really hard for me to test without the texture
Yes, that looks good, but I think it is darker than before?
Do you want the textures? I don't understand modeling very well.
I think you are a good modeller, so could I ask for your help fixing the model if there is another problem,
to prevent issues like this in the future, if you don't mind?
Anyway, thank you very much.
 

Attachments

  • bad_tower_effect_diff.blp
    22.9 KB · Views: 45
  • bad_tower_glow_diff.blp
    8.7 KB · Views: 39
  • bad_tower002.blp
    413.7 KB · Views: 34
  • CloudSingle.blp
    6.2 KB · Views: 34
  • dire_tower002_Alpha.blp
    2.5 MB · Views: 31
  • dire_tower002_color.blp
    1.5 MB · Views: 47
  • f0006.BLP
    50.8 KB · Views: 37
  • PNbattlehunger_fx1.blp
    40.4 KB · Views: 31
Yeah, when I thought about it further, I became more skeptical of what I said. There was something I forgot to do while making the video. After the video was over, I opened the same test map that I showed in the 1.26 and 1.30 World Editors, but this time using my experimental build of HiveWE that includes unit animation. Notably, in that build (even though I add +1 to all vertex group IDs and use 0 as a performant way of discarding invalid IDs from a fixed-size array of IDs, only some of which are valid), the renderings of the white generated models look exactly like the 1.26 version of the game and work fine.

But I was using a dynamic-length texture buffer to pass the matrices. Maybe if Blizzard uses a fixed-size memory structure of size 256, that would explain it.
 

Dr Super Good

Spell Reviewer
Level 64
Joined
Jan 18, 2005
Messages
27,198
But I was using a dynamic-length texture buffer to pass the matrices. Maybe if Blizzard uses a fixed-size memory structure of size 256, that would explain it.
I am more interested in the sub-optimal approach and how it differed from the optimal one.

In any case this likely has something to do with Warcraft Reforged. Obviously they are heavily updating the graphics code for that, since they need to move from the fixed-function pipeline of the old D3D7/8 APIs to the shader-based D3D11 API.
 
So, I am very uncertain whether I am correct that there used to be a sub-optimal approach and that it was replaced. But if we assume there was, the sub-optimal approach would be to have a jagged array of matrix information, like we see in an MDL file.

Code:
        Matrices { 29, 39 },
        Matrices { 14, 29 },
        Matrices { 34, 40 },
        Matrices { 40, 41 },
        Matrices { 34 },
        Matrices { 33, 38 },
        Matrices { 38, 39 },
        Matrices { 33 },
        Matrices { 35, 42 },

Because these are stored as a jagged array, things get complicated when the graphics pipeline wants to render vertices attached to, for example, "matrices[4]". We can make a large array of 16-float chunks, where each chunk is a 4x4 matrix holding the computed position of one bone at the current point in the animation timeline; the index into this array of matrices matches the "ObjectId" we see on the bones in the MDL. At that point, the graphics code needs to know the offset of the entry in the smaller jagged array of group information (the one pictured above) and its number of elements. It jumps to that offset, loops over the elements, and sums the 4x4 matrices it gets by resolving the indices listed in that block against the array of per-node 4x4 matrices.
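That lookup-and-average step could be sketched on the CPU like this (plain C++ with hypothetical types; a sketch of the idea, not the game's actual code):

```cpp
#include <array>
#include <vector>

using Mat4 = std::array<float, 16>; // row-major 4x4 bone matrix

// nodeMatrices[objectId] is the computed matrix for each bone this frame.
// group is one "Matrices { ... }" entry: the bone ObjectIds to blend.
// The skinning matrix is the average of the referenced bone matrices.
Mat4 resolveGroup(const std::vector<Mat4>& nodeMatrices,
                  const std::vector<int>& group) {
    Mat4 sum{}; // zero-initialized
    for (int objectId : group) {
        const Mat4& m = nodeMatrices[objectId];
        for (int i = 0; i < 16; ++i) sum[i] += m[i];
    }
    const float n = static_cast<float>(group.size());
    for (float& v : sum) v /= n;
    return sum;
}
```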

Ghostwolf realized it was weird and inefficient to allow any number of matrices in a single "Matrices" chunk of the jagged array shown above. It may also be very difficult to accomplish in modern shader pipelines, which prefer fixed-width array data. So he hardcoded his code to always add 4 of them together, discarding the rest, regardless of circumstance. From what I understood (and in my code in my experimental fork of HiveWE), we make index 0 resolve to a 4x4 matrix filled entirely with zeroes. That way, always adding up 4 elements works: a group (like the ones shown above) with only 1 or 2 entries simply adds 0 a few times, and then we divide by the size to get our average (using Min(size, 4) instead of the raw size).
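That padding idea could be sketched like this (hypothetical helpers mirroring the idea, not Ghostwolf's actual code):

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <vector>

// Pad a jagged Matrices group to exactly 4 slots. Index 0 is reserved for the
// zero matrix, so real bone indices are stored as objectId + 1 and unused
// slots stay 0 -- adding the zero matrix later is a harmless no-op.
std::array<int, 4> padGroup(const std::vector<int>& group) {
    std::array<int, 4> out{0, 0, 0, 0};
    const std::size_t n = std::min<std::size_t>(group.size(), 4);
    for (std::size_t i = 0; i < n; ++i) out[i] = group[i] + 1;
    return out;
}

// The blend divisor is Min(size, 4), matching the number of real matrices kept.
int blendDivisor(const std::vector<int>& group) {
    return static_cast<int>(std::min<std::size_t>(group.size(), 4));
}
```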

Now, I'm not actually certain about performance, since this was my first time coding a vertex shader (other than an online tutorial I did for practice), even though I have played with modifying the MDL format for years and generally understand what all the parts of the file are supposed to do. I mentioned that the inefficient design, which I have not tried, would require indexing into the jagged array with something that provides the offset and size of a given entry. Ghostwolf, from what I could tell in his model viewer source, does all of this index resolution outside of the shader code, which is really the most ideal way to go if you want performance and don't care about memory. He does the resolution on the CPU, but that's OK because he does it all when the model loads, constructing a wrapper scaffolding object on top of the Geoset. That wrapper is also named Geoset but lives in a folder called handler, so you could think of it as a "handler geoset" for rendering:
upload_2019-2-24_11-1-1.png

This excerpt is from:
flowtsohg/mdx-m3-viewer

After this, his vertex shader already knows "a list of bone indices per vertex" that it can use to fetch matrices.
Code:
    ${shaders.instanceId}
    ${shaders.boneTexture}
    uniform mat4 u_mvp;
    attribute vec3 a_position;
    attribute vec3 a_normal;
    attribute vec2 a_uv;
    attribute vec4 a_bones;
    attribute float a_boneNumber;
    attribute float a_teamColor;
    attribute vec4 a_vertexColor;
    attribute vec4 a_geosetColor;
    attribute float a_layerAlpha;
    attribute vec4 a_uvTransRot;
    attribute vec3 a_uvScaleSprite;
    varying vec3 v_normal;
    varying vec2 v_uv;
    varying float v_teamColor;
    varying vec4 v_vertexColor;
    varying vec4 v_geosetColor;
    varying vec4 v_uvTransRot;
    varying vec3 v_uvScaleSprite;
    void transform(inout vec3 position, inout vec3 normal, float boneNumber, vec4 bones) {
      // For the broken models out there, since the game supports this.
      if (boneNumber > 0.0) {
        mat4 b0 = fetchMatrix(bones[0], a_InstanceID);
        mat4 b1 = fetchMatrix(bones[1], a_InstanceID);
        mat4 b2 = fetchMatrix(bones[2], a_InstanceID);
        mat4 b3 = fetchMatrix(bones[3], a_InstanceID);
        vec4 p = vec4(position, 1);
        vec4 n = vec4(normal, 0);
        position = vec3(b0 * p + b1 * p + b2 * p + b3 * p) / boneNumber;
        normal = normalize(vec3(b0 * n + b1 * n + b2 * n + b3 * n));
      }
    }
    void main() {
      vec2 uv = a_uv;
      vec3 position = a_position;
      vec3 normal = a_normal;
      transform(position, normal, a_boneNumber, a_bones);
      v_uv = a_uv;
      v_uvTransRot = a_uvTransRot;
      v_uvScaleSprite = a_uvScaleSprite;
      v_normal = normal;
      v_teamColor = a_teamColor;
      v_vertexColor = a_vertexColor;
      // Is the alpha here even correct?
      v_geosetColor = vec4(a_geosetColor.rgb, a_layerAlpha);
      // Definitely not correct, but the best I could figure so far.
      if (a_geosetColor.a < 0.75 || a_layerAlpha < 0.1) {
        gl_Position = vec4(0.0);
      } else {
        gl_Position = u_mvp * vec4(position, 1);
      }
    }

This excerpt is from:
flowtsohg/mdx-m3-viewer

My code in the Matrix Eater, when I show people that it is capable of rendering animations, is right out and should be ignored. It does all operations on the CPU with no shader code, then calls glVertex3f a bunch of times, which is basically like taking performance and throwing it out the window.

My experimental code in HiveWE was written to use a naive approach that performs these index resolutions while rendering, but on the GPU, just to see if I could get it working.
In that code, the setup is minimal.
upload_2019-2-24_11-7-15.png

This excerpt is from code that hasn't been pushed up to the public fork of HiveWE yet but should be soon.

I basically just build a giant array of all the needed information, and then I use the information while rendering, instead of this caching stuff in a geoset scaffolding/handler type of object.

To solve the jagged array problem, I use 4 unsigned ints, where the first one says how many of the following 3 to use. This is probably already much less efficient than Ghostwolf's approach.
upload_2019-2-24_11-10-29.png
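The flattening step that feeds such a buffer could be sketched as follows (a hypothetical helper; the name and exact layout are assumptions, though the `[count, index0, index1, index2]` texel shape matches what the shader below reads from `u_groupIndexing`):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// One uvec4-shaped texel per vertex group: [count, index0, index1, index2].
// Groups with more than 3 bones get truncated -- the inefficiency noted above
// (the shader's TODO about handling N-size groups like war3 does).
std::vector<std::uint32_t> packGroupIndexing(
        const std::vector<std::vector<std::uint32_t>>& groups) {
    std::vector<std::uint32_t> texels;
    texels.reserve(groups.size() * 4);
    for (const auto& g : groups) {
        texels.push_back(static_cast<std::uint32_t>(g.size()));
        for (std::size_t i = 0; i < 3; ++i)
            texels.push_back(i < g.size() ? g[i] : 0u);
    }
    return texels;
}
```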

This forces the shader code to do lookups on every vertex, simply because I did not store an array of indices per vertex.

Here in this shader code file from my experimental build of HiveWE, we can see that I copied a lot of the general flow from Ghostwolf's shader code (above) but that I have to do an extra array lookup that I called groupIndexData.
Code:
#version 450 core

//in int gl_VertexID;
//in int gl_InstanceID;

layout (location = 0) in vec3 vPosition;
layout (location = 1) in vec2 vUV;
layout (location = 2) in mat4 vInstance;
layout (location = 6) in uint vVertexGroup; // use to index into matrices list

layout (location = 7) uniform mat4 VP; // changed from 4, DO NOT forget

layout (location = 8) uniform usamplerBuffer u_groupIndexing; // index by VertexGroup
layout (location = 9) uniform samplerBuffer u_nodeMatrices; // index from groupIndexing
layout (location = 10) uniform int u_nodeCount;

layout (location = 11) in float layer_alpha;
layout (location = 12) in vec3 geoset_color;

out vec2 UV;
out vec4 vertexColor;

mat4 fetchMatrix(int nodeIndex) {
    return mat4(
        texelFetch(u_nodeMatrices, int(gl_InstanceID*u_nodeCount*4 + nodeIndex*4)),
        texelFetch(u_nodeMatrices, int(gl_InstanceID*u_nodeCount*4 + nodeIndex*4 + 1)),
        texelFetch(u_nodeMatrices, int(gl_InstanceID*u_nodeCount*4 + nodeIndex*4 + 2)),
        texelFetch(u_nodeMatrices, int(gl_InstanceID*u_nodeCount*4 + nodeIndex*4 + 3)));
}

void main() {
    uvec4 groupIndexData = texelFetch(u_groupIndexing, int(vVertexGroup));
    uint boneNumber = groupIndexData[0];
    vec3 position = vPosition;
    vec4 p = vec4(position, 1);
    if( boneNumber > 0 ) {
        mat4 b0 = fetchMatrix(int(groupIndexData[1]));
        mat4 b1 = fetchMatrix(int(groupIndexData[2]));
        mat4 b2 = fetchMatrix(int(groupIndexData[3]));
        // TODO handle N size groupIndexData, like war3, instead of size 3
      
        position = vec3(b0 * p + b1 * p + b2 * p) / float(boneNumber);
        // TODO normal
      
      
        // compute p again now after position is updated:
        p = vec4(position, 1);
    }
  
    gl_Position = VP * vInstance * p;
  
    UV = vUV;
  
    vertexColor = vec4(geoset_color, layer_alpha);
    if(vertexColor.a <= 0.75) {
        gl_Position = vec4(0); // got this idea from something ghostwolf did, it's super hack
    }
}

So, already we can see that there are multiple ways of solving this problem and that I chose the inefficient one. On the CPU side, my code has +1 in several places so that the "fetchMatrix" function can resolve index 0 for missing entries, keeping that distinct from a bone with "ObjectId 0", which I shift to "1" in my code. Scroll back up to the very first code excerpt I pasted and notice the comment from Ghostwolf:

// 1 is added to every index for shader optimization (index 0 is a zero matrix)

Tangents and anecdotal inefficiencies in my code aside, I read this line in Ghostwolf's code and we are both doing the same thing: keeping index 0 as the zero matrix. For my code it doesn't matter, because I use a second texture buffer for my inefficient loading of the jagged matrices array and a second level of indirection in the shader. I did all this reading and learned texture buffers a few weeks ago expressly because I wanted a way to pass something with a dynamic memory size to the shader. So if you give my experimental HiveWE code a model with 256 or more entries in the jagged matrices array, it doesn't matter.

When I open the test map featured in my video using that code, the computer-generated disks match patch 1.26, showing that the 256th entry in the jagged matrices array is legal in my current code:
hivewedisk.gif

Above, we see a scene rendered using the C++ excerpts from my experimental HiveWE build shown above; it was not rendered with the Warcraft 3 World Editor or the Warcraft 3 game.

So, to sum all of this up, my first guess at why Blizzard changed this code in the newer patches (and I could be very wrong about this!) was that they wanted to reserve bone index zero at the second layer of index resolution as an optimization. However, the index with the problem is vertex group index 255, so I probably just don't know what I'm talking about.

Maybe Blizzard was always using bone index zero as an optimization since the year 2000. I don't really know.
 

Attachments

  • hivewedisk.gif
    992.4 KB · Views: 294

Dr Super Good

Spell Reviewer
Level 64
Joined
Jan 18, 2005
Messages
27,198
Maybe Blizzard was always using bone index zero as an optimization since the year 2000. I don't really know.
Not possible, because there was no concept of a "shader" during the production of Warcraft III. Shaders were only added to graphics APIs while Warcraft III was already in development, so they had extremely limited hardware support (latest GPUs only). Instead the game used the old fixed-function pipelines.

This is why at most 8 lights are allowed per batch/mesh drawn: the fixed-function pipelines supported at most 8 lights per render pass, meaning more than 8 lights would need multiple render passes and then blend passes (extremely slow).
 