Major overhaul of JSON model importer for stability, texture resolution, and UV handling
This PR significantly overhauls how `mcmodel.py` handles textures, materials, and UVs when importing Minecraft JSON models. It fixes several issues with texture lookup, `#` reference resolution, varying resource pack layouts, individual JSON model imports, incorrect material indices, and more.
Overall, it improves importer reliability across various resource packs.
### Key Improvements
- **Reliable texture reference resolution:** introduces `get_final_texture_key` to correctly follow chained `#texture` references and avoid circular recursion (see the sketch after this list).
- **Improved image lookup:** `locate_image` now normalizes paths and searches in this order:
  - The model's own resource pack
  - The active resource pack
  - The addon's default fallback

  This fixes many "missing textures" produced by the old version.
- **Correct material creation & indexing:** materials are only created for valid, resolved textures. Faces now receive consistent `material_index` values, fixing "index out of bounds" errors and other material assignment bugs.
- **Improved UV scaling for non-square textures:** UV adjustments/scaling only occur when the texture's height is less than its width, preventing distortions on models using 64×32, 128×64, or other non-1:1 textures. UV scaling is not needed when height > width.
- **Better error handling & logging:** added `try`/`except` inside the element and face creation loops of `add_model`, so the script can skip problematic elements or faces and continue importing the rest of the geometry instead of failing to import the object entirely.
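To illustrate the reference resolution point, here is a minimal sketch of the kind of chained-reference walk `get_final_texture_key` performs. The function name matches the PR, but the body below is only an illustration under an assumed data shape (the plain `"textures"` dict of a JSON model), not the actual implementation in `mcmodel.py`:

```python
from typing import Dict, Optional

# Illustrative sketch only: `textures` is assumed to be the "textures" dict of a
# Minecraft JSON model, e.g. {"particle": "#side", "side": "block/oak_planks"}.
# The real get_final_texture_key may differ in signature and details.
def get_final_texture_key(key: str, textures: Dict[str, str]) -> Optional[str]:
    """Follow chained '#name' references until a concrete texture path is found.

    Returns None if the chain is dangling or circular.
    """
    seen = set()
    current = key.lstrip("#")
    while current in textures:
        if current in seen:
            return None  # circular chain, e.g. "a" -> "#b" -> "#a"
        seen.add(current)
        value = textures[current]
        if not value.startswith("#"):
            return value  # resolved to an actual texture path
        current = value[1:]  # follow the reference to the next key
    return None  # dangling reference: key never resolves to a texture
```

For example, with `{"particle": "#all", "all": "#base", "base": "minecraft:block/stone"}`, resolving `"#particle"` walks the chain and returns `"minecraft:block/stone"`, while a circular chain returns `None` instead of recursing forever.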
I tested on 12 different resource packs and custom mod assets.
If you want to quickly test the new importer, here is a script that automatically imports and validates each JSON model in a directory: Auto Test Minecraft Model Importer.py
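For reference, here is a rough illustration of what such a test loop can look like. The attached script is the authoritative version; `import_model` below is just a placeholder for whatever importer entry point you wire it to:

```python
import json
import traceback
from pathlib import Path
from typing import Callable

def run_import_tests(root: str, import_model: Callable[[str], None]) -> None:
    """Try to import every JSON model under `root` and tally the results.

    `import_model` is a placeholder callable; pass in the importer entry
    point you actually want to exercise.
    """
    passed, failed, skipped = 0, 0, 0
    for path in Path(root).rglob("*.json"):
        try:
            data = json.loads(path.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, UnicodeDecodeError):
            skipped += 1  # not a readable JSON file
            continue
        if "elements" not in data:
            skipped += 1  # no geometry, mirrors the "no actual geometry" warning
            continue
        try:
            import_model(str(path))
            passed += 1
        except Exception:
            failed += 1
            print(f"FAILED: {path}\n{traceback.format_exc()}")
    print(f"passed={passed} failed={failed} skipped={skipped}")
```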
I meant to include the results from my automated test.
I tested 37 different mods, resource packs, and folder structures that I just had lying around on my computer, attempting to import 9,846 models (excluding the ones that threw the "JSON model does not contain any actual geometry" warning).
| Old Importer | New Importer |
|---|---|
Due to some weird things I've noticed in this PR (especially in subsequent changes), I have to ask this question: was AI used in the creation of this PR? While we don't forbid use of AI in contributions, we do have rules in place on its use.
Just chiming in to say thanks @BolWaffy for the continued contributions, and I appreciate you Standingpad for the review upkeep.
To answer your question directly: Yes, I do use AI tools, but I keep usage within the guidelines. I use them primarily as a productivity aid for polishing commit messages, rewording docstrings/comments for clarity, and handling boilerplate or verifying API references (along with me checking the actual documentation). All code logic, decisions, and solutions are my own, and I verify everything I commit. Regarding the "weird things" you noticed in the subsequent changes: I realize now that I opened this Draft PR earlier than I should have. You likely saw some "scratchpad" commits where I was still working on the implementation and cleanup, rather than polished final code. As this is my first contribution to a shared project of this scale, I am still tuning my workflow regarding commits and PR timing. Going forward, I will make sure the final code is clean before marking it ready for review. Thanks for the heads-up!
Pushed a couple of commits and made some comments on a couple of LSP issues I encountered. Seems to be working fine with the vanilla resource pack, but of course I need to test this on some other models as well.
@zNightlord can you also review this for any issues, since you have more experience with JSON models than I do?
Could you clarify what the types of `faces` and `d_face` are supposed to be in this section? My LSP reports that `faces` is either a `VectorType`, a `Dict[str, str]`, or `None`. I've added a check for `Dict` in the same check for `None`, but that narrows `faces` to `Dict[str, str]`, so `faces.get(...)` returns a `str`; this leaves `d_face` as a `str`, which makes `d_face.get(...)` invalid, since `str` has no `get` method.
faces = e.get("faces")
for i in range(len(element[2])):
try:
f = element[2][i]
if not faces:
continue
face_name = FACE_DIRECTIONS[i]
d_face = faces.get(face_name)
if not d_face:
continue
face_mat = d_face.get("texture")
if not face_mat:
continue
It looks like the `e` that's being iterated in the for loop is assumed to be a `Dict[str, Dict[str, str]]` (aliased as `TexFace`). I'll try to refactor this so static analyzers and LSPs are aware of this assumption. @BolWaffy if you haven't already, I highly encourage using an LSP with type checking enabled when writing new code, as it highlights places where type assumptions may not always hold (if you're using VS Code and Pyright, make sure `python.analysis.typeCheckingMode` is set to something higher than `basic`).
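Something along these lines is what I have in mind; this is a sketch only, mirroring the excerpt above and assuming each face really is a dict of string properties:

```python
from typing import Dict, cast

# Sketch of the possible refactor, not the final change. Same loop as the
# excerpt above, with the assumed shape spelled out so Pyright can check it.
TexFace = Dict[str, Dict[str, str]]  # direction name -> face properties ("texture", "uv", ...)

faces_raw = e.get("faces")
for i in range(len(element[2])):
    f = element[2][i]
    # isinstance() narrows the type for the LSP and guards against malformed
    # models at runtime, instead of relying on truthiness alone.
    if not isinstance(faces_raw, dict):
        continue
    faces = cast(TexFace, faces_raw)
    face_name = FACE_DIRECTIONS[i]
    d_face = faces.get(face_name)
    if d_face is None:
        continue
    face_mat = d_face.get("texture")
    if not face_mat:
        continue
```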
I've pushed a couple of type annotation fixes, as well as general fixes based on what the LSP caught (such as an attempt to divide a variable that could be either an `int` or a `str` without first checking the type). I've also added a TODO for a later date (not now, maybe some time in the future, like next year) to go through and validate all the types used in the algorithm; the main issue I've found is code assuming a variable is of a certain type when it could be something else.
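To illustrate the kind of fix involved (the names here are made up, not the actual ones in `mcmodel.py`), the change is essentially narrowing the type before doing arithmetic:

```python
from typing import Union

def half_texture_size(size: Union[int, str]) -> float:
    # Convert str to int before dividing, rather than assuming the value is
    # already numeric.
    if isinstance(size, str):
        size = int(size)
    return size / 2
```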
Alright, testing with modded blocks, and this seems to work with both drag-and-drop and regular imports.
https://github.com/user-attachments/assets/ebe0320b-3564-49be-8a46-9474ca2879b4