sh3redux
Generic resource loading
To load resources like textures and models from the ARC files, a templated resource
class was proposed:
```cpp
template<typename Header> // where Header is sh3_texture_header, for example
class resource
{
    // ...
    Header& header() { return reinterpret_cast<Header&>(raw.front()); }

    // we could use boost::iterator_range instead
    using iterator_range = std::pair<std::vector<uint8_t>::iterator,
                                     std::vector<uint8_t>::iterator>;
    iterator_range data() { return std::make_pair(begin(raw) + header().data_offset(), end(raw)); }
    // ...
    std::vector<uint8_t> raw;
};
```
A texture would then be a `resource<sh3_texture_header>`, which provides us access to the header by simply aliasing the bytes in `resource::raw`, as well as convenient access to the "body" (non-header bytes). Reading the file into the vector is then enough to load a resource.
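To make the idea concrete, here is a minimal self-contained sketch of the proposed class. The `load()` helper and the two-field `sh3_texture_header` are assumptions for illustration only; the real sh3redux header has different fields, and the actual implementation may handle the byte aliasing differently:

```cpp
#include <cstdint>
#include <fstream>
#include <iterator>
#include <utility>
#include <vector>

// Hypothetical header layout; the real sh3_texture_header has more fields.
struct sh3_texture_header
{
    std::uint32_t width;
    std::uint32_t height;
    std::size_t data_offset() const { return sizeof(*this); }
};

template<typename Header>
class resource
{
public:
    // Alias the leading bytes of the buffer as the header type.
    Header& header() { return *reinterpret_cast<Header*>(raw.data()); }

    using iterator_range = std::pair<std::vector<std::uint8_t>::iterator,
                                     std::vector<std::uint8_t>::iterator>;

    // The "body": everything after the header (and any sub-headers).
    iterator_range data()
    {
        return std::make_pair(raw.begin() + header().data_offset(), raw.end());
    }

    // Reading the whole file into raw is enough to "load" the resource.
    bool load(const char* path)
    {
        std::ifstream in(path, std::ios::binary);
        raw.assign(std::istreambuf_iterator<char>(in),
                   std::istreambuf_iterator<char>());
        return raw.size() >= sizeof(Header);
    }

private:
    std::vector<std::uint8_t> raw;
};
```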
Some resource types, e.g. models, may have multiple sub-headers. This is what the `data_offset()` function of the header is for. For headers without sub-headers, the implementation is just `return sizeof(*this);`, while for those with sub-headers, the implementation is `return sizeof(*this) + num_sub_headers * sizeof(SubHeader);` to account for them. The sub-headers in models refer to embedded textures; these resources can also be handled with an appropriate `data_offset()` function.
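As a sketch of the two `data_offset()` variants described above (the field and type names here are assumptions, not the real sh3redux headers):

```cpp
#include <cstddef>
#include <cstdint>

// A header with no sub-headers: the body starts right after it.
struct sh3_texture_header
{
    std::uint32_t width;   // hypothetical field
    std::uint32_t height;  // hypothetical field
    std::size_t data_offset() const { return sizeof(*this); }
};

// A hypothetical sub-header, e.g. referring to an embedded texture.
struct model_sub_header
{
    std::uint32_t texture_id;
};

// A header followed by num_sub_headers sub-headers: the body starts
// after all of them.
struct sh3_model_header
{
    std::uint32_t num_sub_headers;
    std::size_t data_offset() const
    {
        return sizeof(*this) + num_sub_headers * sizeof(model_sub_header);
    }
};
```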
This is in place of `vfile`, yes? So instead of having a buffer that we seek and read data from (similar to how an actual `FILE` works), we just have a `resource` with the appropriate header passed in, so that we can more easily read bytes from the header (without a local copy in a `Load()` function). Is there any advantage to this besides the added abstraction of the header types? If this is in place of `vfile`, would it work considering the Master File Table (arc.arc) is just a whole lot of folders and files shoved into one huge file?
EDIT: I've just reread #43 and I think I may have had the wrong idea about this (it's also midnight here and my reading comprehension isn't so good).
A `vfile` provides access to a file in the MFT as bytes, while this would interpret the bytes as a resource. So it doesn't necessarily make `vfile` obsolete (at least in the short term), but it would definitely be used in place of it.
Since the `resource` includes a local buffer (`raw`), just like the `vfile`, I don't see why it would conflict with the workings of the MFT.
Okay, I think I see now. It's basically a better way to abstract the resource type, and it also saves a heap of code duplication (such as having heaps of calls to `file.ReadData(...)`), as the data is already there for us to access.
Also, I assume `data_offset()` would work with the convoluted model format (IIRC there's a header for each primitive/polygon), where we have to iterate over n primitives? If so, that would definitely make loading models much easier and less complex.
> It's basically a better way to abstract the resource type, and it also saves a heap of code duplication (such as having heaps of calls to `file.ReadData(...)`), as the data is already there for us to access.
Exactly.
> Also, I assume `data_offset()` would work with the convoluted model format (IIRC there's a header for each primitive/polygon), where we have to iterate over n primitives? If so, that would definitely make loading models much easier and less complex.
`data_offset()` can use all fields in the header to calculate the appropriate offset, e.g. `num_sub_headers` in the second example. If more information is required, we'll need to think about whether we can perhaps pass it to `data_offset()` from the `resource` via a parameter.
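One way the "pass it via a parameter" idea could look, as a rough sketch; the field names and the per-primitive layout here are invented for illustration, not taken from the actual model format:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical header whose offset depends on context the resource
// would have to supply, e.g. a count parsed from elsewhere in the file.
struct sh3_model_header
{
    std::uint32_t num_primitives;        // assumed field
    std::uint32_t primitive_header_size; // assumed field

    // If the header's own fields aren't enough, the resource could
    // pass extra context in as a parameter.
    std::size_t data_offset(std::size_t extra_bytes = 0) const
    {
        return sizeof(*this)
             + num_primitives * primitive_header_size
             + extra_bytes;
    }
};
```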
This is only a concept so far. I have not started the implementation; this issue represents only the idea.
Excellent, this sounds like it could work really well. Hopefully you can find a workaround for some of the weird formats (e.g. models). You should be able to see what I mean by looking at Mike's code I sent you.