'Error: undefined' if a second HDF is mounted
I can't seem to add a second ('Work' or other) HDF file (all known-working HDFs). When attempting to mount it into DH0 - DH3 it simply returns the error:
'Error: undefined'
DH0 contains a 128MB HDF. I am attempting to add a 'Work' 128MB HDF.
I assume this error has something to do with memory (the message is generic, but the cause seems to be some kind of data overflow?).
Now... this is the same error popup I get when I attempt to mount a 512MB HDF file.
I just added the 210MB [ClassicWB](http://classicwb.abime.net/classicweb/download68k.htm) twice, once in volume dh0: and once in volume dh1:, which worked well. I also tried smaller ones, which also worked well.
There could be different reasons why it fails for your HDF files...
Reason 1: memory size
vAmigaWeb is compiled with emscripten's memory auto-growth turned on, i.e. it starts with a small memory footprint of 320MB and, when it hits that boundary, it swaps all its memory into a bigger memory object and releases the old one. The advantage: on low-memory devices you can still keep playing, send the Amiga into the background, do something else and come back to where you were, because the PWA is not killed by the OS due to insufficient system memory. The upper boundary in browsers used to be 2GB but has recently been increased to 4GB, at least in Chrome, according to https://v8.dev/blog/4gb-wasm-memory
What is missing, though, is a memory boundary check on the files that are inserted. When you insert a very big one and it does not fit into wasm memory (possibly the host system does not grant the auto-grow), things can break with unrelated messages like the one you saw. So one reason could be that you mounted a big HDF which did not fit into memory.
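Such a boundary check could look roughly like this (purely a sketch; `hdfFits`, `MAX_HEAP` and the slack value are made-up names and numbers, not actual vAmigaWeb code):

```javascript
// Hypothetical pre-mount check (sketch): refuse an HDF that cannot fit
// into the wasm heap's remaining headroom instead of failing later with
// an unrelated error.
const MAX_HEAP = 2 * 1024 * 1024 * 1024; // 2GB, the current addressing ceiling
const SLACK = 64 * 1024 * 1024;          // headroom for the emulator's own allocations (assumed value)

function hdfFits(fileSizeBytes, wasmMemory) {
  const used = wasmMemory.buffer.byteLength; // bytes the heap currently occupies
  return used + fileSizeBytes + SLACK <= MAX_HEAP;
}
```

The UI could then show a clear "file too large" message instead of the generic exception.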
UPDATE: currently vAmigaWeb can address at most 2GB. To make 4GB work, it has to be compiled with the following setting:
emcc -s ALLOW_MEMORY_GROWTH -s MAXIMUM_MEMORY=4GB
UPDATE2: found a memory allocation tester: http://clb.confined.space/dump/mem_growth.html and http://clb.confined.space/wasm_grow.html
UPDATE3: interesting conversations on wasm memory: https://github.com/WebAssembly/design/issues/1397 and https://bugs.webkit.org/show_bug.cgi?id=222097
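A minimal probe in the spirit of those testers can be written directly against the `WebAssembly.Memory` API: grow a memory object in 64MB steps until the host refuses (a sketch; step size and ceiling are arbitrary):

```javascript
// Grow a WebAssembly.Memory in 64MB steps until the engine/host refuses,
// and report the largest size that still worked.
function probeWasmGrowthMB(maxMB) {
  const PAGE = 64 * 1024;                  // wasm page size: 64KiB
  const step = (64 * 1024 * 1024) / PAGE;  // 1024 pages = 64MB per step
  const mem = new WebAssembly.Memory({
    initial: step,                         // start at 64MB
    maximum: (maxMB * 1024 * 1024) / PAGE, // hard ceiling in pages
  });
  let mb = 64;
  for (;;) {
    try {
      mem.grow(step);                      // throws RangeError when growth is refused
      mb += 64;
    } catch (e) {
      return mb;                           // last size that still worked
    }
  }
}
```

Running this in mobile Safari vs. desktop Safari should make the boundary difference visible directly.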
Reason 2: the HD driver possibly does not handle some specific HDF files correctly...
It could also be the case that the driver code in vAmigaWeb goes crazy on your specific HDF file. The driver in vAmigaWeb was written by @mras0, so we could ask him about it... Can you upload that HDF file somewhere so that we can try it?
Thanks for the detailed replies!
It seems to happen with any default HDF created via 'Create HDF' in WinUAE (HDFs created with the latest and previous versions behave the same), whether formatted / with an OS installed or not...
What do you use to create your Amiga HDFs? I need a better workflow for HDFs, and if it is just my method of HDF creation, I can change that immediately!
Thanks for any tips!
Short feedback: on iPad I see the exception now too … even with a single 210MB HDF file … while on Mac, in contrast, multiple big HDF files mounted simultaneously work fine.
I think it is a mobile Safari memory issue … mobile Safari combined with auto-grow memory … I will look into it in detail tonight … I tried an experimental web worker edition of vAmigaWeb at https://mithrendal.github.io/worker which is compiled with a fixed memory size of 1GB, and there the 210MB HDF works fine …
Note that vAmiga only supports HD sizes up to 504MB. Creating a blank 504MB disk and mounting it in vAmigaWeb works (for me, in Firefox). Mounting another disk, one that also works on its own, at the same time brings up the dialog you observed (the dev console shows: Uncaught (in promise) WebAssembly.Exception { }), so I think @mithrendal is right that it's likely a memory issue. The same two disks can be mounted just fine at the same time in a desktop version.
I think it might be possible to limit the amount of memory used while inserting the HDF if that's necessary by making some changes to vAmiga (but will be easier to work around it in some other way obviously).
An aside: It seems like I can't reliably mount 3 disks at the same time? After inserting two HDFs the third one will replace DH0 even if I select DH2 as the insertion drive?
> An aside: It seems like I can't reliably mount 3 disks at the same time? After inserting two HDFs the third one will replace DH0 even if I select DH2 as the insertion drive?
😬 Embarrassing ... a copy-and-paste error ... see:
<div id="drive_select_choice">
<button type="button" class="btn btn-primary m-1 mb-2" style="width:20vw" onclick="insert_file(0);show_drive_select(false);">dh0:</button>
<button type="button" class="btn btn-primary m-1 mb-2" style="width:20vw" onclick="insert_file(1);show_drive_select(false);">dh1:</button>
<button type="button" class="btn btn-primary m-1 mb-2" style="width:20vw" onclick="insert_file(0);show_drive_select(false);">dh2:</button>
<button type="button" class="btn btn-primary m-1 mb-2" style="width:20vw" onclick="insert_file(1);show_drive_select(false);">dh3:</button>
</div>
dh2 goes in dh0 and dh3 goes in dh1 ... 🙈
I will correct this now...
EDIT: fixed version is online... tomorrow some research on the memory topic
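For what it's worth, generating the four buttons in a loop would rule out this class of copy-and-paste slip entirely (a sketch; `insert_file`, `show_drive_select` and the `drive_select_choice` div are the existing names from the snippet above):

```javascript
// Build the dh0: .. dh3: buttons programmatically so each drive index
// is written exactly once (sketch reusing the existing handler names).
function driveButtonsHtml(drives = 4) {
  let html = "";
  for (let i = 0; i < drives; i++) {
    html += `<button type="button" class="btn btn-primary m-1 mb-2" style="width:20vw" ` +
            `onclick="insert_file(${i});show_drive_select(false);">dh${i}:</button>\n`;
  }
  return html;
}
// In the page: document.getElementById("drive_select_choice").innerHTML = driveButtonsHtml();
```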
I did some research and tests … it seems iOS/iPadOS Safari does not like to grow memory above a certain boundary (the boundary seems to be near 1GB) … on desktop Safari there is no such boundary … in contrast, when initially grabbing a fixed memory size (instead of growing), mobile Safari lets me get up to 1.4GB, whereas desktop Safari or Chrome gives up to 4GB.
I compiled a special version with a fixed 1.4GB heap for further testing:
https://vamigaweb.github.io/uat/
This version lets me mount at least two 210MB ClassicWB HDF files … in dh0: and dh1:; a third one in dh2: is rejected … in desktop Safari it lets me mount four 210MB HDF files in dh0: to dh3:
It seems to insert the drive with an extra '0' at the end, i.e. "DH20" and "DH10". See image below:
[EDIT: I was trying DH2 in this screengrab, but the same issue applies to DH1-DH3 (they all have an extra 0 in 'Device Name'); DH0 appears to be correct]
...typo?
But... :thumbsup: I can mount 2 HDFs now though! Thanks! (I simply have to run an ASSIGN to link DH1 and DH10 until that is fixed)
Thanks heaps for the updates!
Yes here too 😳! @mras0 do you have an idea what I am doing wrong here ?
Will check that in vAmigaMac this evening…
Pretty sure it'll be the same in the "normal" vAmiga version. AFAIK partitions assigned to the first controller get named 'DH0', 'DH1', etc., the second controller 'DH10', 'DH11', ..., the third 'DH20' and so on. So to get 'DH0' and 'DH1' you actually have a single HDF with two partitions (i.e. an RDB-style disk).
The thing is, HDFs can either be "plain", containing a raw filesystem (basically like a huge ADF), or they can contain a dump of a real hard drive, in which case they can have multiple partitions (RDB). So you can have a situation where the first controller has an RDB HDF attached with 4 partitions (which will then be named DH0, DH1, DH2 and DH3) and the second one has a "plain" HDF attached (DH10).
My (Amiga-side) code lets the emulator decide what the devices are called, so you'll have to take it up with Prof. Hoffmann if you want different behavior. But be aware that you could end up with a "funny" situation either way: e.g. if you number partitions in order, then mounting another disk in the first controller might renumber partitions in later ones.
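The naming scheme described above can be sketched as a small function of controller and partition index (an illustration of the scheme, not the actual driver code):

```javascript
// Sketch of the controller-based naming scheme: partitions on the
// first controller are DH0, DH1, ...; controller n > 0 prefixes its
// partition index with n*10, giving DH10, DH11, ..., DH20, ...
function deviceName(controller, partition) {
  return "DH" + (controller === 0 ? partition : controller * 10 + partition);
}
```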
Hmm, strange. The current code for assembling the partition names looks like this:
```cpp
string
HardDrive::defaultName(isize partition)
{
    if (nr >= 1) partition += amiga.hd0.numPartitions();
    if (nr >= 2) partition += amiga.hd1.numPartitions();
    if (nr >= 3) partition += amiga.hd2.numPartitions();

    return "DH" + std::to_string(partition);
}
```
Hence, partitions should be numbered consecutively. @mithrendal: Are you using the latest code?
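The consecutive numbering that defaultName implements can be sketched like this (the per-drive partition counts in the test are illustrative):

```javascript
// Sketch of HardDrive::defaultName's consecutive numbering: each
// drive's partitions continue where the previous drive's left off.
// counts[i] = number of partitions on drive i.
function consecutiveName(counts, drive, partition) {
  let offset = 0;
  for (let i = 0; i < drive; i++) offset += counts[i]; // skip earlier drives' partitions
  return "DH" + (offset + partition);
}
```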
@dirkwhoffmann As far as I can see defaultName is only called when attaching a small default disk. I think the name is coming from assignDosName in HdController::processInit. Arguably for RDB disks the name saved in the partition table (PartitionDescriptor::name) should be used, which I don't think is the case at the moment. That's at least what I'm seeing with what I think is the latest code.
> I think the name is coming from assignDosName in HdController::processInit
Oops, sorry, you're right. I looked at the code too briefly. The name comes from this function and does what you were saying.
The UX issue is that what's labelled "insert into DH1:" really amounts to stuffing another SCSI/IDE controller into your big-box Amiga and attaching a hard drive to it, not actually assigning a simple HDF to DH1.
Using defaultName() for the partition is probably going to be right for >99% of users though, and the remaining ones probably don't want the current naming either (they likely want the name from the partition table) :)
@dirkwhoffmann yes latest vAmiga code behaves the same...
I see ... @mras0 implemented a SCSI/IDE controller board to be able to mount more than one HDF, and these files then go to dh10, dh20, dh30 ... and when mounting an HDF with multiple partitions into UI slot dh0, the additional partitions go to dh1, dh2, ...
@mras0 question 🙋🏻♀️: and when we put such an "RDB" HDF file with multiple partitions into UI slot dh1, do these go to dh10, dh11 and dh12?
Pushed this fixed-memory version with some other bugfixes to the official address ... see the release notes: https://vamigaweb.github.io/doc/index.html The 1.4GB version showed out-of-memory problems on the about page where we embedded some demos ... with a 1.2GB setting these problems vanished ... I really hope that Apple sorts out this mobile Safari bug/constraint soon ...
@aZtOcKdOg let's close this one, or should we leave it open?
UPDATE: oh no 🙈 1.2GB also gave me out-of-memory when vAmigaWeb was embedded on a website multiple times ... I am reverting back to auto-growing memory for now ...
Yeah same errors for now in iPadOS safari... yes hopefully they do soon!
I guess we should... or keep it 'open' and at least group the same issues together if this pops up for others?
Thanks!!
Just got a hint that the memory constraint is fixed in iPadOS / iOS 18:
https://github.com/emscripten-core/emscripten/issues/19144#issuecomment-2368977691
It seems that with auto-grow, memory can now happily grow to the full 2GB in one chunk with the latest iOS / iPadOS release…
Testing now 😎
Update:
Mounted 4 HDF files, each 209MB in size. They work great… no problems… let's try to mount three 504MB volumes and see whether it eats those too.