vscode-ibmi
Support QsysFs INB files for IBM i Notebooks
Changes
Added support for saving source type INB files in Qsys. Previously it would attempt to save the JSON from IBM i Notebooks up to the record length, causing malformed JSON and the loss of all data stored in your INB file.
Writing to Qsys
If the extension is INB and the `sourceData` array has one element of valid JSON, it will use a regex to split on `recordLength` instead of `\n`.
Reading from Qsys
If the extension is INB, join `srcdta` using `''` instead of `\n` back to the body.
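The two paths above are symmetric: writing splits the single-line JSON into `recordLength`-sized records, and reading joins the records back with an empty separator so the JSON is restored intact. A minimal sketch of the idea (the helper names are hypothetical; the real change lives in vscode-ibmi's QSYS filesystem code):

```typescript
// Hedged sketch of the round trip described above; the helper names are
// hypothetical, not the PR's actual code.

// Writing: split the one-line JSON into recordLength-sized source records.
// [\s\S] is used so the split is purely length-based, never on "\n".
function splitJsonForQsys(json: string, recordLength: number): string[] {
  const chunks = json.match(new RegExp(`[\\s\\S]{1,${recordLength}}`, "g"));
  return chunks ?? [""];
}

// Reading: join srcdta with "" (not "\n") so the JSON is restored intact.
function joinJsonFromQsys(records: string[]): string {
  return records.join("");
}
```

Because the separator is the empty string, no characters are added or removed by the round trip, so `JSON.parse` sees exactly what `JSON.stringify` produced.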
Checklist
- [✅] I have tested my change
- [✅] I have updated relevant documentation (vscode-ibmi-notebooks commit/future PR)
- [✅] I have removed any/all `console.log`s I added
- [✅] eslint is not complaining
- [✅] I have added myself to the contributors' list in CONTRIBUTING.md
- [✅] For feature PRs: the PR only includes one feature enhancement.
@worksofliam the reason I marked this as a draft: this works when using qsys/complex, but there's nothing I can do for qsys/basic. In essence, you need to have `config.enableSourceDates` turned on.

Should I make a new config option and let the code drop into using qsys/complex on `config.enableSourceDates || config.enableINBSupport`, or just update the description for Source Dates to mention IBM i Notebooks? Or perhaps a different idea?
Here's a test showing INB loading correctly from Qsys (left) and the JSON that was split at 100 (right).
@Wright4i Neat! I've got to be careful with this, since it's affecting quite a serious bit of code. It makes sense that you're changing the lines to not go over the record lengths, but I imagine there is some impact on how this affects source dates.
> Should I make a new config option and let the code drop into using qsys/complex

Let's not do this. I think something that needs to be investigated before we look into this is changing the document content (the JSON) before it even touches `writeFile` or `readFile`.
I am not sure how I feel about checking for a specific extension in those methods either. Seems kind of hacky and would like to investigate if this is something that we can achieve alone in vscode-ibmi-notebooks before touching this filesystem implementation.
Let me know what you think.
Thanks!
@worksofliam the extension check was really to make sure this change doesn't affect anything else, as I was also concerned about an impact on source dates and non-INB files. There's no reason this wouldn't work without the extension check, and then any valid one-line JSON document would save properly to Qsys.
I'll see what it would take to modify the JSON before the `writeFile`. This might be the better solution; I'll report back.
I started here as this was the exact spot in the code making malformed JSON today. I made 2 notebooks and lost everything, then found the documentation mentioning not to use source members 😅.
This bit of code in the current content.js I especially don't like: it just quietly truncates any source you have over the record length. It could use some improvement as well.

```js
// Silently truncates any record longer than recordLength, discarding the overflow.
if (sourceData[i].length > recordLength) {
  sourceData[i] = sourceData[i].substring(0, recordLength);
}
```
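As a sketch of one possible improvement (a hypothetical helper, not code from the PR): instead of truncating, spill the overflow into additional records. Note this changes the record count, which would interact with source dates, the same concern raised earlier in this thread.

```typescript
// Hedged alternative to the silent truncation above: spill any overflow
// into extra records instead of discarding it. Hypothetical helper.
function splitInsteadOfTruncate(sourceData: string[], recordLength: number): string[] {
  const result: string[] = [];
  for (const line of sourceData) {
    // Math.max keeps empty lines as a single empty record.
    for (let i = 0; i < Math.max(line.length, 1); i += recordLength) {
      result.push(line.substring(i, i + recordLength));
    }
  }
  return result;
}
```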
@worksofliam so, following up on modifying the JSON before the `writeFile`: to do that, the change would need to be back in vscode-ibmi-notebooks, in the `serializeNotebook` function.
There are a few issues I ran into with modifying the serialize function, though:
- No access to `recordLength`
- The expected return type is `Uint8Array`, which I can't seem to change, as `vscode.workspace.registerNotebookSerializer` needs to process the content in that type
- JSON doesn't support multi-line values. You can add in `\n` newline characters, but the value itself will still be one long string
- No way to detect we're trying to save to Qsys
So with those issues, this is the best I could come up with after trying a half dozen ideas, and it definitely seems worse than modifying `writeFile`. This is just a POC change; obviously it'd have to be conditioned not to always split (see issue 4 above)...
```typescript
async serializeNotebook(
  data: vscode.NotebookData,
  _token: vscode.CancellationToken
): Promise<Uint8Array> {
  let contents: RawNotebookCell[] = [];
  for (const cell of data.cells) {
    // Split the cell value into 87-character chunks. Note: "." does not
    // match newlines, so embedded newline characters are dropped here.
    let splitCell = cell.value.match(/.{1,87}/g);
    if (splitCell === null) {
      splitCell = [cell.value];
    }
    for (const split of splitCell) {
      contents.push({
        kind: cell.kind,
        language: cell.languageId,
        value: split
      });
    }
  }
  return new TextEncoder().encode(JSON.stringify(contents, null, 1));
}
```
So I'm splitting the cell, pushing a new object every time a `cell.value` is over the recordLength (112). I also need to stringify with 1 whitespace to at least put the individual JSON key-value pairs on new lines before the content gets encoded.
The result looks like this:
It's better than nothing... but it still is 💩
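For illustration, the chunking behaviour of that `match(/.{1,87}/g)` call can be sketched in isolation (a hypothetical standalone helper, not the extension code):

```typescript
// Hedged standalone sketch of the serializer's match(/.{1,87}/g) chunking.
// Because "." does not match newlines, this only round-trips cleanly for
// single-line cell values; match() returns null for the empty string, hence
// the fallback.
function splitCellValue(value: string): string[] {
  return value.match(/.{1,87}/g) ?? [value];
}
```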
Long story short
I'm open to any ideas on how to abstract it / make it less hacky, but I think the change will have to be somewhere in this repo.
-or-
Maybe we give up and instead make a QOL issue ticket to block Notebooks from even trying to open a .inb file from Qsys, and instead suggest an IFS/local notebook. Then nobody else can fall into the same trap.
Yeah, this is tricky. Personally, I am more OK with not supporting QSYS than with updating our filesystem provider to support it. I think we are better off blocking notebooks for that filesystem.
> No way to detect we're trying to save to Qsys

Yes, there is a way, but I have no idea if you can access it from the notebook: `document.uri.scheme === 'member'`, where `document` is a `TextDocument`.
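Outside the vscode API, the same scheme check can be sketched with the WHATWG `URL` parser built into Node (the sample paths here are hypothetical; in the extension you would read `document.uri.scheme` directly from the `vscode.Uri`):

```typescript
// Hedged sketch: detect whether a URI uses vscode-ibmi's "member" scheme.
// In the extension this is document.uri.scheme; here Node's global URL
// parser stands in for illustration.
function isMemberScheme(uri: string): boolean {
  return new URL(uri).protocol === "member:";
}
```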
@Wright4i Ok, here's an idea, but I think I found an issue in the VS Code API.

There is an `onWillSaveTextDocument` event as part of `workspace`, but not an `onWillSaveNotebookDocument`. This is the event that might allow us to check the scheme and, if it is `member`, then reformat the JSON before writing to the filesystem.

edit: see below... I found the issue for it :)
Discussed with @Wright4i what the next steps are. Will close.