nodejs-idb-connector
pgm-call: result-xml structure corrupt due to randomly added characters (urgent)
- Node.js version: 12.14.0
- itoolkit version: 1.0.0
- IBM i version: 7.3
- XMLSERVICE version: XML Toolkit 2.0.2-dev
Describe the bug
The Node.js script requests IBM i database data via a call to an RPG program.
Since we work with daily data for one month, the program call is added to a toolkit connection object 30 or 31 times.
Running the connection results in a successful execution but returns a corrupted XML structure (e.g. ... type=a'4p2' ..., ... <d
Note: Running the script multiple times with an identical PERS# always results in the same error at the same position in the XML string.
To Reproduce: see above.
Expected behavior: Please locate the bug and provide a corrected version of itoolkit or XMLSERVICE.
Files: If needed, we can provide our code snippet and some XML data.
Examples for Errors:
<data type='4p2'>0<,00</data>
</parm>
<parm>
<data type='20A'></data>
<</parm>
<parm>
<data type='4p0'>1279</daita>
</parm>
<parm>
<data type='8p0'>17032020</d<ata>
</parm>
<parm>
<data type='4p0'>1431</data>
</rparm>
<parm>
<data type='4p2'a>15,04</data>
</parm>
Code for the function: Function.ts.txt
In the meantime we tested with transporttype=odbc: no errors! Maybe there is an issue with transporttype=idb?
Hello @nedi-dev :wave:
Interesting that when using odbc transport you do not run into the errors. Does the error occur every time using idb transport or randomly?
I noticed from your code snippet that you have conn.debug(true) enabled. Does the xml input look malformed on the way in as well?
It would be nice to have a simplified example of the RPG program being called (without confidential info) so that the issue can be readily reproduced.
Hi @abmusse The error always occurs when using 'idb' transport. Even the misspelled tag is the same for the same input parameters. The input XML is fine in all cases. Find the abbreviated RPG source attached. ZEI109_short.rpglesrc.txt
What version of idb-connector do you have installed?
$ npm ls idb-connector
I'm thinking this could be an issue with dbstmt.fetchAll.
Can you add the following at lines 92-93 in lib/transports/idbTransport.js:
console.log(results);
console.log('results length: ', results.length);
And check if the XML in the results array is already corrupted before it's appended to the xmlOutput variable.
$ npm list idb-connector
[email protected] /home/NEUMANN/nodejs/trb-pgm-ZEI109
`-- [email protected]
+-- [email protected]
`-- [email protected]
`-- [email protected] deduped
I think you are on the right track:
XML Error:
<success><![CDDATA[+++ success LGIZEIT ZEI109 ]]></success>
Duplicated "D" at pos 15!
Output from idbTransport:
'<parm>\n' +
"<data type='50A'>Keine Daten vorhanden !</data>\n" +
'</parm>\n' +
'<success><![CD'
},
{
OUT151: 'DATA[+++ success LGIZEIT ZEI109 ]]></success>\n' +
'</pgm>\n' +
"<pgm name='ZEI109' lib='LGIZEIT' error='fast'>\n" +
'<parm>\n' +
"<data type='2p0'>1</data>\n"
Same here:
<parm>
<data type='20A'></dataD>
</parm>
Capital "D" is wrong!
idbTransport:
'<parm>\n' +
"<data type='20A'></dataD"
},
{
OUT151: '>\n' +
'</parm>\n' +
other findings:
'<parm>\n' +
"<data type='8p0'>17012020D"
},
{
OUT151: '</data>\n' +
'</parm>\n' +
'<parm>\n' +
"<data type='4p2'>0,00D"
},
{
OUT151: '</data>\n' +
'</parm>\n' +
'<parm>\n' +
"<data type='4p0'>1279</date"
},
{
OUT151: 'a>\n' +
'</parm>\n' +
@abmusse , should we transfer this one over to idb-connector?
From your output looks like the data retrieved using idb-connector is already corrupted before we iterate over results array and append to the xmlOutput. So seems to be an issue with idb transport since you tried odbc and did not receive an error.
@ThePrez That would make sense given the issue seems to be caused by idb-connector.
@dmabupt
Any thoughts on what may be causing this issue?
Seems to be an encoding issue. Need to reproduce it for debugging.
This is a good point. The error occurs only for employees whose names include the German special characters "ä ö ü Ä Ö Ü". Data for other persons seems to be fine.
Interesting I wonder if this issue is related to https://github.com/IBM/nodejs-idb-connector/issues/70.
Or try this #20
I read through your references but didn't get a clue how to proceed. Maybe it's worth mentioning that our scripts run on the same IBM i system where the database is located. Is there any character conversion involved? Where?
At the beginning of your Node.js code, add process.env.DB2CCSID = '0'; to make your program run with its job CCSID and check if it resolves the problem.
Well, the error with malformed XML tags disappeared... but now we have an issue with special characters (öäü) in the data. See https://github.com/markdirish/node-odbc/issues/79
:wave: Hi! This issue has been marked stale due to inactivity. If no further activity occurs, it will automatically be closed.
From my point of view this issue is not fixed yet. If we are forced to use DB2CCSID='0', we cannot use this connector anymore, as the special characters are not processed correctly. How can we proceed with this?
Not sure why the bot closed it - reopening.
@kadler please reopen as it is not fixed. What could be the next step from your / our side?
Since this is a conversion issue, can you please provide
- value of QCCSID
- job CCSID
- job default CCSID
- job langid
- job cntryid
- PASE locale used (locale)
FYI, I asked for locale -a, but that was wrong. I meant just to run locale. I've updated my comment above.
value of QCCSID
1141
job CCSID
1141
job default CCSID
1141
job langid
DEU
job cntryid
DE
PASE locale used (locale)
QSH:
LANG=/QSYS.LIB/DE_DE.LOCALE
LC_COLLATE=
LC_CTYPE=
LC_MESSAGES=
LC_MONETARY=
LC_NUMERIC=
LC_TIME=
LC_ALL=
Putty/bash: LANG= LC_COLLATE="C" LC_CTYPE="C" LC_MONETARY="C" LC_NUMERIC="C" LC_TIME="C" LC_MESSAGES="C" LC_ALL=
I have a hunch that the problem is occurring here: https://github.com/IBM/nodejs-idb-connector/blob/master/src/db2ia/dbstmt.cc#L2477
If the SQLFetch returns an indicator with a valid length (i.e. not SQL_NTS or SQL_NULL_DATA), we will copy the data but not null-terminate it here: https://github.com/IBM/nodejs-idb-connector/blob/master/src/db2ia/dbstmt.cc#L2418-L2421
Since it's not null-terminated and we're not passing in the length, the Napi::String::New constructor will just keep going until it finds a null terminator.
@kadler Maybe we should detect the rlength parameter first? Like --
default:
    if (resultSetInC[row][col].rlength > 0)
        value = Napi::String::New(env, resultSetInC[row][col].data, resultSetInC[row][col].rlength);
    else
        value = Napi::String::New(env, resultSetInC[row][col].data);
    break;
Alternatively, just always set rlength to the proper value here: https://github.com/IBM/nodejs-idb-connector/blob/master/src/db2ia/dbstmt.cc#L2404-L2414
When it's SQL_NTS we already get the length into colLen (why we don't use strdup here, I'm not sure). For SQL_NULL_DATA, we could set it to 0, and for other negative values it could be set to 1 (in the future, this code should probably handle this case better than returning "-", though).
I am not sure about the behavior of Napi::String::New() when the size is 0. I may check rlength first for now.
I'm not sure either. AFAICT it doesn't even mention the ability to pass a NULL pointer.