BigInteger in Neo.Json
Closes https://github.com/neo-project/neo/issues/3036
While working on this PR an error was found: previously, a number greater than MAX_SAFE_INTEGER or less than MIN_SAFE_INTEGER could be parsed.
Open question: do we want this restriction only for serialization?
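As a minimal sketch of the precision problem behind the MAX_SAFE_INTEGER bound (illustrative Python, not Neo code; the constant name mirrors the JavaScript one):

```python
# Illustrative sketch (Python, not Neo code) of why JSON integers are bounded
# by MAX_SAFE_INTEGER: a 64-bit IEEE-754 double has a 53-bit significand, so
# integers beyond 2**53 - 1 can no longer all be represented exactly.
MAX_SAFE_INTEGER = 2**53 - 1   # 9007199254740991

assert float(MAX_SAFE_INTEGER) == MAX_SAFE_INTEGER  # still exact
# One past 2**53 collapses onto its even neighbour: precision is silently lost.
assert float(2**53 + 1) == float(2**53)
```

A JSON reader that stores numbers as double therefore cannot distinguish integers beyond this bound, which is why parsing them at all is suspect.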
This will change the behavior of existing contracts as well.
I was expecting something other than the double type, for example all integer and floating-point types. Why are we using only double anyway?
> This will change the behavior of existing contracts as well.
Yes, we need a fork, but currently we allow integers outside the range.
> Yes, we need a fork, but currently we allow integers out of the range
It's not about a fork; it's that you have changed the behavior of existing contracts. That means the logic in an existing contract may become invalid or different, and cause unexpected execution results.
This PR isn't going to work, seeing how BigIntegers can be 256 bits; double isn't that big.
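A quick sketch of that size mismatch (illustrative Python, not Neo code): a 256-bit integer cannot round-trip through a double, which keeps only 53 significand bits.

```python
# Sketch of the size mismatch noted above: a Neo BigInteger can be 256 bits,
# while a double keeps only 53 significand bits, so a pass through double
# silently drops the low-order bits of a large value.
big = (1 << 255) + 12345        # a 256-bit integer with nonzero low bits
survived = int(float(big))      # what remains after a round trip via double
assert survived != big          # the low-order bits are gone
assert survived == 1 << 255     # only the top of the value survives
```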
> Yes, we need a fork, but currently we allow integers out of the range

> Its not about a fork, it is that you have changed the behavior of existing contract. Which means the logic in existing contract may become invalid or different, and cause unexpected execution result.
Any ideas? Currently we allow bigger values than expected.
> Any idea? currently we allow bigger values than expected
We need to track the version, or the deploy/latest-update block, of contracts: if it is before your fix, the contract should run as before; otherwise, it can adopt the new logic. Failing that, a bug should stay a bug... we should not fix it.
> we should not fix it
What do you mean not fix it? The bug?
> we should not fix it
>
> What do you mean not fix it? The bug?
We should not fix a bug that is already there on the chain, at least not in a way that can change the behavior of existing contracts, because our update may cause unexpected behavior in those contracts. Take this one for example: it changed the behavior of the deserialize interface. If a contract uses it to deserialize a very large number, that worked in the past, so the developer wrote the contract that way, and core logic relies on the deserialize operation. What will happen when this PR is patched in? The contract may fail, or may exhibit who-knows-what behavior, because the call might have been wrapped in a try block.
But take this into account: I'm sure no one has reached the max value yet, but look at what can happen, so it needs to be fixed.
Again, you don't fix it in a way that changes existing contract behavior. And you can't be sure whether someone has reached the maximum, because you don't know all the contracts that have been deployed. If you keep making changes like this, the Neo project will be ruined one day, because you keep changing existing contract behavior.
You're playing with people's potential money that they have invested. This could be bad if a contract is doing StdLib.JsonSerialize and StdLib.JsonDeserialize for objects that deal with real funds or NFTs. Developers now mostly use string as the JSON output. This will break oracle transactions, which are now bugged. I know this isn't a good solution, hence I said "This PR isn't going to work."
This should have been developed correctly from the beginning; this is the whole reason we don't use floating point on the blockchain.
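The classic illustration of why floating point is avoided in consensus-critical code (illustrative Python; the same holds for double in any language):

```python
# Binary doubles cannot represent most decimal fractions exactly, and
# floating-point addition is not associative, so results can depend on
# evaluation order, which is poison for deterministic blockchain execution.
assert 0.1 + 0.2 != 0.3                            # actually 0.30000000000000004
assert (0.1 + 0.2) + 0.3 != 0.1 + (0.2 + 0.3)      # grouping changes the result
```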
@Jim8y is right, a fork doesn't solve the case where the contract was not updated. Maybe we need to store the last update/deploy height in the contract and use it as a fork height.
> You playing with people's potential money they have invested. This could be bad if a contract is doing StdLib.JsonSerialize and StdLib.JsonDeserialize for objects that deal with real funds or NFTs. Now developers use string as the json output, mostly. This will break oracle transactions, which are now bugged. I know this isn't a good solution, hints I said This PR isn't going to work. This should of been developed correctly from the begin, the whole reason why we don't use floating points on the blockchain.
We are not playing around; it is a very serious problem. The fundamental bottom line is that we do not change the behavior of existing contracts; we must defend that firmly. Determinism is one of the key features of a blockchain: you break it, you break everything. I am not saying we should ignore the bug, but we should not fix it in a way that can potentially destroy the whole project. Reputation is key in the blockchain world; not a single project will want to exist on a chain that can change its contracts' behavior arbitrarily.
I understand now. I didn't know what you meant by not fix it. Now I do. Thx for clearing that up.
> @Jim8y is right, a fork don't solve that the contract was not updated, maybe we need to store in the contract the last updated/deploy height, and use it as a fork height
Perhaps this is a possible solution, @shargon: a field is added to all deployed contracts and they keep calling the "legacy" version. A mechanism can be created for those that want to remove this "legacy" flag. Maybe ContractManagement could handle that.
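A hypothetical sketch of that height-gating idea (illustrative Python; the names, the fork height, and the function are invented for illustration and are not the real ContractManagement API):

```python
# Contracts deployed or updated before the fix height keep the permissive
# legacy parsing; newer ones get the strict MAX_SAFE_INTEGER check.
MAX_SAFE_INTEGER = 2**53 - 1
FIX_HEIGHT = 1_000_000  # assumed hard-fork height, purely illustrative

def parse_json_integer(text: str, contract_update_height: int) -> int:
    """Parse a JSON integer, enforcing the safe range only for contracts
    deployed or updated at or after FIX_HEIGHT."""
    value = int(text)
    if contract_update_height >= FIX_HEIGHT and abs(value) > MAX_SAFE_INTEGER:
        raise ValueError("integer outside the JSON safe range")
    return value

# A legacy contract still accepts the out-of-range value...
assert parse_json_integer("9007199254740993", contract_update_height=0) == 2**53 + 1
# ...while a post-fork contract rejects it.
try:
    parse_json_integer("9007199254740993", contract_update_height=2_000_000)
except ValueError:
    pass
```

The design point is that old contracts replay identically, preserving determinism, while contracts that redeploy opt into the fixed behavior.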
Let's not forget about the oracle service as well. This would have affected any variable in the contract that they set a value to.
@superboyiii, please check the state compatibility.
@shargon the format is not something I can update, please check it.
@shargon @Jim8y Something is incompatible. Blockchain stopped syncing at block 1528989 on mainnet.
> @shargon @Jim8y Something is incompatible. Blockchain stopped syncing at block 1528989 on mainnet.
Please consider closing this PR. @shargon
> @shargon @Jim8y Something is incompatible. Blockchain stopped syncing at block 1528989 on mainnet.
>
> Please consider close this pr. @shargon
But it can't be related to this PR.
@shargon can you start working on this again? If I remember correctly, we needed to change the VM as well, and we are working on the VM now.
@shargon there are conflicts; let's add a hardfork if possible and merge this into the hardfork PR, so there are no more conflicts to maintain.
@shargon @superboyiii can we add this in 3.9.0?
> @shargon @superboyiii can we add this in 3.9.0?
First we need to
> @shargon @superboyiii can we add this in 3.9.0?
If it's added, I can move this PR to a new, cleaner one.
> @shargon @superboyiii can we add this in 3.9.0?
>
> First we need to
Is it confirmed we need this in v3.9.0? If so, I will add it.
> @shargon @superboyiii can we add this in 3.9.0?
>
> First we need to
>
> Is it confirmed we need this in v3.9.0? If so, I will add it.
In our meeting it was decided this will be in 3.10.0, or whatever version comes after 3.9.0.
